RxJava: what to do when a concurrent data stream emits too fast


Backpressure

A data stream in Rx is emitted from one place to another, and each place processes data at a different speed. What happens if the producer emits data faster than the consumer can handle it? In a synchronous operation this is not a problem, for example:

// produce
Observable<Integer> producer = Observable.create(o -> {
    o.onNext(1);
    o.onNext(2);
    o.onCompleted();
});
// consume
producer.subscribe(i -> {
    try {
        Thread.sleep(1000);
        System.out.println(i);
    } catch (Exception e) { }
});

Although the consumer above processes data slowly, the call is synchronous, so o.onNext(1) blocks until the consumer finishes before o.onNext(2) executes.

But it is common for producers and consumers to work asynchronously. What happens then?

In the traditional pull model, the consumer requests data; if the producer is slow, the consumer blocks and waits. If the producer is faster, it waits until the consumer has finished processing before producing new data.

Rx, however, is a push model. In Rx, as soon as the producer has data ready, it emits it. If the producer is slow, the consumer waits for new data to arrive. If the producer is fast, a lot of data is emitted to the consumer regardless of whether the consumer can currently process it. This can cause a problem, for example:

Observable.interval(1, TimeUnit.MILLISECONDS)
    .observeOn(Schedulers.newThread())
    .subscribe(
        i -> {
            System.out.println(i);
            try {
                Thread.sleep(100);
            } catch (Exception e) { }
        },
        System.out::println);

Results:

0
1
rx.exceptions.MissingBackpressureException

The MissingBackpressureException above tells us that the producer is too fast and our operator chain cannot handle the situation.

Remedies for consumers

Some operators can reduce the amount of data sent to the consumer.

Filtering data

The sample operator specifies a maximum rate at which data is taken from the producer; the excess data is discarded.

Observable.interval(1, TimeUnit.MILLISECONDS)
    .observeOn(Schedulers.newThread())
    .sample(100, TimeUnit.MILLISECONDS)
    .subscribe(
        i -> {
            System.out.println(i);
            try {
                Thread.sleep(100);
            } catch (Exception e) { }
        },
        System.out::println);

Results:

82
182
283
...

The throttle and debounce operators can achieve similar results.
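As a sketch of that point (assuming RxJava 1.x on the classpath), throttleFirst keeps the first item of each time window, whereas sample keeps the last; a TestScheduler is used here so that virtual time makes the output deterministic:

```java
import java.util.concurrent.TimeUnit;

import rx.Observable;
import rx.observers.TestSubscriber;
import rx.schedulers.TestScheduler;

public class ThrottleFirstExample {
    public static void main(String[] args) {
        // TestScheduler lets us advance virtual time manually, so the
        // result does not depend on real thread timing.
        TestScheduler scheduler = new TestScheduler();
        TestSubscriber<Long> subscriber = new TestSubscriber<>();

        // One item per virtual millisecond; keep only the first item
        // of each 100ms window and drop the rest.
        Observable.interval(1, TimeUnit.MILLISECONDS, scheduler)
            .throttleFirst(100, TimeUnit.MILLISECONDS, scheduler)
            .subscribe(subscriber);

        scheduler.advanceTimeBy(300, TimeUnit.MILLISECONDS);
        System.out.println(subscriber.getOnNextEvents());
    }
}
```

Compared with sample, which waits until the end of each window and forwards the most recent item, throttleFirst reacts immediately and then stays silent for the rest of the window.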

Collect

If you don't want to discard data, you can use the buffer and window operators to collect data while the consumer is busy. This method is suitable if processing data in batches is faster.

Observable.interval(10, TimeUnit.MILLISECONDS)
    .observeOn(Schedulers.newThread())
    .buffer(100, TimeUnit.MILLISECONDS)
    .subscribe(
        i -> {
            System.out.println(i);
            try {
                Thread.sleep(100);
            } catch (Exception e) { }
        },
        System.out::println);

Results:

[0, 1, 2, 3, 4, 5, 6, 7]
[8, 9, 10, 11, 12, 13, 14, 15, 16, 17]
[18, 19, 20, 21, 22, 23, 24, 25, 26, 27]
...
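The window operator mentioned above groups items the same way buffer does, but emits each group as a nested Observable rather than a List. As a sketch (assuming RxJava 1.x), flattening each window with toList recovers buffer-style batches; a TestScheduler keeps the batches deterministic:

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

import rx.Observable;
import rx.observers.TestSubscriber;
import rx.schedulers.TestScheduler;

public class WindowExample {
    public static void main(String[] args) {
        // Virtual time: one item every 10ms, grouped into 100ms windows.
        TestScheduler scheduler = new TestScheduler();
        TestSubscriber<List<Long>> subscriber = new TestSubscriber<>();

        Observable.interval(10, TimeUnit.MILLISECONDS, scheduler)
            // window emits one nested Observable per 100ms group ...
            .window(100, TimeUnit.MILLISECONDS, scheduler)
            // ... which we collapse back into Lists for easy printing.
            .flatMap(Observable::toList)
            .subscribe(subscriber);

        scheduler.advanceTimeBy(250, TimeUnit.MILLISECONDS);
        for (List<Long> batch : subscriber.getOnNextEvents()) {
            System.out.println(batch);
        }
    }
}
```

The advantage of window over buffer is that downstream operators can start consuming a group before it closes, since each group is itself a stream.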
Reactive pull

The approaches above sometimes solve the problem, but they are not the best way to handle it. Sometimes handling it on the producer side is the best option. Backpressure is a way to reduce the emission rate on the producer side.

RxJava implements a way for a Subscriber to tell the Observable how much data to emit. Subscriber has a method request(n); calling it notifies the Observable that the Subscriber is ready to accept the next n items. Calling request in the Subscriber's onStart method enables reactive pull backpressure. This is not the traditional pull model and does not block; the Subscriber merely informs the Observable of its current processing capacity. More data can be requested by calling request again.

class MySubscriber<T> extends Subscriber<T> {
    @Override
    public void onStart() {
        request(1);
    }

    @Override
    public void onCompleted() {
        ...
    }

    @Override
    public void onError(Throwable e) {
        ...
    }

    @Override
    public void onNext(T n) {
        ...
        request(1);
    }
}

Calling request(1) in onStart enables backpressure mode, telling the Observable to emit only one item at a time. After the item is processed in onNext, the next item can be requested. Backpressure mode can be canceled by calling request(Long.MAX_VALUE).
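The one-at-a-time pacing can be made visible by logging every request. This is a minimal sketch (assuming RxJava 1.x; the `log` list is just an illustration device, not part of the original example):

```java
import java.util.ArrayList;
import java.util.List;

import rx.Observable;
import rx.Subscriber;

public class OneAtATime {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();

        // Each request(1) pulls exactly one item from range; doOnRequest
        // records every request so the pacing is visible in the log.
        Observable.range(0, 3)
            .doOnRequest(n -> log.add("Requested " + n))
            .subscribe(new Subscriber<Integer>() {
                @Override
                public void onStart() {
                    request(1); // enable backpressure: ask for one item
                }

                @Override
                public void onCompleted() {
                    log.add("Done");
                }

                @Override
                public void onError(Throwable e) {
                    log.add("Error");
                }

                @Override
                public void onNext(Integer i) {
                    log.add(String.valueOf(i));
                    request(1); // processed one item, ask for the next
                }
            });

        System.out.println(log);
    }
}
```

Since range is synchronous, the log alternates between "Requested 1" entries and emitted values, ending with "Done".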

doOnRequest

When discussing the doOn_ side-effect operators, we did not cover doOnRequest:

public final Observable<T> doOnRequest(Action1<java.lang.Long> onRequest)

doOnRequest is called whenever the Subscriber requests more data. The parameter value is the number of items requested.

doOnRequest is currently a beta API, so avoid it in production code where possible. Here is a demonstration of this API:

Observable.range(0, 3)
    .doOnRequest(i -> System.out.println("Requested " + i))
    .subscribe(System.out::println);

Results:

Requested 9223372036854775807
0
1
2

You can see that the Subscriber requested the maximum amount of data at the start, which means the backpressure model is not in use. An Observable honors backpressure only when its Subscriber implements it. The following is an example of a Subscriber whose backpressure is controlled externally:

public class ControlledPullSubscriber<T> extends Subscriber<T> {

    private final Action1<T> onNextAction;
    private final Action1<Throwable> onErrorAction;
    private final Action0 onCompletedAction;

    public ControlledPullSubscriber(
            Action1<T> onNextAction,
            Action1<Throwable> onErrorAction,
            Action0 onCompletedAction) {
        this.onNextAction = onNextAction;
        this.onErrorAction = onErrorAction;
        this.onCompletedAction = onCompletedAction;
    }

    public ControlledPullSubscriber(
            Action1<T> onNextAction,
            Action1<Throwable> onErrorAction) {
        this(onNextAction, onErrorAction, () -> {});
    }

    public ControlledPullSubscriber(Action1<T> onNextAction) {
        this(onNextAction, e -> {}, () -> {});
    }

    @Override
    public void onStart() {
        request(0);
    }

    @Override
    public void onCompleted() {
        onCompletedAction.call();
    }

    @Override
    public void onError(Throwable e) {
        onErrorAction.call(e);
    }

    @Override
    public void onNext(T t) {
        onNextAction.call(t);
    }

    public void requestMore(int n) {
        request(n);
    }
}

In this implementation, if requestMore is not called, the Observable will not emit any data.

ControlledPullSubscriber<Integer> puller =
        new ControlledPullSubscriber<Integer>(System.out::println);

Observable.range(0, 3)
    .doOnRequest(i -> System.out.println("Requested " + i))
    .subscribe(puller);

puller.requestMore(2);
puller.requestMore(1);

Results:

Requested 0
Requested 2
0
1
Requested 1
2

In onStart, ControlledPullSubscriber tells the Observable not to emit any data yet. We then request two items and one item separately.

Rx operators internally use queues and buffers to implement backpressure while avoiding unbounded buffering. Buffering large amounts of data should be done with dedicated operators such as cache, buffer, and so on. The zip operator is an example: the first Observable may emit one or more items before the second Observable does, so zip needs a small buffer to match the two Observables without failing. For this reason, zip uses a small internal buffer of 128 items.

Observable.range(0, 300)
    .doOnRequest(i -> System.out.println("Requested " + i))
    .zipWith(
            Observable.range(10, 300),
            (i1, i2) -> i1 + " - " + i2)
    .take(300)
    .subscribe();

Results:

Requested 128
Requested 90
Requested 90
Requested 90

The zip operator starts by requesting enough items (128) to fill its buffer, then requests more as it processes them. Exactly how zip buffers data is not the main point here. Readers should keep in mind that in Rx some operators use backpressure internally whether or not the developer has actively enabled the feature. This makes Rx data streams more stable and scalable.

Backpressure strategy

Many Rx operators use backpressure internally to prevent too much data from filling up their internal queues. A slow consumer passes the situation upstream: the operator before it starts buffering data until its own buffer is full, at which point it notifies the operator before it, and so on. Backpressure does not eliminate the situation; it only delays the error, and we still need to handle it.
Rx provides operators specifically for handling the case where the consumer cannot keep up.

onBackpressureBuffer

onBackpressureBuffer caches all data that has not yet been consumed until the Observer can handle it.

You can specify the buffer size; the data stream fails if the buffer is full.

Observable.interval(1, TimeUnit.MILLISECONDS)
    .onBackpressureBuffer(1000)
    .observeOn(Schedulers.newThread())
    .subscribe(
        i -> {
            System.out.println(i);
            try {
                Thread.sleep(100);
            } catch (Exception e) { }
        },
        System.out::println);

Results:

0
1
2
...
11
rx.exceptions.MissingBackpressureException: Overflowed buffer of 1000

In the example above, the producer is 100 times faster than the consumer, and a buffer of 1000 items is used to accommodate the slow consumer. By the time the consumer has consumed 11 items, the producer has produced about 1100, the buffer is full, and the data stream throws an exception.

onBackpressureDrop

onBackpressureDrop discards any data the consumer is unable to process.

Observable.interval(1, TimeUnit.MILLISECONDS)
    .onBackpressureDrop()
    .observeOn(Schedulers.newThread())
    .subscribe(
        i -> {
            System.out.println(i);
            try {
                Thread.sleep(100);
            } catch (Exception e) { }
        },
        System.out::println);

Results:

0
1
2
...
126
127
12861
12862
...

In this example, the first 128 items are handled normally; this is because observeOn uses a small buffer of 128 items when switching threads.

This article is from the Cloud in Thousand Peaks blog: http://blog.chengyunfeng.com/?p=981

