(12) Custom Data Flow (REST API for live Docker event push) -- Responsive Spring's Path Wizard


This article is part of the series "Responsive Spring's Path Wizard".
Previously: Reactor 3 Quick Start | Spring WebFlux Quick Start | Reactive Streams Specification
Test source for this article | Hands-on source

2.2 Custom Data Flow

This section describes how to create a Flux or Mono programmatically, by defining the corresponding events (onNext, onError, onComplete) yourself. Reactor provides generate, create, push, handle, and so on; all of these methods hand you a sink through which the data stream is emitted.

Sink, as the name implies, is a pool; you can picture a kitchen sink.

The methods described below all provide a sink to the caller, which usually exposes at least three methods: next, error, and complete. next and error are like two drains: we keep putting custom data into next, and Reactor strings it together into a publisher stream, until a bad element is dropped into error, or the complete button is pressed, at which point the data flow terminates.
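To make that concrete, here is a minimal sketch (not from the original article) that exercises the three sink methods directly, using create, which is covered below; the emitted values are arbitrary:

    // Sketch: emit three values through the sink, then complete
    Flux<String> flux = Flux.create(sink -> {
        sink.next("a");
        sink.next("b");
        sink.next("c");
        sink.complete();    // normal termination; sink.error(...) would terminate with an error instead
    });
    flux.subscribe(System.out::println);    // prints a, b, c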

2.2.1 Generate

generate is a way to emit data synchronously, one element at a time. The sink it provides is a SynchronousSink, whose next() method can be called at most once per callback.

generate has three method signatures:

    public static <T> Flux<T> generate(Consumer<SynchronousSink<T>> generator)

    public static <T, S> Flux<T> generate(Callable<S> stateSupplier, BiFunction<S, SynchronousSink<T>, S> generator)

    public static <T, S> Flux<T> generate(Callable<S> stateSupplier, BiFunction<S, SynchronousSink<T>, S> generator, Consumer<? super S> stateConsumer)

1) Using SynchronousSink to generate a data stream

    @Test
    public void testGenerate1() {
        final AtomicInteger count = new AtomicInteger(1);   // 1
        Flux.generate(sink -> {
            sink.next(count.get() + " : " + new Date());   // 2
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (count.getAndIncrement() >= 5) {
                sink.complete();     // 3
            }
        }).subscribe(System.out::println);  // 4
    }
    1. Used for counting;
    2. Put custom data into the pool;
    3. Tells generate that all the custom data has been emitted;
    4. Triggers the data flow.

A line is printed every second, five times in total.

2) Add a companion state

In the example above, count records the state, and emission stops once it reaches 5. Because it is used inside a lambda it must be (effectively) final, so it cannot be a primitive (such as int) or an immutable wrapper (such as Integer); that is why an AtomicInteger is used.

If you use the second method signature, the above example can be changed like this:

    @Test
    public void testGenerate2() {
        Flux.generate(
                () -> 1,    // 1
                (count, sink) -> {      // 2
                    sink.next(count + " : " + new Date());
                    try {
                        TimeUnit.SECONDS.sleep(1);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    if (count >= 5) {
                        sink.complete();
                    }
                    return count + 1;   // 3
                }).subscribe(System.out::println);
    }
    1. Initializes the state value;
    2. The second argument is a BiFunction whose inputs are the state and the sink;
    3. Each cycle returns a new status value for the next use.

3) Post-completion processing

The third method signature adds, besides the state and the sink, a Consumer that is executed after the data flow has completed.

    @Test
    public void testGenerate3() {
        Flux.generate(
                () -> 1,
                (count, sink) -> {
                    sink.next(count + " : " + new Date());
                    try {
                        TimeUnit.SECONDS.sleep(1);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    if (count >= 5) {
                        sink.complete();
                    }
                    return count + 1;
                }, System.out::println)     // 1
                .subscribe(System.out::println);
    }
    1. Finally, the count value is printed out.

If the state holds a database connection or some other resource that needs to be cleaned up, this Consumer lambda is the place to close the connection or do whatever other finalization is required.
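As a minimal sketch of that cleanup pattern (not from the original article; the file name is hypothetical), the state could be an open BufferedReader that the third argument closes once the stream terminates:

    // Sketch: generate with a resource as state, cleaned up by the stateConsumer
    Flux<String> lines = Flux.generate(
            () -> new BufferedReader(new FileReader("data.txt")),       // open the resource as the state
            (BufferedReader reader, SynchronousSink<String> sink) -> {
                try {
                    String line = reader.readLine();
                    if (line == null) {
                        sink.complete();    // end of file: terminate the stream
                    } else {
                        sink.next(line);    // emit exactly one element per callback
                    }
                } catch (IOException e) {
                    sink.error(e);
                }
                return reader;              // the state itself does not change
            },
            reader -> {                     // runs after complete/error/cancel
                try {
                    reader.close();
                } catch (IOException e) {
                    // ignore close failures in this sketch
                }
            });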

2.2.2 Create

create is a more advanced way of creating a Flux: it can emit data either synchronously or asynchronously, and it can emit multiple elements per round.

create uses a FluxSink, which likewise provides next, error, and complete methods. Unlike generate, create needs no state value; on the other hand, it can trigger multiple events in its callback, even events that happen at some point in the future.

A common scenario for create is bridging an existing API into a reactive one, for example an asynchronous, listener-based API.

Write an event source first:

    public class MyEventSource {

        private List<MyEventListener> listeners;

        public MyEventSource() {
            this.listeners = new ArrayList<>();
        }

        public void register(MyEventListener listener) {    // 1
            listeners.add(listener);
        }

        public void newEvent(MyEvent event) {
            for (MyEventListener listener : listeners) {
                listener.onNewEvent(event);     // 2
            }
        }

        public void eventStopped() {
            for (MyEventListener listener : listeners) {
                listener.onEventStopped();      // 3
            }
        }

        @Data
        @NoArgsConstructor
        @AllArgsConstructor
        public static class MyEvent {   // 4
            private Date timestemp;
            private String message;
        }
    }
    1. Register the Listener;
    2. Issue a new event to the listener;
    3. Tells the listener that the event source has stopped;
    4. The event class, defined with Lombok annotations.

Next, prepare a listener interface that can listen for the two kinds of events issued at points 2 and 3 above: (1) a new MyEvent arrives, and (2) the event source stops. As follows:

    public interface MyEventListener {
        void onNewEvent(MyEventSource.MyEvent event);
        void onEventStopped();
    }

The logic of the following test method is: create a listener and register it with the event source; whenever the listener receives an event callback, it passes the event to the sink provided by Flux.create, thereby turning the series of events into an asynchronous event stream:

    @Test
    public void testCreate() throws InterruptedException {
        MyEventSource eventSource = new MyEventSource();    // 1
        Flux.create(sink -> {
            eventSource.register(new MyEventListener() {    // 2
                @Override
                public void onNewEvent(MyEventSource.MyEvent event) {
                    sink.next(event);       // 3
                }

                @Override
                public void onEventStopped() {
                    sink.complete();        // 4
                }
            });
        }).subscribe(System.out::println);      // 5

        for (int i = 0; i < 20; i++) {      // 6
            Random random = new Random();
            TimeUnit.MILLISECONDS.sleep(random.nextInt(1000));
            eventSource.newEvent(new MyEventSource.MyEvent(new Date(), "event-" + i));
        }
        eventSource.eventStopped();     // 7
    }
    1. Event source;
    2. Registers a listener created with an anonymous inner class to the event source;
    3. The listener sends the event back through sink when the event callback is received;
    4. When the listener receives a callback from the source stop, it sends the completion signal through the sink;
    5. Triggers the subscription (no events have been generated at this point);
    6. The loop produces 20 events, each after a random delay of no more than 1 second;
    7. Finally, stop the event source.

Run this test method and the 20 MyEvents are printed out one after another.

If create in the method above is replaced with generate, an exception is thrown:

java.lang.IllegalStateException: The generator didn't call any of the SynchronousSink method

which proves that generate does not support this asynchronous approach.

create also has a variant method, push, which is likewise suitable for generating event streams. Similar to create, push can be asynchronous and can use the various back-pressure strategies mentioned above, so the example above could just as well be implemented with push. The difference is that with push, next, complete, and error must all be called from the same thread.
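As a sketch of that substitution (assuming the MyEventSource, MyEventListener, and eventSource from the test above are in scope), only the factory method changes:

    // Sketch: the same listener bridge built with push instead of create;
    // push assumes next/complete are always invoked from a single producer thread
    Flux<MyEventSource.MyEvent> bridge = Flux.push(sink ->
            eventSource.register(new MyEventListener() {
                @Override
                public void onNewEvent(MyEventSource.MyEvent event) {
                    sink.next(event);
                }

                @Override
                public void onEventStopped() {
                    sink.complete();
                }
            }));
    bridge.subscribe(System.out::println);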

In addition to next, complete, and error, FluxSink also has an onRequest method that can be used to respond to a downstream subscriber's request events. That way, not only can the upstream push data downstream when it is ready, as in the previous example, but the downstream can also pull data that is already available from the upstream. This is a hybrid push/pull mode. For example:

    Flux<String> bridge = Flux.create(sink -> {
        myMessageProcessor.register(
          new MyMessageListener<String>() {
            public void onMessage(List<String> messages) {
              for (String s : messages) {
                sink.next(s);   // 1
              }
            }
        });
        sink.onRequest(n -> {   // 2
            List<String> messages = myMessageProcessor.request(n);  // 3
            for (String s : messages) {
               sink.next(s);
            }
        });
        ...
    });
    1. Push mode: the upstream actively sends data downstream;
    2. Called when the downstream makes a request;
    3. Responds to the downstream request by querying for messages that are already available.
2.2.3 Hands-on: a Docker event push API

Docker provides a command for listening to events: docker events. Once started, it keeps listening to the Docker daemon's events and prints them out continuously, much like the top command, or the mongostat described earlier. Docker's Java client, DockerClient, provides a callback-based API for the same thing, so we can use Reactor's create method to convert this callback-based API into a reactive stream in which each element is a single Docker event.

1) Testing DockerClient

First, start Docker.

Then, continuing with the webflux-demo Maven module from Chapter 1, add the Docker client dependencies to pom.xml:

        <!-- docker client begin -->
        <dependency>
            <groupId>com.github.docker-java</groupId>
            <artifactId>docker-java</artifactId>
            <version>3.0.14</version>
        </dependency>
        <dependency>
            <groupId>javax.ws.rs</groupId>
            <artifactId>javax.ws.rs-api</artifactId>
            <version>2.1</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.inject</groupId>
            <artifactId>jersey-hk2</artifactId>
            <version>2.26</version>
        </dependency>
        <!-- docker client end -->

Finally write the test method:

    public class DockerEventTest {

        @Test
        public void dockerEventToFlux() throws InterruptedException {
            collectDockerEvents().subscribe(System.out::println);   // 5
            TimeUnit.MINUTES.sleep(1);      // 6
        }

        private Flux<Event> collectDockerEvents() {
            DockerClient docker = DockerClientBuilder.getInstance().build();    // 1
            return Flux.create((FluxSink<Event> sink) -> {
                EventsResultCallback callback = new EventsResultCallback() {    // 2
                    @Override
                    public void onNext(Event event) {   // 3
                        sink.next(event);
                    }
                };
                docker.eventsCmd().exec(callback);      // 4
            });
        }
    }
    1. Creates the DockerClient; by default it connects to tcp://localhost:2375 (2375 is Docker's default port). You can connect to a Docker daemon at a specific IP and port via DockerClientBuilder.getInstance("tcp://192.168.0.123:2375").build(), but pay attention to the interface the Docker daemon listens on and to the firewall configuration.
    2. The custom callback class.
    3. When a Docker event occurs, onNext is called back, and the Event object is passed on through the FluxSink's next method.
    4. Starts listening for Docker events.
    5. Prints the events by subscribing to the stream.
    6. The main thread would otherwise return immediately, so sleep for 1 minute.

OK, check the effect.

For comparison, first run docker events in one terminal, then perform Docker operations in another terminal, for example:

docker run -it -m 200M --memory-swap=200M progrium/stress --vm 1 --vm-bytes 300M

progrium/stress is a container image for stress testing. -m 200M limits the container to at most 200M of memory, while --vm 1 --vm-bytes 300M makes the stress test spawn a worker that tries to allocate 300M, so an out-of-memory (OOM) error occurs and the container is killed.

In the two terminal windows, the one running docker events prints a series of events (if this is the first run of progrium/stress, a pull-image event comes first), while the other shows the output of our test code, where, apart from some logs, the same events can be seen being printed.

2) REST API push to front end

Below, we go one step further and push the Docker events to the browser through a REST API. This builds on Section 3.3 of Chapter 1, so it should feel familiar.

(a) First, define our own DockerEvent. This step is not strictly necessary, but the Event returned by DockerClient has quite a few fields, and for front-end display it would usually be converted into a view object anyway.

DockerEvent.java

    @Data
    @Document(collection = "docker-event")
    public class DockerEvent {
        @Indexed
        private String status;
        @Id
        private String id;
        private String from;
        private Node node;
        private EventType type;
        private String action;
        private String actorId;
        private Long time;
        private Long timeNano;
    }

(b) Next comes the DAO layer: create a DockerEventMongoRepository and add three @Tailable query methods, for querying all events, querying by status, and querying by type + name (for example, the events of a particular container):

DockerEventMongoRepository.java

    public interface DockerEventMongoRepository extends ReactiveMongoRepository<DockerEvent, String> {
        @Tailable
        Flux<DockerEvent> findBy();

        @Tailable
        Flux<DockerEvent> findByStatus(String status);

        @Tailable
        Flux<DockerEvent> findByTypeAndFrom(String type, String from);
    }

(c) Define a CommandLineRunner that starts listening for Docker events once the application has started:

DockerEventsCollector.java

    @Slf4j
    @Component
    public class DockerEventsCollector implements CommandLineRunner {

        private DockerEventMongoRepository dockerEventMongoRepository;
        private MongoTemplate mongo;    // 1

        public DockerEventsCollector(DockerEventMongoRepository dockerEventMongoRepository, MongoTemplate mongo) {  // 1
            this.dockerEventMongoRepository = dockerEventMongoRepository;
            this.mongo = mongo;
        }

        @Override
        public void run(String... args) {
            mongo.dropCollection(DockerEvent.class);    // 2
            mongo.createCollection(DockerEvent.class,
                    CollectionOptions.empty().maxDocuments(200 /* value garbled in the source */).size(100000).capped());   // 2
            dockerEventMongoRepository.saveAll(collect()).subscribe();  // 6
        }

        private Flux<DockerEvent> collect() {   // 3
            DockerClient docker = DockerClientBuilder.getInstance().build();
            return Flux.create((FluxSink<Event> sink) -> {
                EventsResultCallback callback = new EventsResultCallback() {
                    @Override
                    public void onNext(Event event) {
                        sink.next(event);
                    }
                };
                docker.eventsCmd().exec(callback);
            })
                    .map(this::trans)   // 4
                    .doOnNext(e -> log.info(e.toString()));     // 5
        }

        private DockerEvent trans(Event event) {    // 4
            DockerEvent dockerEvent = new DockerEvent();
            dockerEvent.setAction(event.getAction());
            dockerEvent.setActorId(Objects.requireNonNull(event.getActor()).getId());
            dockerEvent.setFrom(event.getFrom() == null ? null : event.getFrom().replace("/", "_"));
            dockerEvent.setId(UUID.randomUUID().toString());
            dockerEvent.setNode(event.getNode());
            dockerEvent.setStatus(event.getStatus());
            dockerEvent.setTime(event.getTime());
            dockerEvent.setTimeNano(event.getTimeNano());
            dockerEvent.setType(event.getType());
            return dockerEvent;
        }
    }
    1. MongoTemplate is used here; since Spring 4.3, if a class has a single constructor, Spring injects its parameters automatically, so the @Autowired annotation is not needed.
    2. On every launch of the application the "capped" collection for DockerEvent is dropped and recreated, which is convenient for testing; if you create it manually in advance, these two lines can be omitted. If the reactive ReactiveMongoTemplate were used at //1 instead, then because it is asynchronous, then() or thenMany() would have to chain all subsequent operations, e.g. mongo.dropCollection(...).then(mongo.createCollection(...)).thenMany(dockerEventMongoRepository.saveAll(collect())), to guarantee they execute in order (see the sketch after this list).
    3. The method that listens for Docker events.
    4. Converts the returned Event into our own DockerEvent. The DockerEvent.from field is the name of the event subject, such as the image or container name, which may contain a /, so a character substitution is performed; otherwise it would cause problems in the URL.
    5. Prints a log entry (optional).
    6. Saves the collected DockerEvents to MongoDB, triggered by subscribe().
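Here is a hedged sketch of the reactive variant mentioned in note 2, assuming the mongo field is a ReactiveMongoTemplate instead of a MongoTemplate:

    // Sketch: run() with ReactiveMongoTemplate, chaining the async operations in order
    @Override
    public void run(String... args) {
        mongo.dropCollection(DockerEvent.class)
                .then(mongo.createCollection(DockerEvent.class,
                        CollectionOptions.empty().size(100000).capped()))   // recreate the capped collection
                .thenMany(dockerEventMongoRepository.saveAll(collect()))    // then start saving events
                .subscribe();                                               // trigger the whole chain
    }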

(d) The service layer has no logic of its own, so we write the controller directly:

DockerEventController.java

    @Slf4j
    @RestController
    @RequestMapping(value = "/docker/events", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)  // 1
    public class DockerEventController {

        private DockerEventMongoRepository dockerEventMongoRepository;

        public DockerEventController(DockerEventMongoRepository dockerEventMongoRepository) {
            this.dockerEventMongoRepository = dockerEventMongoRepository;
        }

        @GetMapping
        public Flux<DockerEvent> dockerEventStream() {  // 2
            return dockerEventMongoRepository.findBy();
        }

        @GetMapping("/{type}/{from}")
        public Flux<DockerEvent> dockerEventStream(@PathVariable("type") String type,
                                                   @PathVariable("from") String from) {    // 3
            return dockerEventMongoRepository.findByTypeAndFrom(type, from);
        }

        @GetMapping("/{status}")
        public Flux<DockerEvent> dockerEventStream(@PathVariable String status) {  // 4
            return dockerEventMongoRepository.findByStatus(status);
        }
    }

OK, start the test:

As you can see, the small loading icon on the browser tab keeps spinning, indicating that the push is continuously being received, and when Docker operations are performed in the terminal, the resulting events appear in the browser immediately. A request to /docker/events/oom pushes only OOM events, and /docker/events/container/progrium_stress pushes only events from the progrium/stress container.

One more reminder: when there is not a single document in the capped collection, a @Tailable query returns immediately, so wait until there is at least one record in the database (for example, perform a docker pull) before requesting /docker/events in the browser.
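Besides the browser, the stream can also be consumed programmatically; here is a minimal sketch with WebClient (the base URL assumes the application listens on localhost:8080):

    // Sketch: consuming the push API with WebClient (hypothetical port)
    WebClient client = WebClient.create("http://localhost:8080");
    client.get()
            .uri("/docker/events")
            .accept(MediaType.APPLICATION_STREAM_JSON)
            .retrieve()
            .bodyToFlux(DockerEvent.class)
            .subscribe(e -> System.out.println(e.getStatus() + " : " + e.getFrom()));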

