Vert.x core is a collection of Java application programming interfaces (APIs) that we call the core.
The Vert.x core provides features like the following:
· Writing TCP clients and servers
· Writing HTTP clients and servers, including WebSocket support
· The event bus
· Shared data - local maps and clustered distributed maps
· Periodic and delayed actions
· Datagram sockets (UDP)
· DNS (Domain Name System) client
· File system access
· High availability
· Clustering
The functionality in the core is relatively low level - you won't find things like database access, authorization, or high-level web functionality here; you will find those in the Vert.x extensions.
The Vert.x core is small and lightweight, and you only use the parts you need. It can also be embedded entirely into your existing application - we don't force you to structure your application in a special way just to use Vert.x.
You can use the core from any of the other programming languages that Vert.x supports. But here's the cool bit: we don't force you to call the Java API directly from, say, JavaScript or Ruby. After all, different languages have different conventions and idioms, and it would feel strange to force Java idioms on Ruby developers. Instead, we automatically generate an idiomatic equivalent of the core Java API for each language.
From now on, we will just use the word core to refer to the Vert.x core module.
If you are using Maven or Gradle, add the following to the dependencies section of your project descriptor to get access to the Vert.x core API:

Maven (in your project's pom.xml):
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-core</artifactId>
<version>3.3.2</version>
</dependency>

Gradle (in your project's build.gradle file):
compile 'io.vertx:vertx-core:3.3.2'
Let's discuss the different concepts and features in the core.
In the beginning there was Vert.x
You can't do much in Vert.x-land unless you can communicate with a Vertx object!
It is the control centre of Vert.x, and it's how you do pretty much everything, including creating clients and servers, getting a reference to the event bus, setting timers, and many other things.
So how do you get a Vertx instance?
If you're embedding Vert.x, then creating an instance is as simple as the following:

Vertx vertx = Vertx.vertx();
If you are using verticles, you don't need to create an instance yourself - one is already made available to your verticle.
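In a Java verticle, for example, the instance is available through the vertx field inherited from AbstractVerticle. A minimal sketch (the class name and the timer are illustrative only):

```java
import io.vertx.core.AbstractVerticle;

// A minimal, hypothetical verticle: the inherited `vertx` field already
// references the Vertx instance that deployed it.
public class MyVerticle extends AbstractVerticle {
  @Override
  public void start() {
    // Use the instance directly, e.g. to set a one-shot timer
    vertx.setTimer(1000, id -> System.out.println("one second later"));
  }
}
```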
Note: Most applications will only ever need a single Vert.x instance, but it's possible to create multiple instances if you require, for example to isolate event buses or different groups of servers and clients.
Specifying options when creating a Vertx object
When creating a Vertx object, you can also specify options if the defaults aren't right for you:
Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(40));
The VertxOptions object has many settings that allow you to configure things like clustering, high availability, pool sizes, and various other settings. The Javadoc describes all the settings in detail.
Creating a clustered Vert.x object
If you're creating a clustered Vert.x instance (see the Event Bus section for more information on the clustered event bus), then you will normally use the asynchronous variant to create the Vertx object.
This is because it usually takes some time (maybe a few seconds) for the different Vert.x instances in the cluster to group together. During that time, we don't want to block the calling thread, so we give the result asynchronously.
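The asynchronous variant can be sketched like this, assuming the Vert.x 3.x Vertx.clusteredVertx method, which delivers the instance to a handler once the cluster has been joined:

```java
VertxOptions options = new VertxOptions();
Vertx.clusteredVertx(options, res -> {
  if (res.succeeded()) {
    Vertx vertx = res.result();  // the clustered instance, ready to use
  } else {
    // failed to join the cluster
  }
});
```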
Are you fluent?
In the previous examples, you may have noticed a fluent API being used. A fluent API is where multiple method calls are chained together. For example:
request.response().putHeader("content-type", "text/plain").write("some text").end();
This is a common pattern throughout the Vert.x APIs, so get used to it.
Chaining calls like this allows you to write code that's a little less verbose. Of course, if you don't like the fluent approach, we don't force you to use it - you can happily ignore it if you prefer and write your code like this:
HttpServerResponse response = request.response();
response.putHeader("content-type", "text/plain");
response.write("some text");
response.end();
Don't call us, we'll call you
The Vert.x APIs are largely event driven. This means that when things happen in Vert.x that you are interested in, Vert.x calls you by sending you events.
Some examples of events are:
· a timer has fired
· some data has arrived on a socket
· some data has been read from disk
· an exception has occurred
· an HTTP server has received a request
You handle events by providing handlers to the Vert.x APIs. For example, to receive a timer event every second you would do this:
vertx.setPeriodic(1000, id -> {
  // This handler will get called every second
  System.out.println("timer fired!");
});
Or to receive an HTTP request:
server.requestHandler(request -> {
  // This handler will be called every time an HTTP request is received at the server
  request.response().end("hello world!");
});
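For context, the server used above would typically be created and started along these lines (a sketch; port 8080 is just an example):

```java
// Create the server from the Vertx instance
HttpServer server = vertx.createHttpServer();
// Attach the request handler shown above
server.requestHandler(request -> {
  request.response().end("hello world!");
});
// Start listening on an example port
server.listen(8080);
```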
Some time later, when Vert.x has an event to pass to your handler, Vert.x will call it asynchronously.
This leads us to some important concepts in Vert.x:
Don't block me.
With very few exceptions (i.e. some file system operations ending in "Sync"), none of the APIs in Vert.x block the calling thread.
If a result can be provided immediately, it will be returned immediately; otherwise you will usually provide a handler to receive events some time later.
Because none of the Vert.x APIs block threads, you can handle a lot of concurrency using just a small number of threads.
With a conventional blocking API, the calling thread might block when:
· reading data from a socket
· writing data to disk
· sending a message to a recipient and waiting for a reply
· ... and many other situations
In all of the above cases, while your thread is waiting for a result it can't do anything else - it's effectively useless.
This means that if you want a lot of concurrency using blocking APIs, you need a lot of threads to keep your application from grinding to a halt.
Threads have overhead in terms of the memory they require (e.g. for their stacks) and in context switching.
For the levels of concurrency required in many modern applications, a blocking approach just doesn't scale.
Reactor and multi-reactor
As we noted earlier, the Vert.x APIs are event driven - Vert.x passes events to handlers when the events are available.
In most cases, Vert.x calls your handlers using a thread called an event loop.
Because nothing in Vert.x or your application blocks, the event loop can merrily deliver incoming events to different handlers in succession as they arrive.
Because nothing blocks, an event loop can potentially deliver huge numbers of events in a short amount of time. For example, a single event loop can handle many thousands of HTTP requests very quickly.
We call this the reactor pattern.
You may have heard of it before - for example, Node.js implements this pattern.
In a standard reactor implementation, there is a single event loop thread that runs around in a loop, delivering all incoming events to all handlers as they arrive.
The trouble with a single thread is that it can only run on a single core (i.e. CPU core) at any one time, so if you want a single-threaded reactor application (such as a Node.js application) to scale over your multi-core server, you have to start up and manage several different processes.
Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops. By default, we choose the number based on the number of available cores on the machine, but this can be overridden.
This means that a single Vertx process can scale across your server, unlike Node.js.
We call this pattern the multi-reactor pattern, to distinguish it from the single-threaded reactor pattern.
Note:
Even though a Vertx instance maintains multiple event loops, any particular handler will never be executed concurrently, and in most cases (with the exception of worker verticles) it will always be called by the exact same event loop.
The Golden Rule - don't block the event loop
We already know that the Vert.x APIs are non-blocking and won't block the event loop, but that's not much help if you block the event loop yourself in a handler. If you block all of the event loops in a Vertx instance, your application will grind to a complete halt.
So don't do that. You have been warned.
Examples of blocking are:
· Thread.sleep()
· waiting on a lock
· waiting on a mutex or monitor (e.g. a synchronized block)
· doing a long-lived database operation and waiting for a result
· doing a complex calculation that takes a significant amount of time
· spinning in a loop
If any of the above takes a significant amount of time and blocks the event loop, then you should go immediately to the naughty step and await further instructions.
So... what is a significant amount of time?
How long is a piece of string? It really depends on your application and the amount of concurrency you require.
If you have a single event loop and you want to handle 10,000 HTTP requests per second, then clearly each request can't take more than 0.1 milliseconds to process, so you can't block for any more time than that.
The maths is not hard - we leave that as an exercise for the reader.
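For those who want to check the arithmetic anyway, here is the budget from the example above worked through in plain Java (the numbers are the hypothetical ones from the text):

```java
public class EventLoopBudget {
  public static void main(String[] args) {
    long requestsPerSecond = 10_000;   // target load from the example
    double millisPerSecond = 1_000.0;  // one second, in milliseconds
    // If a single event loop must keep up, each request gets at most:
    double budgetMs = millisPerSecond / requestsPerSecond;
    System.out.println(budgetMs + " ms per request");  // 0.1 ms per request
  }
}
```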
If your application is not responsive, it might be a sign that you are blocking an event loop somewhere. To help you diagnose such problems, Vert.x automatically logs warnings if it detects that an event loop hasn't returned for some time. If you see warnings like this in your logs, then you should investigate:
Thread vertx-eventloop-thread-3 has been blocked for 20458 ms
Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.
If you want to turn off these warnings or change the settings, you can do so in the VertxOptions object when creating the Vertx object.
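As a sketch, assuming the Vert.x 3.x VertxOptions method names and units (milliseconds for the check interval, nanoseconds for the execute-time thresholds - check the Javadoc for your version):

```java
Vertx vertx = Vertx.vertx(new VertxOptions()
    // How often the blocked-thread checker runs, in milliseconds
    .setBlockedThreadCheckInterval(5000)
    // How long an event loop may run before a warning is logged, in nanoseconds
    .setMaxEventLoopExecuteTime(10L * 1000 * 1000 * 1000)
    // How long before the logged warning includes a stack trace, in nanoseconds
    .setWarningExceptionTime(20L * 1000 * 1000 * 1000));
```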
Running blocking code
In a perfect world, there will be no war or hunger, all APIs will be written asynchronously, and bunny rabbits will skip hand-in-hand with baby lambs across sunny green meadows.
But... the real world is not like that. (Have you watched the news lately?)
Fact is, many, if not most, libraries, especially in the JVM ecosystem, have synchronous APIs, and many of their methods are likely to block. A great example is the JDBC API - it's inherently synchronous, and no matter how hard it tries, Vert.x cannot sprinkle magic dust on it to make it asynchronous.
As discussed before, you cannot call blocking operations directly from the event loop, as that would prevent it from doing any other useful work. So how can you do this?
You do it by calling executeBlocking, specifying both the blocking code to execute and an asynchronous result handler that is called back when the blocking code has finished executing:
vertx.executeBlocking(future -> {
  // Call some blocking API that takes a significant amount of time to return
  String result = someAPI.blockingMethod("hello");
  future.complete(result);
}, res -> {
  System.out.println("The result is: " + res.result());
});
By default, if executeBlocking is called several times from the same context (e.g. the same verticle instance), then the different executeBlocking calls are executed serially (one after another).
If you don't care about ordering, you can set the ordered argument to false. In that case, any executeBlocking call may be executed in parallel on the worker pool (the worker pool is like a thread pool).
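A sketch of the unordered form, assuming the three-argument executeBlocking overload (blocking code, ordered flag, result handler):

```java
vertx.executeBlocking(future -> {
  // Blocking code, as before
  String result = someAPI.blockingMethod("hello");
  future.complete(result);
}, false, res -> {  // ordered = false: calls from the same context may run in parallel
  System.out.println("The result is: " + res.result());
});
```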
An alternative way to run blocking code is to use a worker verticle.
A worker verticle is always executed with a thread from the worker pool.
By default, blocking code is executed on the Vert.x worker pool, configured with setWorkerPoolSize.
Additional pools can be created for different purposes:
WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool");
executor.executeBlocking(future -> {
  // Call some blocking API that takes a significant amount of time to return
  String result = someAPI.blockingMethod("hello");
  future.complete(result);
}, res -> {
  System.out.println("The result is: " + res.result());
});
The worker executor must be closed when it is no longer needed:

executor.close();
When several worker executors are created with the same name, they share the same pool. The underlying worker pool is destroyed when all the executors using it have been closed.
When an executor is created in a verticle, Vert.x will close it automatically when the verticle is undeployed.
Worker executors can be configured at creation time:
int poolSize = 10;
// 2 minutes
long maxExecuteTime = 120000;
WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool", poolSize, maxExecuteTime);
Note: this configuration is set when the worker pool is created.
Async coordination
Coordination of multiple asynchronous results can be achieved with Vert.x futures. It supports concurrent composition (running several asynchronous operations in parallel) as well as sequential composition (chaining asynchronous operations).
CompositeFuture.all takes several futures as arguments (up to 6) and returns a future that succeeds when all the futures have succeeded and fails when at least one of them has failed:
Future<HttpServer> httpServerFuture = Future.future();
httpServer.listen(httpServerFuture.completer());
Future<NetServer> netServerFuture = Future.future();
netServer.listen(netServerFuture.completer());
CompositeFuture.all(httpServerFuture, netServerFuture).setHandler(ar -> {
  if (ar.succeeded()) {
    // All servers started
  } else {
    // At least one server failed
  }
});
The operations run concurrently, and the handler attached to the returned future is called upon completion of the composition. When one of the operations fails (i.e. one of the passed futures is marked as failed), the resulting future is marked as failed too. When all the operations succeed, the resulting future completes successfully.
Alternatively, you can pass a list of futures (potentially empty):

CompositeFuture.all(Arrays.asList(f1, f2, f3));
While the all composition waits until all futures have succeeded (or one has failed), the any composition waits for the first future to succeed.
CompositeFuture.any takes several futures as arguments (up to 6) and returns a future that succeeds as soon as one of the futures succeeds, and fails when all of them have failed:
Future<String> future1 = Future.future();
Future<String> future2 = Future.future();
CompositeFuture.any(future1, future2).setHandler(ar -> {
  if (ar.succeeded()) {
    // At least one succeeded
  } else {
    // All failed
  }
});
A list of futures can be used as well:

CompositeFuture.any(Arrays.asList(f1, f2, f3));
While all and any implement concurrent composition, the compose method can be used for chaining futures (i.e. sequential composition):
FileSystem fs = vertx.fileSystem();
Future<Void> fut1 = Future.future();
fs.createFile("/foo", fut1.completer());
fut1.compose(v -> {
  // When the file is created (fut1), execute this:
  Future<Void> fut2 = Future.future();
  fs.writeFile("/foo", Buffer.buffer(), fut2.completer());
  return fut2;
}).compose(v -> {
  // When the file is written (fut2), execute this:
  fs.move("/foo", "/bar", startFuture.completer());
},
// mark startFuture as completed if all the steps have been completed,
// or mark it as failed if any step fails
startFuture);
In this example, three operations are chained:
1. a file is created (fut1)
2. something is written to the file (fut2)
3. the file is moved (startFuture)
When these three steps succeed, the final future (startFuture) succeeds. However, if one of the steps fails, the final future fails too.
This example uses the following:
· compose(mapper): when the current future completes, run the given function, which returns a future; when that returned future completes, it completes the composition.
· compose(handler, next): when the current future completes, run the given handler, which completes the given next future.
In this second case, the handler should complete the next future to report its success or failure.
You can use completer(), which completes a future with the operation result or failure. It avoids having to write the traditional: if success then complete the future, else fail the future.
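For illustration, this is what completer() saves you from writing by hand, reusing fut1 and the file-creation step from the example above:

```java
// The traditional way: bridge the AsyncResult to the future manually
fs.createFile("/foo", ar -> {
  if (ar.succeeded()) {
    fut1.complete(ar.result());
  } else {
    fut1.fail(ar.cause());
  }
});

// The equivalent one-liner using completer()
fs.createFile("/foo", fut1.completer());
```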