(7) Comparison of Spring WebClient and RestTemplate Performance -- The Tao of Reactive Spring


This article is part of the series "The Tao of Reactive Spring".
Previously: Spring WebFlux Quick Start | Spring WebFlux Performance Test
Source code for this article

1.4.2 Load analysis of calls to a service with latency

With microservice architectures now prevalent, services inside a large system may invoke each other's HTTP APIs very frequently; Netflix's system, for example, is composed of a large number of microservices.

Suppose service A invokes an API of service B. Between sending the request and receiving the response there may be delays caused by network instability, instability of service B, a slightly longer execution time of the requested API itself, and so on. For service A, which acts as an HTTP client, whether it can handle the requests to and responses from service B asynchronously can make a significant performance difference. Let's simulate this with a simple scenario:

The previous test showed that the /hello/{latency} API of webflux-with-latency can still respond with a stable latency of roughly latency+5 ms under high concurrency, so we use it as the invoked service B to simulate a service with delay. This rules out service B itself as the cause of any significant differences in the test results.
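
For reference, the handler behind service B's /hello/{latency} looks roughly like the following. This is only a sketch of the approach (delaying a Mono by the requested number of milliseconds), not the verbatim code from the previous article, and the greeting string is illustrative:

    @GetMapping("/hello/{latency}")
    public Mono<String> hello(@PathVariable int latency) {
        // simulate a service whose processing takes `latency` milliseconds,
        // without blocking any thread
        return Mono.just("Hello, reactive world!")
                .delayElement(Duration.ofMillis(latency));
    }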

For this test we create two projects that play the role of service A: restTemplate-as-caller and webClient-as-caller. Both expose a /hello/{latency} API of their own, implemented by sending an HTTP request to service B's /hello/{latency} and returning that response data as their own response. The only difference is the HTTP client used: restTemplate-as-caller uses RestTemplate, while webClient-as-caller uses WebClient.

1) restTemplate-as-caller

Use Spring Initializr to create a project with the "Web" dependency (that is, a WebMVC project). The pom.xml dependency is:

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

Set the port number to 8093 and then implement the /hello/{latency} API.
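
The port can be set in application.properties, for example (a minimal snippet, assuming the default Spring Boot property file):

    # src/main/resources/application.properties
    server.port=8093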

HelloController.java

    @RestController
    public class HelloController {
        private final String TARGET_HOST = "http://localhost:8092";
        private RestTemplate restTemplate;

        public HelloController() {
            PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();   // 1
            connectionManager.setDefaultMaxPerRoute(1000);
            connectionManager.setMaxTotal(1000);
            this.restTemplate = new RestTemplate(new HttpComponentsClientHttpRequestFactory(
                    HttpClientBuilder.create().setConnectionManager(connectionManager).build()));
        }

        @GetMapping("/hello/{latency}")
        public String hello(@PathVariable int latency) {
            return restTemplate.getForObject(TARGET_HOST + "/hello/" + latency, String.class);   // 2
        }
    }
    1. Because RestTemplate will issue a large number of requests during the test, we build it on top of an HTTP connection pool in the controller's constructor; otherwise the system may run out of available ports and report errors;
    2. Use RestTemplate to request service B and return its response.

Start the webflux-with-latency and restTemplate-as-caller services.

In this test we do not need to analyze the response-time trend across user volumes from 1000 to 10000; we only want to verify that RestTemplate is blocking, so we test directly with 6000 users. The results are as follows:

Throughput is 1651 req/sec, and the 95th-percentile response time is 1622 ms.

This is similar to the 6000-user result of mvc-with-latency in section 1.4.1, which shows that RestTemplate is indeed blocking. Admittedly, a small @Test would be enough to detect whether it blocks (a sketch of such a test appears after the list below), but my intention goes further: next we carry out a reactive transformation. First, recall two topics covered earlier:

    1. As mentioned at the end of 1.3.3.1, Spring WebMVC + Reactor (spring-boot-starter-web + reactor-core) can also be used to write reactive code in the style of WebFlux;
    2. Section 1.3.2.5 describes how to use the elastic scheduler to turn blocking calls into asynchronous, non-blocking ones.
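
Here is that minimal blocking-detection sketch, assuming webflux-with-latency is running locally on port 8092 and using JUnit 4; the class and method names are illustrative, not from the original source:

    import org.junit.Test;
    import org.springframework.web.client.RestTemplate;

    import static org.junit.Assert.assertTrue;

    public class RestTemplateBlockingTest {

        @Test
        public void getForObjectBlocksCallingThread() {
            RestTemplate restTemplate = new RestTemplate();

            long start = System.currentTimeMillis();
            restTemplate.getForObject("http://localhost:8092/hello/200", String.class);
            long elapsed = System.currentTimeMillis() - start;

            // the calling thread is held for the whole round trip (at least the 200 ms
            // of simulated latency), i.e. getForObject() is synchronous and blocking
            assertTrue(elapsed >= 200);
        }
    }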

Based on these two points, let's change the code. First, add reactor-core to pom.xml:

        <dependency>
            <groupId>io.projectreactor</groupId>
            <artifactId>reactor-core</artifactId>
            <version>3.1.4.RELEASE</version>
        </dependency>

Then make the RestTemplate call asynchronous:

    @GetMapping("/hello/{latency}")
    public Mono<String> hello(@PathVariable int latency) {
        return Mono.fromCallable(() -> restTemplate.getForObject(TARGET_HOST + "/hello/" + latency, String.class))
                .subscribeOn(Schedulers.elastic());
    }

Testing again, the results are significantly improved:

Throughput is 2169 req/sec, and the 95th-percentile response time is 121 ms.

However, using Schedulers.elastic() effectively dispatches each blocking RestTemplate call to a different thread for execution, with the following effect:

Besides the 200 request-processing threads, there are now also the worker threads assigned by Schedulers.elastic(), so the total number of threads soars to more than 1000! In a production environment, however, we typically do not use the elastic thread pool directly, but rather a thread pool with a bounded, manageable number of threads; when RestTemplate has used up all of those threads, additional requests still have to queue.

This time we use a scheduler created with Schedulers.newParallel():

    @RestController
    public class HelloController {
        private final String TARGET_HOST = "http://localhost:8092";
        private RestTemplate restTemplate;
        private Scheduler fixedPool;

        public HelloController() {
            PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
            connectionManager.setDefaultMaxPerRoute(1000);
            connectionManager.setMaxTotal(1000);
            this.restTemplate = new RestTemplate(new HttpComponentsClientHttpRequestFactory(
                    HttpClientBuilder.create().setConnectionManager(connectionManager).build()));
            fixedPool = Schedulers.newParallel("poolWithMaxSize", 400);   // 1
        }

        @GetMapping("/hello/{latency}")
    //    public String hello(@PathVariable int latency) {
    //        return restTemplate.getForObject(TARGET_HOST + "/hello/" + latency, String.class);
    //    }
        public Mono<String> hello(@PathVariable int latency) {
            return Mono.fromCallable(() -> restTemplate.getForObject(TARGET_HOST + "/hello/" + latency, String.class))
                    .subscribeOn(fixedPool);   // 2
        }
    }
    1. Create a thread pool named poolWithMaxSize with a maximum of 400 threads;
    2. Dispatch the blocking call to this thread pool.

Check the number of threads during the test:

As can be seen, there are up to 400 threads named poolWithMaxSize, and RestTemplate now executes on these worker threads rather than on the request-processing threads. Take a look at the final test results:

The throughput, 2169 req/sec, is the same as with the elastic thread pool; the 95th-percentile response time is 236 ms, which does not match the elastic thread pool but is still much better than fully synchronous blocking (RestTemplate executing on the request-processing threads).
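
As a side note, if you are on a newer Reactor version (3.3+, rather than the 3.1.4 used here), Schedulers.boundedElastic() / Schedulers.newBoundedElastic(...) is the scheduler intended for wrapping blocking calls, combining a thread cap with a task queue. A minimal sketch under that assumption, as a fragment of the same controller:

    // assumes Reactor 3.3+; the reactor-core 3.1.4 used in this article does not have it yet
    private final Scheduler boundedPool =
            Schedulers.newBoundedElastic(400, 10_000, "boundedPool");

    @GetMapping("/hello/{latency}")
    public Mono<String> hello(@PathVariable int latency) {
        return Mono.fromCallable(() ->
                restTemplate.getForObject(TARGET_HOST + "/hello/" + latency, String.class))
                .subscribeOn(boundedPool);
    }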

Let's see how the non-blocking WebClient behaves.

2) webClient-as-caller

webClient-as-caller is based on the WebFlux dependency, with the port number set to 8094. Not much to explain; let's look directly at the controller:

    @RestController
    public class HelloController {
        private final String TARGET_HOST = "http://localhost:8092";
        private WebClient webClient;

        public HelloController() {
            this.webClient = WebClient.builder().baseUrl(TARGET_HOST).build();
        }

        @GetMapping("/hello/{latency}")
        public Mono<String> hello(@PathVariable int latency) {
            return webClient
                    .get().uri("/hello/" + latency)
                    .exchange()
                    .flatMap(clientResponse -> clientResponse.bodyToMono(String.class));
        }
    }
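
As a side note, WebClient also offers retrieve(), a slightly more concise alternative to exchange() when you only need the response body; this variant is not from the original article, just an equivalent sketch:

    @GetMapping("/hello/{latency}")
    public Mono<String> hello(@PathVariable int latency) {
        // retrieve() deals with the ClientResponse for us and exposes the body directly
        return webClient
                .get().uri("/hello/" + latency)
                .retrieve()
                .bodyToMono(String.class);
    }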

Run the test for 6000 users:

Throughput is 2195 req/sec, and the 95th-percentile response time is 109 ms.

The key point is that WebClient gets this done beautifully without needing a large number of concurrent threads:

3) Summary

WebClient can handle highly concurrent HTTP requests with a small, fixed number of threads, and it can replace RestTemplate and AsyncRestTemplate for HTTP-based inter-service communication.

For an asynchronous, non-blocking HTTP client, look no further than WebClient~

In the next section, we look at how Netflix, the world's largest video service platform, used asynchronous HTTP clients to transform its microservice gateway.

