Original: https://plumbr.eu/blog/java/how-to-use-asynchronous-servlets-to-improve-performance
Translator's summary: in the first example below, each servlet request sleeps for a random 0 to 2000 ms. The JMeter test fires requests in parallel, so you can think of all these servlet threads sleeping concurrently, each for up to 2000 ms. In the second, asynchronous example, a single worker sleeps 2000 ms once per cycle and then answers every queued request in one pass.
The trick behind the async version is that the servlet hands the request over to a background worker and the servlet thread returns immediately; the worker then completes the parked requests in batches.
This post describes a performance optimization technique applicable to a common problem in modern web applications. Applications nowadays are no longer just passively waiting for browsers to initiate requests; they want to start communication themselves. Typical examples include chat applications, auction houses and so on, the common denominator being that most of the time the connection with the browser sits idle, waiting for a certain event to be triggered.
Applications of this type have developed a problem class of their own, especially when facing heavy load. The symptoms include starved threads, sluggish user interaction, staleness issues and so on.
Based on recent experience with this type of application under load, I thought it would be a good time to demonstrate a solution. Now that Servlet API 3.0 implementations have become mainstream, the solution has become truly simple, standardized and elegant.
But before we jump to demonstrating the solution, we should understand the problem in greater detail. What could be easier for the reader than explaining the problem with the help of some source code:
```java
@WebServlet(urlPatterns = "/blockingservlet")
public class BlockingServlet extends HttpServlet {
  private static final long serialVersionUID = 1L;

  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    try {
      long start = System.currentTimeMillis();
      Thread.sleep(new Random().nextInt(2000));
      String name = Thread.currentThread().getName();
      long duration = System.currentTimeMillis() - start;
      response.getWriter().printf("Thread %s completed the task in %d ms.", name, duration);
    } catch (Exception e) {
      throw new RuntimeException(e.getMessage(), e);
    }
  }
}
```
The servlet above is an example of what the application described could look like:
- Every 2 seconds some event happens, e.g. a stock quote arrives, the chat is updated and so on.
- An end-user request arrives, announcing interest in monitoring certain events
- The thread is blocked until the next event arrives
- Upon receiving the event, the response is compiled and sent back to the client
Let me explain the waiting aspect. We have some external event that happens every 2 seconds. When a new request from an end-user arrives, it has to wait somewhere between 0 and 2000 ms for the next event. To keep things simple, we have emulated this waiting part with a call to Thread.sleep() for a random number of milliseconds between 0 and 2000. So every request waits on average for about one second.
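To convince yourself that a uniformly random wait in [0, 2000) ms really averages out to about one second per request, here is a quick sanity check. It is a sketch of mine, not from the original post; the class name and sample count are arbitrary.

```java
import java.util.Random;

// Back-of-the-envelope check: a uniformly random wait in [0, 2000) ms
// averages out to roughly 1000 ms per request.
public class AverageWaitDemo {

    // Returns the mean of `samples` draws from new Random().nextInt(2000),
    // the same distribution the blocking servlet sleeps on.
    static double averageWaitMillis(int samples) {
        Random random = new Random();
        long total = 0;
        for (int i = 0; i < samples; i++) {
            total += random.nextInt(2000);
        }
        return (double) total / samples;
    }

    public static void main(String[] args) {
        System.out.printf("average simulated wait: %.0f ms%n", averageWaitMillis(1_000_000));
    }
}
```

With a million samples the printed average lands very close to 1000 ms, which is why the discussion below can treat the simulated work as a one-second sleep.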
Now, you might think this is a perfectly normal servlet. In many cases you would be completely correct: there is nothing wrong with the code until the application faces significant load.
In order to simulate this load I created a fairly simple test with some help from JMeter, launching 2,000 threads, each running through several iterations of bombarding the application with requests to /blockingservlet. Running the test against the deployed servlet on an out-of-the-box Tomcat 7.0.42, I got the following results:
- Average response time: 9,324 ms
- Minimum response time: 5 ms
- Maximum response time: 11,651 ms
- Throughput: 193 requests/second
The default Tomcat configuration has 200 worker threads which, coupled with the fact that the simulated work is replaced by a sleep cycle of average duration 1,000 ms, nicely explains the minimum and maximum response times: in each second the 200 threads should be able to complete 200 sleep cycles, one second on average each. Adding context-switching costs on top of this, the achieved throughput of 193 requests/second is pretty close to our expectations.
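The arithmetic above can be written out as a tiny Little's law estimate. This sketch is mine, not from the original post; the inputs (200 worker threads, ~1 s of blocking per request, 2,000 concurrent JMeter threads) are the article's test setup.

```java
// Little's law sketch: with a fixed worker pool and ~1 s of blocking per
// request, expected throughput and queueing delay follow directly.
public class ThroughputEstimate {

    // Each worker completes 1/avgServiceSeconds requests per second.
    static double throughputPerSecond(int workerThreads, double avgServiceSeconds) {
        return workerThreads / avgServiceSeconds;
    }

    // Little's law: L = lambda * W, hence W = L / lambda.
    static double avgResponseSeconds(int concurrentUsers, double throughput) {
        return concurrentUsers / throughput;
    }

    public static void main(String[] args) {
        double throughput = throughputPerSecond(200, 1.0);      // Tomcat default pool, ~1 s sleep
        double response = avgResponseSeconds(2000, throughput); // 2,000 JMeter threads
        System.out.printf("expected throughput: %.0f req/s, expected avg response: %.0f s%n",
                throughput, response);
    }
}
```

The estimate comes out at 200 requests/second and a 10-second average response time, close to the measured 193 requests/second and 9.3-second average.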
The throughput itself might not look too bad for 99.9% of the applications out there. However, looking at the maximum and especially the average response times, the problem starts to look more serious. Getting the worst-case response in 11 seconds instead of the expected 2 seconds is a sure way to annoy your users.
Let us now take a look at an alternative implementation, taking advantage of the Servlet API 3.0 asynchronous support:
```java
@WebServlet(asyncSupported = true, value = "/asyncservlet")
public class AsyncServlet extends HttpServlet {
  private static final long serialVersionUID = 1L;

  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    Work.add(request.startAsync());
  }
}
```
```java
public class Work implements ServletContextListener {
  private static final BlockingQueue<AsyncContext> queue = new LinkedBlockingQueue<>();
  private volatile Thread thread;

  public static void add(AsyncContext c) {
    queue.add(c);
  }

  @Override
  public void contextInitialized(ServletContextEvent servletContextEvent) {
    thread = new Thread(new Runnable() {
      @Override
      public void run() {
        while (true) {
          try {
            Thread.sleep(2000);
            AsyncContext context;
            while ((context = queue.poll()) != null) {
              try {
                ServletResponse response = context.getResponse();
                response.setContentType("text/plain");
                PrintWriter out = response.getWriter();
                out.printf("Thread %s completed the task", Thread.currentThread().getName());
                out.flush();
              } catch (Exception e) {
                throw new RuntimeException(e.getMessage(), e);
              } finally {
                context.complete();
              }
            }
          } catch (InterruptedException e) {
            return;
          }
        }
      }
    });
    thread.start();
  }

  @Override
  public void contextDestroyed(ServletContextEvent servletContextEvent) {
    thread.interrupt();
  }
}
```
This bit of code is a little more complex, so maybe before we start digging into the solution details, I can reveal that this solution performed roughly 35x better latency-wise and roughly 10x better throughput-wise. Equipped with the knowledge of such results, you should be more motivated to understand what is actually going on in the second example.
The servlet itself looks truly simple. Two facts are worth outlining though, the first of which declares that the servlet supports asynchronous method invocations:
```java
@WebServlet(asyncSupported = true, value = "/asyncservlet")
```
The second important aspect is hidden in the following line:
```java
Work.add(request.startAsync());
```
in which the whole request processing is delegated to the Work class. The context of the request is stored in an AsyncContext instance holding the request and response that were provided by the container.
Now the second and more complex class, Work, implemented as a ServletContextListener, starts looking simpler. Incoming requests are just queued in the implementation to wait for the notification; this could be an updated bid on the monitored auction or the next message in the group chat that all the requests are waiting for.
The notification arrives every 2 seconds, and we have simplified this as just waiting with Thread.sleep(). When it arrives, all the blocked tasks in the queue are processed by a single worker thread responsible for compiling and sending the responses. Instead of blocking hundreds of threads waiting behind the external notification, we achieved this in a much simpler and cleaner way: batching the interest groups together and processing the requests in a single thread.
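The batching pattern can be demonstrated outside the servlet container with plain java.util.concurrent classes. The sketch below is mine, not from the original post: a CountDownLatch stands in for a parked request, and the 100 ms tick stands in for the 2-second notification interval.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Stripped-down sketch of the batching pattern: many "requests" park in a
// queue, one worker wakes up periodically and answers all of them in one pass.
public class BatchingDemo {

    private static final BlockingQueue<CountDownLatch> queue = new LinkedBlockingQueue<>();

    // Mirrors Work.add(): a request parks itself instead of blocking a thread.
    static void add(CountDownLatch parked) {
        queue.add(parked);
    }

    // One worker drains the whole queue on every tick, mirroring the loop in
    // the ServletContextListener.
    static Thread startWorker(long tickMillis) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(tickMillis);
                    CountDownLatch parked;
                    while ((parked = queue.poll()) != null) {
                        parked.countDown(); // "send the response" for one parked request
                    }
                }
            } catch (InterruptedException e) {
                // shutdown signal, mirrors contextDestroyed()
            }
        });
        worker.start();
        return worker;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = startWorker(100);

        // 1,000 parked "requests": each is just a queue entry, not a blocked thread.
        List<CountDownLatch> requests = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            CountDownLatch r = new CountDownLatch(1);
            requests.add(r);
            add(r);
        }

        // The single worker answers all of them within a couple of ticks.
        boolean answered = requests.get(requests.size() - 1).await(5, TimeUnit.SECONDS);
        worker.interrupt();
        System.out.println("all 1000 requests answered: " + answered);
    }
}
```

Note that a thousand parked requests cost only a thousand queue entries here, whereas the blocking version would need a thousand threads, which is exactly the difference the benchmark numbers reflect.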
And the results speak for themselves: the very same test against the very same Tomcat 7.0.42 with default configuration resulted in the following:
- Average response time: 265 ms
- Minimum response time: 6 ms
- Maximum response time: 2,058 ms
- Throughput: 1,965 requests/second
This specific case was small and synthetic, but similar improvements are achievable in real-world applications.
Now, before you run off to rewrite all your servlets as asynchronous servlets, hold your horses for a minute. The solution works perfectly on a subset of use cases, such as group chat notifications and auction house price alerts. You will most likely not benefit in cases where each request is waiting behind a unique database query to complete. So, as always, I must reiterate my favorite performance-related recommendation: measure everything. Do not guess anything.
Not sure whether your threads behave or are causing problems? Let Plumbr monitor your Java app and tell you if you need to change your code.
But on the occasions when the problem does fit the solution shape, I can only praise it. Besides the now-obvious improvements in throughput and latency, we have elegantly avoided possible thread starvation issues under heavy load.
Another important aspect: the approach to asynchronous request processing is finally standardized. Independent of your favorite Servlet API 3.0-compliant application server, such as Tomcat 7, JBoss 6 or Jetty 8, you can be sure the approach works. No more wrestling with different Comet implementations or platform-dependent solutions, such as the WebLogic FutureResponseServlet.