Understanding the Play framework thread pools



The Play framework is asynchronous from the bottom up, using Iteratees to process data streams asynchronously. Because IO in the Play core never blocks, Play uses far fewer threads than traditional web frameworks.



So if you plan to write blocking IO code, or code that does a lot of CPU-intensive work, you need to know exactly which thread pool carries that workload and tune it accordingly. If you ignore this, blocking IO can easily cause very poor performance in Play. For example, you might see only a few requests per second being processed while CPU usage sits at 5%. By comparison, benchmarks on typical development hardware (such as a MacBook Pro) have shown Play handling hundreds or even thousands of requests per second without breaking a sweat when tuned correctly.


Knowing when you are blocking





The most common place a typical Play application blocks is when it accesses a database. Unfortunately, none of the mainstream databases provide asynchronous drivers for the JVM, so for most databases your only option is blocking IO. A notable exception is ReactiveMongo, a driver that uses Play's Iteratee library to access MongoDB.





Other places where your code may block include:
- accessing a REST/web-service API through a third-party client library (that is, not using Play's asynchronous WS API)
- messaging technologies that only provide synchronous APIs for sending messages
- opening files or sockets directly yourself
- CPU-intensive operations that block by taking a long time to execute


As a general rule, if an API returns a Future it is non-blocking; otherwise it is blocking.





Note that you may be tempted to wrap your blocking code in a Future. This does not make it non-blocking; it simply means the blocking happens on a different thread. You still need to make sure the thread pool you are using has enough threads to handle the blocking operations. For how to configure your application for a blocking API, refer to the Play example templates at http://playframework.com/download#examples.
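The point above can be illustrated with plain java.util.concurrent types, in the style of the document's later Java examples. This is a minimal sketch with illustrative names of my own (not from Play itself): the blocking call still blocks, it just blocks a thread on a dedicated pool instead of the caller's thread.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingWrapExample {
    // A dedicated pool sized for the blocking work (hypothetical name).
    static final ExecutorService blockingPool = Executors.newFixedThreadPool(10);

    // Wrapping the blocking call in a future does not make it non-blocking;
    // it only moves the block off the caller's thread onto blockingPool.
    static CompletionStage<Integer> blockingLookup() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(50); // simulate blocking IO
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return 42;
        }, blockingPool);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(blockingLookup().toCompletableFuture().get());
        blockingPool.shutdown();
    }
}
```

If `blockingPool` runs out of threads, further wrapped calls queue up behind it, which is exactly why the pool must be sized for the blocking workload.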





Conversely, the following kinds of IO do not block:
- the Play WS API
- asynchronous database drivers, such as ReactiveMongo
- sending messages to (and receiving messages from) Akka actors



Play thread pools





Play uses a number of different thread pools for different purposes:





Internal thread pools - these are used internally by the server engine to handle IO. Application code should never run on threads from these pools. Play uses the Akka HTTP server backend by default.
Play default thread pool - this is where all of your application code in Play runs. It is an Akka dispatcher, used by the application's ActorSystem. You can configure it through the Akka configuration, described below.



Using the default thread pool

All actions in the Play framework use the default thread pool. Certain asynchronous operations, such as calling map or flatMap on a Future, require an implicit execution context to execute the given functions. An execution context is essentially another name for a thread pool.
In most cases, the Play default thread pool is the appropriate execution context to use. It can be obtained by injecting it into your Scala source file with @Inject()(implicit ec: ExecutionContext):




class Samples @Inject()(components: ControllerComponents)(implicit ec: ExecutionContext) extends AbstractController(components) {
  def someAsyncAction = Action.async {
    someCalculation().map { result =>
      Ok(s"The answer is $result")
    }.recover {
      case e: TimeoutException =>
        InternalServerError("Calculation timed out!")
    }
  }

  def someCalculation(): Future[Int] = {
    Future.successful(42)
  }
}


Or use a CompletionStage with an HttpExecutionContext in Java code:




import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class MyController extends Controller {

    private HttpExecutionContext httpExecutionContext;

    @Inject
    public MyController(HttpExecutionContext ec) {
        this.httpExecutionContext = ec;
    }

    public CompletionStage<Result> index() {
        // Use a different task with explicit EC
        return calculateResponse().thenApplyAsync(answer -> {
            // uses Http.Context
            ctx().flash().put("info", "Response updated!");
            return ok("answer was " + answer);
        }, httpExecutionContext.current());
    }

    private static CompletionStage<String> calculateResponse() {
        return CompletableFuture.completedFuture("42");
    }
}


This execution context connects directly to the application's ActorSystem and uses its default dispatcher.


Configure the default thread pool


The default thread pool can be configured using standard Akka configuration under the akka namespace in application.conf. The following is Play's default configuration for this thread pool:




akka {
  actor {
    default-dispatcher {
      fork-join-executor {
        # Setting this to 1 instead of 3 seems to improve performance.
        parallelism-factor = 1.0

        # @richdougherty: Not sure why this is set below the Akka
        # default.
        parallelism-max = 24

        # Setting this to LIFO changes the fork-join-executor
        # to use a stack discipline for task scheduling. This usually
        # improves throughput at the cost of possibly increasing
        # latency and risking task starvation (which should be rare).
        task-peeking-mode = LIFO
      }
    }
  }
}


This configuration instructs Akka to create one thread per available processor, with a maximum of 24 threads in the pool.
You can also try the full default Akka configuration:




akka {
  actor {
    default-dispatcher {
      # This will be used if you have set "executor = fork-join-executor"
      fork-join-executor {
        # Min number of threads to cap factor-based parallelism number to
        parallelism-min = 8

        # The parallelism factor is used to determine thread pool size using the
        # following formula: ceil(available processors * factor). Resulting size
        # is then bounded by the parallelism-min and parallelism-max values.
        parallelism-factor = 3.0

        # Max number of threads to cap factor-based parallelism number to
        parallelism-max = 64

        # Setting to "FIFO" to use queue like peeking mode which "poll" or "LIFO" to use stack
        # like peeking mode which "pop".
        task-peeking-mode = "FIFO"
      }
    }
  }
}

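The sizing formula described in the comments above, ceil(available processors * factor) bounded by parallelism-min and parallelism-max, can be sketched as a small helper. This is an illustration of the arithmetic only, not Akka's actual implementation:

```java
public class PoolSizeFormula {
    // ceil(availableProcessors * factor), clamped between parallelism-min
    // and parallelism-max -- the formula from the config comments above.
    static int poolSize(int cores, double factor, int min, int max) {
        int fromFactor = (int) Math.ceil(cores * factor);
        return Math.min(max, Math.max(min, fromFactor));
    }

    public static void main(String[] args) {
        // With the Akka defaults shown above (factor 3.0, min 8, max 64):
        System.out.println(poolSize(4, 3.0, 8, 64));  // 12
        System.out.println(poolSize(2, 3.0, 8, 64));  // 8, the min floor applies
        System.out.println(poolSize(32, 3.0, 8, 64)); // 64, the max cap applies
    }
}
```

With Play's default configuration (factor 1.0, max 24), the same formula yields one thread per core, capped at 24.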

Using other thread pools


In some cases, you may want to dispatch work to other thread pools, such as CPU-intensive work or IO work like database access. To do this, you should first create a thread pool. This is easy in Scala:




val myExecutionContext: ExecutionContext = akkaSystem.dispatchers.lookup("my-context")


In the example above, we used Akka to create the ExecutionContext, but you could just as easily create your own ExecutionContext from a Java Executor, or use a Scala fork-join thread pool. Play also provides play.libs.concurrent.CustomExecutionContext and play.api.libs.concurrent.CustomExecutionContext, both of which can be used to create your own execution contexts. See ScalaAsync and JavaAsync for more details.



To configure this Akka execution context, add the following configuration to your application.conf:




my-context {
  fork-join-executor {
    parallelism-factor = 20.0
    parallelism-max = 200
  }
}


To use this execution context in Scala, you would use the Scala Future companion object:




Future {
  // Some blocking or expensive code here
}(myExecutionContext)

or you could just use it implicitly:

implicit val ec = myExecutionContext

Future {
  // Some blocking or expensive code here
}


Also, refer to the example templates at http://playframework.com/download#examples for examples of how to configure an application for a blocking API.



Class loaders and thread local variables

Class loaders and thread local variables require special handling in a multithreaded environment such as Play.


Application class loader


In a Play application, the thread context class loader may not always be able to load application classes. You should explicitly use the application class loader to load classes instead.



Java code




Class myClass = app.classloader().loadClass(myClassName);


Scala code




val myClass = app.classloader.loadClass(myClassName)


Loading classes explicitly is more important in Play's development mode than in production mode. That is because development mode uses a different class loader so that it can support automatic application reloading. Some Play threads may be bound to a class loader that only knows about a subset of your application's classes.



In some cases, you cannot explicitly use the application class loader. For example, when using a third-party library, you may need to explicitly set the thread context class loader before calling that library's code. If you do this, remember to restore the context class loader to its previous value once you have finished calling the third-party code.


Java thread local variables


In Play, Java code uses thread local variables to look up contextual information, such as the current HTTP request. Scala code does not need thread locals because it can use implicit parameters to pass context instead. Java code needs thread locals to access contextual information without passing context parameters everywhere.



The problem with thread locals is that you lose the thread-local information as soon as control switches to another thread. So if you map a CompletionStage using thenApplyAsync, or use thenApply after the future backing the CompletionStage has completed, and then try to access the HTTP context (for example, the session or request), it will not work. To solve this, Play provides HttpExecutionContext. It lets you capture the current context in an Executor, which you then pass to a CompletionStage's *Async method, such as thenApplyAsync(). When the executor runs your callback, it ensures the thread-local context is restored, so you can access the request/session/flash/response objects.



By injecting an HttpExecutionContext into your component, a CompletionStage can always access the current context. For example:




import play.libs.concurrent.HttpExecutionContext;
import play.mvc.*;

import javax.inject.Inject;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class MyController extends Controller {

    private HttpExecutionContext httpExecutionContext;

    @Inject
    public MyController(HttpExecutionContext ec) {
        this.httpExecutionContext = ec;
    }

    public CompletionStage<Result> index() {
        // Use a different task with explicit EC
        return calculateResponse().thenApplyAsync(answer -> {
            // uses Http.Context
            ctx().flash().put("info", "Response updated!");
            return ok("answer was " + answer);
        }, httpExecutionContext.current());
    }

    private static CompletionStage<String> calculateResponse() {
        return CompletableFuture.completedFuture("42");
    }
}


When you have a custom executor, you can wrap it in an HttpExecutionContext simply by passing it to the HttpExecutionContext constructor.



Best Practices

How best to divide your application's work between different thread pools depends largely on the type of work your application does and how much of that work needs to run in parallel.



There is no one-size-fits-all solution to this problem. You need to understand the blocking-IO requirements of your application and their implications for your thread pools before making the best decision. Load test your application to help tune and verify your configuration.



Note: in a blocking environment, a thread-pool-executor is better than a fork-join executor, because no work stealing occurs. Use a fixed-pool-size, set to the maximum size of the underlying resource.
For example, if a thread pool is used only for database access, then, since JDBC is blocking, its size should equal the number of connections available in the database connection pool. Fewer threads will not consume the available connections effectively; more threads will cause unnecessary contention for connections.



Below are some common profiles that Play framework users may want to use:


Purely asynchronous



In this profile, the application does no blocking IO at all, so nothing ever blocks. The default configuration of one thread per processor fits this use case perfectly, so no additional configuration is needed; the Play default execution context can be used everywhere.



Highly synchronous


This profile matches traditional synchronous-IO web frameworks, such as Java servlet containers, which use large thread pools to handle blocking IO. It works when the majority of actions do synchronous IO (such as accessing a database), and you do not want or need fine-grained control over the concurrency of different kinds of work. This profile is the simplest for handling blocking IO.



In this profile, you simply use the default execution context everywhere, but configure it with a very large number of threads. Because the default thread pool serves both Play requests and database requests, its size should be the maximum size of the database connection pool, plus the number of cores, plus a few extra threads for housekeeping:




akka {
  actor {
    default-dispatcher {
      executor = "thread-pool-executor"
      throughput = 1
      thread-pool-executor {
        fixed-pool-size = 55 # db conn pool (50) + number of cores (4) + housekeeping (1)
      }
    }
  }
}


This profile is recommended for Java applications that do synchronous IO, because dispatching work to other threads is harder in Java.



Also, refer to the example templates at http://playframework.com/download#examples for examples of how to configure your application for a blocking API.


Many specific thread pools


This profile is for when you want to do a lot of synchronous IO, but also want precise control over exactly how many of each kind of operation your application performs at once. In this profile, you do only non-blocking operations on the default execution context and dispatch blocking operations to various specific execution contexts.
In this case, you might create several execution contexts for different kinds of operations, like this:




object Contexts {
  implicit val simpleDbLookups: ExecutionContext = akkaSystem.dispatchers.lookup("contexts.simple-db-lookups")
  implicit val expensiveDbLookups: ExecutionContext = akkaSystem.dispatchers.lookup("contexts.expensive-db-lookups")
  implicit val dbWriteOperations: ExecutionContext = akkaSystem.dispatchers.lookup("contexts.db-write-operations")
  implicit val expensiveCpuOperations: ExecutionContext = akkaSystem.dispatchers.lookup("contexts.expensive-cpu-operations")
}


They can be configured like this:




contexts {
  simple-db-lookups {
    executor = "thread-pool-executor"
    throughput = 1
    thread-pool-executor {
      fixed-pool-size = 20
    }
  }
  expensive-db-lookups {
    executor = "thread-pool-executor"
    throughput = 1
    thread-pool-executor {
      fixed-pool-size = 20
    }
  }
  db-write-operations {
    executor = "thread-pool-executor"
    throughput = 1
    thread-pool-executor {
      fixed-pool-size = 10
    }
  }
  expensive-cpu-operations {
    fork-join-executor {
      parallelism-max = 2
    }
  }
}


Then, in your code, you create a future and pass the relevant ExecutionContext to it, so that the future runs on that context.
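The same dispatch-to-a-specific-pool pattern looks like this with the Java CompletionStage API. This is a self-contained sketch: the plain executors below are stand-ins of my own for the Akka dispatchers a real Play app would obtain via app.actorSystem.dispatchers.lookup(...).

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SpecificPoolsExample {
    // Stand-ins for the specific dispatchers configured above
    // (sized like simple-db-lookups and expensive-cpu-operations).
    static final ExecutorService simpleDbLookups = Executors.newFixedThreadPool(20);
    static final ExecutorService expensiveCpuOperations = Executors.newFixedThreadPool(2);

    public static void main(String[] args) throws Exception {
        // Each future runs on the pool chosen for its kind of work.
        CompletionStage<Integer> db =
            CompletableFuture.supplyAsync(() -> 1, simpleDbLookups);
        CompletionStage<Integer> cpu =
            CompletableFuture.supplyAsync(() -> 41, expensiveCpuOperations);

        int total = db.toCompletableFuture().get() + cpu.toCompletableFuture().get();
        System.out.println(total); // 42
        simpleDbLookups.shutdown();
        expensiveCpuOperations.shutdown();
    }
}
```

Capping each pool's size is what enforces the "how many of this kind of operation at once" limit: the CPU pool above can never run more than two tasks concurrently.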



Note: the configuration namespace can be anything you like, as long as it matches the dispatcher ID passed to app.actorSystem.dispatchers.lookup. The CustomExecutionContext class does this for you automatically.


Few specific thread pools





This is a combination of the "many specific thread pools" and "highly synchronous" profiles. You do most simple IO on the default execution context, with its number of threads set reasonably high (for example, 100), and dispatch expensive operations to specific execution contexts, where you can limit how many of them run at once.



Debugging thread pools



A dispatcher has many configuration options, and it can be hard to work out which settings apply and what the defaults are, especially when overriding the default dispatcher. The akka.log-config-on-start configuration option dumps the application's complete configuration when the application starts:




akka.log-config-on-start = on


Note that you must set the Akka logger to DEBUG level for the output to show up. Add the following to logback.xml:




<logger name="akka" level="DEBUG" />


Once you see the HOCON output in the logs, you can copy and paste it into an example.conf file and view it in IntelliJ IDEA, which supports HOCON syntax. You should see your settings merged into the Akka dispatcher configuration, so if you have overridden thread-pool-executor you will see something like:




{
  # Elided HOCON...
  "actor": {
    "default-dispatcher": {
      # application.conf @ file:/users/wsargent/work/catapi/target/universal/stage/conf/application.conf: 19
      "executor": "thread-pool-executor"
    }
  }
}


Also note that Play configures its development and production modes differently. To make sure your thread pools are configured correctly, you should run Play with its production configuration.

