Talking about High Concurrency (II): A Case Study of Thread Confinement and the Design Ideas Behind It


When a high-concurrency problem is brought down to the code level, it becomes a multithreading problem. Multithreading is primarily about thread safety (along with liveness issues, performance issues, and so on).

So what is thread safety? The definition below comes from Java Concurrency in Practice, a book I strongly recommend; its authors include several people who helped build the Java language itself, all masters of concurrent programming.

Thread safety means that when multiple threads access a class, the class will always behave correctly.

"Correctly" means the program does exactly what it appears to do: the results of executing it are consistent with the results you expect.


It is important to understand what thread safety is about: the so-called thread-safety problem is always a problem of object state. If the object being processed is stateless (or immutable), or if you can avoid sharing it across multiple threads (thread confinement), then we can rest assured that the object is thread-safe. Only when sharing an object's state among multiple threads is unavoidable do we need to reach for the various thread-synchronization techniques.
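As a quick illustration of the stateless/immutable case, here is a minimal sketch (the Money class is invented for this example, not taken from any particular codebase): all fields are final and there are no setters, so an instance's state can never change after construction and it can be shared freely across threads.

public final class Money {

    private final String currency;
    private final long amountInCents;

    public Money(String currency, long amountInCents) {
        this.currency = currency;
        this.amountInCents = amountInCents;
    }

    // No setters: a "modification" produces a new object, so the state of
    // any existing instance never changes and needs no locking.
    public Money add(long cents) {
        return new Money(currency, amountInCents + cents);
    }

    public String getCurrency() { return currency; }

    public long getAmountInCents() { return amountInCents; }
}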


Scaling this understanding up to the architecture level: when we design business-layer code, the business layer should ideally be stateless, so that it is scalable and can scale out smoothly to cope with high concurrency.


So we can deal with thread safety at several levels:

1. Can the object be made stateless or immutable? Stateless is the safest.

2. Can the object be confined to a single thread (thread confinement)?

3. If neither, which synchronization technique should be used?


My understanding is: dodge multithreading problems wherever you can; only when they truly cannot be avoided do you deal with them head-on.


With that background on thread confinement in place, let's look at the specific techniques and ideas for confining state to a thread:

1. Stack confinement

2. ThreadLocal

3. Program-controlled thread confinement


Stack confinement simply means using local variables as much as possible. Anyone familiar with the Java runtime model knows that local variables live on the thread's own stack, visible only to the current thread and invisible to all others. So local variables are thread-safe.
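A minimal sketch of stack confinement (the method is made up for illustration): every piece of mutable state is a local variable, so even if many threads call countWords at the same time, each call works on its own copies on its own stack.

public class WordCounter {

    // Safe to call from any number of threads concurrently: "count" and
    // "words" are local variables, confined to the calling thread's stack.
    public static int countWords(String text) {
        int count = 0;                        // local, thread-confined
        String[] words = text.split("\\s+");  // local, thread-confined
        for (String w : words) {
            if (!w.isEmpty()) {
                count++;
            }
        }
        return count;
    }
}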


The ThreadLocal mechanism is essentially program-controlled thread confinement, except that Java handles it for you. Look at Java's Thread class and ThreadLocal class:

1. The Thread class maintains a ThreadLocalMap instance variable (threadLocals)

2. ThreadLocalMap is a map structure

3. ThreadLocal's set method gets the current thread, fetches that thread's ThreadLocalMap, and then puts the ThreadLocal object itself in as the key and the value to be stored as the value

4. ThreadLocal's get method gets the current thread, fetches that thread's ThreadLocalMap, and then uses the ThreadLocal object as the key to look up the corresponding value

public class Thread implements Runnable {
    ThreadLocal.ThreadLocalMap threadLocals = null;
}

public class ThreadLocal<T> {

    public T get() {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null) {
            ThreadLocalMap.Entry e = map.getEntry(this);
            if (e != null)
                return (T) e.value;
        }
        return setInitialValue();
    }

    ThreadLocalMap getMap(Thread t) {
        return t.threadLocals;
    }

    public void set(T value) {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null)
            map.set(this, value);
        else
            createMap(t, value);
    }
}

The design of ThreadLocal is simple: give each Thread object an internal map into which data can be stored. The JVM guarantees at the bottom that thread objects cannot see each other's data.

The prerequisite for using ThreadLocal is that a separate object is stored for each thread; the stored object must not be one that is shared by multiple threads, otherwise that object is still not thread-safe.
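A minimal usage sketch (SimpleDateFormat is the classic example here because a single instance of it is not thread-safe; the wrapper class is invented for illustration): withInitial gives every thread its own instance, so no thread ever touches another thread's copy.

import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadSafeDateFormatter {

    // Each thread that calls get() lazily receives its own SimpleDateFormat,
    // stored under this ThreadLocal key in that thread's ThreadLocalMap.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);   // thread-confined instance, no locking needed
    }
}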

Struts2 uses ThreadLocal to save per-request data, which is exactly the idea of thread confinement. But ThreadLocal's drawback is also obvious: multiple copies of the data must be kept, trading space for efficiency.


Program-controlled thread confinement is not a specific technology but a design idea: by design, all code that manipulates an object's state is placed on one thread, thereby avoiding thread-safety problems.

There are many examples of this, such as the EventLoop design in Netty 5; the way our game backend handles user requests adopts the same design.

The concrete idea is this (see the sketch after this list):

1. Put all the code that touches a user's state into one queue, handled by a single thread

2. Decide whether to isolate state between users, that is, whether each user gets his own queue or multiple users share one queue
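Here is a minimal generic sketch of that idea (it is not Netty's code or our game code; UserSession and its fields are invented for illustration): every mutation of a user's state is submitted as a task to a single-threaded executor, so the tasks run one at a time on the same thread and the state needs no locks.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.IntConsumer;

public class UserSession {

    // One worker thread + one task queue: everything that touches "gold"
    // runs on this single thread, so the field needs no synchronization.
    private final ExecutorService loop = Executors.newSingleThreadExecutor();
    private int gold;

    public void addGold(int amount) {
        loop.execute(() -> gold += amount);           // mutated only on the loop thread
    }

    public void queryGold(IntConsumer callback) {
        loop.execute(() -> callback.accept(gold));    // read also confined to the loop thread
    }

    public void shutdown() {
        loop.shutdown();
    }
}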


Take Netty as an example. An EventLoop is designed as a thread pool. We know a thread pool consists of worker threads plus a task queue; an EventLoop has exactly one worker thread.

When user requests come in, each connection is assigned to an EventLoop, that is, its work is placed into the task queue of that EventLoop and handled by its single thread. The code that handles the user's request is encapsulated as a pipeline (a chain of responsibility), and each pipeline is processed by one thread, ensuring that the same user's state stays confined to one thread.
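To make this concrete, here is a small sketch written against the stable Netty 4 API (it shows the same EventLoop model discussed above; the handler and port are invented for illustration): each accepted channel is registered with one EventLoop, all callbacks for that channel fire on that EventLoop's single thread, so a per-connection field in the handler needs no synchronization.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class EchoServer {

    // Per-connection state: only ever touched by the channel's single EventLoop thread.
    static class CountingHandler extends ChannelInboundHandlerAdapter {
        private int messages;   // no volatile, no lock: confined to one EventLoop thread

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            messages++;
            ctx.writeAndFlush(msg);   // echo the message back
        }
    }

    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();   // each EventLoop = one thread + one task queue
        try {
            new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new CountingHandler());
                        }
                    })
                    .bind(8080).sync()
                    .channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}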

For more on Netty's EventLoop, see Netty 5 Source Analysis (II): Threading Model Analysis.


The problem here is also obvious: if multiple users are put into one queue and handled by one thread, then how fast an earlier user's request is processed directly affects how long later users have to wait.


Our game server design uses one task queue per user. The code that processes a task is wrapped as a Runnable, so Runnables from many users can be handed to a thread pool for execution. That way multiple users are processed concurrently, while each user's state handling stays confined to that user's own task queue, free of interference.
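A minimal sketch of this per-user serial queue (the SerialExecutor below follows the well-known pattern from the java.util.concurrent.Executor Javadoc; it is an illustration, not our actual game code): give each user one SerialExecutor in front of a shared thread pool, and that user's Runnables run strictly one after another while different users' tasks run in parallel.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// One SerialExecutor per user: tasks of the same user execute one by one,
// tasks of different users execute in parallel on the shared pool.
public class SerialExecutor implements Executor {

    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private final Executor sharedPool;
    private Runnable active;

    public SerialExecutor(Executor sharedPool) {
        this.sharedPool = sharedPool;
    }

    @Override
    public synchronized void execute(Runnable task) {
        tasks.add(() -> {
            try {
                task.run();
            } finally {
                scheduleNext();   // only after this task finishes may the next one start
            }
        });
        if (active == null) {
            scheduleNext();
        }
    }

    private synchronized void scheduleNext() {
        if ((active = tasks.poll()) != null) {
            sharedPool.execute(active);
        }
    }
}

A map from user id to that user's SerialExecutor is then enough to route every incoming request to the right queue.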


However, there is still a problem: the thread pool's worker threads and task queue are bounded, so a single task must be processed quickly; otherwise a large number of requests will back up in the task queue with no time to be processed, and once the queue is full, subsequent requests cannot get in at all.

If an unbounded task queue is used instead, every request can get in, but then under high concurrency a flood of requests will exhaust system memory and trigger an OOM.

So a common design idea is as follows:

1. Use a bounded task queue and a limited number of worker threads, so that high concurrency can be absorbed smoothly without blowing up memory (see the sketch after this list)

2. A single request must be processed quickly, ideally in no more than 100 ms

3. If a single task takes too long to process, split it into smaller tasks and run them over several executions

4. If the task cannot be made small or fast, turn the synchronous operation into an asynchronous one: the method returns immediately after submitting the work instead of waiting for the result, and the result is handled asynchronously by another thread, for example by polling the processing state from a dedicated timer thread, or through an asynchronous callback
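A minimal sketch of point 1 (the thread count, queue capacity, and rejection policy are arbitrary choices for illustration): a ThreadPoolExecutor with a fixed number of workers and a bounded queue, so memory cannot be blown up, plus an explicit policy for what happens when the queue is full.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {

    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                8, 8,                                    // a limited, fixed number of worker threads
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(10_000),        // bounded task queue: memory cannot blow up
                new ThreadPoolExecutor.AbortPolicy());   // when the queue is full, reject instead of piling up
    }
}

For point 4, the submitting method would hand the work to such a pool and return immediately, delivering the result later through a callback or a CompletableFuture instead of blocking for it.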






