From Java multithreading comprehension to cluster distributed and network design analysis

Source: Internet
Author: User

Multithreading is used everywhere in Java applications; it is hard to find a modern system that does not rely on it, and deciding when and how to apply it is often the first design choice you face. The body of knowledge around Java multithreading is also large, so this article introduces the commonly used parts first and leaves the more complicated material for later articles if needed. It covers six topics:

1. When should you choose multithreading in application development?
2. What should you pay attention to when using multithreading?
3. How are thread states controlled, and how do you resolve deadlocks?
4. How do you design a scalable multithreaded processor?
5. From multithreading to multiple hosts: clusters.
6. Multithreading in web applications and the principle of long connections.

1. When should you choose multithreading in application development?

The preface briefly touched on multithreaded applications, such as using a few threads in a web downloader to throttle download traffic. That is only a small trick, and it leaves many problems unsolved, but with a small number of users those problems rarely surface.

In everyday life, multithreading is the idea of handing many similar tasks to many people to complete in parallel, with a main thread acting as the dispatcher. The workers can be made to depend on the main thread for their existence, or the main thread can be made to wait on them. I have heard many people say that multithreading is pointless on a single-CPU machine. I disagree: a single CPU only means that at any instant the scheduler can execute one low-level instruction, not that multithreading cannot improve efficiency, because work stalls at both the memory level and the CPU level, and the efficiency gap between the two is large. As a simple experiment, let one thread increment a counter one billion times, then let ten threads independently increment 100 million times each. Do not print every value with System.out.println: the machine cannot keep up, and, as shown in an earlier article, the printing itself blocks (especially under concurrency) and would distort the measurement. Even on a single CPU the results can differ noticeably. I do not have a single-core PC at hand, so I cannot provide measured numbers; readers with suitable hardware are encouraged to test it themselves.
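The experiment described above can be sketched as follows. This is a minimal, illustrative benchmark, not a rigorous one (JIT warm-up and loop optimization can skew the timings); the class and method names are mine, and each worker counts locally and merges once at the end to avoid contention inside the loop, as the text advises against per-iteration printing or sharing.

```java
import java.util.concurrent.atomic.LongAdder;

public class CountBenchmark {
    // Single thread increments a local counter n times.
    static long countSingle(long n) {
        long c = 0;
        for (long i = 0; i < n; i++) c++;
        return c;
    }

    // Split the same work across `threads` threads, each counting n/threads times.
    static long countParallel(long n, int threads) throws InterruptedException {
        LongAdder total = new LongAdder();
        long slice = n / threads;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                long c = 0;
                for (long j = 0; j < slice; j++) c++;
                total.add(c);       // one shared update per thread, not per increment
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return total.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        long n = 1_000_000_000L;
        long t0 = System.nanoTime();
        long a = countSingle(n);
        long t1 = System.nanoTime();
        long b = countParallel(n, 10);
        long t2 = System.nanoTime();
        System.out.printf("single: %d in %d ms, parallel: %d in %d ms%n",
                a, (t1 - t0) / 1_000_000, b, (t2 - t1) / 1_000_000);
    }
}
```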

Today's systems are inseparable from multithreaded thinking; clustering and distribution can both be understood as the same principle. So what is that principle, and what distinguishes multithreading from multiprocessing?

The simplest way to distribute work is actually multiple processes, similar to the vertical separation of a system: different business subsystems run on different nodes and do not interfere with each other. But with multiple processes, the cost of allocating and releasing resources is high, and those resources are not just CPU time. A thread is a finer-grained unit inside a process: one process can create n threads, and those threads compete for CPU time in parallel. On a multi-core machine, this concurrency brings a dramatic performance improvement. The underlying principle is to extract the maximum performance from limited resources (while still keeping some headroom; do not try to squeeze out more than the machine can give).

There are three common ways to implement multithreading in Java:

1. Inherit from the Thread class and override the run() method.

2. Implement the Runnable interface and implement the run() method.

3. Implement the Callable interface and implement the call() method (which has a return value).
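The three ways above can be sketched in one small program. This is a minimal illustration; the class name and the example values are mine. The Callable case uses FutureTask, which is the standard bridge between a Callable and a plain Thread.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class ThreeWays {
    // 1. Subclass Thread and override run()
    static class MyThread extends Thread {
        @Override public void run() { System.out.println("from a Thread subclass"); }
    }

    // 3. A Callable returns a value (and may throw); FutureTask runs it on a Thread
    static int callableResult() throws Exception {
        Callable<Integer> calc = () -> 21 + 21;
        FutureTask<Integer> future = new FutureTask<>(calc);
        new Thread(future).start();
        return future.get();        // blocks until call() has finished
    }

    public static void main(String[] args) throws Exception {
        new MyThread().start();

        // 2. Implement Runnable (no return value) and hand it to a Thread
        Runnable task = () -> System.out.println("from a Runnable");
        new Thread(task).start();

        System.out.println("Callable returned " + callableResult());
    }
}
```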

As for invocation, you can call start() directly, or you can use java.util.concurrent.Executors to create a thread pool. The commonly created pools are:

1. Executors.newSingleThreadExecutor() creates a pool that executes tasks sequentially, so you do not need synchronized inside the run() method: the execution order itself provides the serialization.

2. Executors.newCachedThreadPool() creates a pool whose threads execute tasks in parallel, creating new threads on demand and reusing idle ones.

3. Executors.newFixedThreadPool(10) creates a pool of size 10: at most 10 threads run at once, and further tasks wait in the pool's queue. This lets you cap the number of threads while still running tasks in parallel.
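A minimal sketch of the three pool factories in use; the class name and helper are mine. Note the shutdown() calls at the end: pool worker threads are non-daemon by default, so a forgotten pool keeps the JVM alive.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    // Submit one task and wait for its result.
    static String runOnPool(ExecutorService pool) throws Exception {
        Future<String> f = pool.submit(() -> "done on " + Thread.currentThread().getName());
        return f.get();
    }

    public static void main(String[] args) throws Exception {
        // Runs tasks one at a time, in submission order: no synchronized needed.
        ExecutorService single = Executors.newSingleThreadExecutor();
        // Grows on demand and reuses idle threads: good for many short tasks.
        ExecutorService cached = Executors.newCachedThreadPool();
        // Caps concurrency at 10 threads; extra tasks wait in the queue.
        ExecutorService fixed = Executors.newFixedThreadPool(10);

        System.out.println(runOnPool(fixed));

        single.shutdown();
        cached.shutdown();
        fixed.shutdown();   // shut pools down, or their non-daemon workers keep the JVM alive
    }
}
```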

If your system is a web application, try to avoid creating long-lived threads inside it, because that part of the program is mainly controlled by the web container. If you must create threads, create as few as possible, or prefer threads that are scheduled infrequently, even if that means recreating them each time.

If your multithreaded program runs independently and is dedicated to receiving and processing messages, then at least one thread is usually probing continuously (many such programs sleep between probes, for example TimeUnit.MINUTES.sleep(SLEEP_TIME), which sleeps for the given number of minutes). It is best to make such a thread a daemon (background) thread with setDaemon(true), which must be called before start() to take effect. The biggest difference between daemon and non-daemon threads is that daemon threads are automatically killed and recycled once all non-daemon threads have died. Conversely, in an ordinary multithreaded program, even when your main() method (the main thread) finishes, the program does not end while child threads started from main() are still running.
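A minimal sketch of such a probing daemon thread (class and method names are mine). The key lines are setDaemon(true) before start(), and the clean exit on interrupt:

```java
import java.util.concurrent.TimeUnit;

public class DaemonDemo {
    static Thread startProber() {
        Thread prober = new Thread(() -> {
            while (true) {
                // periodic probing work would go here
                try {
                    TimeUnit.SECONDS.sleep(1);   // sleeps in seconds; MINUTES sleeps in minutes
                } catch (InterruptedException e) {
                    return;                      // exit cleanly if asked to stop
                }
            }
        });
        prober.setDaemon(true);                  // must be set BEFORE start(), or it throws
        prober.start();
        return prober;
    }

    public static void main(String[] args) {
        startProber();
        System.out.println("main done");
        // The JVM exits here even though the prober's loop never ends:
        // daemon threads die with the last non-daemon thread.
    }
}
```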

In general, almost every piece of code we run today is multithreaded in some way, but containers do most of that work for us; even the local AWT and Swing toolkits handle much of their control processing asynchronously, though that asynchrony is relatively limited and more of it can be written by the program. Custom multithreading is typically written for background processing that is independent of the container-hosted application. The front end of a web application rarely uses hand-written threads, partly to keep the program small and reduce bugs, and partly because writing multithreaded code well is genuinely hard: it forces the programmer to care about far more than usual and demands a high skill level. But anyone who wants to be a good programmer must understand multithreading, so let us start from a few questions and then work through them:

Suppose a system is dedicated to clock and trigger processing, and may be distributed. How should the inside of such a system be written? Also, the most frustrating and hardest question when writing many threads is: what is the health state of my threads right now? Threads must not silently die; if one does, how do I find out? And once found, how do I handle it (automatically, manually, by restart)?

With these questions in mind, let us move to the topics below.

2. What should you pay attention to when using multithreading?

Multithreading is pleasant to use until it goes wrong; simply put, what puzzles you most about multithreading are its failure modes. But do not be afraid of it: if you fear it you will never conquer it. Once you learn its temperament, there is always a way to tame it.

What states can a thread be in, and what are the rules for transitions between them?

Under what circumstances does a thread typically hang (appear frozen) or die?

How do you detect and handle a thread that has hung or died?

Those are the questions; before answering them, let us mention some extended knowledge points, which are explained in the sections that follow.

A good open-source choice for multithreaded task scheduling is Quartz; an introductory document is available at http://wenku.baidu.com/view/3220792eb4daa58da0114a01.html

That document covers most uses of the framework, but because Quartz itself wraps a great deal, much of the underlying implementation is not obvious, and its thread-pool management is essentially opaque; you can only reach that content through other means.

So before adopting the framework, first learn its features; the next step is to see how to wrap it further into the form that best fits your project.

Data-structure choice also matters a great deal in multithreaded code. The selection of data structures for concurrently shared resources deserves its own discussion, because the techniques are numerous; in particular, JDK 1.5 and 1.6 introduced many concurrent data structures that, in a way reminiscent of Oracle's version-number principle, use in-memory data copies and atomic operations to keep reads and writes consistent while greatly reducing concurrent contention. The optimistic-locking mechanism behind them is an important body of knowledge for high-performance multithreaded design.
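The optimistic pattern behind those structures can be sketched with a compare-and-set (CAS) retry loop on java.util.concurrent.atomic.AtomicLong; the class here is mine, a minimal illustration, not how the JDK classes are actually written internally:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasCounter {
    private final AtomicLong value = new AtomicLong();

    // Optimistic update: read, compute, and retry if another thread got there first.
    long increment() {
        long prev, next;
        do {
            prev = value.get();
            next = prev + 1;
        } while (!value.compareAndSet(prev, next));  // CAS failed -> loop and retry
        return next;
    }

    long get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Thread[] ts = new Thread[10];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 100_000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get());  // 1000000, with no synchronized block anywhere
    }
}
```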

3. State transition control, and how to resolve deadlocks.

3.1. What are the states of a default Java thread? (A default thread is one whose behavior you have not rewritten yourself.)

NEW: the thread has just been created and has done nothing yet; that is, start() has not been called.

BLOCKED: the thread is blocked, waiting to acquire a monitor lock held by another thread (lock contention, or indirectly some network cause); a thread stuck here for a long time is one sign of a hang.

WAITING: the thread is waiting indefinitely for a lock, that is, for a notify() on some resource, giving it a chance at that resource's lock. This is typically tied to a shared object and wrapped in synchronized: when obj.wait() is used, the current thread waits for a notify() on the obj object. That object may be this, in which case the synchronized keyword usually appears on the method itself.

TIMED_WAITING: a time-based wait; when the thread calls sleep() (or a timed wait), it stays in this state until the time elapses and it becomes runnable again.

RUNNABLE: the thread is running, or ready to run as soon as the scheduler gives it a CPU (Java does not expose a separate RUNNING state).

TERMINATED: the thread has finished, and its isAlive() now returns false.
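Several of these states can be observed directly with Thread.getState(); a minimal sketch (class name mine, timing chosen so each observation is reliable on an unloaded machine):

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try { lock.wait(); } catch (InterruptedException ignored) {}
            }
        });
        System.out.println(t.getState());   // NEW: created, start() not yet called
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState());   // WAITING: parked inside lock.wait()
        synchronized (lock) { lock.notify(); }
        t.join();
        System.out.println(t.getState());   // TERMINATED: run() has returned
        System.out.println(t.isAlive());    // false
    }
}
```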

In general these are the default thread states. Some containers and frameworks wrap thread state further, changing both the names and the state content considerably, but if you find the corresponding principle it never departs from this essence.

3.2. Under what circumstances will a thread hang or die?

Locks: two parties acquire locks in crossing order and end up in a deadlock. The program itself may cause this when a shared cache and custom thread-pool management are separated from the container; distributed shared files or a distributed database lock can also be responsible.
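The crossed-lock case can be reproduced in a few lines; this is a deliberately broken sketch (class name mine, threads made daemons so the demonstration itself can still exit):

```java
public class DeadlockDemo {
    static final Object A = new Object(), B = new Object();

    // Returns the two threads' states after they have had time to deadlock.
    static Thread.State[] demo() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (A) {              // t1 takes A first...
                pause(100);
                synchronized (B) { }        // ...then wants B, which t2 holds
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (B) {              // t2 takes B first...
                pause(100);
                synchronized (A) { }        // ...then wants A, which t1 holds
            }
        });
        t1.setDaemon(true);                 // daemons, so the JVM can still exit
        t2.setDaemon(true);
        t1.start(); t2.start();
        Thread.sleep(500);
        return new Thread.State[] { t1.getState(), t2.getState() };
    }

    static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }

    public static void main(String[] args) throws InterruptedException {
        // Both end up BLOCKED on each other's monitor: a classic crossed-lock deadlock.
        for (Thread.State s : demo()) System.out.println(s);
    }
}
```

The fix is to always acquire the locks in one global order (A then B in both threads), which makes the cycle impossible.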

Network blockage: a network is not feared for being absent, nor for being too fast, but for being intermittently slow; here at home that lack of reliability is all too common. Whenever a call goes over the network (database interaction generally does too), it is easy to produce a hang, that is, suspended animation, and it is hard to determine what state the thread is really in unless a monitoring plan was put in place beforehand.

Can a thread still die in other cases? In my personal experience it is possible but unlikely: I believe a thread operating within a normal, single JVM dies on its own only if the whole system goes down, unless we distrust Sun's Java virtual machine altogether. So we can conclude that in most cases the two sources above are the main causes of thread blockage.

Having understood the great majority of the causes, raised the questions, and analyzed them preliminarily, let us continue to how to solve these problems, or at least reduce their probability to a very low level (there is no 100% highly available environment; we just try to get as close to perfect as possible; even Amazon's cloud has had its staggering moments of downtime).

3.3. How do I detect and handle a thread that has hung or died?

When it comes to detection, anyone learning Java first thinks of try/catch, but a hung thread never throws an exception to you. So how do you know a thread has died?

This has to be solved at the design level. The thread pools that later versions of Java provide are comfortable to use, but for genuinely complex thread management we must design the management ourselves. To see how to detect failures, let us take an example from everyday life and feed it back into the system design.

First, a thread that has died certainly does not know it, like a person who is dead drunk or knocked out. So how do we learn its status? Two realistic ideas: one is to give it a companion (a standby); the other is to give it a leader who takes the group out, and when one of the group goes missing the leader must go looking.

Consider the first idea. The companion normally does nothing but follow its partner; the moment the partner falls, it steps in and takes over the work. This is the redundancy and master/standby switching commonly used in system architecture, possibly one master with several standbys. Cloud computing goes further with remote traffic diversion and dynamic resource scheduling: when there is little to do, some workers do other things or rest to save their strength, and when this side gets busy, others come over to help or stand by. Even if an earthquake or flood strikes here, another site can take over the same work, so external service is never interrupted; that is the legendary 24x7 high-availability service. But such redundancy is large, and the cost is huge.

Now the second idea. There is a boss who keeps an eye on what each underling is doing and whether they are in trouble, and redeploys resources from above when someone is overloaded. This model looks good, but with too many underlings the boss cannot keep up, because allocating resources requires understanding the details beneath; otherwise the leader is not a good leader. So we use several bosses, each leading a small team; resources are deployed within each team under its own boss, and above the bosses sits a director who only cares what the bosses do, not about the underlings' behavior, so the work is spread evenly. Then a new problem appears: the underlings' problems are transparently visible, but what if a boss, or even the director, has an accident? Here we combine the first idea and attach a standby only at those upper levels. The leaf nodes, which are the most numerous, need no standby, which saves a great deal of cost; the nodes above them get standbys. To save the most, only the master node needs one or more standbys, but then recovery becomes more expensive, because recovering state requires searching level by level; generally we do not push cost-saving that far.

These are real-life patterns. How to fold them into a computer system architecture, back in the multithreaded setting of this article, is explored in chapter 4.

4. How to design a scalable multithreaded processor.

In fact, chapter 3 already drew many solutions from the management patterns of everyday life. That is my personal way of approaching problems, because I believe no mathematical algorithm is as complex as human nature itself, and the various devices of daily life can have surprisingly powerful effects in a computer.

If we use no open-source technology and build a multithreading framework from scratch, then based on the analysis above we generally decompose a dedicated multithreaded system into at least two layers: a main thread that bootstraps multiple worker threads to handle the problem. At that point we need to address the following issues:

a) When multiple threads do similar work, how do we control concurrent contention for data, or reduce the granularity of concurrency hotspots?

Method 1: hash distribution. Decompose the data into frames according to its characteristics and distribute each frame by a hash rule. Hash distribution is easy to program and very fast to compute; the time to locate a bucket is almost negligible. Expanding the structure later is cumbersome, but in multithreaded design that rarely needs to be considered.

Method 2: range distribution. The manager knows the approximate distribution of the data in advance and hands reasonably even ranges to the worker threads below; the ranges never overlap. Its extensibility is better and it can be extended arbitrarily. If the decomposition is uncontrolled and there are too many ranges, locating a range becomes relatively slow, but again multithreaded design rarely needs to worry about this, since the program is written by ourselves.

Method 3: bitmap distribution. The data follows a bitmap rule, typically a state flag, and is distributed according to the bitmap; the number of threads can be set to the number of bitmap segments, and each thread operates only on its own segment's data without further coordination. But the number of bitmap values is usually small while the amount of data to process is very large, so a single thread often cannot handle all the data under one bitmap value, and then multiple threads must share a bitmap segment again.
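Method 1 is the easiest to sketch. Here is a minimal, illustrative hash-partitioning scheme (class, keys, and worker count are mine): each key is routed to a fixed worker, so no two workers ever contend for the same key's data and the hot path needs no locking.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class HashPartition {
    // Stable key -> worker mapping; the same key always lands on the same worker.
    static int slotFor(String key, int workers) {
        return Math.floorMod(key.hashCode(), workers);
    }

    public static void main(String[] args) throws InterruptedException {
        int workers = 4;
        List<List<String>> buckets = new ArrayList<>();
        for (int i = 0; i < workers; i++) buckets.add(new ArrayList<>());
        for (String key : Arrays.asList("order-1", "order-2", "order-3", "user-9", "user-10"))
            buckets.get(slotFor(key, workers)).add(key);

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            final int slot = i;
            pool.submit(() -> {
                for (String key : buckets.get(slot))    // only this worker sees these keys
                    System.out.println("worker-" + slot + " handles " + key);
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```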

All three methods have advantages and disadvantages, so in practice we tend to combine them to approach a complete and well-structured design. Of course nothing is perfect; there is only the architecture best suited to the current application environment, so many predictive questions must be considered before designing. This kind of data distribution belongs more to architecture, but architecture is rooted in the same programming ideas; architecture and data-storage distribution will be discussed separately later.

b) How is a thread's death discovered (and handled)?

Besides the threads that do the work, the management thread keeps 1 to n standbys, the number depending on the situation, at least one, so that a failed management thread can be replaced immediately. There should also be a second line that periodically probes the running threads; because it is responsible only for this one thing it is very simple, and within this group any member that dies can be replaced by the others, which restart a new thread in its place. The detection cycle should be neither too fast nor too slow, just acceptable to the application, because when something hangs it is perfectly normal for the application to block for a short while.
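A minimal sketch of such a periodic probe (all names mine, a simplified single-watchdog version of the scheme above): the worker publishes a heartbeat, and the watchdog replaces it when the heartbeat goes stale or the thread dies.

```java
import java.util.concurrent.atomic.AtomicLong;

public class Watchdog {
    // Worker updates this timestamp; the watchdog reads it.
    static final AtomicLong heartbeat = new AtomicLong(System.currentTimeMillis());

    static Thread newWorker() {
        Thread t = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    heartbeat.set(System.currentTimeMillis());  // "I am alive"
                    Thread.sleep(200);                          // real work goes here
                }
            } catch (InterruptedException ignored) { }          // asked to stop
        }, "worker");
        t.setDaemon(true);
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = newWorker();
        worker.start();
        for (int i = 0; i < 5; i++) {           // the watchdog's periodic check
            Thread.sleep(500);
            boolean stale = System.currentTimeMillis() - heartbeat.get() > 1000;
            if (!worker.isAlive() || stale) {
                worker.interrupt();             // give the old worker a chance to exit
                worker = newWorker();           // replace it, like the "standby" above
                worker.start();
                System.out.println("worker replaced");
            }
        }
    }
}
```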

When a thread is found to be blocked during some operation, for the reasons analyzed above, the usual approach is: after detecting the blocked state several times in a row (the count normally comes from configuration), conclude that it is genuinely stuck and call interrupt() on the thread. The executing thread then receives an exception, which is why code running in a thread, especially code doing network operations, needs a try/catch with the working part inside the try. When suspended animation is detected and the external monitor calls interrupt(), the running code jumps into the catch without any contention problem, quickly rolls back what it needs to roll back, and the thread's execution ends with its resources released. Using the stop() method is no longer recommended, because it does not release resources and can cause many problems.
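The worker's side of this protocol looks roughly like the following sketch (names mine; a long sleep stands in for a stuck network call):

```java
public class InterruptibleTask {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                // The blocking work sits inside try, so interrupt() lands in catch.
                Thread.sleep(60_000);           // stands in for a hung network call
                System.out.println("finished normally");
            } catch (InterruptedException e) {
                // The monitor's interrupt() arrives here; roll back and exit.
                System.out.println("interrupted, cleaning up and exiting");
            } finally {
                // close sockets / release resources here; never rely on stop()
            }
        });
        worker.start();
        Thread.sleep(300);                      // the monitor decides the worker is stuck
        worker.interrupt();                     // preferred over the deprecated stop()
        worker.join();
        System.out.println("worker state: " + worker.getState()); // TERMINATED
    }
}
```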

Also, before writing the code, if network operations are involved, make sure you deeply understand the network interaction APIs you use, such as socket interaction. If, for network reasons (typically the IP exists but the port is wrong, or the network segment's route is unreachable), the initial connection fails, a socket may take several minutes before reporting a connection timeout; that is the default. So set a connect timeout in advance to verify the network is reachable, and only then proceed. Note that the socket has another timeout that applies to the ongoing exchange after the connection is made: the former is the connect timeout set before connecting, which is generally very short (2 seconds is already long, because if 2 seconds is not enough to connect, the network is essentially unreachable), while the latter applies while running, where some interactions may legitimately last hours; for those, asynchronous interaction is recommended to keep the operation stable.
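The two timeouts can be set like this; a minimal sketch, where the address 192.0.2.1:9999 is a placeholder for some peer, not a real service:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class TimeoutConnect {
    public static void main(String[] args) {
        try (Socket s = new Socket()) {
            // Connect timeout: fail fast if the peer is unreachable, instead of
            // waiting minutes for the OS default. 2 seconds is usually generous.
            s.connect(new InetSocketAddress("192.0.2.1", 9999), 2_000);
            // Read timeout: a separate limit on how long each read may block
            // AFTER the connection is established.
            s.setSoTimeout(5_000);
            // ... exchange data ...
        } catch (IOException e) {
            System.out.println("connect failed fast: " + e);
        }
    }
}
```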

c) Starting and managing a two-level management thread group:

A main thread controls startup and shutdown. The worker threads can call setDaemon(true) before start(); such threads become daemon (background) threads, which automatically release their resources and die once the main thread and all other non-daemon threads finish and free their resources. If a thread is set as a daemon, other child threads created inside its run() method are automatically created as daemons too (though not if they are created in the constructor, which runs on the creating thread).

Management threads can likewise manage child nodes as a second-level thread group, as long as you are not afraid of writing complex code. It demands very well-written code and very thorough testing to run stably, but once it works, the framework is beautiful, stable, and highly available.

5. From multithreading to multiple hosts: clusters

In fact we have already touched on some distributed knowledge above, also known as the partitioning of data (using PCs in a network environment to achieve something like the partitioning of a single host; such data can basically be called distributed storage).

But the cluster discussed here differs somewhat from that. One can say that distribution contains the concept of clustering, though the usual notion of a cluster still differs in many ways, and application (app) clusters must be treated separately from database clusters.

A cluster generally refers to multiple nodes under the same unit (one machine can also host several nodes) that do almost the same, or similar, things. The connection to multithreading is that a cluster is essentially multithreaded scheduling implemented across a group of hosts. Take an app cluster as a reference: there is a management node that does very little, precisely because we cannot afford to let it go down; although it does little, it is very important, since from it we obtain each node's application deployment, configuration, status, and other information. The agent node, also called the distribution node, distributes requests almost entirely under the management node's control while, of course, ensuring session consistency.

Another way a cluster embodies multithreading: if one node hangs, the rest can replace it without causing total failure. A cluster group is equivalent to a large thread group, with traffic-diversion management and mutual failover, and multiple businesses or tool projects are divided into different cluster groups. This mirrors the multi-group design in the three-tier threading pattern above, each group with its own personalized properties plus shared properties.

A database cluster is considerably more complex than an app cluster. The app's vertical scaling is limited almost only by the capacity of the distribution node, and that part can be tuned, so scaling is very convenient. A database cluster is different: it must guarantee transactional consistency, achieve transaction-level failover, and supply some degree of grid computing power. Much of the complexity sits in memory: data read into memory must be configured across the memory of multiple hosts as if it were one memory (kept coherent through heartbeats), and the cluster needs the ability to expand dynamically. This is one reason extensibility under database clusters has developed the way it has.

Is the app really not as hard as the database? It is hard too, but at a smaller granularity. An app cluster generally need not consider transactions, because a user's session, barring a node failure, is not replicated but always routed to the designated machine, so nodes need not communicate with each other. The coupling granularity lies in the application's own design: some applications inject content into memory, or into a file used as a file cache; when the data changes they first update the database and then modify the memory or notify it of invalidation. The database stays consistent because the cluster uses heartbeat connections, but the app side modifies only its own machine's in-memory information, not other machines', which inevitably makes data read on other machines inconsistent. The solution depends on the project: some do it with inter-node communication, some through a shared buffer (which brings back lock contention on the shared pool resource), and some by other means.

Large-scale system architecture ultimately comes down to data distribution, centralized management, distributed storage and computing, horizontal cutting at the business level, vertical separation aligned with business applications, hash + range + bitmap distribution structures at the data level, remote diversion and disaster recovery, and the integration of standby units with resource allocation. All of this is the multithreaded design idea implemented across distributed units.

6. Multithreading in web applications and the long-connection principle

In web applications, special server tailoring is done for special business services. Some systems face high-concurrency access, even instantaneous high concurrency (often a system fears not sustained high concurrency but an instantaneous spike), yet their individual requests are simple, focused on transactional processing and data-consistency guarantees. Such systems also avoid heavy computation on the database side: computation is done in the app, and the database only stores, fetches, and keeps transactions consistent. These are typically special OLTP systems. By contrast, systems where each operation processes and computes a great deal of data are OLAP-class systems, whose data source is generally OLTP. Each OLAP pass may process a very large amount of data, generally collecting, aggregating, and dumping it by type: data is extracted from OLTP through queries and retrieval under business rules, then organized and stored elsewhere as useful information. That "elsewhere" may or may not be a database (the database has the strongest computational power over data but is the slowest participant in practice, because it must guarantee transactional consistency and locking, its parsing and optimization overheads are large, and interaction with the application goes over the network; so in real applications much data does not necessarily have to live in a database). These two system types differ greatly in design and architecture, while ordinary systems share features of both without being so extreme and need not worry too much. What must be mentioned here is a very special kind of system: one that pushes data in real time under high concurrency. So far I personally do not know which category to merge it into; it
is indeed a very special kind of system.

In such systems, under high-concurrency access, clients on the same platform must receive content in real time, yet such a site cannot afford to resend a large payload to the client each time; the work is certainly completed through many asynchronous interactions. Let us briefly discuss this asynchronous interaction.

At the bottom of web asynchronous interaction sits Ajax, with the other frameworks built on top of it. So how should Ajax control the interaction to obtain near-real-time content? Should the client constantly refresh the same URL? With very many clients, as on a large website, the servers may go down very quickly unless server cost far exceeds the normal case, and adding servers may also require architectural changes to realize their performance (on the server side, 1 + 1 always yields less than 2x performance; more servers bring more coordination overhead).

The other approach is to push data from the server side to the client. The question is how. Such operations rest on a long-connection mechanism, a connection that is kept open: the client communicates with the back end using Ajax, and as long as the feedback channel is not disconnected it can be regarded as a long connection. Many implementations communicate with the server through a socket; Ajax can also be used, but it requires considerably more handling on top.

The server side must also use a matching strategy; nowadays that is mostly Java NIO, which compares very favorably with traditional blocking I/O (BIO) here. It does not immediately allocate a thread to process each user request; instead it queues the requests, the queuing process controls its own granularity, and threads are allocated from a thread pool as the queue is drained. In other words, the server's response to the client is asynchronous (note this is not Ajax's client-side asynchrony but the server side's asynchronous response to the request), and many requests are not answered immediately. When data changes occur, the server first obtains the client session list and writes output to the clients through the request list, in effect proactively pushing data to them. The benefit of this asynchrony is that the server does not allocate or request a new thread for each client, avoiding the memory overflow that resource allocation under high concurrency would otherwise cause. With those two problems solved, one more remains. When a thread processes a request task, it will not put the task down until it finishes, dies, or hangs; that much is certain (we can cut big tasks into small ones so threads finish much faster). But the server thread may finish computing the data to push quickly while the client, because of network problems of every kind, is slow to receive it, and during that delay the thread is tied up unnecessarily. Should a layer of resumable-transfer caching be inserted in the middle?

Such a cache does more than take over the breakpoint data from the application server: it lets transfers resume from breakpoints and information be output to the client asynchronously, while almost all of the application server's processing time stays focused on data and business processing rather than on network output. There are many techniques here; I will take a later opportunity to discuss network-caching knowledge with you.

