Multi-threaded Programming Learning Summary (reprint)

Source: Internet
Author: User

Concepts and principles of threading

Why use multithreading?

To accomplish tasks more efficiently and make better use of CPU resources, today's operating systems are designed as multitasking systems, and multi-processing and multi-threading are the ways to achieve multitasking.

What are processes and threads?

A process is an application running in memory. Each process has its own separate block of memory, and a process can start multiple threads. A process is the smallest unit to which the operating system allocates resources. A thread is a flow of execution within a process, and a process can run multiple threads. A thread always belongs to a process, and the threads of a process share the memory of that process. A thread is the smallest unit of operating-system scheduling.

How does it work?

    1. Multithreading is a mechanism that allows multiple instruction streams to execute concurrently within a program; each stream is called a thread, and the threads are independent of one another. A thread, also known as a lightweight process, has an independent flow of control just like a process and is scheduled by the operating system. The difference is that a thread does not have its own separate storage space; it shares a storage space with the other threads of the process it belongs to, which makes communication between threads much simpler than communication between processes.
    2. In the Java memory model specifically, because Java is designed as a cross-platform language, memory management clearly needs a unified model. There is a main memory in the system, and all variables in Java are stored in main memory and shared by all threads. Each thread has its own working memory (its call stack), which holds copies of some of the variables from main memory. A thread performs all of its operations on variables in its working memory; threads cannot access each other's working memory directly, and variables are passed between threads through main memory.
    3. The execution of multiple threads is concurrent, that is, logically "simultaneous", regardless of whether it is physically simultaneous. If the system has only one CPU, truly running "at the same time" is impossible. The biggest difference from traditional single-threaded programming is that, because the control flow of each thread is independent, the code of different threads executes in an unpredictable, interleaved order, which brings issues of thread scheduling, synchronization, and so on (see the sketch after this list).
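
As a minimal sketch of this interleaving (the class name and messages below are illustrative, not from the original article), two threads started from the same Runnable print in an order that is not guaranteed:

// Two threads whose output interleaves nondeterministically.
public class InterleavingDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + ": step " + i);
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();   // both threads now run concurrently; the output order varies from run to run
        t2.start();
        t1.join();    // wait for both workers before the main thread exits
        t2.join();
    }
}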

Thread state Transitions

The state transitions of a thread are the basis of thread control. A thread's states can be divided into five: new, ready, running, wait/blocked, and dead. They are described below:

    1. New state: the thread object has been created, but its start() method has not yet been invoked.
    2. Ready state: the thread is eligible to run, but the scheduler has not yet selected it as the running thread. When the start() method is called, the thread first enters the ready state. A thread also returns to the ready state after running, or after coming back from the blocked, waiting, or sleeping state.
    3. Running state: the thread scheduler selects a thread from the ready pool to be the current thread. This is the only way a thread enters the running state.
    4. Wait/blocked/sleeping state: the thread is not allocated CPU time and cannot execute; it may be blocked on I/O or on a synchronization lock. These three situations are really one combined state, and what they have in common is that the thread is still alive but currently lacks the conditions to run. In other words it can still become runnable: if a certain event occurs, it may return to the ready state.
    5. Dead state: when a thread's run() method finishes, the thread is considered dead. Calling stop() or destroy() has the same effect, but neither is recommended: the former produces an exception, and the latter terminates the thread forcibly without releasing its locks. The Thread object may still be alive as an object, but it is no longer a separately executing thread. Once a thread dies it cannot be resurrected; calling start() on a dead thread throws a java.lang.IllegalThreadStateException.
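
The following minimal sketch (class name illustrative) shows some of these states as reported by Thread.getState(); note that java.lang.Thread.State uses slightly different names (NEW, RUNNABLE, TIMED_WAITING, TERMINATED) than the five states listed above:

// Observing thread states with Thread.getState().
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);              // puts the thread into TIMED_WAITING
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println(t.getState());       // NEW: created, start() not yet called
        t.start();
        System.out.println(t.getState());       // typically RUNNABLE right after start()
        Thread.sleep(100);
        System.out.println(t.getState());       // TIMED_WAITING while sleeping
        t.join();
        System.out.println(t.getState());       // TERMINATED after run() returns
    }
}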

Lock Mechanism

The essence of the thread lock mechanism is to solve the mutual-exclusion problem in thread communication. Since the private keyword already guarantees that data objects can only be accessed through methods, we only need to provide a mechanism for methods. That mechanism is the synchronized keyword, which has two usages: the synchronized method and the synchronized block. Note: each class instance corresponds to one lock, and synchronization and mutual exclusion only make sense relative to multiple threads.

Synchronized method

Declare a synchronized method by adding the synchronized keyword to the method declaration, with the following syntax:

public synchronized void procData();

Principle of the synchronized method: when multiple threads access the same synchronized method, each must obtain the lock of the class instance on which the method is invoked before it can execute; otherwise the calling thread is blocked. Once a thread is executing the method it holds the lock exclusively, until the method returns and the lock is released. A blocked thread can then obtain the lock and re-enter the executable state. This mechanism ensures that, at any moment and for each class instance, at most one of its member functions declared as synchronized is executing (since at most one thread can hold the lock for that instance). This effectively avoids access conflicts on class member variables, as long as every method that may access those variables is declared synchronized.

In Java, not only class instances but also each class itself corresponds to a lock, so we can also declare a static member function of a class as synchronized to control access to the class's static member variables.
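
A minimal sketch, assuming a hypothetical Counter class: an instance-level synchronized method locks on the Counter instance, while a static synchronized method locks on Counter.class, so the two locks are independent of each other.

public class Counter {
    private int value;                // guarded by the instance lock (this)
    private static int total;         // guarded by the class lock (Counter.class)

    public synchronized void increment() {               // acquires the lock on this instance
        value++;
    }

    public static synchronized void incrementTotal() {   // acquires the lock on Counter.class
        total++;
    }

    public synchronized int get() {
        return value;
    }
}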

The flaw of the synchronized method: declaring a large method as synchronized can greatly hurt efficiency. Typically, the run() method of a thread class should not be declared synchronized, because it runs for the thread's entire lifetime, so a call to any other synchronized method of that class would never succeed. We can, of course, put the code that accesses the class member variables into a dedicated method, declare that method synchronized, and call it from the main method to solve the problem, but Java provides us with a better solution: the synchronized block.

Synchronized block

Declare a synchronized block with the synchronized keyword, using the following syntax:

synchronized (syncObject) {
    // code that requires access control
}

A synchronized block is a block of code in which the code must obtain the lock on the object syncObject (which, as described above, can be a class instance or a Class object) before it can execute; the mechanism is the same as described earlier. Because the block can be any piece of code, and the locked object can be specified arbitrarily, this form is more flexible.
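
A minimal sketch (the class and field names are illustrative): a synchronized block protects only the critical section rather than the whole method, and can lock on a dedicated object.

public class Inventory {
    private final Object stockLock = new Object();  // an arbitrary object used only as a lock
    private int stock;

    public void addStock(int amount) {
        // non-critical work (validation, logging) can run without holding the lock
        if (amount <= 0) {
            return;
        }
        synchronized (stockLock) {   // only this block requires the lock
            stock += amount;
        }
    }
}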

Blocking Mechanism

The essence of the blocking mechanism is to solve the synchronization problem in thread communication. Together, the locking and blocking mechanisms resolve the mutual-exclusion and synchronization issues of thread communication.

The locking mechanism was introduced to resolve access conflicts on shared storage. Looking at how multiple threads access shared resources, the lock mechanism alone is clearly not enough, because a required resource is not necessarily ready to be accessed at any given moment, and conversely, several resources may be ready at the same time. To handle access control in these situations, support for blocking is introduced.

Blocking means pausing the execution of a thread to wait for some condition to occur (such as a resource becoming ready). Java provides a number of methods that support blocking; let us analyze each of them.

    1. sleep() method: sleep() takes a period of time in milliseconds as a parameter and causes the thread to enter the blocked state for the specified time, during which it cannot get CPU time; when the specified time is over, the thread re-enters the executable state. Typically, sleep() is used while waiting for a resource to become ready: after a test finds the condition is not met, the thread blocks for a period and then re-tests, until the condition is satisfied (see the sketch after this list).
    2. suspend() and resume() methods: these two methods are used together. suspend() causes the thread to enter the blocked state, and it does not recover automatically; its corresponding resume() must be called for the thread to re-enter the executable state. Typically, suspend() and resume() are used while waiting for the result of another thread: after a test finds that the result has not yet been produced, the thread blocks, and the other thread calls resume() to wake it up once the result is available.
    3. yield() method: yield() causes the thread to give up its current CPU time, but it does not block the thread; the thread remains in the executable state and may be allocated CPU time again at any moment. Calling yield() is equivalent to the scheduler deciding that the thread has run long enough and switching to another thread.
    4. wait() and notify() methods: these two methods are used together. wait() causes the thread to enter the blocked state, and it has two forms: one takes a period of time in milliseconds as a parameter, the other takes no parameter. With the former, the thread re-enters the executable state when the corresponding notify() is called or when the specified time has elapsed; with the latter, the corresponding notify() must be called.
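
A minimal sketch of the sleep-and-retest pattern from item 1 above (the flag and method names are illustrative):

public class SleepPolling {
    private static volatile boolean resourceReady = false;  // set to true by some other thread

    public static void waitForResource() throws InterruptedException {
        while (!resourceReady) {      // condition not met yet
            Thread.sleep(50);         // block for 50 ms, then test again
        }
        // the resource is ready; continue processing
    }
}
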
Comparison of Blocking Methods
    1. The core difference between pair 2 (suspend()/resume()) and pair 4 (wait()/notify()) is that none of the methods described before wait() releases a lock it holds (if it holds one) while blocking, whereas this pair does the opposite. This core difference leads to a series of differences in the details.
    2. First, all the methods described earlier belong to the Thread class, but this pair belongs directly to the Object class, which means every object has these two methods. Because this pair releases the occupied lock when blocking, and a lock exists for every object, calling wait() on any object causes the current thread to block and the lock on that object to be released; calling notify() on any object causes one arbitrarily selected thread among those blocked by calling wait() on that object to be unblocked (though it does not actually become executable until it re-acquires the lock).
    3. Second, all the methods described earlier can be called anywhere, but this pair must be called inside a synchronized method or block. The reason is simple: only inside a synchronized method or block does the current thread hold the lock, and only then can the lock be released. Likewise, the lock on the object whose wait()/notify() is being called must be held by the current thread so that it can be released. Therefore, the calls must be placed in a synchronized method or block whose locked object is the very object on which wait()/notify() is invoked. If this condition is not met, the program still compiles, but an IllegalMonitorStateException is thrown at run time.

The above characteristics of wait() and notify() determine that they are often used together with synchronized methods or blocks. Comparing them with the operating system's inter-process communication mechanisms reveals their similarity: the synchronized method or block provides functionality similar to operating-system primitives, and their combination can be used to solve all kinds of complex inter-thread communication problems.
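
A minimal sketch of this combination (the class and field names are illustrative): wait() and notify() are called inside synchronized blocks on the same lock object, and the waiting thread re-checks its condition in a loop after every wakeup.

public class ReadyFlag {
    private final Object lock = new Object();
    private boolean ready = false;

    public void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {          // re-check the condition after every wakeup
                lock.wait();          // releases the lock and blocks until notified
            }
        }
    }

    public void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();         // wakes all waiting threads; each must re-acquire the lock
        }
    }
}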

About the wait() and notify() methods
    1. Calling notify() causes one thread, selected arbitrarily from those blocked by calling wait() on the same object, to be unblocked. We cannot predict which thread will be selected, so be careful when programming and avoid problems caused by this uncertainty.
    2. Besides notify(), the method notifyAll() plays a similar role; the only difference is that calling notifyAll() unblocks, all at once, every thread blocked by calling wait() on that object. Of course, only the thread that then obtains the lock can enter the executable state.

When discussing blocking, deadlock cannot be ignored. A brief analysis shows that calls to suspend() and to wait() without a timeout can both produce deadlocks. Unfortunately, Java does not support deadlock avoidance at the language level, so we must take care in our programming to avoid deadlocks.

We have now analyzed the various thread-blocking methods in Java, focusing on wait() and notify(), because they are the most powerful and flexible to use, but that also makes them less efficient and more error-prone. In practice we should use the various methods flexibly in order to achieve our goals.

About the join() method

The join() method can be used to block the current thread while it waits for another thread to die (the thread whose join() method is invoked). A thread object must not call join() on itself inside its own run() method.
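
A minimal sketch (the class name is illustrative): the main thread blocks on join() until the worker thread terminates.

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("worker finished"));
        worker.start();
        worker.join();   // the current (main) thread waits here until 'worker' has died
        System.out.println("main continues after worker has died");
    }
}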

Thread Priority

The priority of a thread represents its importance. When multiple threads are in the executable state and waiting for CPU time, the thread scheduling system decides how to assign CPU time based on each thread's priority: a higher-priority thread has a greater chance of getting CPU time, and a lower-priority thread is not without opportunity, its chance is simply smaller.

You can call the Thread methods getPriority() and setPriority() to access a thread's priority. A thread's priority lies between 1 (MIN_PRIORITY) and 10 (MAX_PRIORITY), and the default is 5 (NORM_PRIORITY).
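
A minimal sketch (class name illustrative) of reading and setting a priority with these constants; the priority is only a hint to the scheduler.

public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> { /* work */ });
        System.out.println(t.getPriority());    // 5, i.e. Thread.NORM_PRIORITY, by default
        t.setPriority(Thread.MAX_PRIORITY);     // 10; a hint to the scheduler, not a guarantee
        t.start();
    }
}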

Daemon Threads and User Threads

Threads can be divided into user threads and daemon threads. A daemon thread is a special kind of thread; it differs from an ordinary thread in that it is not a core part of the application. When all non-daemon threads of an application have terminated, the application terminates even if daemon threads are still running; as long as any non-daemon thread is running, the application does not terminate. Daemon threads are typically used to provide services to other threads in the background. You can call isDaemon() to determine whether a thread is a daemon thread, and setDaemon() to make a thread a daemon thread.

A more intuitive personal understanding of daemon threads: both user threads and daemon threads have their own execution sequence and working stack; the difference is that a daemon thread is not part of the program proper and ends when the user threads it serves end, whereas the program's termination depends on all of its user threads and not on its daemon threads. This difference determines the scenarios in which they are used: a daemon thread typically serves other threads, the garbage collector being a classic example.

It is important to note that setDaemon() must be called on the thread object before start() is called; otherwise an IllegalThreadStateException is thrown.
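
A minimal sketch (names illustrative): a background thread marked as a daemon; the JVM exits as soon as the main (user) thread finishes, even though the daemon's loop never returns.

public class DaemonDemo {
    public static void main(String[] args) {
        Thread heartbeat = new Thread(() -> {
            while (true) {
                System.out.println("heartbeat");
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        heartbeat.setDaemon(true);   // must be set before start()
        heartbeat.start();
        // main() returns here; the daemon thread is killed once no user threads remain
    }
}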

Thread Group Mechanism
    1. Thread groups are a concept unique to Java. In Java a thread group is an object of class ThreadGroup, and every thread belongs to exactly one thread group, which is specified when the thread is created and cannot be changed for the thread's entire lifetime. You can specify the group a thread belongs to by calling a Thread constructor that takes a ThreadGroup parameter; if it is not specified, the thread defaults to the thread group of the thread that created it.
    2. In Java, apart from the pre-built system thread group, all thread groups must be created explicitly. Every thread group other than the system group belongs to another thread group; you can specify the parent group when creating a thread group, and if you do not, it defaults to the group of the current thread. In this way, all thread groups form a tree rooted at the system thread group.
    3. Java allows us to operate on all the threads of a thread group at the same time, for example by calling the appropriate ThreadGroup methods to set the maximum priority of all of them or to interrupt every thread in the group (see the sketch after this list).
    4. Another important role of the thread group mechanism is thread safety. Thread groups let us distinguish threads with different security characteristics by group, treat different groups of threads differently, and apply different security measures through the hierarchical structure of thread groups. The ThreadGroup class provides a number of methods that make it convenient to operate on every group in the thread-group tree and on every thread within a group.
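
A minimal sketch (names illustrative): creating a thread group, placing threads in it, and interrupting every thread in the group at once.

public class GroupDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadGroup workers = new ThreadGroup("workers");
        for (int i = 0; i < 3; i++) {
            new Thread(workers, () -> {
                try {
                    Thread.sleep(10_000);
                } catch (InterruptedException e) {
                    // interrupted together with the rest of the group
                }
            }, "worker-" + i).start();
        }
        Thread.sleep(100);                       // let the workers start sleeping
        System.out.println(workers.activeCount() + " threads in the group");
        workers.interrupt();                     // operates on every thread in the group at once
    }
}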

ThreadLocal

java.lang.ThreadLocal holds thread-local variables. It provides each thread that uses such a variable with its own copy of the value, so every thread can change its copy independently without conflicting with the copies of other threads. From a thread's point of view, it is as if each thread owned the variable outright. Conceptually, a ThreadLocal behaves like a map from threads to values; in the actual implementation, each thread holds its own map whose keys are the ThreadLocal instances and whose values are that thread's copies of the variables.

    • synchronized is used for data sharing between threads, while ThreadLocal is used for data isolation between threads.
    • ThreadLocal mainly solves the problem of inconsistent data under multi-threaded concurrency. It gives every thread its own copy of a data object that would otherwise be accessed concurrently, and the business logic runs against that copy. This consumes memory, but it greatly reduces the performance cost of thread synchronization and the complexity of controlling concurrency (see the sketch after this list).
    • A ThreadLocal can only hold object types, not primitive types. ThreadLocal is much simpler to use than synchronized.
    • Both ThreadLocal and synchronized are used to deal with concurrent access by multiple threads, but there is an essential difference between them. synchronized uses a lock so that a variable or block of code can be accessed by only one thread at a time; ThreadLocal instead provides each thread with its own copy of a variable, so at any given moment different threads are not accessing the same object, isolating the data rather than sharing it. synchronized, by contrast, is used to obtain data sharing when multiple threads communicate.
    • Of course ThreadLocal cannot replace synchronized; they address different problem domains. synchronized implements a locking mechanism and is more complex to use than ThreadLocal.
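
A minimal sketch (names illustrative): each thread gets its own copy of the value held by the ThreadLocal, so concurrent updates never conflict.

public class ThreadLocalDemo {
    // withInitial() creates an independent initial value for every thread that first reads it
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            counter.set(counter.get() + 1);      // updates only this thread's copy
            System.out.println(Thread.currentThread().getName() + " -> " + counter.get());
            counter.remove();                    // avoid leaks when threads are pooled
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}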

Summary
    • Together we have covered the main aspects of Java multithreaded programming, including creating threads and scheduling and managing multiple threads. We are keenly aware of the complexity of multithreaded programming and of the inefficiency that thread-switching overhead brings to multithreaded programs, which should prompt us to think seriously: do we need multithreading at all, and when do we need it?
    • The core of multithreading is that multiple blocks of code execute concurrently, and its essential feature is that the code of different blocks executes in an unordered, interleaved fashion. Whether our program needs multithreading depends on whether this is one of its intrinsic characteristics.
    • If our program does not require multiple blocks of code to execute concurrently, it naturally does not need multithreading. If it requires multiple blocks of code to execute concurrently but does not require the interleaving, then a simple loop can implement it simply and efficiently, and multithreading is still unnecessary. Only when the program fully matches the characteristics of multithreading do the mechanism's strong support for inter-thread communication and thread management become useful, and only then is multithreading worth using.

References: http://programming.iteye.com/blog/158568, http://lavasoft.blog.51cto.com/62575/51926

