Reprinted from: http://www.tuicool.com/articles/VRVFZb

Preface
The multithreading category on this blog has grown to 21 articles, which is a lot of content. In my view, the more miscellaneous the knowledge you learn and think about, the more it needs a thorough summary, so that it sticks in your memory and truly becomes your own. This article summarizes multithreading by listing 40 multithreading questions.
Some of these questions come from major websites and some from my own thinking. Some may already exist online, some may already have answers posted, and some readers may have seen them before, but the point of this article is that every question is answered according to my own understanding, without looking up answers online. So there may be mistakes; if so, please feel free to correct me.
40 Questions Summary
1. What is the use of multithreading?
A question that may seem ridiculous to many people: as long as I can use multithreading, who cares what it is for? In my opinion, that answer is even more ridiculous. Knowing "how to use" something is only knowing that it is so; knowing "why to use it" is knowing why it is so, and only when you reach that level can you say you are truly comfortable with a topic. OK, here is my view on this question:
(1) Take advantage of multi-core CPUs
As the industry advances, today's laptops, desktops and even entry-level business servers are at least dual-core, and 4-core, 8-core or even 16-core machines are not uncommon. A single-threaded program wastes 50% of a dual-core CPU and 75% of a 4-core CPU. So-called "multithreading" on a single-core CPU is fake multithreading: at any moment the processor handles only one piece of logic, and the threads merely switch so quickly that they appear to run simultaneously. Multithreading on a multi-core CPU is real multithreading: it lets your different pieces of logic work at the same time, which is what actually brings out the advantage of multiple cores and makes full use of the CPU.
(2) Prevent blocking
From the point of view of program efficiency, a single-core CPU not only gains nothing from multithreading, it actually gets slower because running multiple threads causes thread context switching. But even on a single-core CPU we still use multithreading, simply to prevent blocking. Imagine a single-core CPU running a single thread: if that thread blocks, say while reading data from a remote server, and no timeout is set, your entire program stops running until the data comes back. Multithreading prevents this problem: several threads run at once, so even if one thread blocks while reading data, the other tasks keep executing.
(3) Easier modeling
This is another advantage that is not so obvious. Suppose there is one large task A. Programming it as a single thread means considering everything at once, and building the model of the whole program becomes troublesome. But if you break this big task A down into several small tasks, say task B, task C and task D, build a model for each of them separately, and run these tasks in separate threads, it becomes much simpler.
2. How to create Threads
A fairly common question; there are generally two ways:
(1) Extend the Thread class
(2) Implement the Runnable interface
As for which is better, there is no need to say: the latter, because implementing an interface is more flexible than extending a class and reduces the coupling between classes; programming to interfaces is also at the core of the six principles of design patterns.
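For illustration, a minimal sketch of the two approaches (the class names and printed messages are made up for this example):

// Approach 1: extend Thread and override run()
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

// Approach 2: implement Runnable (here as a lambda) and hand it to a Thread
public class CreateThreadDemo {
    public static void main(String[] args) {
        new MyThread().start();
        new Thread(() -> System.out.println("running in " + Thread.currentThread().getName())).start();
    }
}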
3. The difference between the start() method and the run() method
Only by calling the start() method do you get multithreaded behavior, with the code in the run() methods of different threads executing alternately. If you just call the run() method directly, the code executes synchronously: one thread's run() method must finish completely before another thread can execute the code inside its own run() method.
4. The difference between the Runnable interface and the Callable interface
A somewhat deeper question, which also shows the breadth of knowledge a Java programmer has acquired.
The run() method declared by the Runnable interface returns void; all it does is execute the code in run(). The call() method in the Callable interface has a return value and is generic, and together with Future or FutureTask it can be used to obtain the result of asynchronous execution.
This is actually a very useful feature, because an important reason why multithreading is harder and more complex than single threading is that multithreading is full of unknowns: has a given thread executed yet? How long has it been running? Has the data we expect already been assigned when it ran? We cannot know; all we can do is wait for the multithreaded task to complete. Callable with Future or FutureTask, however, lets us obtain the result of multithreaded execution, and lets us cancel the task if we have waited too long without getting the data we need. It is genuinely very useful.
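A minimal sketch of Callable with FutureTask; the task body and the returned value are made up for illustration:

import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> task = () -> {
            Thread.sleep(500);   // pretend to do some work
            return 42;
        };
        FutureTask<Integer> futureTask = new FutureTask<>(task);
        new Thread(futureTask).start();
        // get() blocks until the result is ready; a timeout overload plus cancel() avoids waiting forever
        System.out.println("result = " + futureTask.get());
    }
}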
5. The difference between CyclicBarrier and CountDownLatch
Two classes that look a bit alike; both live under java.util.concurrent and both can be used to indicate that code has run to a certain point. The differences are:
(1) With CyclicBarrier, after a thread runs to a certain point it stops and waits until all the threads have reached that point, and then all of them resume together; with CountDownLatch, a thread that reaches the point simply decrements the count by 1 and keeps running
(2) CyclicBarrier can only trigger one task; CountDownLatch can wake up multiple waiting tasks
(3) CyclicBarrier is reusable; CountDownLatch is not: once the count reaches 0, the CountDownLatch can no longer be used (see the sketch below)
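A small sketch contrasting the two behaviors; the thread count of 3 is arbitrary:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class BarrierVsLatchDemo {
    public static void main(String[] args) throws Exception {
        // CyclicBarrier: each worker waits at await() until all 3 have arrived, then all continue
        CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("all arrived"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try { barrier.await(); } catch (Exception ignored) { }
                System.out.println("continue after barrier");
            }).start();
        }

        // CountDownLatch: workers count down and keep running; the waiting thread is released at 0
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(latch::countDown).start();
        }
        latch.await();
        System.out.println("latch reached zero");
    }
}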
6. The role of the volatile keyword
A very important question that every Java programmer who learns and uses multithreading must master. Understanding the role of the volatile keyword requires understanding the Java memory model first; it is not covered here, see question 31. The volatile keyword has two main functions:
(1) Multithreading revolves mainly around the two properties of visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads, meaning that every read of a volatile variable sees the latest written value
(2) What executes at the bottom is not as simple as the high-level language we see. The execution of a Java program goes roughly: Java code, then bytecode, then the corresponding C/C++ code executed according to the bytecode, then machine instructions that interact with the hardware circuits. In reality, to get better performance the JVM may reorder instructions, and under multithreading this can cause unexpected problems. volatile forbids this (semantic) reordering, which of course also reduces code execution efficiency to some extent
From a practical point of view, an important role of volatile is its combination with CAS to guarantee atomicity; for details see the classes under the java.util.concurrent.atomic package, such as AtomicInteger.
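For illustration, a sketch of a volatile flag used for visibility, alongside AtomicInteger, which internally combines a volatile field with CAS; the sleep time and names are arbitrary:

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileDemo {
    private static volatile boolean stop = false;                      // writes are visible to all threads
    private static final AtomicInteger counter = new AtomicInteger();  // volatile value + CAS

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            while (!stop) {
                counter.incrementAndGet();   // atomic increment, no synchronized needed
            }
        });
        worker.start();
        Thread.sleep(100);
        stop = true;                         // the worker sees this write promptly
        worker.join();
        System.out.println("count = " + counter.get());
    }
}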
7. What is thread safety
Another theoretical question with many different answers. I will give the explanation I personally think is best: if your code always produces the same result when executed by multiple threads as it does when executed by a single thread, then your code is thread-safe.
It is worth mentioning that thread safety comes in several levels:
(1) Immutable
Classes like String, Integer and Long are all final; no thread can change their value. To change a value you have to create a new instance, so these immutable objects can be used directly in a multithreaded environment without any synchronization
(2) Absolute thread safety
Callers need no additional synchronization measures, regardless of the runtime environment. Achieving this usually comes at a significant extra cost, and most of the classes in Java that label themselves thread-safe are in fact not absolutely thread-safe. However, absolutely thread-safe classes do exist in Java, for example CopyOnWriteArrayList and CopyOnWriteArraySet
(3) Relative thread safety
Relative thread safety is what we usually mean by thread safety. Take Vector: its add and remove methods are atomic operations that will not be interrupted, but that is as far as it goes. If one thread is iterating over a Vector while another thread adds to it at the same time, a ConcurrentModificationException will be thrown in almost every case; this is the fail-fast mechanism.
(4) Thread-unsafe
Not much to say here: ArrayList, LinkedList, HashMap and so on are all thread-unsafe classes
8. How to get a thread dump file in Java
For dead loops, deadlocks, blocking, pages that open slowly and so on, taking a thread dump is the best way to diagnose the problem. A thread dump is simply the thread stacks, and obtaining them takes two steps:
(1) Get the pid of the Java process; you can use the jps command, and in a Linux environment you can also use ps -ef | grep java
(2) Print the thread stacks; you can use the jstack pid command, and in a Linux environment you can also use kill -3 pid
In addition, the Thread class provides a getStackTrace() method that can also be used to obtain a thread stack. It is an instance method, so it is bound to a specific thread instance, and each call returns the stack that that thread is currently running.
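For instance, a tiny demo (purely illustrative) that prints the stack of the current thread:

public class StackTraceDemo {
    public static void main(String[] args) {
        // getStackTrace() on the current thread returns the frames it is executing right now
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println(frame);
        }
    }
}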
9. What happens to a thread if a run-time exception occurs
If the exception is not caught, the thread stops executing. Another important point: if this thread held the monitor of some object, that object monitor is released immediately
10. How to share data between two threads
You can share an object between the two threads and then wake up and wait via wait/notify/notifyAll or await/signal/signalAll. For example, the blocking queue BlockingQueue is designed precisely for sharing data between threads
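A minimal sketch of two threads sharing data through a BlockingQueue; the queue capacity and message are arbitrary:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SharedQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        // producer thread: put() blocks if the queue is full
        new Thread(() -> {
            try { queue.put("hello"); } catch (InterruptedException ignored) { }
        }).start();
        // consumer thread: take() blocks until an element is available
        new Thread(() -> {
            try { System.out.println("got: " + queue.take()); } catch (InterruptedException ignored) { }
        }).start();
    }
}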
11. What is the difference between the sleep method and the wait method?
A frequently asked question. Both the sleep method and the wait method can give up the CPU for some period of time; the difference is that if the thread holds the monitor of an object, the sleep method does not release that monitor, whereas the wait method does
12. What is the role of the producer consumer model?
The question is very theoretical, but it is important:
(1) It improves the efficiency of the whole system by balancing the producers' production capacity against the consumers' consumption capacity; this is the most important role of the producer-consumer model
(2) Decoupling, which is a secondary role of the producer-consumer model. Decoupling means there are fewer connections between producers and consumers; the fewer the connections, the more they can evolve independently without being constrained by each other
13. What is the use of ThreadLocal?
Simply put, ThreadLocal trades space for time. Each thread maintains its own ThreadLocal.ThreadLocalMap, which is implemented with open addressing, to isolate its data. Since the data is not shared, there is naturally no thread-safety problem
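A minimal sketch; the class name and initial value are made up for illustration:

public class ThreadLocalDemo {
    // each thread gets its own copy, initialized to 0
    private static final ThreadLocal<Integer> LOCAL = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            LOCAL.set(LOCAL.get() + 1);   // modifies only this thread's copy
            System.out.println(Thread.currentThread().getName() + " -> " + LOCAL.get());
        };
        new Thread(task).start();
        new Thread(task).start();         // both print 1: the copies are isolated
    }
}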
14. Why must the wait() method and the notify()/notifyAll() methods be called inside a synchronized block
This is mandated by the JDK: the wait() method and the notify()/notifyAll() methods must acquire the object's lock before they can be called
15. What is the difference between the wait() method and the notify()/notifyAll() methods in giving up the object monitor
The wait() method and the notify()/notifyAll() methods differ in when they give up the object monitor: the wait() method releases the object monitor immediately, whereas after notify()/notifyAll() the notifying thread only releases the object monitor once the rest of its synchronized code has finished executing.
16. Why use a thread pool
To avoid frequently creating and destroying threads and to reuse thread objects. In addition, using a thread pool lets you flexibly control the amount of concurrency according to your project's needs.
17. How to detect if a thread holds an object monitor
I only learned this from a multithreading question I saw online: there is a way to determine whether a thread holds an object monitor. The Thread class provides a holdsLock(Object obj) method that returns true only when the monitor of the object obj is held by some thread. Note that it is a static method, so "some thread" means the current thread.
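For example (illustrative only):

public class HoldsLockDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        System.out.println(Thread.holdsLock(lock));       // false: the main thread does not own the monitor yet
        synchronized (lock) {
            System.out.println(Thread.holdsLock(lock));   // true: the monitor is held by the current (main) thread
        }
    }
}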
18. The difference between synchronized and ReentrantLock
synchronized is a keyword just like if, else, for and while, whereas ReentrantLock is a class; that is the essential difference between the two. Because ReentrantLock is a class, it offers more and more flexible features than synchronized: it can be inherited, can have methods, and can have all kinds of class fields. ReentrantLock's greater extensibility over synchronized shows in several points (a tryLock sketch follows the list):
(1) ReentrantLock can set a waiting time for acquiring the lock, thereby avoiding deadlock
(2) ReentrantLock can obtain various kinds of information about the lock
(3) ReentrantLock can flexibly implement multi-way notification
In addition, the locking mechanisms of the two are actually different: at the bottom ReentrantLock calls Unsafe's park method to block, while synchronized should operate on the Mark Word in the object header, though I am not sure about that.
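Here is the sketch of point (1), using tryLock with a timeout; the one-second timeout and the class name are just illustrative:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void doWork() {
        try {
            // give up after 1 second instead of blocking forever, which helps avoid deadlock
            if (LOCK.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    // critical section
                } finally {
                    LOCK.unlock();
                }
            } else {
                System.out.println("could not get the lock, do something else");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}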
19. What is the concurrency level of ConcurrentHashMap?
The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, which means that up to 16 threads can operate on the ConcurrentHashMap at the same time. This is also ConcurrentHashMap's biggest advantage over Hashtable: can Hashtable ever have two threads fetching data from it at the same time?
20. What is ReadWriteLock?
First, to be clear: it is not that ReentrantLock is bad, it just has limitations in some situations. Using ReentrantLock may be intended to prevent data inconsistency caused by thread A writing data while thread B is reading it; but if thread C is reading data and thread D is also reading data, reading does not change the data, so there is no need to lock, yet it locks anyway, which reduces the program's performance.
Because of this, the read-write lock ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it. It separates reads from writes: the read lock is shared and the write lock is exclusive, so read and read do not exclude each other, while read and write, write and read, and write and write do, which improves the performance of read-heavy code.
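A minimal sketch of read/write separation with ReentrantReadWriteLock; the class is made up for illustration:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rwLock.readLock().lock();      // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        rwLock.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}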
21. What is FutureTask?
This was actually mentioned earlier: FutureTask represents a task for an asynchronous operation. You can pass a concrete Callable implementation into a FutureTask and then wait for the result of the asynchronous operation, check whether it has completed, cancel the task, and so on. Of course, since FutureTask also implements the Runnable interface, it can be submitted to a thread pool as well.
22. How to find the thread that uses the most CPU in a Linux environment
A rather hands-on question, which I think is very meaningful. You can do it like this:
(1) Get the project's pid, using jps or ps -ef | grep java; this was covered earlier
(2) Run top -H -p pid; the order of the options cannot be changed
This prints the threads of the current project together with the percentage of CPU time each thread occupies. Note that what you see here is the LWP, that is, the native thread id assigned by the operating system. I do not have a Java project deployed in a Linux environment on my laptop, so there is no way to demonstrate it here; if your company deploys projects on Linux, you can give it a try.
Using "top -H -p pid" together with "jstack pid" makes it easy to find the stack of a thread with high CPU usage and thus locate the cause, which is usually a dead loop caused by careless code.
Finally, note that the LWP printed by "top -H -p pid" is decimal, while the native thread id (nid) printed by "jstack pid" is hexadecimal; convert between the two and you can locate the current stack of the thread that is eating CPU.
23. Write a Java program that will cause a deadlock
The first time I saw this question I thought it was an excellent one. Many people know what a deadlock is: thread A and thread B each wait for the lock held by the other, causing the program to loop forever. But that is as far as it goes; ask them to write a deadlocking program and they cannot. Frankly, that means they do not really understand what a deadlock is, because knowing only the theory means that in practice you will hardly be able to spot deadlock problems when you meet them.
If you truly understand what a deadlock is, this problem is actually not hard. A few steps:
(1) The two threads each use two Object objects: lock1 and lock2. These two objects serve as the locks of the synchronized code blocks;
(2) In the synchronized block of thread 1's run() method, first acquire the object lock on lock1, then Thread.sleep(xxx); the time does not need to be long, about 50 milliseconds is enough, and then acquire the object lock on lock2. This is mainly to prevent thread 1 from acquiring the locks on both lock1 and lock2 in one go as soon as it starts
(3) In the synchronized block of thread 2's run() method, first acquire the object lock on lock2 and then the object lock on lock1; by then the lock on lock1 is of course already held by thread 1, so thread 2 must wait for thread 1 to release it
This way, while thread 1 sleeps, thread 2 has already acquired the lock on lock2; when thread 1 then tries to acquire the lock on lock2 it is blocked, and a deadlock is formed. I will not reproduce the full code here since it takes up space; the article "Java Multi-Threading 7: Deadlock" has a code implementation of the steps above.
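For reference, a minimal sketch of the three steps above (the lock names and the 50 ms sleep follow the description; running it will almost always hang in a deadlock):

public class DeadlockDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {
                try { Thread.sleep(50); } catch (InterruptedException ignored) { }
                synchronized (lock2) {          // blocks forever: thread 2 holds lock2
                    System.out.println("thread 1 done");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lock2) {
                synchronized (lock1) {          // blocks forever: thread 1 holds lock1
                    System.out.println("thread 2 done");
                }
            }
        }).start();
    }
}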
24. How to wake up a blocked thread
If the thread is blocked because it called wait(), sleep() or join(), you can interrupt it; the InterruptedException that gets thrown breaks the blocked state and wakes it up. If the thread is blocked on IO there is nothing you can do, because IO is implemented by the operating system and Java code has no way to touch the operating system directly.
25. How do immutable objects help with multithreading
As mentioned earlier, immutable objects guarantee memory visibility; reading an immutable object requires no additional synchronization measures, which improves the efficiency of code execution.
26. What is a thread context switch
A thread context switch is the process by which control of the CPU is handed from one thread that is already running to another thread that is ready and waiting to be executed.
27. What happens if the thread pool queue is full when you submit a task
If you are using an unbounded queue such as LinkedBlockingQueue, it does not matter: tasks keep being added to the blocking queue to wait for execution, because LinkedBlockingQueue can be thought of as a nearly infinite queue that can hold tasks without limit. If you are using a bounded queue, say ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; when it is full, extra threads are created up to the maximum pool size, and if the queue is still full after that, the RejectedExecutionHandler applies its rejection policy to the overflow tasks, which by default is AbortPolicy.
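A small sketch of a pool with a bounded queue hitting the default AbortPolicy; the pool sizes, queue capacity, task count and sleep time are arbitrary:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                                   // core and maximum pool size
                60, TimeUnit.SECONDS,                   // idle timeout for the extra threads
                new ArrayBlockingQueue<>(10),           // bounded queue: holds at most 10 waiting tasks
                new ThreadPoolExecutor.AbortPolicy());  // default policy: throw when pool and queue are full
        for (int i = 0; i < 20; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task rejected: " + e);
            }
        }
        pool.shutdown();
    }
}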
28. What is the thread scheduling algorithm used in Java?
Preemptive. After a thread uses up its CPU time, the operating system computes an overall priority based on data such as thread priority and thread starvation, and allocates the next time slice to some thread for execution.
29. What is the role of Thread.sleep(0)
This question is related to the one above, which is why I put them next to each other. Because Java uses a preemptive thread scheduling algorithm, it can happen that one thread frequently obtains control of the CPU. To let some lower-priority threads also get control of the CPU, you can call Thread.sleep(0), which manually triggers one round of the operating system's time slice allocation; it is one way of balancing control of the CPU.
30. What is spin
The code inside many synchronized blocks is very simple and executes very quickly, so blocking the waiting threads may be a poor trade, because blocking a thread involves switching between user mode and kernel mode. Since the code inside the synchronized block executes very quickly, we can let the threads waiting for the lock not block, but instead do a busy loop at the boundary of the synchronized block: this is spinning. If after several busy loops the lock still has not been acquired, blocking then may be the better strategy.
31. What is the Java memory model
The Java memory model defines a specification for multithreaded access to memory in Java. Describing the whole Java memory model completely would take more than a few sentences here, so let me briefly summarize some of its parts:
(1) The Java memory model divides memory into main memory and working memory. The state of classes, that is, the variables shared between threads, is stored in main memory. Whenever a Java thread uses a variable from main memory, it reads it from main memory and keeps a copy in its own working memory; the thread's code then operates on the copy in its own working memory. After the thread's code has executed, the latest value is written back to main memory
(2) It defines several atomic operations for manipulating variables in main memory and working memory
(3) It defines the rules for using volatile variables
(4) happens-before, the "happens-before" principle, defines rules under which operation A must happen before operation B, for example: within the same thread, code earlier in control flow happens before code later in control flow; an unlock action on a lock happens before a subsequent lock action on the same lock; and so on. As long as these rules hold, no additional synchronization measures are needed; if a piece of code does not satisfy any happens-before rule, that code must be thread-unsafe
32. What is CAS
CAS stands for Compare and Swap, that is, compare-then-set. Suppose there are three operands: the memory value V, the old expected value A, and the value B to write. The memory value is changed to B and true is returned only if the expected value A and the memory value V are equal; otherwise nothing is done and false is returned. Of course, CAS must be paired with a volatile variable, so that the value read each time is the latest value in main memory; otherwise the old expected value A would, for some thread, keep being a value A that never changes, and as soon as one CAS operation failed it would never succeed.
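A tiny sketch of CAS semantics using AtomicInteger.compareAndSet; the numbers are arbitrary:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);   // memory value V = 10
        // expected old value A = 10, new value B = 11: succeeds and returns true
        System.out.println(value.compareAndSet(10, 11));
        // expected old value A = 10, but V is now 11: nothing happens, returns false
        System.out.println(value.compareAndSet(10, 12));
        System.out.println(value.get());               // prints 11
    }
}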
33. What are optimistic locking and pessimistic locking
(1) Optimistic locking: as the name suggests, it is optimistic about the thread-safety problems caused by concurrent operations. It does not hold a lock; instead it treats compare-and-set as a single atomic operation with which it attempts to modify the variable in memory, and if that fails it means there was a conflict, so there should be corresponding retry logic.
(2) Pessimistic locking: again as the name suggests, it is pessimistic about the thread-safety problems caused by concurrency. It assumes contention will always happen, so every time it operates on a resource it holds an exclusive lock, just like synchronized: no matter what, it simply locks before operating on the resource.
34. What is AQS
A brief word on AQS. AQS stands for AbstractQueuedSynchronizer, which should translate as abstract queued synchronizer.
If java.util.concurrent is built on CAS, then AQS is the core of the whole Java concurrency package; ReentrantLock, CountDownLatch, Semaphore and others all use it. AQS links all the waiting threads together as entries in a doubly linked queue. Take ReentrantLock for example: every waiting thread is wrapped in an entry and linked into the doubly linked queue; when the previous thread releases the ReentrantLock, the first entry at the head of the queue starts running.
AQS defines all the operations on this doubly linked queue and only opens methods such as tryAcquire and tryRelease to developers, who can override them to implement their own concurrency utilities.
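As a toy sketch (not the real ReentrantLock implementation), a minimal non-reentrant mutex built by overriding tryAcquire and tryRelease:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A toy, non-reentrant mutex built on AQS: state 0 = unlocked, 1 = locked.
// Owner tracking and other checks are omitted to keep the sketch short.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);   // CAS the state; failure means someone holds the lock
        }
        @Override
        protected boolean tryRelease(int arg) {
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }    // on failure, AQS enqueues and parks the thread
    public void unlock() { sync.release(1); }    // wakes the first queued thread
}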
35. Thread safety in the singleton pattern
An age-old question. The first thing to say is that thread safety of the singleton pattern means that an instance of the class is created only once, even in a multithreaded environment. The singleton pattern can be written in many ways; let me summarize:
(1) Eager ("hungry") singleton: thread-safe
(2) Lazy singleton: not thread-safe
(3) Double-checked locking singleton: thread-safe (see the sketch below)
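A minimal double-checked locking sketch; note the volatile on the instance field:

public class Singleton {
    // volatile prevents reordering of "allocate memory / assign reference / run constructor"
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, with the class lock held
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}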
36. What is the role of Semaphore?
A Semaphore is a semaphore; its role is to limit the number of threads that may run a block of code concurrently. Semaphore has a constructor that takes an int n, meaning that at most n threads may access a given piece of code at the same time; beyond n, threads must wait until one of the threads currently executing the block finishes, and then the next thread can enter. You can see that if the int passed to the Semaphore constructor is n = 1, it becomes equivalent to synchronized.
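A small sketch; the permit count of 3, the thread count and the sleep are arbitrary:

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    private static final Semaphore PERMITS = new Semaphore(3);   // at most 3 threads inside the block

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    PERMITS.acquire();                  // blocks while 3 threads are already inside
                } catch (InterruptedException e) {
                    return;
                }
                try {
                    System.out.println(Thread.currentThread().getName() + " is inside");
                    Thread.sleep(200);                  // pretend to do some work
                } catch (InterruptedException ignored) {
                } finally {
                    PERMITS.release();                  // let the next waiting thread in
                }
            }).start();
        }
    }
}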
37. Hashtable's size() method is clearly just one statement, "return count"; why does it still need to be synchronized?
This is something I was confused about for a while; I wonder whether you have thought about it too. If a method has several statements that all operate on the same class variable, then not locking in a multithreaded environment will inevitably cause thread-safety problems; that is easy to understand. But the size() method is clearly just one statement, so why lock it?
Over time, through work and study, I came to understand it. There are two main reasons:
(1) Only one thread at a time can execute the synchronized methods of a given class instance, but its non-synchronized methods can be accessed by multiple threads at the same time. So a problem arises: thread A might be executing Hashtable's put method to add data, while thread B can simply call the size() method to read the current number of elements in the Hashtable, and the value read may not be up to date. Perhaps thread A has added the data but has not yet executed size++, so the size thread B reads is definitely inaccurate. Making the size() method synchronized means thread B can only call size() after the put call has completed, which guarantees thread safety
(2) The CPU executes machine code, not Java code; this is very important and you must remember it. Java code is ultimately translated into assembly code for execution, and that is the code that actually interacts with the hardware circuits. Even if you see only one line of Java code, and even if the bytecode generated from it is only one line, that does not mean this statement is only a single operation at the bottom. Suppose "return count" is translated into three assembly instructions; it is entirely possible for the thread to be switched out after the first instruction has executed.
38. Which thread calls the constructor and static block of a Thread subclass?
This is a subtle and tricky question. Remember: the constructor and static block of a Thread subclass are called by the thread in which that subclass is new'd, while the code inside the run() method is called by the thread itself.
If that is confusing, here is an example. Suppose Thread1 is new'd inside Thread2, and Thread2 is new'd in the main function. Then:
(1) Thread2's constructor and static block are called by the main thread, while Thread2's run() method is called by Thread2 itself
(2) Thread1's constructor and static block are called by Thread2, while Thread1's run() method is called by Thread1 itself
39. Synchronized method or synchronized block: which is the better choice?
The synchronized block, because it means the code outside the block executes asynchronously, which is more efficient than synchronizing the whole method. Please remember one principle: the smaller the scope of synchronization, the better.
Taking this opportunity, let me add one more point: although a smaller synchronization scope is better, the Java virtual machine still has an optimization called lock coarsening, which makes the synchronization scope larger. This is useful. For example, StringBuffer is a thread-safe class, so naturally its most commonly used append() method is a synchronized method. When we write code we append strings repeatedly, which means locking and unlocking repeatedly; this is bad for performance because it means the JVM repeatedly switches this thread between kernel mode and user mode. So the JVM coarsens the lock across several consecutive append() calls, extending the locking to before the first append and after the last one, turning them into one large synchronized block. This reduces the number of lock and unlock operations and effectively improves the efficiency of the code.
40. How should the thread pool be used for a service with high concurrency and short task execution time? How about a service with low concurrency and long task execution time? And one with high concurrency and long task execution time?
This is a question I saw on a concurrent programming website. I put it last because I hope everyone sees it and thinks about it, as it is a very good, very practical, very professional question. My personal view:
(1) For high concurrency and short task execution time, the number of threads in the pool can be set to about the number of CPU cores + 1, to reduce thread context switching
(2) For low concurrency and long task execution time, it depends on the kind of work:
a) If the time is mostly spent on IO operations, that is, IO-intensive tasks, then since IO operations do not occupy the CPU, do not let the CPU sit idle: you can increase the number of threads in the pool so that the CPU handles more work
b) If the time is mostly spent on computation, that is, CPU-intensive tasks, there is no way around it: as in (1), keep the number of threads in the pool small, to reduce thread context switching
(3) For high concurrency and long execution time, the key to this kind of task lies not in the thread pool but in the overall architecture design. Seeing whether some of the data these services use can be cached is the first step; adding servers is the second step. As for the thread pool settings, refer to (2). Finally, problems with long execution times may also need further analysis to see whether middleware can be used to split and decouple the tasks. A rough pool sizing sketch for (1) and (2) follows.
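As a rough illustration of (1) and (2): the multipliers below are common rules of thumb, not values taken from this article:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizingDemo {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound or short tasks: a small pool (around cores + 1) limits context switching
        ExecutorService cpuBound = Executors.newFixedThreadPool(cores + 1);

        // IO-bound tasks: threads mostly wait, so a larger pool keeps the CPU busy
        ExecutorService ioBound = Executors.newFixedThreadPool(cores * 2);

        cpuBound.shutdown();
        ioBound.shutdown();
    }
}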