What is Java multithreading?
Java provides a mechanism for processing multiple tasks concurrently (simultaneously and independently). Multiple threads run in the same JVM process and share the same memory space, so compared with multiple processes, communication between threads is more lightweight. As I understand it, Java multithreading is all about improving CPU utilization. A Java thread has four states: New, Runnable, Blocked, and Dead. The key one is Blocked: blocked means waiting, and a blocked thread does not take part in the thread scheduler's time-slice allocation, so it naturally gets no CPU. In a multithreaded environment, the threads that are not Blocked keep running and make full use of the CPU.
Summary of 40 Questions
1. What is the use of multithreading?
This may look like a pointless question to many people: I already know how to use multithreading, so why ask what it is for? In my opinion, that attitude misses the point. As the saying goes, you should know not only the "how" but also the "why": knowing how to use something is only the "how", and understanding why you use it is the "why"; only when you grasp both can you apply a piece of knowledge freely. OK, here is what I think about this question:
(1) Take advantage of multi-core CPUs
With the development of the industry, today's laptops, desktops, and even commercial application servers are at least dual-core; 4-core, 8-core, or even 16-core machines are not rare. If a program is single-threaded, 50% of a dual-core CPU is wasted and 75% of a 4-core CPU is wasted. The so-called "multithreading" on a single-core CPU is fake multithreading: at any given moment the processor handles only one piece of logic, but switching between threads is fast enough that it looks as if multiple threads are running "at the same time". Multithreading on a multi-core CPU is the real thing: it lets multiple pieces of logic run at the same time on multiple threads, which truly exploits the multi-core CPU and makes full use of it.
(2) Prevent blocking
From the standpoint of program efficiency, a single-core CPU not only gains nothing from multithreading but actually loses overall efficiency because of the context switching between the threads running on it. Even so, we still use multithreading on a single-core CPU, precisely to prevent blocking. Imagine a single-threaded program on a single-core CPU: if that one thread blocks, say on a remote read that has no timeout set and whose response never comes back, the entire program stops until the data returns. With multiple threads running concurrently, even if one thread's code blocks on the read, it does not affect the execution of the other tasks.
(3) Easier modeling
This is another advantage that is less obvious. Suppose there is a large task A. With single-threaded programming there is a lot to consider, and building the model of the whole program is cumbersome. But if you break this big task into several small tasks B, C, and D, build a program model for each, and run them separately on their own threads, it becomes much simpler.
2. How to create a thread
A very common question, and the answer usually lists two ways:
(1) Inherit the Thread class
(2) Implement the Runnable interface
As for which is better, the latter is, without question, because implementing an interface is more flexible than extending a class and reduces coupling in the program. Programming to interfaces is also at the core of the six major design-pattern principles.
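The two ways can be sketched roughly as follows (the class names are my own, chosen only for illustration):

```java
// Way 1: extend Thread and override run()
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

// Way 2: implement Runnable and hand the task to a Thread
class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("running in " + Thread.currentThread().getName());
    }
}

public class CreateThreadDemo {
    public static void main(String[] args) {
        new MyThread().start();            // way 1
        new Thread(new MyTask()).start();  // way 2
    }
}
```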
3. The difference between the start() method and the run() method
Only by calling start() do you get multithreaded behavior, with the code in the run() methods of different threads executing alternately. If you simply call run(), the code executes synchronously on the calling thread: one thread's run() body must finish completely before the next can execute.
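A minimal sketch of the difference (the thread names are arbitrary):

```java
public class StartVsRunDemo {
    public static void main(String[] args) {
        Runnable task = () ->
                System.out.println("executed by " + Thread.currentThread().getName());

        // run(): an ordinary method call, executed synchronously on the main thread
        new Thread(task, "t1").run();    // prints "executed by main"

        // start(): spawns a new thread, and run() executes on that thread
        new Thread(task, "t2").start();  // prints "executed by t2"
    }
}
```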
4. The difference between the Runnable interface and the Callable interface
A somewhat deeper question that also shows the breadth of a Java programmer's knowledge.
The run() method of the Runnable interface returns void; all it does is execute the code in run(). The call() method of the Callable interface returns a value, of a generic type, and together with Future or FutureTask it can be used to obtain the result of the asynchronous execution.
This is actually a very useful feature, because multithreading is harder and more complex than single-threaded code precisely because it is full of unknowns. Has a certain thread executed yet? How long has it been running? Has the data we expect it to produce been assigned yet? We cannot know; all we can do is wait for the multithreaded task to finish. Callable plus Future/FutureTask, however, lets us obtain the result of the work, and also cancel the task if we have waited too long without getting the data we need.
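A minimal sketch of Callable + FutureTask (the computation and the sleep are made up purely for illustration):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class FutureTaskDemo {
    public static void main(String[] args) throws Exception {
        Callable<Integer> job = () -> {
            Thread.sleep(500);   // simulate slow work
            return 42;
        };
        FutureTask<Integer> futureTask = new FutureTask<>(job);
        new Thread(futureTask).start();   // FutureTask is also a Runnable

        // blocks until the result is ready; get(timeout, unit) and cancel() are also available
        System.out.println("result = " + futureTask.get());
    }
}
```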
5. The difference between CyclicBarrier and CountDownLatch
Two classes that look similar, both under java.util.concurrent, and both usable to make code wait until execution reaches a certain point. The differences, sketched after this list, are:
(1) With CyclicBarrier, a thread that reaches the barrier point stops and waits until all threads have reached it, after which they all resume running; with CountDownLatch, a thread that reaches the point merely decrements the count by 1 and continues running
(2) CyclicBarrier can trigger only one task; CountDownLatch can trigger multiple tasks
(3) CyclicBarrier is reusable; CountDownLatch is not, because once its count reaches 0 a CountDownLatch can no longer be used
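A minimal sketch contrasting the two (the thread counts and the printed messages are arbitrary):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;

public class LatchVsBarrierDemo {
    public static void main(String[] args) throws InterruptedException {
        // CountDownLatch: the main thread waits until 3 workers have counted down
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();    // worker decrements the count and keeps running
            }).start();
        }
        latch.await();                // blocks until the count reaches 0
        System.out.println("all workers finished");

        // CyclicBarrier: each worker waits at the barrier until all 3 have arrived
        CyclicBarrier barrier = new CyclicBarrier(3, () -> System.out.println("barrier tripped"));
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                try {
                    barrier.await();  // blocks until all parties arrive, then all resume
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}
```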
6. The role of the volatile keyword
A very important question that every Java programmer who learns and uses multithreading must master. Understanding the role of volatile presupposes understanding the Java memory model, which is not covered here (see question 31). The volatile keyword mainly does two things:
(1) Multithreading revolves mainly around two properties, visibility and atomicity. A variable modified with the volatile keyword is guaranteed to be visible across threads, i.e. every read of a volatile variable is guaranteed to see the latest value
(2) At the low level, code execution is not as simple as the high-level Java source we see. Roughly: Java code --> bytecode --> C/C++ code executed according to the bytecode --> C/C++ compiled into assembly --> interaction with the hardware circuits. In reality, to get better performance the JVM may reorder instructions, and under multithreading this can produce unexpected problems. volatile forbids such reordering, which to some extent lowers the efficiency of code execution
From a practical point of view, an important role of volatile is to combine with CAS to guarantee atomicity; for details see the classes in the java.util.concurrent.atomic package, such as AtomicInteger.
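A minimal sketch of the visibility guarantee in (1) (the flag name is my own):

```java
public class VolatileFlagDemo {
    // without volatile, the worker thread might never observe the write made by main
    private static volatile boolean stopRequested = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stopRequested) {
                // busy work
            }
            System.out.println("worker observed the stop flag");
        });
        worker.start();

        Thread.sleep(1000);
        stopRequested = true;   // the volatile write becomes visible to the worker
        worker.join();
    }
}
```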
7. What is thread safety?
A theoretical question with many different answers. I'll give the explanation I find best: if your code always produces the same result whether it is executed by multiple threads or by a single thread, then your code is thread safe.
It is worth mentioning that there are several levels of thread safety:
(1) Immutable
Like String, Integer, and Long: these are final classes, and no thread can change their value; unless a new object is created, such immutable objects can be used directly in a multithreaded environment without any synchronization.
(2) Absolute thread safety
Regardless of the runtime environment, callers never need additional synchronization. Achieving this usually costs a lot, and most of the classes in Java that call themselves thread-safe are in fact not absolutely thread-safe. Classes in Java that are absolutely thread-safe include, for example, CopyOnWriteArrayList and CopyOnWriteArraySet.
(3) Relative thread safety
Relative thread safety is what we normally mean by thread safety. Take Vector: its add and remove methods are atomic and will not be interrupted, but that is as far as it goes. If one thread is iterating over a Vector while another thread calls add on it at the same time, 99% of the time a ConcurrentModificationException will be thrown; this is the fail-fast mechanism.
(4) Thread-unsafe
Nothing much to say here: ArrayList, LinkedList, HashMap, and so on are all thread-unsafe classes
8. How to get a thread dump file in Java
For dead loops, deadlocks, blocking, pages that open slowly, and so on, taking a thread dump is the best way to locate the problem. A thread dump is just the thread stacks, and getting them takes two steps:
(1) Get the PID of the Java process; you can use the jps command, or on Linux ps -ef | grep java
(2) Print the thread stacks; you can use the jstack PID command, or on Linux kill -3 PID
In addition, the Thread class provides a getStackTrace() method that can also be used to obtain a thread's stack. It is an instance method, so it is bound to a specific thread instance, and each call returns the stack that the specific thread is currently running.
9. What happens if a runtime exception occurs in a thread?
If the exception is not caught, the thread stops executing. Another important point: if the thread held a monitor on some object, that object's monitor is released immediately
10. How to share data between two threads
Just share an object between the threads, then coordinate through wait/notify/notifyAll or await/signal/signalAll. For example, the blocking queue BlockingQueue is designed precisely for sharing data between threads
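A minimal sketch of sharing data through a BlockingQueue (the capacity and the values are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SharedQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i);                              // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("got " + queue.take()); // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```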
11. What is the difference between the sleep method and the wait method?
A frequently asked question. Both the sleep method and the wait method can be used to give up the CPU for some amount of time. The difference: if the thread holds the monitor of an object, the sleep method does not release that monitor, while the wait method does
12. What is the role of the producer-consumer model?
A theoretical question, but a very important one:
(1) It improves the efficiency of the whole system by balancing the producer's production capacity against the consumer's consumption capacity; this is the most important role of the producer-consumer model
(2) Decoupling, which is a by-product of the producer-consumer model. Decoupling means producers and consumers have less contact with each other, and the fewer the connections, the more independently each side can evolve without being constrained by the other
13. What is ThreadLocal for?
Simply put, ThreadLocal trades space for time. Each Thread maintains a ThreadLocal.ThreadLocalMap, implemented with open addressing, which isolates the data; since the data is not shared, there is naturally no thread-safety problem
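A minimal sketch (SimpleDateFormat is just a common example of state that is cheap to keep per thread):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadLocalDemo {
    // each thread gets its own SimpleDateFormat, so there is no shared mutable state
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new Thread(() ->
                    System.out.println(Thread.currentThread().getName()
                            + " -> " + FORMAT.get().format(new Date()))).start();
        }
    }
}
```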
14. Why must the wait() method and the notify()/notifyAll() methods be called in a synchronized block?
This is enforced by the JDK: the wait() method and the notify()/notifyAll() methods can only be called after the object's lock has been acquired
15. What is the difference between the wait() method and the notify()/notifyAll() methods with respect to releasing the object monitor?
The difference is that the wait() method releases the object monitor immediately, while the notify()/notifyAll() methods release the object monitor only after the rest of the notifying thread's synchronized code has finished executing.
16. Why use a thread pool?
To avoid frequently creating and destroying threads and to reuse thread objects. In addition, a thread pool lets you flexibly control the amount of concurrency according to your project's needs.
17. How to detect whether a thread holds an object monitor
I learned this from a multithreading interview question online: the Thread class provides a holdsLock(Object obj) method that returns true only if the monitor of obj is held by a thread. Note that it is a static method, so the "thread" here means the current thread.
18. The difference between synchronized and ReentrantLock
synchronized is a keyword, like if, else, for, and while; ReentrantLock is a class. That is the essential difference between the two. Being a class, ReentrantLock provides more flexible features than synchronized: it can be inherited, can have methods, and can have various class fields. ReentrantLock is more extensible than synchronized in several respects:
(1) ReentrantLock can set a waiting time for acquiring the lock, avoiding deadlock
(2) ReentrantLock can obtain information about the lock
(3) ReentrantLock can flexibly implement multi-way notification
In addition, the locking mechanisms of the two are different: under the hood ReentrantLock locks by calling Unsafe's park method, while synchronized should operate on the mark word in the object header, though I am not certain of that.
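A minimal sketch of point (1), waiting for the lock with a timeout (the timeout value is arbitrary):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // try to acquire the lock, but give up after 2 seconds instead of blocking forever
        if (LOCK.tryLock(2, TimeUnit.SECONDS)) {
            try {
                System.out.println("got the lock, doing work");
            } finally {
                LOCK.unlock();   // always release in finally
            }
        } else {
            System.out.println("could not get the lock within 2 seconds, giving up");
        }
    }
}
```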
19. What is the concurrency level of ConcurrentHashMap?
ConcurrentHashMap's concurrency level is the number of segments, which defaults to 16. This means that up to 16 threads can operate on a ConcurrentHashMap at the same time, and it is also ConcurrentHashMap's biggest advantage over Hashtable: could a Hashtable ever let two threads fetch its data at the same time?
20. What is ReadWriteLock
First of all, this is not to say that ReentrantLock is bad, only that it has its limitations. If we use a ReentrantLock, it may be to prevent data inconsistency caused by thread A writing data while thread B is reading it. But if thread C and thread D are both only reading data, reading does not change the data, so there is no need to lock; yet we lock anyway, and that reduces the program's performance.
That is why the read-write lock, ReadWriteLock, was born. ReadWriteLock is a read-write lock interface and ReentrantReadWriteLock is a concrete implementation of it. It separates reads from writes: the read lock is shared and the write lock is exclusive. Read/read is not mutually exclusive, while read/write, write/read, and write/write are, which improves read-write performance.
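A minimal sketch of a read-write-locked cache (the class and field names are my own):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();      // shared: many readers may hold it at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```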
21. What is FutureTask
This was actually mentioned earlier: FutureTask represents a task performing an asynchronous computation. A FutureTask can be constructed with a concrete implementation of Callable, and then it can wait for and fetch the result of the asynchronous computation, check whether the task has completed, cancel the task, and so on. Of course, since FutureTask also implements the Runnable interface, a FutureTask can also be submitted to a thread pool.
22. In a Linux environment, how do you find the thread that uses the most CPU?
A fairly niche, practical question that I think is very worthwhile. You can do this:
(1) Get the project's PID with jps, or with ps -ef | grep java, as mentioned before
(2) Run top -H -p PID; the order of the options cannot be changed
This prints out the threads of the current project and the percentage of CPU time each one consumes. Note that what is shown here is the LWP, i.e. the operating system's native thread ID. My laptop has no Linux environment with a deployed Java project, so I cannot capture a demo screenshot; if your company deploys its projects on Linux, you can try it.
Using top -H -p PID together with jstack PID makes it easy to find the thread stack of a thread that consumes a lot of CPU, and thus to locate the cause of the high CPU usage, which is usually a dead loop caused by improper code.
Finally, note that the LWP shown by top -H -p PID is decimal, while the native thread ID shown by jstack PID is hexadecimal; convert between the two and you can locate the current stack of the thread that is hogging the CPU.
23. Write a Java program that causes a deadlock
The first time I saw this question, I thought it was an excellent one. Many people know what a deadlock is: thread A and thread B each wait for the lock held by the other, causing the program to hang forever. But knowing only that, and not being able to write a deadlocking program when asked, means not really understanding deadlock: understanding the theory alone is not enough, and in practice the deadlock problem would be essentially invisible to you.
Once you really understand what a deadlock is, this question is not hard; just a few steps:
(1) The two threads hold two Object instances, lock1 and lock2, which serve as the locks of their synchronized blocks;
(2) In the synchronized block of thread 1's run() method, first acquire the lock on lock1, then Thread.sleep(xxx); the time does not need to be long, about 50 milliseconds, and then acquire the lock on lock2. This is mainly to prevent thread 1 from starting up and acquiring the locks on both lock1 and lock2 in one go
(3) In the synchronized block of thread 2's run() method, first acquire the lock on lock2, then try to acquire the lock on lock1; by that time the lock on lock1 is already held by thread 1, so thread 2 must wait for thread 1 to release it
This way, while thread 1 is sleeping, thread 2 acquires the lock on lock2; when thread 1 then tries to acquire the lock on lock2 it is blocked, and a deadlock is formed. The article "Java Multithreading 7: Deadlock" contains the full code implementing the steps above; a minimal sketch is shown below.
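A minimal sketch of the steps above (the lock names and the sleep time are arbitrary):

```java
public class DeadlockDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {          // thread 1: lock1 first
                sleep(50);                  // give thread 2 time to grab lock2
                synchronized (lock2) {      // ...then wait forever for lock2
                    System.out.println("thread 1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lock2) {          // thread 2: lock2 first
                sleep(50);
                synchronized (lock1) {      // ...then wait forever for lock1
                    System.out.println("thread 2 got both locks");
                }
            }
        }).start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```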
24. How to wake a blocked thread
If the thread is blocked because it called wait(), sleep(), or join(), it can be interrupted and woken up by having an InterruptedException thrown in it. If the thread is blocked on IO, there is nothing we can do, because IO is implemented by the operating system and Java code has no way to touch the operating system directly.
25. How do immutable objects help with multithreading?
As mentioned earlier, immutable objects guarantee memory visibility, and reading an immutable object requires no additional synchronization, which improves the efficiency of code execution.
26. What is a thread context switch?
A thread context switch is the process in which control of the CPU is taken from the currently running thread and handed to another thread that is ready and waiting for the CPU.
27. What happens if the thread pool queue is full when you submit a task?
If you use an unbounded queue such as LinkedBlockingQueue, it does not matter: the task simply keeps being added to the blocking queue and waits there for execution, because a LinkedBlockingQueue can be regarded as practically infinite and can hold tasks without limit. If you use a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue; once it is full, the pool grows the number of threads up to maximumPoolSize, and if the queue is still full the rejection policy RejectedExecutionHandler handles the overflow tasks, AbortPolicy by default.
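A minimal sketch of a pool with a bounded queue (all the sizes and times are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60, TimeUnit.SECONDS,                 // keep-alive time for extra threads
                new ArrayBlockingQueue<>(10),         // bounded queue
                new ThreadPoolExecutor.AbortPolicy()  // default rejection policy
        );

        // once 4 threads are busy and the queue holds 10 tasks,
        // further submissions are rejected with RejectedExecutionException
        for (int i = 0; i < 20; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try {
                        Thread.sleep(1000);           // simulate work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException rejected) {
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}
```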
28. What thread scheduling algorithm does Java use?
Preemptive. After a thread uses up its CPU time, the operating system calculates an overall priority based on data such as thread priority and thread starvation, and assigns the next time slice to some thread.
29. What is the role of Thread.sleep(0)?
This question is related to the one above, so I put them together. Because Java uses a preemptive thread-scheduling algorithm, one thread may end up getting control of the CPU over and over. To let some lower-priority threads also get control of the CPU, you can call Thread.sleep(0) to manually trigger one round of the operating system's time-slice allocation; it is a way of rebalancing control of the CPU.
30. What is spinning?
Often the code inside a synchronized block is very simple and executes very quickly. Blocking every thread that is waiting for the lock may then not be worthwhile, because blocking a thread involves switching between user mode and kernel mode. Since the code inside the synchronized block executes so fast, it may be better to let the threads waiting for the lock not block, but instead busy-loop at the boundary of the synchronized block; that busy-looping is spinning. If after a good deal of busy-looping the lock still has not been acquired, blocking then may be the better strategy.
31. What is the Java memory model?
The Java memory model defines a specification for how multiple threads access Java memory. A complete account of the Java memory model cannot be given in just a few sentences here. Briefly, some of its contents:
(1) The Java memory model divides memory into main memory and working memory. The state of a class, i.e. variables shared between threads, is stored in main memory. Whenever a Java thread uses such a variable, it reads it from main memory once and keeps a copy in its own working memory, and while running its own code it operates on that copy. After the thread's code has finished executing, it writes the latest value back to main memory
(2) It defines several atomic operations used to manipulate variables in main memory and working memory
(3) It defines the rules for using volatile variables
(4) happens-before, the "happens-before" principle, defines rules under which an operation A is guaranteed to happen before an operation B, for example: within the same thread, code earlier in the control flow happens before code later in the control flow; an unlock of a lock happens before a subsequent lock of the same lock; and so on. As long as one of these rules holds, no additional synchronization measures are needed; if a piece of code satisfies none of the happens-before rules, that code is not thread-safe
32. What is CAS?
CAS stands for Compare and Swap, i.e. compare-and-set. Suppose there are three operands: the memory value V, the old expected value A, and the new value B to write. If and only if the expected value A equals the memory value V, the memory value is set to B and true is returned; otherwise nothing is done and false is returned. Of course, the target of CAS must be a volatile variable, so that the value read is always the latest value in main memory; otherwise the old expected value A might, for some thread, forever remain a stale value, and once a CAS operation failed it could never succeed.
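A minimal sketch using AtomicInteger's compareAndSet (the retry loop is the usual optimistic pattern):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // classic CAS retry loop: read the expected value, try to swap, retry on failure
        int oldValue;
        do {
            oldValue = counter.get();                               // expected value A
        } while (!counter.compareAndSet(oldValue, oldValue + 1));   // write B only if still A

        System.out.println("counter = " + counter.get());           // 1
    }
}
```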
33. What are optimistic locking and pessimistic locking?
(1) Optimistic locking: as the name suggests, it is optimistic about the thread-safety problems caused by concurrent operations. It assumes that contention does not always happen, so it does not hold a lock; it treats compare-and-set as a single atomic operation to try to modify the variable in memory, and if that fails there was a conflict, so there should be corresponding retry logic.
(2) Pessimistic locking: again as the name suggests, it is pessimistic about the thread-safety problems caused by concurrent operations. It assumes that contention always happens, so every time it operates on a resource it holds an exclusive lock, like synchronized: no matter what, it locks first and then operates on the resource.
34. What is AQS?
Simply put, AQS stands for AbstractQueuedSynchronizer, which translates roughly as abstract queued synchronizer.
If CAS is the foundation of java.util.concurrent, then AQS is the core of the whole concurrency package; ReentrantLock, CountDownLatch, Semaphore, and others all use it. AQS links all waiters together in the form of a doubly linked queue. In ReentrantLock, for example, every waiting thread is wrapped in an entry and linked into the doubly linked queue; after the thread currently using the ReentrantLock releases it, the first entry in the queue actually starts to run.
AQS defines all the operations on this doubly linked queue and exposes only methods such as tryAcquire and tryRelease to developers, who can override them to implement their own concurrency utilities.
35. Thread safety of the singleton pattern
First of all, thread safety of the singleton pattern means that an instance of the class is created only once in a multithreaded environment. There are many ways to write a singleton; to summarize:
(1) Eager ("hungry") singleton: thread-safe
(2) Lazy singleton: not thread-safe
(3) Double-checked locking singleton: thread-safe (sketched below)
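A minimal sketch of the double-checked locking variant from (3); note the volatile on the instance field:

```java
public class Singleton {
    // volatile prevents other threads from observing a partially constructed instance
    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```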
36. What is the role of Semaphore?
A Semaphore is a counting semaphore that limits how much code may run concurrently. Semaphore has a constructor that takes an int n, meaning that at most n threads may access a given piece of code at a time; beyond n, threads must wait until one of the threads currently executing the code block finishes, and then the next thread may enter. As you can see, if n = 1 is passed to the Semaphore constructor, it becomes equivalent to synchronized.
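A minimal sketch (the permit count and the simulated work are arbitrary):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(3);   // at most 3 threads inside at once

        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    semaphore.acquire();          // blocks once 3 permits are taken
                    System.out.println(Thread.currentThread().getName() + " working");
                    Thread.sleep(500);            // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();          // give the permit back
                }
            }).start();
        }
    }
}
```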
37. Hashtable's size() method is clearly just one statement, "return count"; why does it need to be synchronized?
This used to puzzle me, and I don't know whether you have thought about it too. If a method has several statements that all operate on the same class variable, then not locking it in a multithreaded environment obviously causes thread-safety problems; that is easy to understand. But the size() method has only one statement, so why lock it?
Over time, through work and study, I came to understand it; there are two main reasons:
(1) At any moment only one thread can execute a class's synchronized methods, but the class's non-synchronized methods can be accessed by multiple threads concurrently. So a problem arises: thread A might be adding data via Hashtable's put method while thread B happily calls the size() method to read the number of elements in the Hashtable, and the value it reads may not be up to date. Perhaps thread A has inserted the data but has not yet executed size++, while thread B has already read size; the size that thread B reads is then inaccurate. Making size() synchronized means thread B can call size() only after thread A has finished calling put, which guarantees thread safety
(2) What the CPU executes is not Java code; this is very important and must be remembered. Java code is ultimately translated into assembly code, and only assembly code can really interact with the hardware circuits. Even if you see only one line of Java code, and even if the bytecode generated from that line is only one instruction, that does not mean the underlying operation for that statement is a single step. Suppose "return count" is translated into three assembly instructions; a thread could perfectly well be switched out after executing only the first one.
38. Which thread calls a Thread subclass's constructor and static block?
This is a very tricky and cunning question. Remember: a Thread subclass's constructor and static block are called by the thread in which the Thread object is newed, while the code inside the run() method is executed by the thread itself.
If the statement above confuses you, here is an example. Suppose Thread1 is newed inside Thread2, and Thread2 is newed in the main function; then:
(1) Thread2's constructor and static block are called by the main thread, and Thread2's run() method is executed by Thread2 itself
(2) Thread1's constructor and static block are called by Thread2, and Thread1's run() method is executed by Thread1 itself
39. Synchronized method or synchronized block: which is the better choice?
The synchronized block, because it means the code outside the block is executed without holding the lock, which is more efficient than synchronizing the entire method. Please remember one rule: the smaller the scope of synchronization, the better.
Having said that, I would add that although a smaller synchronization scope is generally better, the Java virtual machine also has an optimization called lock coarsening that enlarges the scope of synchronization. It is useful in cases like StringBuffer: it is a thread-safe class, and its most commonly used append() method is synchronized. When our code appends strings repeatedly, that means repeated lock -> unlock, which is bad for performance because the JVM keeps switching this thread between kernel mode and user mode. So the JVM may coarsen the lock across the repeated append() calls, extending it to before the first append and after the last one, turning them into one large synchronized block; this reduces the number of lock -> unlock cycles and effectively improves the efficiency of code execution.
40. How should a thread pool be used for a business with high concurrency and short task execution time? For a business with low concurrency and long task execution time? For a business with high concurrency and long task execution time?
This is a question I saw on a concurrent-programming website. I put it last in the hope that everyone will read it and think about it, because it is a very good, very practical, very professional question. My personal view on it:
(1) High concurrency, short task execution time: set the number of threads in the pool to roughly the number of CPU cores + 1 to reduce thread context switching
(2) Low concurrency, long task execution time: it depends on where the time goes:
a. If the time is spent mostly on IO, i.e. IO-intensive tasks, the IO operations do not occupy the CPU, so do not let the CPU sit idle: increase the number of threads in the pool so the CPU can handle more work
b. If the time is spent mostly on computation, i.e. CPU-intensive tasks, there is no way around it: as in (1), keep the number of threads in the pool small to reduce thread context switching
(3) High concurrency, long task execution time: the key to this kind of task lies not in the thread pool but in the overall architecture design. The first step is to see whether some of the business data can be cached; the second is to add servers; as for the thread pool settings, refer to (2). Finally, a long execution time may also call for analysing whether middleware can be used to split and decouple the tasks.
Types of Java thread blocking (Blocked):
Calling the sleep function to enter the sleeping state, e.g. Thread.sleep(1000) or TimeUnit.SECONDS.sleep(1); sleeping does not release the lock.
Waiting (wait) for an event, which comes in two flavors, (wait, notify, notifyAll) and (await, signal, signalAll), described in detail later. wait and await release the lock and must be called in a context where the lock has been acquired.
Waiting for a lock: in synchronized and Lock environments, the lock has been taken by another thread and the thread is waiting to acquire it.
IO blocking (Blocked), such as waiting on the network, opening a file, or reading from the console, e.g. System.in.read().