Java Multithreading Interview Questions Frequently Asked at BAT and Other Large Companies

Source: Internet
Author: User

1. Explain the differences between a process, a thread, and a coroutine

In short, a process is the basic unit of program execution and resource allocation: a program has at least one process, and a process has at least one thread. Each process has its own memory space during execution, while the threads within a process share that memory, which makes switching between threads cheaper and more efficient. A thread is an entity within a process and the basic unit of CPU scheduling and dispatch; it can run independently and is smaller than a process. Multiple threads in the same process can execute concurrently. A coroutine is lighter still: it is scheduled cooperatively in user space rather than preemptively by the operating system.

2. Do you know about daemon threads? What is the difference between daemon and non-daemon threads?

When a program finishes running, the JVM waits for all non-daemon threads to terminate, but it does not wait for daemon threads: once only daemon threads remain, the JVM exits. The most typical example of a daemon thread is the GC thread.

3. What is a multithreaded context switch?

A context switch is the process by which the CPU transfers control from one thread that is currently running to another thread that is ready and waiting to execute; the running thread's state is saved so that it can be resumed later.

4. What are the two ways to create a thread? What is the difference between them?

By implementing the java.lang.Runnable interface, or by extending the java.lang.Thread class. Implementing Runnable is usually preferable to extending Thread, for two reasons:

    • Java does not support multiple inheritance, so a class that extends Thread cannot extend any other class, whereas a class that implements Runnable can still extend another class.

    • The class may only need to be executable as a task, in which case inheriting all of Thread is unnecessary overhead.

5. What is the difference between the start() and run() methods of the Thread class?

The start() method starts the newly created thread; internally, the new thread then calls run(). This is not the same as calling run() directly: calling run() simply executes the method on the current thread and starts no new thread, whereas start() actually spawns a new thread.
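A minimal sketch of the difference (class and method names are mine, not from the original): calling run() executes on the caller's thread, while start() spawns a new one.

```java
import java.util.concurrent.atomic.AtomicReference;

public class StartVsRun {
    // Returns the name of the thread that actually executed run().
    static String nameWhenCalledDirectly() {
        AtomicReference<String> ran = new AtomicReference<>();
        Thread t = new Thread(() -> ran.set(Thread.currentThread().getName()));
        t.run(); // no new thread: run() executes synchronously on the caller
        return ran.get();
    }

    static String nameWhenStarted() throws InterruptedException {
        AtomicReference<String> ran = new AtomicReference<>();
        Thread t = new Thread(() -> ran.set(Thread.currentThread().getName()), "worker");
        t.start(); // a new thread is created; the JVM calls run() on it
        t.join();
        return ran.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("run():   " + nameWhenCalledDirectly()); // the calling thread's name
        System.out.println("start(): " + nameWhenStarted());        // "worker"
    }
}
```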

6. How do you detect whether a thread holds an object's monitor?

The Thread class provides a holdsLock(Object obj) method that returns true only if the monitor of obj is held by the current thread. Note that this is a static method, so "the thread" always refers to the current thread.
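A small sketch of Thread.holdsLock (the class and helper names are mine): the result flips depending on whether we are inside a synchronized block on the same object.

```java
public class HoldsLockDemo {
    static final Object monitor = new Object();

    static boolean heldOutside() {
        return Thread.holdsLock(monitor); // false: not inside a synchronized block
    }

    static boolean heldInside() {
        synchronized (monitor) {
            return Thread.holdsLock(monitor); // true: the current thread owns the monitor
        }
    }

    public static void main(String[] args) {
        System.out.println(heldOutside()); // false
        System.out.println(heldInside());  // true
    }
}
```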

7. The difference between Runnable and Callable

The run() method of the Runnable interface returns void; all it does is execute the code in run(). The call() method of the Callable interface has a generic return value, and combined with Future or FutureTask it can be used to obtain the result of an asynchronous computation.
This is actually a very useful feature, because one important reason multithreading is harder than single-threaded programming is that it is full of unknowns: has a thread finished? How long has it been running? Has the data we expect already been assigned when the thread executed? We cannot know; all we can do is wait for the task to complete. With Callable plus Future/FutureTask, however, we can easily retrieve the result of a background task, and cancel the task if waiting takes too long and we still cannot get the data we need.
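A minimal sketch of the Callable + FutureTask pattern described above (class and method names are mine): call() returns a value, and get() blocks until it is ready.

```java
import java.util.concurrent.FutureTask;

public class CallableDemo {
    // Runs a Callable on another thread and waits for its result.
    static int computeAsync() throws Exception {
        FutureTask<Integer> future = new FutureTask<>(() -> 6 * 7); // call() returns a value
        new Thread(future).start(); // FutureTask also implements Runnable
        return future.get();        // blocks until the result is ready
        // future.get(1, TimeUnit.SECONDS) would give up instead of waiting forever,
        // and future.cancel(true) would cancel the task
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeAsync()); // 42
    }
}
```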

8. What causes a thread to block?

Blocking means pausing the execution of a thread to wait for some condition to occur (for example, for a resource to become ready); anyone who has studied operating systems will be familiar with the concept. Java provides a number of methods that support blocking; let's look at each of them.

    • sleep(): takes a period of time in milliseconds as a parameter and causes the thread to enter the blocked state for that time, during which it cannot get CPU time; when the time is up, the thread re-enters the runnable state. Typically, sleep() is used while waiting for a resource to become ready: after a test finds the condition unsatisfied, the thread blocks for a while and then re-tests, until the condition is met.

    • suspend() and resume(): used as a pair. suspend() puts the thread into the blocked state, and it does not recover automatically; its matching resume() must be called for the thread to become runnable again. They are typically used when waiting for the result of another thread: after a test finds the result has not yet been produced, the thread blocks itself, and the other thread calls resume() after producing the result. (Both methods are deprecated because they are deadlock-prone.)

    • yield(): causes the current thread to give up its current CPU time slice, but does not block it; the thread remains runnable and may be given CPU time again at any moment. Calling yield() is equivalent to telling the scheduler that this thread has run long enough and it should switch to another one.

    • wait() and notify(): used as a pair. wait() puts the thread into the blocked state. It has two forms: one takes a timeout in milliseconds as a parameter, so the thread becomes runnable again either when the matching notify() is called or when the timeout elapses; the other takes no parameters, so the thread stays blocked until the matching notify() is called.

9. The difference between wait()/notify() and suspend()/resume()

At first glance wait() and notify() seem to do the same job as suspend() and resume(), but in fact they are quite different. The core difference is that none of the methods described earlier releases the lock (if one is held) while blocking, whereas this pair does the opposite. This core difference leads to a series of further differences.

First, all the methods described earlier belong to the Thread class, but this pair belongs to the Object class, which means every object has them. This seems surprising at first but is actually quite natural: since this pair releases the occupied lock while blocking, and a lock is something every object has, calling any object's wait() method blocks the current thread and releases the lock on that object. Conversely, calling an object's notify() method wakes a randomly selected thread from among those blocked in that object's wait() (although it does not actually become runnable until it reacquires the lock).

Second, all the methods described earlier can be called anywhere, but this pair must be called inside a synchronized method or block. The reason is simple: only inside a synchronized method or block does the current thread hold the lock, and only a held lock can be released. Likewise, the lock on the object whose wait()/notify() is being called must be owned by the current thread, so that it can be released. Therefore these calls must be placed in a synchronized method or block whose locked object is the same object on which the pair of methods is invoked. If this condition is not met, the program still compiles, but an IllegalMonitorStateException is thrown at run time.

These characteristics of wait() and notify() mean they are often used together with the synchronized keyword. Comparing them with operating-system inter-process communication mechanisms reveals their similarity: the synchronized method or block provides functionality similar to operating-system primitives, in that their execution is not disturbed by the multithreading mechanism, while this pair of methods corresponds to the block and wakeup primitives. Their combination allows us to implement the sophisticated inter-process communication algorithms found in operating systems (such as semaphore algorithms), and to solve all kinds of complex inter-thread communication problems.

Two final points about the wait() and notify() methods:

First: calling notify() wakes a randomly selected thread from among those blocked in the object's wait(); we cannot predict which thread will be chosen, so program carefully and avoid depending on this nondeterminism.

Second: besides notify(), the method notifyAll() plays a similar role; the only difference is that notifyAll() unblocks all threads blocked in the object's wait() at once. Of course, only the thread that then acquires the lock can enter the runnable state.

Speaking of blocking, deadlock cannot go unmentioned; a brief analysis shows that calls to suspend() and to wait() without a timeout can both produce deadlock. Unfortunately, Java does not support deadlock avoidance at the language level, so we must program carefully to avoid it.

We have now analyzed the various thread-blocking methods in Java, focusing on wait() and notify(): they are the most powerful and flexible, but that also makes them easy to use inefficiently and error-prone. In practice we should apply the various methods flexibly to best achieve our goals.

11. The conditions under which deadlock occurs

1. Mutual exclusion: a resource can be used by only one process at a time.
2. Hold and wait: a process blocked waiting for a resource keeps holding the resources it has already acquired.
3. No preemption: resources already acquired by a process cannot be forcibly taken away before it finishes using them.
4. Circular wait: several processes form a circular chain, each waiting for a resource held by the next.

12. Why must wait() and notify()/notifyAll() be called inside a synchronized block?

This is mandated by the JDK: the object's lock must be acquired before wait() or notify()/notifyAll() can be called.

How do wait() and notify()/notifyAll() differ in when they give up the object monitor?

They differ as follows: wait() releases the object monitor immediately, whereas a thread calling notify()/notifyAll() gives up the monitor only after the rest of its synchronized code has finished executing.

13. The difference between wait() and sleep()

Both have been described in detail above; here is a summary:

    • sleep() comes from the Thread class, while wait() comes from the Object class. During sleep() the thread does not release the object lock; a thread calling wait() releases the object lock.

    • A sleeping thread does not give up its resources (its lock stays held); wait() yields them so that other threads can occupy the CPU.

    • sleep(milliseconds) requires a sleep time to be specified and wakes up automatically when it elapses; wait() (without a timeout) must be paired with notify() or notifyAll().

14. Why are wait, notify and notifyAll not in the Thread class?

One obvious reason is that the locks Java provides are object-level rather than thread-level: every object has a lock, which is acquired through a thread. Calling wait() on an object is meaningful precisely because the thread is waiting on that object's lock. If wait() were defined in the Thread class, it would not be obvious which lock the thread was waiting for. Simply put, since wait, notify and notifyAll are lock-level operations, and locks belong to objects, they are defined in the Object class.

15. How do you wake up a blocked thread?

If the thread is blocked in wait(), sleep() or join(), it can be interrupted, which wakes it by making the blocking call throw an InterruptedException. If the thread is blocked on I/O, there is little Java code can do directly, because I/O is implemented by the operating system and Java code has no way to reach into it.
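A minimal sketch of waking a sleeping thread with interrupt() (the class and helper names are mine): the interrupt makes sleep() throw InterruptedException inside the blocked thread.

```java
public class InterruptDemo {
    // Wakes a thread blocked in sleep() by interrupting it;
    // returns true if the sleeper observed the InterruptedException.
    static boolean wakeSleeper() throws InterruptedException {
        final boolean[] woken = {false};
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000); // would block for a minute
            } catch (InterruptedException e) {
                woken[0] = true;      // interrupt() made sleep() throw
            }
        });
        sleeper.start();
        sleeper.interrupt(); // sets the interrupt status; sleep() throws immediately
        sleeper.join();
        return woken[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(wakeSleeper()); // true
    }
}
```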


17. The difference between synchronized and ReentrantLock

synchronized is a keyword, just like if, else, for and while, whereas ReentrantLock is a class; that is the essential difference between the two. Because ReentrantLock is a class, it provides more numerous and more flexible features than synchronized: it can be extended, can have methods, and can have class variables. ReentrantLock's extra flexibility over synchronized shows in several points:
(1) ReentrantLock can set a timeout for acquiring the lock, which helps avoid deadlock
(2) ReentrantLock can report various information about the lock's state
(3) ReentrantLock can flexibly implement multiple condition notifications
In addition, the two locking mechanisms actually differ: ReentrantLock parks threads via Unsafe's park method, while synchronized operates on the mark word in the object header.

18. What is FutureTask?

This was actually mentioned earlier: FutureTask represents an asynchronous computation. A FutureTask can be constructed with a concrete Callable implementation, and then supports waiting for the result of the asynchronous computation, checking whether it has completed, cancelling the task, and so on. Also, because FutureTask implements the Runnable interface, it can be submitted to a thread pool.

19. What happens if a runtime exception occurs in a thread?

If the exception is not caught, the thread stops executing. Another important point: if the thread holds an object's monitor, the monitor is released immediately.
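One common way to observe such a crash is Thread.setUncaughtExceptionHandler; a minimal sketch (class and method names are mine): without a handler, the stack trace just goes to stderr and the thread dies.

```java
import java.util.concurrent.atomic.AtomicReference;

public class UncaughtDemo {
    // Starts a thread that throws, and captures the exception via the handler.
    static String handleCrash() throws InterruptedException {
        AtomicReference<String> seen = new AtomicReference<>();
        Thread t = new Thread(() -> { throw new IllegalStateException("boom"); });
        t.setUncaughtExceptionHandler((thread, ex) -> seen.set(ex.getMessage()));
        t.start();
        t.join(); // the thread has stopped executing after the uncaught exception
        return seen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handleCrash()); // boom
    }
}
```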

20. What kinds of locks are there in Java?

    • Spin lock: enabled by default since JDK 1.6. Observation shows that the locked state of shared data usually lasts only a very short time, and suspending and resuming a thread for such a short wait is wasteful. So a thread requesting the lock waits a moment without giving up its processor time, to see whether the lock holder releases the lock quickly. To wait, the thread runs a busy loop; that loop is the spin. JDK 6 also introduced the adaptive spin lock: the wait time is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock's owner.

    • Biased lock: a lock optimization introduced in JDK 1.6 to eliminate synchronization primitives on data in the uncontended case and further improve performance. The lock is "biased" toward the first thread that acquires it: if the lock is never acquired by another thread afterward, the thread holding the biased lock never needs to synchronize again. Biased locking improves performance for code that is synchronized but uncontended, which means it is not always good for a program: if most locks in a program are accessed by several different threads, the bias mode is redundant. Analyze the specific situation and decide whether to use it.

    • Lightweight lock: biased locks and lightweight locks were introduced to reduce the performance cost of acquiring and releasing locks, so in Java SE 1.6 a lock has four states: unlocked, biased, lightweight and heavyweight, and it upgrades gradually as contention grows. A lock can be upgraded but not downgraded, which means that once a biased lock has been upgraded to a lightweight lock, it cannot go back to being a biased lock.

21. How do two threads share data?

Threads can share an object and then coordinate through wait/notify/notifyAll or await/signal/signalAll. For example, the blocking queue BlockingQueue is designed precisely for sharing data between threads.

22. How do you use wait() correctly? With if or with while?

The wait() method should be called in a loop, because by the time the thread gets the CPU and resumes, the condition may no longer hold, so it is better to loop, re-checking the condition before proceeding. The following is the standard idiom for using wait and notify:

synchronized (obj) {
    while (condition does not hold)
        obj.wait(); // releases the lock, and reacquires it on wakeup
    ... // perform action appropriate to condition
}

23. What is the thread-local variable ThreadLocal?

A thread-local variable is a variable confined to a single thread: it is owned by the thread itself and is not shared among threads. Java provides the ThreadLocal class to support thread-local variables; it is one way to achieve thread safety. However, be especially careful when using thread-local variables in a managed environment (such as a web server), where worker threads outlive any single application request. If a thread-local variable is not released after the work completes, the Java application risks a memory leak.

24. What is the role of ThreadLocal?

Simply put, ThreadLocal trades space for time: each thread maintains its own ThreadLocal.ThreadLocalMap, so the data is isolated per thread rather than shared, and there is naturally no thread-safety problem.
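A minimal sketch of the per-thread isolation described above (class and method names are mine): each thread sees only its own copy, so no synchronization is needed.

```java
public class ThreadLocalDemo {
    // Each thread gets its own copy, initialized to 0.
    static final ThreadLocal<Integer> local = ThreadLocal.withInitial(() -> 0);

    // Sets the calling thread's copy to 100, then increments a *different*
    // thread's copy; returns what that other thread saw.
    static int bumpInNewThread() throws InterruptedException {
        local.set(100); // the calling thread's copy
        final int[] other = new int[1];
        Thread t = new Thread(() -> {
            local.set(local.get() + 1); // starts from this thread's own initial 0
            other[0] = local.get();
        });
        t.start();
        t.join();
        return other[0]; // 1 -- untouched by the caller's 100
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(bumpInNewThread()); // 1
        System.out.println(local.get());       // 100 -- the caller's copy is unchanged
    }
}
```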

25. What is the role of the producer-consumer model?

(1) It improves the efficiency of the whole system by balancing the producer's production capacity against the consumer's consumption capacity; this is the producer-consumer model's most important role.
(2) Decoupling, which is another role of the producer-consumer model: producers and consumers have fewer connections to each other, so each can evolve independently without constraining the other.

26. Write a producer-consumer queue

It can be implemented with a blocking queue, or with wait/notify.
Implementation using a blocking queue:

import java.util.Random;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer
public class Producer implements Runnable {
    private final BlockingQueue<Integer> queue;

    public Producer(BlockingQueue<Integer> q) {
        this.queue = q;
    }

    @Override
    public void run() {
        try {
            while (true) {
                Thread.sleep(1000); // simulate time-consuming work
                queue.put(produce());
            }
        } catch (InterruptedException e) {
        }
    }

    private int produce() {
        int n = new Random().nextInt(10000);
        System.out.println("Thread: " + Thread.currentThread().getId() + " produce: " + n);
        return n;
    }
}

// Consumer
public class Consumer implements Runnable {
    private final BlockingQueue<Integer> queue;

    public Consumer(BlockingQueue<Integer> q) {
        this.queue = q;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Thread.sleep(2000); // simulate time-consuming work
                consume(queue.take());
            } catch (InterruptedException e) {
            }
        }
    }

    private void consume(Integer n) {
        System.out.println("Thread: " + Thread.currentThread().getId() + " consume: " + n);
    }
}

// Test
public class Main {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(100);
        Producer p = new Producer(queue);
        Consumer c1 = new Consumer(queue);
        Consumer c2 = new Consumer(queue);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}

Implementation using wait/notify:

This approach is the most classic one, so it is not spelled out in full here.
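For completeness, a minimal sketch of the classic wait/notify version (the class name is mine, not from the original): wait() is called in a while loop, as question 22 recommends, and notifyAll() wakes the other side.

```java
import java.util.LinkedList;
import java.util.Queue;

public class WaitNotifyQueue {
    private final Queue<Integer> buffer = new LinkedList<>();
    private final int capacity;

    WaitNotifyQueue(int capacity) { this.capacity = capacity; }

    public synchronized void put(int n) throws InterruptedException {
        while (buffer.size() == capacity) // always re-check the condition in a loop
            wait();                       // releases the lock while blocked
        buffer.add(n);
        notifyAll();                      // wake consumers waiting for data
    }

    public synchronized int take() throws InterruptedException {
        while (buffer.isEmpty())
            wait();
        int n = buffer.remove();
        notifyAll();                      // wake producers waiting for space
        return n;
    }
}
```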

27. What happens if the thread pool's queue is full when you submit a task?

If you're using a LinkedBlockingQueue, i.e. an unbounded queue, nothing special happens: the task is simply added to the blocking queue to wait for execution, since a LinkedBlockingQueue can be regarded as nearly infinite and can hold an unlimited number of tasks. If you're using a bounded queue, say an ArrayBlockingQueue, the task is first offered to the queue; once the queue is full, the RejectedExecutionHandler applies its rejection policy to the overflowing task, which by default is AbortPolicy.
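A minimal sketch of the bounded-queue case (class and method names are mine; the 500 ms sleep is just to keep the single worker busy): with one worker thread and a one-slot ArrayBlockingQueue, the third submitted task triggers the default AbortPolicy.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    // One worker thread plus a bounded queue with one slot: the third task overflows.
    static boolean thirdTaskRejected() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // the default handler

        Runnable slow = () -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
        };
        boolean rejected = false;
        try {
            pool.execute(slow); // picked up by the single worker thread
            pool.execute(slow); // parked in the queue
            pool.execute(slow); // queue full, no spare threads -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            rejected = true;
        } finally {
            pool.shutdownNow();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(thirdTaskRejected()); // true
    }
}
```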

28. Why use a thread pool?

To avoid frequently creating and destroying threads by reusing thread objects. In addition, a thread pool lets you flexibly control the degree of concurrency according to your project's needs.

29. What thread scheduling algorithm is used in Java?

Preemptive scheduling. After a thread uses up its CPU time, the operating system computes an overall priority from data such as thread priority and thread starvation, and allocates the next time slice to some thread for execution.

30. What is the role of Thread.sleep(0)?

Because Java uses a preemptive thread scheduling algorithm, one thread may get control of the CPU disproportionately often. To let lower-priority threads get a chance at the CPU, you can call Thread.sleep(0) to manually trigger the operating system's time-slice allocation; this is one way to balance control of the CPU.

31. What is CAS?

CAS, short for Compare-And-Swap, means compare and replace. Suppose there are three operands: the memory value V, the old expected value A, and the new value B. Only if the expected value A matches the memory value V is the memory value set to B, returning true; otherwise nothing is done and false is returned. Of course, CAS must be paired with volatile variables to guarantee that the value read is always the latest one in main memory; otherwise the old expected value A would, for a given thread, remain forever the same value A, and once one CAS operation failed it could never succeed.
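A minimal sketch of the semantics just described, using AtomicInteger's compareAndSet (class and helper names are mine): the swap only happens when the expected value matches.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    static final AtomicInteger value = new AtomicInteger(5); // the memory value V

    static boolean swapExpecting(int expected, int update) {
        // Atomically: if value == expected, set it to update and return true;
        // otherwise leave it unchanged and return false.
        return value.compareAndSet(expected, update);
    }

    public static void main(String[] args) {
        System.out.println(swapExpecting(5, 6)); // true: 5 matched, value is now 6
        System.out.println(swapExpecting(5, 7)); // false: value is 6, not 5 -- no change
        System.out.println(value.get());         // 6
    }
}
```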

32. What are optimistic locking and pessimistic locking?

Optimistic locking: an optimistic lock assumes contention is rare, so instead of holding a lock it modifies the variable with compare-and-replace as a single atomic operation; a failure indicates a conflict, which should be handled by appropriate retry logic.

Pessimistic locking: a pessimistic lock assumes contention will always occur, so every operation on the resource takes an exclusive lock, like synchronized: it locks first, no questions asked, and then operates on the resource.

33. What is the concurrency level of ConcurrentHashMap?

The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, which means up to 16 threads can operate on a ConcurrentHashMap at the same time. This is also ConcurrentHashMap's biggest advantage over Hashtable: can Hashtable ever let two threads fetch its data at the same time?

34. How does ConcurrentHashMap work?

The implementation of ConcurrentHashMap differs between JDK 1.6 and JDK 1.8.
JDK 1.6:

ConcurrentHashMap is thread-safe, but it achieves thread safety differently from Hashtable. Hashtable locks the entire hash table structure: while one thread holds the lock, all other threads must block waiting for it to be released. ConcurrentHashMap uses lock striping: instead of locking the whole hash table, it locks locally, so while one thread holds a local lock, other threads can still access the other parts of the table.
Concretely, ConcurrentHashMap contains an array of Segment objects, each of which guards one portion of the table.
JDK 1.8:

In JDK 8, ConcurrentHashMap no longer uses Segment lock striping; instead it uses the optimistic CAS algorithm to handle synchronization, and its underlying structure is "array + linked list + red-black tree".

37. The differences between CyclicBarrier and CountDownLatch

The two classes are very similar: both live under java.util.concurrent and can be used to indicate that code has run to a certain point. The differences are:

    • After a thread reaches a CyclicBarrier's barrier point, it stops running until all threads have reached that point, and then all the threads resume; a CountDownLatch is different: a thread that reaches the point merely decrements the count by 1 and continues running.

    • A CyclicBarrier can trigger only one task (its barrier action), while a CountDownLatch can release multiple waiting tasks.

    • A CyclicBarrier is reusable; a CountDownLatch is not: once its count reaches 0, it can no longer be used.
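A minimal CountDownLatch sketch (class and method names are mine): the caller blocks in await() until every worker has counted down once, and the spent latch stays at 0.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // The caller waits until n workers have each counted down once.
    static long runWorkers(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++)
            new Thread(done::countDown).start(); // each worker just "arrives"
        done.await();           // blocks until the count reaches 0
        return done.getCount(); // 0 -- a spent latch cannot be reused
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(3)); // 0
    }
}
```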

39. Is the ++ operator thread-safe in Java?

No, it is not a thread-safe operation. It involves multiple instructions: reading the variable's value, incrementing it, and storing it back into memory, and these steps from multiple threads may interleave, causing updates to be lost.
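A sketch of the lost-update problem and its atomic fix (class and method names are mine): several threads hammer a plain ++ counter and an AtomicInteger side by side; only the atomic one reliably reaches the full total.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PlusPlusDemo {
    static int unsafeCounter = 0;
    static final AtomicInteger safeCounter = new AtomicInteger();

    // Each of `threads` threads performs `increments` increments on both counters.
    static int[] race(int threads, int increments) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < increments; j++) {
                    unsafeCounter++;               // read-modify-write: updates can be lost
                    safeCounter.incrementAndGet(); // atomic CAS loop: never lost
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return new int[] { unsafeCounter, safeCounter.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = race(4, 100_000);
        // The unsafe total is often below 400000; the atomic total is always exact.
        System.out.println("unsafe: " + r[0] + "  atomic: " + r[1]);
    }
}
```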

40. What good practices do you follow in multithreaded development?

    • Give each thread a name

    • Minimize the scope of synchronization

    • Prefer volatile where it suffices

    • Wherever possible, use higher-level concurrency utilities such as BlockingQueue and Semaphore for thread communication instead of wait() and notify()

    • Prefer concurrent containers over synchronized containers

    • Consider using a thread pool

About the volatile keyword

1. Can I create a volatile array?

Yes, you can create a volatile array in Java, but volatile applies only to the reference to the array, not to the array as a whole. If you change which array the reference points to, that change is protected by volatile, but if multiple threads change the elements of the array concurrently, the volatile modifier offers no protection.

2. Can volatile turn a non-atomic operation into an atomic one?

A typical example is a long member variable in a class. If you know that the member variable will be accessed by multiple threads, such as a counter or a price, you had better make it volatile. Why? Because reading a long variable in Java is not atomic: it takes two steps, so if one thread is modifying the long's value, another thread may see only half of it (the first 32 bits). Reads and writes of a volatile long or double, however, are atomic.

So one practice is to mark long and double variables volatile so that reads and writes of them become atomic. double and long are 64 bits wide, so reading them is split into two parts, first the first 32 bits and then the remaining 32, and that process is not atomic; but reads and writes of a volatile long or double are atomic. Another effect of the volatile modifier is to provide a memory barrier, which matters, for example, in distributed frameworks. Simply put, the Java memory model inserts a write barrier after a write to a volatile variable and a read barrier before a read of one. This means that when you write a volatile field, you are guaranteed that any thread can see the value you wrote, and, because the barrier flushes pending writes, every update made before the volatile write is visible to all threads as well.

3. What are the guarantees for volatile type variables?

volatile serves two main purposes: 1. preventing instruction reordering; 2. guaranteeing visibility. For example, the JVM or the JIT may reorder statements for better performance, but a volatile variable will not be reordered with other statements even when there is no synchronized block. volatile provides a happens-before guarantee, ensuring that one thread's modification is visible to other threads. In some cases volatile also provides atomicity, as with reads of 64-bit types: plain long and double reads are not atomic (low 32 bits and high 32 bits), but reads of volatile double and long are atomic.
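A minimal sketch of the visibility guarantee (class and method names are mine): a worker spins on a volatile flag, and the writer's update becomes visible to it promptly. Without volatile, the JIT could legally hoist the flag read and the loop might never terminate.

```java
public class VolatileFlagDemo {
    // volatile guarantees the worker sees the writer's update.
    static volatile boolean stop = false;

    static boolean stopsPromptly() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) { /* spin until the flag's new value becomes visible */ }
        });
        worker.start();
        stop = true;       // volatile write: visible to the worker's next read
        worker.join(1000); // should finish almost immediately
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(stopsPromptly()); // true
    }
}
```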


