Java Multithreading and High Concurrency: Learning Notes


1. Computer Systems

The cache sits between main memory and the processor as a buffer: the data needed for a computation is copied into the cache so the computation can run quickly, and when the computation finishes the result is synchronized back from the cache to main memory, so the processor does not have to wait for slow memory reads and writes.

Cache coherence: in a multiprocessor system the processors share the same main memory. When multiple processors operate on the same region of memory, their caches can become inconsistent, so certain protocols (cache coherence protocols) must be followed when data is synchronized back to main memory.

Out-of-order execution optimization: the processor may execute instructions out of program order so that its internal execution units are utilized as fully as possible; it then reorganizes the results so that they match the sequential program semantics.

2. Java Memory Model

Its goal is to define the access rules for every variable in the program, i.e., how variables are stored into and read from memory. Here "variable" includes instance fields, static fields, and the elements that make up arrays, but excludes local variables and method parameters (which are thread-private).

    1. All variables are stored in main memory (a part of the virtual machine's memory).
    2. Each thread has its own working memory, which holds copies of the main-memory variables used by that thread. All of a thread's operations on variables must be performed in working memory; it cannot read or write variables in main memory directly.
    3. Threads cannot directly access variables in each other's working memory; variable values are transferred between threads through main memory.

Interaction operations between main memory and working memory:

Lock: acts on a variable in main memory; it marks the variable as exclusively owned by one thread.

Read: acts on a variable in main memory; it transfers the value of the variable from main memory into the thread's working memory.

Load: acts on a variable in working memory; it puts the value obtained by the read operation into the variable copy in working memory.

Use: acts on a variable in working memory; it passes the value of the variable in working memory to the execution engine.

Assign: acts on a variable in working memory; it assigns a value received from the execution engine to the variable in working memory.

Store: acts on a variable in working memory; it transfers the value of the variable in working memory to main memory.

Write: acts on a variable in main memory; it puts the value obtained by the store operation into the variable in main memory.

Unlock: acts on a variable in main memory; it releases a variable that is in the locked state so that it can be locked by other threads.

Rules:

    1. read and load must appear together, and store and write must appear together; none of these operations is allowed to appear on its own.
    2. A thread is not allowed to discard its most recent assign operation; a variable that has changed in working memory must be synchronized back to main memory.
    3. A thread is not allowed to synchronize data from its working memory back to main memory without a preceding assign operation.
    4. A new variable can only come into existence in main memory.
    5. A variable may be locked by only one thread at a time, but the same thread may lock it repeatedly.
    6. Performing a lock operation on a variable clears the value of that variable in working memory; before the execution engine uses the variable, read and load must be re-executed.
    7. A thread is not allowed to unlock a variable that has not previously been locked by a lock operation.
    8. Before performing an unlock operation on a variable, the variable must first be synchronized back to main memory (with store and write).

3. Volatile Variables

    1. Visibility: a volatile variable is visible to all threads (a small sketch follows this list). Each thread must refresh its value from main memory before using it, so the execution engine does not see stale, inconsistent values. Because compound operations on a volatile variable are still not atomic, relying on volatile alone is only safe when:

The result of the operation does not depend on the current value of the variable, or it is guaranteed that only a single thread ever modifies the variable.

The variable does not need to participate in invariants together with other state variables.

    2. It forbids instruction-reordering optimizations. An ordinary variable only guarantees that the correct result is obtained at the points where the method's execution depends on the assignment result; it does not guarantee that the order of assignments matches the order in the program code.
    3. For a volatile variable, load must appear together with use, and assign must appear together with store; this forces every read to come from main memory and every write to be flushed back immediately.
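
A minimal sketch of the visibility guarantee (class and field names are made up for illustration): a volatile stop flag is the typical case where the result does not depend on the current value and only one thread writes it.

    public class VolatileStopDemo {
        // Without volatile, the worker might loop forever on a stale cached value.
        private static volatile boolean stopRequested = false;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                long i = 0;
                while (!stopRequested) {   // volatile read: always sees the latest value
                    i++;
                }
                System.out.println("worker stopped after " + i + " iterations");
            });
            worker.start();

            Thread.sleep(1000);
            stopRequested = true;          // volatile write: immediately visible to the worker
            worker.join();
        }
    }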

4. Atomicity, Visibility, and Ordering

Atomicity: reads and writes of basic data types are atomic, and operations inside synchronized blocks are atomic.

Visibility: when one thread modifies the value of a shared variable, other threads can immediately see the change. Besides volatile, synchronized (through rule 8: a variable must be synchronized back to main memory before unlock) and final also guarantee visibility. Once a final field has been initialized in the constructor, and the constructor has not leaked the this reference, the value of the final field is visible to other threads.

Ordering: volatile itself carries the semantics of forbidding instruction reordering, while synchronized gets its ordering guarantee from rule 5 (a variable may be locked by only one thread at a time), which means that two synchronized blocks holding the same lock can only be entered serially.

5. The Happens-Before Principle

Happens-before is a partial order between operations defined by the Java memory model: if operation A happens-before operation B, then the effects of operation A can be observed by operation B before B occurs.

Program order rule: within a thread, each operation happens-before the operations that follow it according to the control-flow order of the code.

Monitor lock rule: an unlock operation happens-before a subsequent lock operation on the same lock.

Volatile variable rule: a write to a volatile variable happens-before a subsequent read of that variable.

Thread start rule: the start() method of a Thread object happens-before every action of the started thread.

Thread termination rule: all operations in a thread happen-before the detection that this thread has terminated.

Thread interruption rule: a call to interrupt() on a thread happens-before the point where the interrupted thread's code detects the interruption.

Object finalization rule: the completion of an object's initialization happens-before the start of its finalize() method.

Transitivity: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.

The order of operations in time has essentially no direct relationship to the happens-before principle: an operation that occurs earlier in time does not necessarily happen-before a later one, and vice versa.

6. Thread Implementation

Implementation using kernel threads:

Kernel Thread (KLT): a thread supported directly by the operating system kernel. The kernel performs thread switching, schedules the threads through its scheduler, and is responsible for mapping the threads' tasks onto the processors.

Light Weight Process (LWP): each lightweight process is backed by one kernel thread.

Limitations: the various thread operations require system calls (system calls are relatively expensive and require switching back and forth between user mode and kernel mode); each lightweight process consumes a certain amount of kernel resources, so the number of lightweight processes a system can support is limited.

Implementation using user threads:

User thread: built entirely in the thread library in user space; the system kernel cannot perceive the existence of these threads. The creation, synchronization, destruction, and scheduling of user threads are completed entirely in user mode without kernel assistance, but all thread operations must be handled by the user program itself.

Hybrid implementation:

Kernel threads and user threads are used together. The lightweight processes supported by the operating system act as a bridge between user threads and kernel threads.

The Sun JDK, on both Windows and Linux, uses a one-to-one threading model: each Java thread is mapped to one lightweight process.

7. Thread Scheduling

Thread scheduling is the process by which the system assigns processor time to threads. There are two kinds: cooperative and preemptive.

Cooperative scheduling: a thread's execution time is controlled by the thread itself; after finishing its own work, the thread actively notifies the system to switch to another thread. Drawback: the execution time of a thread is not controllable.

Preemptive scheduling: the system allocates execution time to each thread, and thread switching is not decided by the threads themselves. Java uses this scheduling method.

Thread priority: on some platforms (where the operating system has fewer thread priority levels than Java), different Java priorities may actually become the same OS priority; the priority may also be changed by the system itself.

8. Thread States

Thread states:

New: created but not yet started.

Runnable: the thread may be executing, or waiting for the operating system to allocate execution time to it.

Waiting (indefinite wait): the thread will not be allocated processor time until it is explicitly woken up by another thread.

Entered by: Object.wait() without a timeout parameter, or Thread.join() without a timeout parameter.

Timed waiting: the system will automatically wake the thread up after a certain amount of time.

Entered by: Object.wait() with a timeout parameter, Thread.join() with a timeout parameter, or Thread.sleep().

Blocked: the thread is waiting to acquire an exclusive lock in order to enter a synchronized region.

Terminated: the thread has finished execution.
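
A small sketch (the class name and timings are illustrative, and the intermediate states are only what you will typically observe) that inspects these states through Thread.getState():

    public class ThreadStateDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(500);          // TIMED_WAITING while sleeping
                } catch (InterruptedException ignored) {
                }
            });
            System.out.println(t.getState());   // NEW: created but not started
            t.start();
            System.out.println(t.getState());   // RUNNABLE (typically)
            Thread.sleep(100);                  // give the thread time to reach sleep()
            System.out.println(t.getState());   // TIMED_WAITING (typically)
            t.join();
            System.out.println(t.getState());   // TERMINATED
        }
    }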

9. Thread Safety

Thread safety: when multiple threads access an object, if the callers do not need to consider how these threads are scheduled and interleaved at run time, and do not need to perform any additional synchronization or other coordination, and calls to the object still produce correct results, then the object is thread-safe.

Immutable: an immutable object is thread-safe as long as it is correctly constructed. For a basic data type this means declaring it final; if the shared data is an object, you must ensure that none of the object's methods can affect its state (as with java.lang.String). One way to achieve this is to declare all fields carrying state as final, as the Integer class does. Examples of immutable types: enumeration types and most subclasses of Number (but not AtomicInteger and AtomicLong).
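
A minimal sketch of such an immutable class (the class name and fields are hypothetical): all state is final, assigned once in the constructor, and never exposed for modification.

    public final class Point {            // final: no subclass can add mutable state
        private final int x;              // state is final and set only in the constructor
        private final int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() { return x; }
        public int getY() { return y; }

        // "Modification" returns a new object instead of changing this one,
        // in the same spirit as String.concat() or Integer.valueOf().
        public Point translate(int dx, int dy) {
            return new Point(x + dx, y + dy);
        }
    }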

Absolute thread safety: callers never need any additional synchronization regardless of the runtime environment; this is usually very costly to achieve, and most classes described as thread-safe in the Java API are not absolutely thread-safe.

Relative thread safety: individual operations on the object are thread-safe, although particular sequences of consecutive calls may still require additional synchronization on the caller's side. This is thread safety in the usual sense.

Thread-compatible: the object itself is not thread-safe, but it can be used safely in a concurrent environment as long as the caller applies synchronization correctly.

Thread-hostile: code that cannot be used concurrently in a multithreaded environment regardless of whether the callers take synchronization measures. Examples: System.setIn(), System.setOut(), and System.runFinalizersOnExit().

10. How to Implement Thread Safety

    1. Mutual-exclusion synchronization: when multiple threads access shared data concurrently, the shared data is guaranteed to be used by only one thread at a time. Means of mutual exclusion: critical sections, mutexes, and semaphores.

The synchronized keyword: after compilation, the monitorenter and monitorexit bytecode instructions are generated before and after the synchronized block. Both instructions take a reference-type parameter that indicates the object to lock and unlock. If the object parameter is not specified explicitly, the lock object is the corresponding object instance or Class object, depending on whether synchronized modifies an instance method or a static method.

When executing the monitorenter instruction, the thread first tries to acquire the lock of the object. If the object is not locked, or the current thread already owns the lock, the lock counter is incremented by 1; correspondingly, executing monitorexit decrements the counter by 1, and when the counter reaches 0 the lock is released. If the thread fails to acquire the object lock, it blocks and waits. A small sketch follows.
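
A minimal sketch of mutual-exclusion synchronization with synchronized (the counter class is made up): the monitorenter/monitorexit pair is emitted around the block, with this as the implicit lock object.

    public class SyncCounter {
        private int count = 0;

        public void increment() {
            synchronized (this) {          // monitorenter on this object's monitor
                count++;                   // only one thread at a time executes this
            }                              // monitorexit (also emitted on the exception path)
        }

        public synchronized int get() {    // equivalent to synchronizing on "this"
            return count;
        }
    }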

Advanced features of ReentrantLock compared with synchronized:

Interruptible waiting: when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and do something else instead.

Fair lock: when multiple threads wait for the same lock, they must acquire it in the order in which they requested it; with a non-fair lock, when the lock is released, any waiting thread has a chance to acquire it. The lock used by synchronized is non-fair, and ReentrantLock is also non-fair by default.

Binding multiple conditions: a single ReentrantLock object can be bound to several Condition objects at the same time. A sketch is shown below.
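
A sketch of binding several Condition objects to one ReentrantLock (a simplified bounded buffer; the class name and capacity are illustrative), roughly the pattern ArrayBlockingQueue uses internally:

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    public class BoundedBuffer<T> {
        private final ReentrantLock lock = new ReentrantLock();    // new ReentrantLock(true) for a fair lock
        private final Condition notFull  = lock.newCondition();    // one lock,
        private final Condition notEmpty = lock.newCondition();    // two conditions
        private final Queue<T> items = new ArrayDeque<>();
        private final int capacity = 16;

        public void put(T item) throws InterruptedException {
            lock.lockInterruptibly();            // waiting here can be interrupted
            try {
                while (items.size() == capacity) {
                    notFull.await();             // wait only on the "not full" condition
                }
                items.add(item);
                notEmpty.signal();               // wake a consumer, not another producer
            } finally {
                lock.unlock();
            }
        }

        public T take() throws InterruptedException {
            lock.lockInterruptibly();
            try {
                while (items.isEmpty()) {
                    notEmpty.await();
                }
                T item = items.remove();
                notFull.signal();
                return item;
            } finally {
                lock.unlock();
            }
        }
    }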

    2. Non-blocking synchronization:

An optimistic concurrency strategy based on conflict detection: perform the operation first; if no other thread contends for the shared data, the operation succeeds; if there is contention and a conflict arises, apply a compensating measure (most commonly, keep retrying until the operation succeeds).

AtomicInteger and the other atomic classes provide operations implemented on top of CAS (compare-and-swap) instructions.
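
A sketch of the retry-until-success pattern on top of AtomicInteger's CAS operation (the class and method are illustrative; AtomicInteger.incrementAndGet() already does the equivalent internally):

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasCounter {
        private final AtomicInteger value = new AtomicInteger(0);

        public int increment() {
            int current;
            int next;
            do {
                current = value.get();                       // read the current value
                next = current + 1;                          // compute the new value
            } while (!value.compareAndSet(current, next));   // CAS; retry if another thread won the race
            return next;
        }
    }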

    3. No-synchronization schemes:

Reentrant code: code whose execution can be interrupted at any point so that another piece of code can run, without causing any error in the original program once control returns. Characteristics: it does not rely on data stored on the heap or on shared system resources, all the state it uses is passed in through parameters, and it does not call non-reentrant methods. If a method's result is predictable, i.e., the same input always produces the same output, it satisfies the requirement of reentrancy.

Thread-local storage: if the data needed by a piece of code must be shared with other code, check whether the code that shares the data is guaranteed to execute in the same thread; if so, the visibility of the shared data can be confined to a single thread.

The ThreadLocal class

ThreadLocal: a thread-level local variable. It provides each thread that uses the variable with its own independent copy, so each thread can modify its copy without affecting the copies held by other threads. ThreadLocal instances are typically private static fields in a class.
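
A minimal sketch of the usual pattern (the class name is made up): a per-thread SimpleDateFormat, since SimpleDateFormat itself is not thread-safe.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class DateFormatter {
        // One SimpleDateFormat per thread; a private static field, as noted above.
        private static final ThreadLocal<SimpleDateFormat> FORMAT =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

        public static String format(Date date) {
            return FORMAT.get().format(date);   // each thread uses its own copy
        }
    }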

11. Lock Optimization
    1. Spin lock

Instead of suspending a thread while it waits for a lock, let it execute a busy loop (spin). This requires the physical machine to have more than one processor. Spin waiting avoids the overhead of thread switching but occupies processor time, so if the lock is held only for a short time the spin wait works very well; otherwise the spinning thread just wastes processor resources. The default spin count is 10, which can be changed with the parameter -XX:PreBlockSpin.

Adaptive spin lock: the spin time is no longer fixed; it is determined by the previous spin time on the same lock and the state of the lock's owner.

    2. Lock elimination

At run time, the JIT compiler removes locks from code that requires synchronization but where it detects that contention on the shared data is impossible (escape analysis: if none of the data on the heap can escape and be accessed by other threads, it can be treated as stack data, so locks on it can be removed).

    3. Lock coarsening

If the virtual machine detects a string of fragmented operations that all lock the same object, it extends (coarsens) the scope of the lock so that it covers the whole sequence of operations.

Memory layout of objects in the HotSpot virtual machine: the object header is divided into two parts. The first part (Mark Word) stores the object's own runtime data. The other part stores a pointer to the object's type data in the method area; if the object is an array, an additional part stores the array length.

In a 32-bit HotSpot VM, when the object is not locked, 25 of the 32 bits of the Mark Word store the object's hash code, 4 bits store the object's generational age, 2 bits store the lock flag, and 1 bit is fixed at 0.

HotSpot virtual machine object header (Mark Word):

    Store content                                                | Flag bits | State
    Object hash code, object generational age                    | 01        | Unlocked
    Pointer to lock record                                       | 00        | Lightweight locked
    Pointer to heavyweight lock                                  | 10        | Inflated (heavyweight locked)
    Empty, no information recorded                               | 11        | GC mark
    Biased thread ID, epoch timestamp, object generational age   | 01        | Biasable

    4. Lightweight lock

When code enters a synchronized block, if the synchronization object is not locked, the virtual machine first creates a space called the lock record in the current thread's stack frame, storing a copy of the object's current Mark Word. The virtual machine then uses a CAS operation to try to update the object's Mark Word to a pointer to the lock record. If this succeeds, the thread owns the lock on the object. If the update fails, the virtual machine first checks whether the object's Mark Word points into the current thread's stack frame: if so, the current thread already owns the lock on this object; otherwise the lock has been preempted by another thread. If two or more threads contend for the same lock, the lightweight lock is no longer effective and is inflated into a heavyweight lock.

Unlocking process: if the object's Mark Word still points to the thread's lock record, a CAS operation is used to replace the object's current Mark Word with the displaced Mark Word copied into the thread's stack frame. If the replacement succeeds, the whole synchronization process is complete. If it fails, another thread has tried to acquire the lock, and the suspended threads must be woken up when the lock is released.

The premise of the lightweight lock: for most locks, there is no contention during the entire synchronization period.

Traditional locks (heavyweight locks) are implemented with operating-system mutexes.

    5. Biased lock

The aim is to eliminate synchronization primitives entirely when there is no contention, improving performance further. The lock is biased toward the first thread that acquires it; if the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to synchronize on it again.

When the lock is acquired by a thread for the first time, the virtual machine sets the flag bits in the object header to 01 (biased mode) and uses a CAS operation to record the ID of the acquiring thread in the object's Mark Word. If this succeeds, the thread that owns the biased lock can afterwards enter any synchronized block related to this lock without performing any synchronization at all.

When another thread tries to acquire the lock, the bias mode ends. Depending on whether the object is currently locked, it reverts either to the unlocked (bias disabled) state or to the lightweight-locked state.

12. Kernel Mode and User Mode

For operating systems, Intel CPUs provide the Ring0–Ring3 privilege levels (most operating systems use only Ring0 and Ring3).

Ring0 is reserved for operating-system code and device-driver code, which run in kernel mode; Ring3 is used by ordinary user programs, which run in user mode. Code running in kernel mode can access any valid address and perform direct port I/O without restriction, while code running in user mode is subject to many checks by the processor: it can access only the virtual addresses of pages that the page-table entries of its address space allow it to access, and it can directly access only the ports permitted by the I/O permission bitmap in the task state segment (TSS).

13. Common Methods

    1. Object.wait():

The current thread T waits (thread T must hold the lock on the object) until another thread calls the object's notify() or notifyAll() method, or until a specified amount of time has elapsed. Thread T is placed in the object's wait set and the lock is released. On wake-up, interruption, or timeout, thread T is removed from the object's wait set and becomes available for thread scheduling again. Once thread T re-acquires the lock on the object, the synchronization state on that object is restored to exactly what it was when wait() was called, and thread T then returns from the wait() method. If the current thread is interrupted by any thread before or while it is waiting, an InterruptedException is thrown; the exception is not thrown until the lock state of the object has been restored as described above, and when it is thrown the interrupt status of the current thread is cleared.

Only the lock on this object is released; other synchronization resources held by the current thread are not released. A small wait()/notify() sketch follows.
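
A minimal sketch of the wait()/notify() protocol described above (a one-slot mailbox; the names are illustrative). Note that wait() is always called in a loop and inside a synchronized block on the same object:

    public class Mailbox {
        private final Object lock = new Object();
        private String message;                    // null means "empty"

        public void put(String msg) throws InterruptedException {
            synchronized (lock) {                  // the caller must hold the lock to wait on it
                while (message != null) {
                    lock.wait();                   // releases only this lock while waiting
                }
                message = msg;
                lock.notifyAll();                  // wake threads waiting in take()
            }
        }

        public String take() throws InterruptedException {
            synchronized (lock) {
                while (message == null) {
                    lock.wait();
                }
                String msg = message;
                message = null;
                lock.notifyAll();                  // wake threads waiting in put()
                return msg;
            }
        }
    }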

    2. Object.notify()

Wakes up a single thread waiting on this object's lock. This method can only be called by a thread that owns the object's lock.

    3. Thread.sleep()

Causes the currently executing thread to sleep (temporarily cease execution) for the specified number of milliseconds, subject to the precision and accuracy of system timers and schedulers. The thread keeps the monitors it holds, automatically returns to the runnable state afterwards, and does not release any object locks. If any thread interrupts the current thread, an InterruptedException is thrown and the interrupt status of the current thread is cleared. The thread gives up the CPU time allocated to it for the duration of the sleep.

Thread.join(): called on a Thread object, it makes the current thread wait until that thread object finishes.

Thread.yield(): pauses the currently executing thread object and lets other threads execute.

Thread.interrupt()

Interrupts the thread by setting its interrupt status; it does not forcibly stop what the thread is doing. Interrupting a thread that is not alive has no effect.

If the thread is blocked in a call to the Object class's wait() method, or in join() or sleep(), its interrupt status is cleared and it receives an InterruptedException.

Thread.interrupted(): tests whether the current thread has been interrupted, and clears the thread's interrupt status (resetting it to the non-interrupted state). A small sketch of interrupt handling follows.
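
A small sketch of how a blocked thread reacts to interrupt() (the class name and timings are illustrative): the InterruptedException is thrown out of sleep() and the interrupt status is cleared, so the handler restores it if callers further up still need to see it.

    public class InterruptDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                try {
                    Thread.sleep(60_000);                       // blocked in sleep()
                } catch (InterruptedException e) {
                    // The interrupt status was cleared when the exception was thrown;
                    // re-assert it so code further up the call chain can still detect it.
                    Thread.currentThread().interrupt();
                    System.out.println("interrupted while sleeping");
                }
            });
            worker.start();
            Thread.sleep(100);
            worker.interrupt();                                  // sets the worker's interrupt status
            worker.join();
        }
    }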

Thread.isAlive(): a thread is alive if it has been started and has not yet terminated.

Thread.setDaemon(): must be called before the start() method. When the only threads still running are daemon (background) threads, the Java virtual machine exits; otherwise, when the main thread exits, the other non-daemon threads continue to run. A short sketch follows.
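
A short sketch of the daemon behaviour (the task is illustrative): because the background thread is marked as a daemon before start(), the JVM exits as soon as main() returns instead of waiting for the infinite loop.

    public class DaemonDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread heartbeat = new Thread(() -> {
                while (true) {
                    System.out.println("heartbeat");
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            heartbeat.setDaemon(true);   // must be called before start()
            heartbeat.start();
            Thread.sleep(500);           // let a few heartbeats print
            // main() ends here; since only a daemon thread remains, the JVM exits.
        }
    }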

14. Other
    1. When Object's wait(), notify(), or notifyAll() is called, an IllegalMonitorStateException is thrown if the current thread does not hold the object's lock.

    2. If an instance method is declared synchronized, it is equivalent to wrapping the method body in synchronized (this).

If a static method is declared synchronized, it is equivalent to wrapping the method body in synchronized (ClassName.class). When a thread enters a synchronized static method, other threads cannot enter any synchronized static method of that class. A sketch of both equivalences follows.
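
A sketch of the two equivalences described above (the class and method names are made up):

    public class SyncEquivalence {

        public synchronized void instanceMethod() {        // locks the instance: "this"
            // ...
        }

        public void instanceMethodEquivalent() {
            synchronized (this) {                           // same lock as instanceMethod()
                // ...
            }
        }

        public static synchronized void staticMethod() {    // locks the Class object
            // ...
        }

        public static void staticMethodEquivalent() {
            synchronized (SyncEquivalence.class) {           // same lock as staticMethod()
                // ...
            }
        }
    }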

    3. A thread becomes the owner of an object's lock by:
      1. executing a synchronized instance method of that object;
      2. executing the body of a synchronized statement that synchronizes on that object;
      3. for objects of type Class, executing a synchronized static method of that class.

    4. Deadlock:

A deadlock occurs when two or more threads are blocked forever, each waiting for a resource held by another.

It may occur in the following situations:

when two threads each call the other thread's Thread.join();

when two threads use nested synchronized blocks and each thread holds a lock that the other thread needs, so both block waiting for each other. A minimal sketch follows.
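
A minimal sketch of the nested-synchronized-block case (lock names are illustrative): each thread takes the two locks in the opposite order and then blocks waiting for the other. This program deliberately never terminates and has to be killed.

    public class DeadlockDemo {
        private static final Object LOCK_A = new Object();
        private static final Object LOCK_B = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (LOCK_A) {
                    sleep(100);                    // give the other thread time to grab LOCK_B
                    synchronized (LOCK_B) {        // blocks forever: LOCK_B is held by the other thread
                        System.out.println("thread 1 acquired both locks");
                    }
                }
            }).start();

            new Thread(() -> {
                synchronized (LOCK_B) {
                    sleep(100);
                    synchronized (LOCK_A) {        // blocks forever: LOCK_A is held by the other thread
                        System.out.println("thread 2 acquired both locks");
                    }
                }
            }).start();
        }

        private static void sleep(long millis) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException ignored) {
            }
        }
    }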

    5. Calling the Thread class's start() method (which asks for a new thread of execution in which to run the code in the run() method) does not mean the thread's run() method executes immediately; the thread waits to be scheduled by the JVM.

The run() method contains the thread's body, i.e., the code that runs after the thread is started. Calling run() directly does not start a new thread; it simply executes the method in the current thread, as sketched below.
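
A short sketch contrasting the two calls (the class name is illustrative): start() schedules a new thread to execute run(), whereas calling run() directly just runs the method in the current thread.

    public class StartVsRun {
        public static void main(String[] args) {
            Runnable task = () ->
                    System.out.println("running in " + Thread.currentThread().getName());

            new Thread(task, "worker").run();     // no new thread: prints "running in main"
            new Thread(task, "worker").start();   // new thread: prints "running in worker"
                                                  // (once the JVM schedules it)
        }
    }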

Original address: http://www.cnblogs.com/yshb/archive/2012/06/15/2550367.html
