Concurrency overview
>> Synchronization
How to synchronize access to shared resources from multiple threads is one of the most basic problems in multithreaded programming.
When multiple threads access shared data concurrently, the data may be observed in an intermediate or inconsistent state, which breaks the correctness of the program. This is called a race condition, and the code region that accesses the shared data concurrently is called a critical section.
Synchronization coordinates the order in which multiple threads enter critical sections, so that race conditions are avoided.
>> Thread Safety
The core of writing thread-safe code is managing access to state, especially shared, mutable state.
Definition of thread safety: a class is thread-safe if it always behaves correctly when accessed from multiple threads, regardless of how those threads are scheduled or interleaved.
Stateless objects are always thread-safe.
>> Atomicity and race conditions
(1) An operation that cannot be divided is called an atomic operation.
In Java, simple operations on primitive types other than long and double are atomic. A simple operation here means assigning or returning a value, such as "a = 1;" or "return a;". Such operations are atomic by nature.
In the JVM, "a += b" may have to go through a three-step process:
1) Read: load a and b
2) Modify: compute a + b
3) Write: write the result back to memory
Non-atomic operations have thread-safety problems, and we need synchronization (synchronized) to turn them into atomic operations. The java.util.concurrent.atomic package also provides atomic classes such as AtomicInteger, AtomicLong, AtomicReference, and so on.
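As a sketch of the difference, the illustrative class below shows the same increment done two ways: with the synchronized keyword and with AtomicInteger from java.util.concurrent.atomic. Class and field names are made up for the example.

import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch: two ways to make "count++" (a read-modify-write) atomic.
public class CounterDemo {
    private int lockedCount = 0;
    private final AtomicInteger atomicCount = new AtomicInteger(0);

    // Option 1: use the built-in lock so only one thread increments at a time.
    public synchronized void incrementWithLock() {
        lockedCount++;
    }

    // Option 2: use an atomic class; incrementAndGet is a single atomic operation.
    public void incrementAtomically() {
        atomicCount.incrementAndGet();
    }
}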
(2) Race conditions: situations in concurrent programming where incorrect results are produced because of unlucky execution timing.
A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads.
The two most common kinds of race condition are check-then-act and read-modify-write.
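As an illustration, here is a minimal sketch of a check-then-act race: lazy initialization. The class and field names are illustrative; getInstance can create two different objects if two threads interleave at the wrong moment.

// Sketch of a check-then-act race condition: lazy initialization.
// If two threads call getInstance() at the same time, both may see
// instance == null and each create a different ExpensiveObject.
public class LazyInitRace {
    private ExpensiveObject instance = null;

    public ExpensiveObject getInstance() {
        if (instance == null)                  // check
            instance = new ExpensiveObject();  // act: not atomic with the check
        return instance;
    }
}

class ExpensiveObject { }  // placeholder for illustration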
(3) Built-in locks and reentrancy
Java provides a built-in locking mechanism to support atomicity: the synchronized block. A synchronized block consists of two parts: an object reference that serves as the lock, and the block of code protected by that lock. Every Java object can be used as a lock for synchronization; such locks are called built-in (intrinsic) locks.
When a thread requests a lock held by another thread, the requesting thread blocks. Built-in locks are reentrant, however: if a thread tries to acquire a lock it already holds, the request succeeds.
Note that the lock here is the instance object, not the class!
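A minimal sketch of built-in locks and reentrancy, with an illustrative SynchronizedCounter class: the instance itself (this) is the lock, and a synchronized method may call another synchronized method on the same object without deadlocking because the lock is reentrant.

// Sketch: built-in locks and reentrancy. The lock is the instance (this),
// so two threads calling methods on the SAME object serialize, while calls
// on different instances do not block each other.
public class SynchronizedCounter {
    private int value = 0;

    public synchronized void increment() {    // locks on "this"
        value++;
    }

    public synchronized void incrementTwice() {
        increment();   // re-acquiring the lock we already hold succeeds: reentrancy
        increment();
    }

    public int get() {
        synchronized (this) {                 // explicit synchronized block, same lock
            return value;
        }
    }
}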
>> Threads and collection classes
(1) Thread-safe collection classes
java.util.Vector
java.util.Stack
java.util.Hashtable
java.util.concurrent.ConcurrentHashMap
java.util.concurrent.CopyOnWriteArrayList
java.util.concurrent.CopyOnWriteArraySet
java.util.concurrent.ConcurrentLinkedQueue
(2) Non-thread-safe collection classes
java.util.BitSet
java.util.HashSet (LinkedHashSet)
java.util.TreeSet
java.util.HashMap (WeakHashMap, TreeMap, LinkedHashMap, IdentityHashMap)
java.util.ArrayList (LinkedList)
java.util.PriorityQueue
These non-thread-safe collections can be wrapped into thread-safe ones with Collections.synchronizedList, Collections.synchronizedMap, Collections.synchronizedSet, and so on. The wrapper classes simply add synchronized protection to each operation of the wrapped collection. Note that additional synchronized protection is still required when iterating over these wrapper collections; otherwise the iteration is not thread-safe.
List list = Collections.synchronizedList(new ArrayList());
  ...
synchronized (list) {
    Iterator i = list.iterator(); // Must be in the synchronized block
    while (i.hasNext())
        foo(i.next());
}
(3) Blocking collection classes
java.util.concurrent.ArrayBlockingQueue
java.util.concurrent.LinkedBlockingQueue
java.util.concurrent.SynchronousQueue
java.util.concurrent.PriorityBlockingQueue
java.util.concurrent.DelayQueue
These collection classes implement the BlockingQueue interface. A blocking queue behaves as follows: when a thread takes an element and the queue is empty, the thread blocks until an element is inserted; when a thread inserts an element and the queue is full, the thread blocks until an element is taken and space is freed. Blocking queues can be used to implement the producer/consumer pattern.
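A minimal producer/consumer sketch on an ArrayBlockingQueue; the capacity, item counts, and class name are illustrative. put blocks when the queue is full and take blocks when it is empty, so no extra synchronization is needed.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal producer/consumer sketch on a bounded blocking queue.
public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10); // capacity 10

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);              // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    Integer item = queue.take();   // blocks if the queue is empty
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}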
>> Thread pool
Frequent creation and destruction of threads degrades program performance.
The number of threads an application can create is limited by the machine's physical resources; too many threads will exhaust those resources, so we need to limit the number of concurrent threads when designing a program.
A thread pool initializes a number of threads when it starts (or creates them on demand, and may reclaim threads that have been idle for a certain time). The program then submits tasks to the pool instead of handing each task to its own thread, and the pool assigns threads to the tasks.
When a thread finishes a task, it becomes idle and is reused for the next task instead of being destroyed.
A thread pool specifies its maximum number of threads at initialization; when the number of concurrent tasks exceeds that number, the pool does not create more threads but makes the new tasks wait, so we no longer have to worry about too many threads exhausting system resources. Starting with JDK 1.5, a standard thread pool implementation is provided.
(1) Executor interface
The Java thread pool implements the following Executor interface:
public interface Executor {
    void execute(Runnable command);
}
In multithreaded programming, the executor is a common design pattern. Its advantage is that it provides a simple and efficient programming model: we only need to split the concurrent work into independent tasks and hand them to the executor to run, without caring about thread creation, allocation, and scheduling.
The JDK provides two main implementations: ThreadPoolExecutor and ScheduledThreadPoolExecutor. ThreadPoolExecutor is the basic thread pool implementation; ScheduledThreadPoolExecutor adds task scheduling on top of it, so when we submit a task we can specify when it should run rather than having it execute immediately.
(2) Creating thread pools with Executors
java.util.concurrent.Executors is the factory class for creating thread pools.
With the factory methods it provides, we can easily create thread pools with different characteristics, such as cached thread pools, fixed-size thread pools, and so on.
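A small sketch of creating pools with the Executors factory methods; the pool sizes and task bodies are illustrative.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: creating different kinds of pools with the Executors factory class.
public class PoolCreationDemo {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(4);    // at most 4 threads
        ExecutorService cached = Executors.newCachedThreadPool();   // grows/shrinks on demand
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        fixed.execute(() -> System.out.println("task on fixed pool"));
        scheduled.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);

        fixed.shutdown();
        cached.shutdown();
        scheduled.shutdown();
    }
}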
(3) Future interface
The Executor interface is not always enough: sometimes we want the result of a task's computation, and sometimes we need more control over a task, such as knowing whether it has completed or cancelling it partway through. The execute method, which returns void, does not meet these requirements. We could, of course, build similar functionality into the Runnable we pass in, but that is tedious and error-prone. In fact, thread pools implement the richer ExecutorService interface, which defines submit methods that execute a task and return a Future object representing it.
Through the Future interface we can check whether a task submitted to the thread pool has completed, get its result, or cancel the task.
(4) Runnable and Callable interfaces
Classes that implement the Runnable or Callable interface can be submitted as tasks to a thread pool. The main difference between the two interfaces is that Callable's call method returns a result and can throw a checked exception, while Runnable's run method returns void and cannot throw checked exceptions (only runtime exceptions). Therefore, if our task produces a result, we should use the Callable interface.
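A sketch of submitting a Callable and using the returned Future; the task body and class name are illustrative.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: submit a Callable, then use the Future to wait for and read its result.
public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Callable<Integer> task = () -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;                       // Callable returns a result
        };

        Future<Integer> future = pool.submit(task);
        System.out.println("done? " + future.isDone()); // check whether it has completed
        System.out.println("result = " + future.get()); // blocks until the result is ready
        // future.cancel(true) could be used to try to stop the task early.

        pool.shutdown();
    }
}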
>> Explicit Lock
Mechanisms for coordinating access to shared objects: before JDK 5 there were only synchronized and volatile. JDK 5 added ReentrantLock, which allows explicit lock() and unlock() calls, along with timed locks, read-write locks, and more.
(1) ReentrantLock
ReentrantLock implements the Lock interface and provides the same mutual exclusion and memory visibility guarantees as synchronized.
public interface Lock {
    void lock();
    // acquires the lock unless the current thread is interrupted
    void lockInterruptibly() throws InterruptedException;
    boolean tryLock();
    boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException;
    void unlock();
    Condition newCondition();
}
Use ReentrantLock to protect object state:
Lock lock = new ReentrantLock();
lock.lock();
try {
    // protected operations
} finally {
    lock.unlock(); // be sure to release the lock
}
(2) Polled and timed locks
Timed and polled lock acquisition is provided by the tryLock method, which allows more graceful failure recovery than unconditional lock acquisition.
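A sketch of timed lock acquisition with tryLock; the timeout and class name are illustrative. The thread gives up and reports failure instead of blocking forever.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: timed lock acquisition, giving up (and recovering) instead of blocking forever.
public class TryLockDemo {
    private final Lock lock = new ReentrantLock();

    public boolean updateWithTimeout() throws InterruptedException {
        if (!lock.tryLock(1, TimeUnit.SECONDS)) {
            return false;            // could not get the lock in time; caller can retry
        }
        try {
            // ... update the protected state ...
            return true;
        } finally {
            lock.unlock();           // always release the lock
        }
    }
}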
(3) Interruptible lock acquisition
lockInterruptibly() acquires the lock while remaining responsive to interruption, which makes it usable in cancellable tasks.
(4) ReadWriteLock read-write lock
ReadWriteLock maintains a pair of associated locks, one for read-only operations and one for writing.
The ReadWriteLock interface:
public interface ReadWriteLock {
    Lock readLock();
    Lock writeLock();
}
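A sketch of guarding a map with ReentrantReadWriteLock (the wrapped HashMap and class name are illustrative): many readers can hold the read lock at once, while the write lock is exclusive.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: a map guarded by a read-write lock. Reads can proceed concurrently;
// writes are exclusive.
public class ReadWriteMap<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public V get(K key) {
        rwLock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rwLock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}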
>> Latch CountDownLatch and barrier CyclicBarrier
(1) CountDownLatch, a latch
java.util.concurrent.CountDownLatch is a synchronization helper class that allows one or more threads to wait until a set of operations being performed in other threads has completed. A CountDownLatch is initialized with a given count. The await methods block until the count reaches zero through calls to countDown(); after that, all waiting threads are released and any subsequent calls to await return immediately. This happens only once: the count cannot be reset.
In other words, CountDownLatch lets one thread (or several) wait for N other threads to finish something before it continues.
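A minimal latch sketch; the worker count is arbitrary. The main thread blocks in await until every worker has called countDown.

import java.util.concurrent.CountDownLatch;

// Sketch: the main thread waits until all workers have finished.
public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                done.countDown();          // decrement the count
            }).start();
        }

        done.await();                      // blocks until the count reaches 0
        System.out.println("all workers finished");
    }
}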
(2) CyclicBarrier, a barrier
A latch (CountDownLatch) can be used to start a group of related threads at the same time, or to wait for a group of related threads to finish. However, a latch is a one-shot object: once it enters its terminal state it cannot be reset. A barrier is similar to a latch in that it blocks a group of threads until some event occurs; the difference is that the event is all of the threads arriving at the barrier point, and the barrier can be reused.
CyclicBarrier is very useful in parallel iterative algorithms.
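A minimal barrier sketch; the thread and step counts are arbitrary. Each thread finishes a step, then waits at the barrier until all threads arrive, and the same barrier is reused for the next step.

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Sketch: N threads compute a step, wait for each other at the barrier,
// then all move on to the next step. The barrier is reused each iteration.
public class BarrierDemo {
    public static void main(String[] args) {
        int parties = 3;
        CyclicBarrier barrier = new CyclicBarrier(parties,
                () -> System.out.println("--- all threads reached the barrier ---"));

        for (int i = 0; i < parties; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int step = 0; step < 2; step++) {
                        System.out.println("thread " + id + " finished step " + step);
                        barrier.await();   // wait until every thread reaches this point
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}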
>> Semaphore
A counting semaphore is used to control the number of threads that can access a particular resource, or perform a given operation, at the same time.
Sometimes we have several identical shared resources that can be used by multiple threads simultaneously. We would like a lock based on a counter: initialize the counter to the number of resources, have each successful acquire decrement it by 1, and allow the acquire to succeed as long as the counter is non-zero (meaning resources are still available). Each release increments the counter by 1. An acquire blocks the current thread only when the counter is 0. This is Java's Semaphore.
The Semaphore class provides methods very similar to the Lock interface; when the number of permits is set to 1, the semaphore degenerates into an ordinary lock (a binary semaphore).
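A sketch of bounding concurrent access with a Semaphore; the permit count of 3 and the class name are arbitrary. acquire blocks once all permits are taken, and release returns a permit.

import java.util.concurrent.Semaphore;

// Sketch: at most 3 threads may use the "resource" at the same time.
public class SemaphoreDemo {
    private final Semaphore permits = new Semaphore(3); // counter initialized to 3

    public void useResource() throws InterruptedException {
        permits.acquire();          // counter - 1; blocks when the counter is 0
        try {
            // ... use one of the shared resources ...
        } finally {
            permits.release();      // counter + 1
        }
    }
}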
>> ThreadLocal thread-private variables
(1) It is a variable, not a thread
If each thread has its own private copy of a variable, no synchronization is needed. ThreadLocal provides exactly that: each thread that uses a ThreadLocal variable has its own independent copy of the value, so there is no problem of multiple threads accessing the same variable.
ThreadLocal is not a thread; it is a thread-local variable.
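A minimal ThreadLocal sketch; the counter and class name are illustrative. Each thread updates its own copy of the value, so the two threads never interfere.

// Sketch: each thread sees and updates its own copy of the value.
public class ThreadLocalDemo {
    private static final ThreadLocal<Integer> perThreadCount =
            ThreadLocal.withInitial(() -> 0);   // initial value for every thread

    public static void main(String[] args) {
        Runnable work = () -> {
            for (int i = 0; i < 5; i++) {
                perThreadCount.set(perThreadCount.get() + 1);  // touches only this thread's copy
            }
            System.out.println(Thread.currentThread().getName()
                    + " counted " + perThreadCount.get());     // always prints 5
        };

        new Thread(work, "t1").start();
        new Thread(work, "t2").start();
    }
}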
(2) How ThreadLocal is implemented
Each Thread object has its own container, a ThreadLocalMap, for storing its private values. When a thread calls a ThreadLocal object's get() method to fetch a value,
the get method first obtains the current thread object, then takes out that thread's ThreadLocalMap and checks whether an entry for this ThreadLocal already exists in the map; if it does, the value from the map is returned directly.
If it does not exist, the ThreadLocal uses itself as the key and puts an initial value into the current thread's map.
public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null)
            return (T) e.value;
    }
    return setInitialValue();
}
Compiled from "Java Concurrency in Practice"