Java Concurrency: Synchronous Containers and Concurrent Containers
Section 1 Synchronous Containers and Concurrent Containers
1. Brief Introduction to synchronous containers and concurrent containers
In Java concurrent programming, the terms synchronous container and concurrent container come up frequently. What are they? A synchronous container can be understood simply as a container that achieves thread safety through synchronized, such as Vector, Hashtable, and the SynchronizedList wrapper returned by Collections.synchronizedList. When multiple threads call the methods of a synchronous container, the calls are executed serially.
If you look at the implementation of synchronous containers such as Vector and Hashtable, you can see that their approach to thread safety is to encapsulate their state and add the synchronized keyword to every method that needs synchronization. Even so, a synchronous container is not always thread-safe from the caller's point of view: compound actions such as getting the last element or deleting the last element still need additional synchronization on the caller's side:
public static Object getLast(Vector list) {
    int lastIndex = list.size() - 1;
    return list.get(lastIndex);
}

public static void deleteLast(Vector list) {
    int lastIndex = list.size() - 1;
    list.remove(lastIndex);
}
Although these methods look harmless, and Vector's own methods are synchronized, a problem still lurks in a multi-threaded environment. Suppose threads A and B call the two methods at the same time on a list of size 10: both compute lastIndex as 9, thread B happens to execute the delete first (thread interleaving is nondeterministic), and when thread A then calls list.get(9), an ArrayIndexOutOfBoundsException is thrown. The root cause is that each compound action above is not atomic. The fix is to lock on the list object inside each method so the whole compound action becomes atomic, as shown below.
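A minimal sketch of the client-side locking fix, assuming the same helper methods as above (the wrapping class name SafeVectorHelpers is just for illustration); the key point is locking on the Vector itself, because that is the same lock Vector's own synchronized methods use:

import java.util.Vector;

public class SafeVectorHelpers {
    // Locking on the list makes the size()/get() pair atomic with respect
    // to Vector's own synchronized methods, which use the same lock.
    public static Object getLast(Vector list) {
        synchronized (list) {
            int lastIndex = list.size() - 1;
            return list.get(lastIndex);
        }
    }

    public static void deleteLast(Vector list) {
        synchronized (list) {
            int lastIndex = list.size() - 1;
            list.remove(lastIndex);
        }
    }
}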
Because every method of a synchronous container locks on the container object itself, method calls from multiple threads are serialized, which reduces concurrency. In environments that need real concurrency, consider using concurrent containers instead. Concurrent containers are designed for concurrent access by multiple threads; the java.util.concurrent package introduced in JDK 5.0 provides many of them, such as ConcurrentHashMap and CopyOnWriteArrayList.
Both synchronous containers and concurrent containers provide thread safety for concurrent access, but concurrent containers scale much better. Before Java 5, programmers only had synchronous containers, and under heavy concurrent access the contention they cause limits the scalability of the system. Java 5 introduced concurrent containers, which use locking strategies completely different from those of synchronous containers to provide higher concurrency and scalability. ConcurrentHashMap, for example, adopts a fine-grained locking mechanism that can be described as lock striping (segment locking). Under this scheme, any number of reader threads can access the map concurrently, readers can run concurrently with writers, and a certain number of writer threads can modify the map concurrently, so much higher throughput can be achieved under contention. In addition, concurrent containers provide as built-in atomic methods some compound operations that would otherwise have to be implemented by hand with synchronous containers, such as putIfAbsent. The flip side is that, because a concurrent container cannot be locked for exclusive access by the caller, you cannot build further compound operations on top of it with client-side locking.
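As a small illustration of the built-in compound operation mentioned above, here is a minimal sketch of putIfAbsent on a ConcurrentHashMap (the key and values are made up for the example); no external synchronized block is needed, unlike the check-then-act idiom on a synchronized map:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();
        // Atomic "add only if missing": performed in one step by the map itself.
        hits.putIfAbsent("home", 0);
        hits.putIfAbsent("home", 42);          // ignored, "home" is already present
        System.out.println(hits.get("home"));  // prints 0
    }
}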
2. References:
(1) http://www.cnblogs.com/dolphin0520/p/3933404.html
Section 2 ConcurrentHashMap
1. A First Look at ConcurrentHashMap
For the concurrent container ConcurrentHashMap, the book Java Concurrency in Practice gives a well-known description. This section uncovers some of the mystery of ConcurrentHashMap, starting with its overall structure (the structure diagram referred to here appears in the original article).
2. ConcurrentHashMap in Detail
(1) The concurrency level of ConcurrentHashMap
ConcurrentHashMap achieves its scalability and thread safety by dividing the underlying map into several parts. The number of parts is determined by the concurrency level, an optional parameter of the ConcurrentHashMap constructor whose default value is 16; splitting the map this way avoids most contention under multithreaded access.
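A minimal sketch of passing the concurrency level through the three-argument constructor; the initial capacity and load factor used here are simply the documented defaults, and the concurrency level of 32 is an arbitrary example value:

import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        // initialCapacity = 16, loadFactor = 0.75f, concurrencyLevel = 32
        // (a higher concurrency level can be requested when many writer
        //  threads are expected; the default is 16)
        ConcurrentHashMap<String, Integer> map =
                new ConcurrentHashMap<>(16, 0.75f, 32);
        map.put("a", 1);
        System.out.println(map.get("a"));
    }
}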
(2) The lock striping technique of ConcurrentHashMap
Hashtable is inefficient in highly contended concurrent environments because every thread accessing it must compete for the same single lock. If the container instead holds multiple locks, with each lock guarding only a portion of the data, then threads accessing data in different portions do not compete for locks at all, which greatly improves the efficiency of concurrent access. This is the lock striping (segment locking) technique used by ConcurrentHashMap: the data is divided into segments, each segment is given its own lock, and while one thread holds the lock for one segment, the data in the other segments remains accessible to other threads.
By comparison (the figure referred to here is taken from the web), the synchronous container Hashtable locks the entire hash table, while the concurrent container ConcurrentHashMap locks per bucket. A simple mental picture: think of the whole hash table as one large tank of water that has been poured into several buckets; Hashtable locks the whole tank every time, while ConcurrentHashMap locks only one bucket at a time.
ConcurrentHashMap divides the hash table into 16 buckets by default, and common operations such as get, put, and remove lock only the bucket currently being used. Where previously only one thread could enter at a time, now up to 16 threads can enter simultaneously, so the gain in concurrency is obvious. A hand-rolled sketch of the idea follows.
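To make the idea concrete, here is an illustrative striped-lock fragment; this is not the JDK's actual implementation, just a minimal sketch of the technique: an array of locks, with the lock for a key chosen by its hash, so threads touching keys in different stripes never contend.

public class StripedCounter {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final int[] counts = new int[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    private int stripeFor(Object key) {
        // Map the key's hash to one of the 16 stripes.
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public void increment(Object key) {
        int s = stripeFor(key);
        synchronized (locks[s]) {   // only this stripe is locked
            counts[s]++;
        }
    }
}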
(3) remove() in ConcurrentHashMap
A remove() on ConcurrentHashMap is not a simple node-deletion operation.
When a node is removed from a segment (a bucket's linked list) of ConcurrentHashMap, for example node C, node C is not actually destroyed. Instead, the nodes that precede C are copied, in reverse order, into a new linked list that bypasses C; the nodes after C do not need to be cloned. This prevents concurrent read threads from being disturbed by the write thread. For example, if a read thread is at node A while the write thread deletes C, the read thread can simply continue traversing; and if the read thread reached D before C was deleted, it is not affected either.
As described above, deleting a node in ConcurrentHashMap is not immediately visible to reading threads; this is the map's weak consistency, and it is why ConcurrentHashMap's iterator is a weakly consistent iterator, as the sketch below illustrates.
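A small sketch of this weak consistency (keys and values are made up): the iterator does not throw ConcurrentModificationException when the map is modified mid-iteration, and it may or may not reflect the concurrent change.

import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentIteratorDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
        map.remove("b");     // modify while an iterator is open
        map.put("c", 3);

        // No ConcurrentModificationException is thrown; the iterator reflects
        // the state of the map at some point at or since its creation.
        while (it.hasNext()) {
            System.out.println(it.next());
        }
    }
}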
3. References:
This section has only briefly introduced ConcurrentHashMap; its implementation mechanism is covered well in the following articles:
(1) http://www.cnblogs.com/ITtangtang/p/3948786.html
(2) http://ifeve.com/concurrenthashmap/
(3) http://blog.csdn.net/xuefeng0707/article/details/40834595
Section 3 SynchronousQueue
(1) SynchronousQueue
SynchronousQueue is an unbuffered waiting queue: a blocking queue with no internal capacity, in which each insert operation must wait for a corresponding remove by another thread before the next element can be handed over. Because it never actually stores elements, its isEmpty() method always returns true, remainingCapacity() always returns 0, remove() and removeAll() always return false, iterator() always returns an empty iterator, and peek() always returns null, as the sketch below shows.
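A small sketch of the behavior listed above (the element string is made up): a non-blocking offer is rejected when no consumer is waiting, and the inspection methods report an always-empty, zero-capacity queue.

import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueBasics {
    public static void main(String[] args) {
        SynchronousQueue<String> q = new SynchronousQueue<>();

        // No consumer is currently waiting, so the non-blocking offer fails.
        System.out.println(q.offer("task"));        // false
        System.out.println(q.isEmpty());            // true
        System.out.println(q.remainingCapacity());  // 0
        System.out.println(q.peek());               // null
    }
}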
A SynchronousQueue can be constructed in two different ways, and the two forms behave differently.
Differences between fair mode and non-fair mode: in fair mode, SynchronousQueue uses a fair lock together with a FIFO queue to manage the surplus producers and consumers that are waiting; in non-fair mode (the SynchronousQueue default), it uses a non-fair lock together with a LIFO stack to manage them. With the non-fair mode, if there is a sustained gap between the processing speeds of producers and consumers, starvation can easily occur: the data of some producers, or some consumer threads, may never be processed. A minimal fair-mode handoff is sketched below.
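A minimal producer/consumer handoff sketch, assuming fair mode; passing true to the constructor selects the fair (FIFO) policy, while the no-argument constructor gives the default non-fair mode. put blocks until another thread calls take.

import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueHandoff {
    public static void main(String[] args) throws InterruptedException {
        // true = fair mode (FIFO ordering of waiting threads).
        SynchronousQueue<String> q = new SynchronousQueue<>(true);

        Thread consumer = new Thread(() -> {
            try {
                System.out.println("took: " + q.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        q.put("hello");   // blocks until the consumer takes the element
        consumer.join();
    }
}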
(2) References
(1) Implementation principles of the Java concurrency package's SynchronousQueue: http://ifeve.com/java-synchronousqueue/