Java Concurrency: Source Analysis of ConcurrentHashMap (Doug Lea's util.concurrent version, pre-JDK)


In the July issue of Java theory and practice ("Concurrent collections classes"), we briefly reviewed scalability bottlenecks and discussed how to achieve higher concurrency and throughput in shared data structures. Sometimes the best way to learn is to study the work of an expert, so this month we will analyze the implementation of ConcurrentHashMap in Doug Lea's util.concurrent package. A version of ConcurrentHashMap, optimized for the new Java memory model (JMM) being specified by JSR 133, will be included in the java.util.concurrent package of JDK 1.5; the version in util.concurrent has passed thread-safety audits under both the old and the new memory models.

Optimized for Throughput

ConcurrentHashMap uses several tricks to achieve a high degree of concurrency and avoid locking, including using multiple write locks for different hash buckets and exploiting the uncertainties of the JMM to minimize the time that locks are held, or to avoid acquiring locks at all. It is optimized for the most common usage, which is retrieving a value that is likely to already exist in the map. In fact, most successful get() operations run without any locking at all. (Warning: don't try this at home! Being smarter than the JMM is harder than it looks. The util.concurrent classes were written by concurrency experts and have been subjected to rigorous peer review for JMM safety.)

Multiple Write Locks

Recall that the principal obstacle to the scalability of Hashtable (or, alternatively, Collections.synchronizedMap) is that it uses a single map-wide lock, which must be held for the entirety of an insertion, removal, or retrieval operation, and sometimes even for the entirety of an iteration. As long as that lock is held, all other threads are fundamentally blocked from accessing the map, which limits concurrency even when idle processors are available.
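For contrast, the map-wide lock described above is exactly what Collections.synchronizedMap provides. The small demo below (illustrative, not from the article's source) wraps a HashMap so that every operation synchronizes on one shared monitor:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

class MapWideLockDemo {
    // Every get/put/remove on the returned map synchronizes on the same
    // internal mutex, so any single operation blocks all others map-wide.
    static Map<String, Integer> makeSynchronizedMap() {
        return Collections.synchronizedMap(new HashMap<String, Integer>());
    }
}
```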

ConcurrentHashMap does away with the single map-wide lock and replaces it with a collection of 32 locks, each of which guards a subset of the hash buckets. Locks are used primarily by the mutative operations (put() and remove()). Having 32 separate locks means that up to 32 threads can modify the map at the same time. This does not necessarily mean that, when fewer than 32 threads are writing to the map concurrently, no write will ever block; 32 is the theoretical upper limit on concurrent writers, and may not always be achievable in practice. Still, 32 is a lot better than 1, and it is more than adequate for most applications running on current-generation computer systems.
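The lock-striping idea can be sketched as follows. This is a deliberately simplified illustration, not the util.concurrent source; the class and method names are hypothetical, and the put() body ignores chaining for brevity:

```java
// Illustrative sketch of lock striping: a fixed pool of locks,
// each guarding a subset of the hash buckets.
class StripedMap {
    private static final int NUM_LOCKS = 32;           // power of two
    private final Object[] locks = new Object[NUM_LOCKS];
    private final Object[] table = new Object[1024];   // simplified bucket array

    StripedMap() {
        for (int i = 0; i < NUM_LOCKS; i++)
            locks[i] = new Object();
    }

    // Each bucket index maps to exactly one lock; two writers collide
    // only when their keys hash to buckets guarded by the same lock.
    static int lockIndexFor(int hash) {
        return hash & (NUM_LOCKS - 1);
    }

    void put(Object key, Object value) {
        int hash = key.hashCode();
        synchronized (locks[lockIndexFor(hash)]) {
            table[hash & (table.length - 1)] = value;  // real code would chain entries
        }
    }
}
```

Because only writers that hash to the same stripe contend, up to 32 writers can proceed in parallel, which is the property the paragraph above describes.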

Map-Scoped Operations

Having 32 separate locks, each guarding a subset of the hash buckets, means that an operation requiring exclusive access to the map must acquire all 32 locks. Some map-scoped operations, such as size() and isEmpty(), may be able to get away without locking the entire map at once (by suitably qualifying the semantics of those operations), but some operations, such as map resizing (enlarging the number of hash buckets and redistributing the elements as the map grows), must guarantee exclusive access. The Java language does not provide a simple way to acquire a variable-sized collection of locks; since this needs to be done only rarely, it is accomplished by recursion.
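The recursive acquisition trick works because synchronized blocks must be lexically scoped; each recursion level holds one more lock, and the action runs only at the bottom, with all locks held. This is a hypothetical sketch of the pattern, not the actual util.concurrent code:

```java
// Acquiring a whole array of locks by recursion: each stack frame
// holds one lock; the action runs when every lock in the array is held.
class LockAll {
    static void withAllLocks(Object[] locks, int index, Runnable action) {
        if (index == locks.length) {
            action.run();                 // all locks held at this point
        } else {
            synchronized (locks[index]) {
                withAllLocks(locks, index + 1, action);
            }
        }
    }
}
```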

JMM Overview

Before diving into the implementation of put(), get(), and remove(), let's take a quick look at the JMM, which governs how a thread's actions on memory (reads and writes) affect how other threads see memory. Because of the performance benefits of using processor registers and per-processor caches to speed up memory access, the Java Language Specification (JLS) permits some memory operations not to be immediately visible to all other threads. There are two language mechanisms for guaranteeing the consistency of memory operations across threads: synchronized and volatile.

According to the JLS, "In the absence of explicit synchronization, an implementation is free to update the main memory in an order that may be surprising." This means that, without synchronization, writes that occur in one order in a given thread may appear to occur in a different order to another thread, and that the time it takes for an update of a memory variable to propagate from one thread to another is unpredictable.

While the most common reason for using synchronization is to guarantee atomic access to critical sections of code, synchronization actually provides three separate functions: atomicity, visibility, and ordering. Atomicity is straightforward enough: synchronization enforces a reentrant mutex, preventing more than one thread from executing a block of code protected by a given monitor at the same time. Unfortunately, most texts focus on atomicity alone and ignore the other aspects. But synchronization also plays a significant role in the JMM, causing the JVM to execute memory barriers when monitors are acquired and released.

When a thread acquires a monitor, it executes a read barrier, invalidating any variables cached in thread-local memory (such as processor caches or processor registers), which forces the processor to reread any variables used in the synchronized block from main memory. Similarly, upon monitor release, the thread executes a write barrier, flushing any variables that have been modified back to main memory. The combination of mutual exclusion and memory barriers means that as long as a program follows the correct synchronization rules (that is, it synchronizes whenever writing a variable that may next be read by another thread, or reading a variable that may have last been written by another thread), each thread will see the correct value of any shared variables it uses.
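The synchronization rule above reduces to one idiom: guard both the write and the read of a shared variable with the same monitor. A minimal illustration (names are mine, not the article's):

```java
// Both the writer and the reader synchronize on the same monitor (this),
// so the write barrier on release and the read barrier on acquire make
// the update visible across threads even though `value` is not volatile.
class SharedCounter {
    private int value;                           // deliberately not volatile

    synchronized void increment() { value++; }   // release flushes to main memory
    synchronized int get() { return value; }     // acquire discards stale cached values
}
```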

Some very strange things can happen if shared variables are accessed without synchronization. Some changes may be reflected across threads instantly, while others may take some time (due to the nature of cache coherency). As a result, without synchronization you cannot be sure that you have a consistent view of memory (related variables may not be consistent with each other) or a current view of memory (some values may be stale). The common, and recommended, way to avoid these hazards is of course to synchronize properly. However, in some cases, such as in very widely used library classes like ConcurrentHashMap, it may be worth applying extra expertise and effort during development (quite possibly many times what ordinary development would require) to achieve higher performance.

ConcurrentHashMap Implementation

As mentioned earlier, the data structure used by ConcurrentHashMap is similar to that of Hashtable or HashMap: a resizable array of hash buckets, each of which consists of a chain of Map.Entry elements, as shown in Listing 1. Unlike Hashtable or HashMap, ConcurrentHashMap uses no single collection-wide lock; instead, it uses a fixed pool of locks that forms a partition over the collection of buckets.

Listing 1. The Map.Entry elements used by ConcurrentHashMap

protected static class Entry implements Map.Entry {
  protected final Object key;
  protected volatile Object value;
  protected final int hash;
  protected final Entry next;
  ...
}
Traversing the Data Structure Without Locking

Unlike Hashtable or a typical lock-pool Map implementation, the ConcurrentHashMap.get() operation does not necessarily require acquiring the lock associated with the relevant bucket. In the absence of locking, the implementation must be prepared to deal with stale or inconsistent values of any of the variables it uses, such as the list-head pointer and the fields of the Map.Entry elements (including the link pointers that make up the linked list for each hash bucket).

Most concurrent classes use synchronization to ensure exclusive access to a data structure (and thus keep it consistent). Instead of assuming exclusivity and consistency, ConcurrentHashMap's linked-list structure is carefully designed so that the implementation can detect when its view of the list is inconsistent or stale. If it detects that its view is inconsistent or stale, or simply does not find the entry it is looking for, it then synchronizes on the appropriate bucket lock and searches the chain again. This optimizes the common case, which is that most retrievals succeed and that retrievals greatly outnumber insertions and removals.

Exploiting Immutability

One significant source of inconsistency is avoided by making the Entry elements nearly immutable: all fields are final except the value field, which is volatile. This means that elements can never be added to or removed from the middle or end of a hash chain; elements can only be added at the head of the chain, and removal involves cloning all or part of the chain and updating the list-head pointer. So once you have a reference into a hash chain, you know that no part of the list downstream of that reference will change structurally, even though you may not know whether your reference to the head of the list is current. Furthermore, because the value field is volatile, updates to value fields are visible immediately, which greatly simplifies writing a Map implementation that can deal with potentially stale views of memory.
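The structural consequence of the final next pointer can be sketched as follows. The field layout mirrors Listing 1, but the helper class and method are illustrative, not the actual source:

```java
// Near-immutable entry: only `value` is mutable (volatile), so the only
// structural change possible is creating a new Entry in front of the head.
class Chain {
    static class Entry {
        final Object key;
        volatile Object value;
        final int hash;
        final Entry next;      // final: the tail of any chain never changes

        Entry(int hash, Object key, Object value, Entry next) {
            this.hash = hash; this.key = key;
            this.value = value; this.next = next;
        }
    }

    // Inserting at the head leaves every existing Entry untouched, so a
    // concurrent reader holding a reference to the old head still sees a
    // structurally consistent (if slightly stale) list.
    static Entry insertAtHead(Entry head, int hash, Object key, Object value) {
        return new Entry(hash, key, value, head);
    }
}
```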

While the new JMM provides initialization safety for final variables, the old JMM does not, which means that it is possible for another thread to see the default values for final fields, rather than the values placed there by the object's constructor. The implementation must be prepared to detect this as well, which it does by ensuring that the default value for each field of Entry is not a valid value. Once the list is constructed, if any of the Entry fields is observed to have its default value (zero or null), the search fails, prompting get() to synchronize and traverse the chain again.
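The detection logic amounts to a simple guard. The helper below is hypothetical (not from the source) and uses a stripped-down Entry just to show the idea that default-valued fields are never valid:

```java
// Under the old JMM, a reader might observe an Entry whose fields still
// hold their defaults (null). Since null is never a valid key or value,
// such an entry can be detected and the lookup retried under the lock.
class EntryGuard {
    static class Entry {
        final Object key;
        volatile Object value;
        Entry(Object k, Object v) { key = k; value = v; }
    }

    static boolean looksPartiallyConstructed(Entry e) {
        return e.key == null || e.value == null;
    }
}
```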

Retrieval Operations

Retrieval proceeds by first finding the head pointer for the desired bucket (done without locking, so it could be stale) and then traversing the bucket chain without acquiring the bucket lock. If it does not find the value it is looking for, it synchronizes and tries to find the entry again, as shown in Listing 2:

Listing 2. ConcurrentHashMap.get() implementation

public Object get(Object key) {
  int hash = hash(key);     // throws null pointer exception if key is null

  // Try first without locking...
  Entry[] tab = table;
  int index = hash & (tab.length - 1);
  Entry first = tab[index];
  Entry e;

  for (e = first; e != null; e = e.next) {
    if (e.hash == hash && eq(key, e.key)) {
      Object value = e.value;
      // null values means that this element has been removed
      if (value != null)
        return value;
      else
        break;
    }
  }

  // Recheck under synch if key apparently not there or interference
  Segment seg = segments[hash & SEGMENT_MASK];
  synchronized(seg) {
    tab = table;
    index = hash & (tab.length - 1);
    Entry newFirst = tab[index];
    if (e != null || first != newFirst) {
      for (e = newFirst; e != null; e = e.next) {
        if (e.hash == hash && eq(key, e.key))
          return e.value;
      }
    }
    return null;
  }
}
Removal Operations

Because a thread could see stale values for the link pointers in a hash chain, simply unlinking an element from the chain would not be sufficient to ensure that other threads do not continue to see the removed value while searching. Instead, as Listing 3 shows, removal is a two-step process: first the appropriate Entry object is found and its value field is set to null, and then the portion of the chain from the head to the removed element is cloned and joined to the portion after the removed element. Since the value field is volatile, if another thread is traversing a stale chain looking for the removed element, it will see the null value immediately and know to retry the retrieval with synchronization. Eventually, the removed elements in the original hash chain will be garbage collected.

Listing 3. ConcurrentHashMap.remove() implementation

protected Object remove(Object key, Object value) {
  /*
    Find the entry, then
      1. Set value field to null, to force get() to retry
      2. Rebuild the list without this entry.
         All entries following removed node can stay in list,
         but all preceding ones need to be cloned.  Traversals
         rely on this strategy to ensure that elements will not
         be repeated during iteration.
  */

  int hash = hash(key);
  Segment seg = segments[hash & SEGMENT_MASK];

  synchronized(seg) {
    Entry[] tab = table;
    int index = hash & (tab.length - 1);
    Entry first = tab[index];

    Entry e = first;
    for (;;) {
      if (e == null)
        return null;
      if (e.hash == hash && eq(key, e.key))
        break;
      e = e.next;
    }

    Object oldValue = e.value;
    if (value != null && !value.equals(oldValue))
      return null;

    e.value = null;

    Entry head = e.next;
    for (Entry p = first; p != e; p = p.next)
      head = new Entry(p.hash, p.key, p.value, head);
    tab[index] = head;
    seg.count--;
    return oldValue;
  }
}

Figure 1 shows a hash chain before an element is removed:

Figure 1. Hash chain

Figure 2 shows the chain after element 3 has been removed:

Figure 2. Removal of an element

Insert and Update Operations

The implementation of put() is straightforward. Like remove(), put() holds the bucket lock for the duration of its execution, but because get() is not required to acquire a lock, this does not necessarily block readers (nor does it block writers accessing other buckets). put() first searches the appropriate hash chain for the desired key. If it is found, the value field (which is volatile) is simply updated in place. If it is not found, a new Entry object is created to describe the new mapping and is inserted at the head of the chain for that bucket.
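The put() logic just described can be sketched for a single bucket. This is a hedged illustration under the same Entry design as Listing 1, not the actual util.concurrent source (which works over segments and the full bucket array):

```java
// One-bucket sketch of put(): search under the bucket lock, update the
// volatile value in place if the key exists, else link a new Entry at
// the head of the chain.
class PutSketch {
    static class Entry {
        final Object key; volatile Object value; final int hash; final Entry next;
        Entry(int h, Object k, Object v, Entry n) { hash = h; key = k; value = v; next = n; }
    }

    Entry head;                        // single bucket, for illustration only
    private final Object lock = new Object();   // this bucket's lock

    Object put(Object key, Object value) {
        int hash = key.hashCode();
        synchronized (lock) {
            for (Entry e = head; e != null; e = e.next) {
                if (e.hash == hash && key.equals(e.key)) {
                    Object old = e.value;
                    e.value = value;   // volatile write: visible to unlocked readers
                    return old;
                }
            }
            head = new Entry(hash, key, value, head);  // insert at head
            return null;
        }
    }
}
```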

Weakly Consistent Iterators

The iterators returned by ConcurrentHashMap have different semantics from those of the java.util collections: they are weakly consistent rather than fail-fast (a fail-fast iterator throws an exception if the underlying collection is modified while the iterator is in use). When a caller asks for an iterator over the set of hash keys via keySet().iterator(), the implementation briefly synchronizes to ensure that the head pointer of each chain is current. The next() and hasNext() operations are defined to traverse each chain and then move on to the next chain, until all the chains have been traversed. A weakly consistent iterator may or may not reflect insertions made during iteration, but it will reflect updates or removals of keys that the iterator has not yet reached, and it will return any value at most once. The iterators returned by ConcurrentHashMap will not throw ConcurrentModificationException.
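This weakly consistent behavior carried over into the JDK's own java.util.concurrent.ConcurrentHashMap, which makes it easy to demonstrate with the modern class; modifying the map mid-iteration simply does not throw:

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

class WeaklyConsistentDemo {
    // Returns how many keys the iterator visited while the map was being
    // modified mid-iteration; a fail-fast iterator (e.g. HashMap's) would
    // throw ConcurrentModificationException at the next call to next().
    static int iterateWhileRemoving() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        int seen = 0;
        Iterator<String> it = map.keySet().iterator();
        while (it.hasNext()) {
            it.next();
            seen++;
            map.remove("b");   // concurrent modification: no exception thrown
        }
        return seen;           // 1 or 2, depending on traversal order
    }
}
```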

Dynamic Resizing

As the number of elements in the map grows, the hash chains get longer and retrieval time increases. At some point, it makes sense to increase the number of buckets and rehash the values. In a class like Hashtable, this is easy because it is possible to hold an exclusive lock on the entire map. In ConcurrentHashMap, each time an entry is inserted, if the length of that chain exceeds a threshold, the chain is marked as needing to be resized. When enough chains have been so marked, ConcurrentHashMap acquires the locks on all the buckets, using recursion, and rehashes the elements of every bucket into a new, larger hash table. In the vast majority of cases, this happens automatically and transparently to callers.

No Locking?

The claim that successful get() operations complete without locking may seem a slight exaggeration, since the value field of Entry is volatile and volatile is used to detect updates and removals. At the machine level, volatile and synchronized often end up being translated into the same cache coherency primitives, so there is some locking going on here in a sense, albeit at a much finer granularity and without the scheduling or JVM overhead of acquiring and releasing monitors. But, semantics aside, in the many common cases where retrievals outnumber insertions and removals, the effective concurrency achieved by ConcurrentHashMap is quite high.

Summary

ConcurrentHashMap is both a very useful class for many concurrent applications and a fine example of a class that exploits the subtle details of the JMM to achieve higher performance. ConcurrentHashMap is a coding classic, requiring a deep understanding of concurrency and the JMM to write. Use it, learn from it, enjoy it, but unless you're an expert on Java concurrency, don't try this yourself.



Resources

  • Read the whole Java theory and practice series by Brian Goetz. Especially relevant are the September 2002 installment introducing the util.concurrent package, the February 2003 installment discussing the thread-safety benefits of immutability, and "Concurrent collections classes" (July 2003), which analyzes scalability bottlenecks and how to achieve higher concurrency and throughput in shared data structures.
  • Doug Lea's Concurrent Programming in Java, Second Edition is a masterful book on the subtle issues surrounding multithreaded programming in Java applications; an excerpt from the book describes what synchronization really means.
  • Download the util.concurrent package.
  • The ConcurrentHashMap Javadoc page explains the differences between ConcurrentHashMap and Hashtable in greater detail; the source code for ConcurrentHashMap is also available.
  • JSR 166 is standardizing the util.concurrent library for JDK 1.5, and JSR 133 specifies the new Java memory model for which a version of ConcurrentHashMap has been optimized.
  • Bill Pugh maintains a comprehensive set of resources on the Java Memory Model.
  • Hundreds of other Java references are available in the developerWorks Java technology zone.
  • Original article: Java theory and practice: Building a better HashMap, http://www.ibm.com/developerworks/cn/java/j-jtp08223/
