Blocking
Uncontended synchronization can be handled entirely within the JVM, but contended synchronization may require intervention by the operating system, which increases overhead. When a lock is contended, the thread that loses the race must wait. The JVM can implement this waiting in two ways: spin-waiting, where the thread repeatedly tries to acquire the lock in a loop until it succeeds, or suspending the blocked thread through the operating system. Which approach is more efficient depends on the cost of a context switch relative to the time spent waiting for the lock: spin-waiting is better for short waits, OS suspension for long ones. Some JVMs choose between the two based on profiling data about historical wait times, but most simply suspend the thread while it waits for the lock.
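The spin-then-block idea can be sketched in user code as well. This is an illustrative sketch, not the JVM's internal implementation: the `SPIN_LIMIT` constant and the `SpinThenBlock` class are assumptions for demonstration.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch (not the JVM's actual lock implementation): spin a few
// times hoping the lock is released quickly, then fall back to a blocking
// acquire that lets the OS suspend the thread.
public class SpinThenBlock {
    private static final int SPIN_LIMIT = 100; // arbitrary spin budget

    public static void acquire(ReentrantLock lock) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (lock.tryLock()) {  // non-blocking attempt: the "spin" phase
                return;
            }
            Thread.onSpinWait();   // hint to the CPU that we are busy-waiting
        }
        lock.lock();               // give up spinning; block until available
    }
}
```

If the lock is usually released within a few iterations, the spin phase avoids a context switch entirely; otherwise the final blocking `lock()` call behaves like ordinary suspension.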
Reducing lock contention
In concurrent programs, the most significant threat to scalability is the exclusive resource lock.
Two factors affect the likelihood of contention for a lock: how frequently the lock is requested, and how long it is held once acquired. If the product of these two factors is sufficiently small, most attempts to acquire the lock will not contend.
Accordingly, there are three ways to reduce lock contention:
- Reduce the holding time of the lock.
- Reduce the frequency of lock requests.
- Replace exclusive locks with coordination mechanisms that permit higher concurrency.
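The third strategy can be illustrated by replacing a synchronized counter with an atomic variable. This is a minimal sketch; the `HitCounter` class name is an assumption for demonstration.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of replacing an exclusive lock with a coordination mechanism:
// an AtomicLong uses lock-free compare-and-swap instead of a monitor,
// so concurrent increments never block each other.
public class HitCounter {
    private final AtomicLong hits = new AtomicLong();

    public void record() {
        hits.incrementAndGet();  // atomic, no exclusive lock held
    }

    public long total() {
        return hits.get();
    }
}
```

Under heavy contention this typically scales better than a `synchronized` method guarding a plain `long` field.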
Narrowing the scope of a lock
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class AttributeStore {
    private final Map<String, String> attributes = new HashMap<String, String>();

    public synchronized boolean userLocationMatches(String name, String regexp) {
        String key = "user." + name + ".location";
        String location = attributes.get(key);
        if (location == null)
            return false;
        else
            return Pattern.matches(regexp, location);
    }
}
In the userLocationMatches method above, the lock is held for the entire execution of the method, but the only operation that actually needs synchronization is the single line String location = attributes.get(key);. Most of the time the lock is held is therefore wasted.
The modified version below acquires the built-in lock only around the operation that needs it. This greatly reduces the lock holding time and improves scalability. Going further, a ConcurrentHashMap could replace the HashMap and the explicit lock entirely.
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class AttributeStore {
    private final Map<String, String> attributes = new HashMap<String, String>();

    public boolean userLocationMatches(String name, String regexp) {
        String key = "user." + name + ".location";
        String location;
        synchronized (this) {
            location = attributes.get(key);
        }
        if (location == null)
            return false;
        else
            return Pattern.matches(regexp, location);
    }
}
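The ConcurrentHashMap alternative mentioned above can be sketched as follows; the setAttribute method is a hypothetical addition so the store can be populated, not part of the original class.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Pattern;

// Sketch: delegating thread safety to ConcurrentHashMap removes the
// explicit lock entirely, since its get/put operations are thread-safe.
public class ConcurrentAttributeStore {
    private final Map<String, String> attributes = new ConcurrentHashMap<String, String>();

    // Hypothetical setter added for illustration only.
    public void setAttribute(String key, String value) {
        attributes.put(key, value);
    }

    public boolean userLocationMatches(String name, String regexp) {
        String key = "user." + name + ".location";
        String location = attributes.get(key);  // no synchronized block needed
        if (location == null)
            return false;
        return Pattern.matches(regexp, location);
    }
}
```

No lock is ever held while Pattern.matches runs, and lookups from different threads proceed without blocking each other.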
Reducing lock granularity
As just mentioned, the frequency of lock requests can be reduced through lock decomposition and lock striping. Both techniques use multiple independent locks to guard independent state variables. In the ServerStatus class below, for example, every method is marked synchronized, so a single lock guards both the users set and the queries set even though the two sets are entirely independent. There is no need for one lock to protect both; guarding each set with its own lock lets operations on one proceed without blocking operations on the other.
import java.util.Set;

public class ServerStatus {
    private final Set<String> users;
    private final Set<String> queries;

    public ServerStatus(Set<String> users, Set<String> queries) {
        this.users = users;
        this.queries = queries;
    }

    public synchronized void addUser(String u) {
        users.add(u);
    }

    public synchronized void addQuery(String q) {
        queries.add(q);
    }

    public synchronized void removeUser(String u) {
        users.remove(u);
    }

    public synchronized void removeQuery(String q) {
        queries.remove(q);
    }
}
After applying the lock decomposition technique:
import java.util.Set;

public class ServerStatus {
    private final Set<String> users;
    private final Set<String> queries;

    public ServerStatus(Set<String> users, Set<String> queries) {
        this.users = users;
        this.queries = queries;
    }

    public void addUser(String u) {
        synchronized (users) {
            users.add(u);
        }
    }

    public void addQuery(String q) {
        synchronized (queries) {
            queries.add(q);
        }
    }

    public void removeUser(String u) {
        synchronized (users) {
            users.remove(u);
        }
    }

    public void removeQuery(String q) {
        synchronized (queries) {
            queries.remove(q);
        }
    }
}
Lock striping
In some cases, lock decomposition can be extended further, decomposing the lock on a variable-sized set of independent objects. This is called lock striping.
For example, the implementation of ConcurrentHashMap (prior to Java 8) uses an array of 16 locks, each of which guards 1/16 of the hash buckets, to achieve better concurrency.
The disadvantage of lock striping is that obtaining exclusive access to the whole collection is harder and more expensive than with a single lock, because multiple locks must be acquired. For example, when ConcurrentHashMap needs to expand the map and rehash the keys into a larger set of buckets, it must acquire all of the locks in the stripe set.
public class StripedMap {
    // Synchronization policy: buckets[n] is guarded by locks[n % N_LOCKS].
    private static final int N_LOCKS = 16;
    private final Node[] buckets;
    private final Object[] locks;

    private static class Node {
        final Object key;
        Object value;
        Node next;

        Node(Object key, Object value, Node next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    public StripedMap(int numBuckets) {
        buckets = new Node[numBuckets];
        locks = new Object[N_LOCKS];
        for (int i = 0; i < N_LOCKS; i++)
            locks[i] = new Object();
    }

    private final int hash(Object key) {
        return Math.abs(key.hashCode() % buckets.length);
    }

    public Object get(Object key) {
        int hash = hash(key);
        synchronized (locks[hash % N_LOCKS]) {
            for (Node m = buckets[hash]; m != null; m = m.next)
                if (m.key.equals(key))
                    return m.value;
        }
        return null;
    }

    public void clear() {
        // Each bucket is cleared under its own stripe lock,
        // so the map is never locked as a whole.
        for (int i = 0; i < buckets.length; i++) {
            synchronized (locks[i % N_LOCKS]) {
                buckets[i] = null;
            }
        }
    }
}
Java Concurrency Programming and Scalability (ii)