Performance and Scalability
I. Amdahl's Law
1. The relationship between problems and resources
In some cases, adding resources makes a problem solve faster; in other cases, adding resources helps little or even makes things worse.
Note: every program has a serial part, so analyzing which parts of a program are serial and which can run in parallel matters greatly. By Amdahl's law, with serial fraction F and N processors, the speedup is at most 1 / (F + (1 - F) / N); even a small serial fraction sharply limits multicore execution efficiency.
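The bound above is easy to evaluate directly. A minimal sketch (class and method names are my own, for illustration):

```java
// Amdahl's law: with serial fraction F and N processors,
// speedup <= 1 / (F + (1 - F) / N)
public class Amdahl {
    static double speedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }

    public static void main(String[] args) {
        // Even with only 10% serial code, 32 cores give well under 8x speedup
        System.out.printf("F=0.10, N=32  -> %.2fx%n", speedup(0.10, 32));
        // As N grows without bound, the speedup is capped at 1/F
        System.out.printf("F=0.10, N=1e6 -> %.2fx%n", speedup(0.10, 1_000_000));
    }
}
```

Note how the curve flattens: past a point, adding cores buys almost nothing while the serial fraction stays fixed.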
2. ConcurrentLinkedQueue
In a multicore environment, this thread-safe queue is much faster than a queue obtained by wrapping a list with Collections.synchronizedList.
In general, the classes provided in java.util.concurrent scale better than the thread-safe wrappers produced by the Collections.synchronizedXxx methods.
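A minimal sketch of the two choices (the helper method is my own, for illustration): the wrapper serializes every call on one lock, while ConcurrentLinkedQueue is a non-blocking queue that needs no external locking.

```java
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueChoice {
    // Drain a queue; with ConcurrentLinkedQueue this is safe
    // even while other threads are still offering elements
    static int drain(Queue<String> q) {
        int n = 0;
        while (q.poll() != null) n++;
        return n;
    }

    public static void main(String[] args) {
        // Non-blocking queue: scales well under contention
        Queue<String> concurrent = new ConcurrentLinkedQueue<>();
        concurrent.offer("task-1");
        concurrent.offer("task-2");

        // Synchronized wrapper: every call contends for the wrapper's single lock
        List<String> wrapped = Collections.synchronizedList(new LinkedList<String>());
        wrapped.add("task-1");

        System.out.println(drain(concurrent)); // prints 2
    }
}
```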
II. Thread Overhead
Multithreading has costs: it is only worth using when the performance gain exceeds the concurrency overhead, which includes:
The cost of context switching
The overhead of memory synchronization
III. Reducing Lock Contention
1. Reduce lock holding time: narrow the lock scope
```java
private final Map<String, String> attributes = new HashMap<String, String>();

// Before: the whole method is locked
public synchronized boolean userLocationMatches(String name, String regexp) {
    String key = "users." + name + ".location";
    String location = attributes.get(key);
    if (location == null)
        return false;
    else
        return Pattern.matches(regexp, location);
}

// After: the lock is held only while accessing the shared mutable state
public boolean userLocationMatches(String name, String regexp) {
    String key = "users." + name + ".location";
    String location;
    synchronized (this) {
        location = attributes.get(key);
    }
    if (location == null)
        return false;
    else
        return Pattern.matches(regexp, location);
}
```
2. Reduce lock request frequency: lock splitting, lock striping, ...
Lock splitting: break one lock into several. For example, when multiple state variables never need to be updated in a single atomic operation, there is no reason for them all to share the class instance's lock; give each independent state variable its own lock.
```java
public class ServerStatusBeforeSplit {
    public final Set<String> users;
    public final Set<String> queries;

    public ServerStatusBeforeSplit() {
        users = new HashSet<String>();
        queries = new HashSet<String>();
    }

    // Every method locks the current instance (equivalent to synchronized (this)),
    // regardless of which piece of shared state it actually touches
    public synchronized void addUser(String u) { users.add(u); }
    public synchronized void addQuery(String q) { queries.add(q); }
    public synchronized void removeUser(String u) { users.remove(u); }
    public synchronized void removeQuery(String q) { queries.remove(q); }
}

public class ServerStatusAfterSplit {
    public final Set<String> users;
    public final Set<String> queries;

    public ServerStatusAfterSplit() {
        users = new HashSet<String>();
        queries = new HashSet<String>();
    }

    // Methods that operate on the same state share the same lock
    public void addUser(String u) {
        synchronized (users) { users.add(u); }
    }
    public void addQuery(String q) {
        synchronized (queries) { queries.add(q); }
    }
    public void removeUser(String u) {
        synchronized (users) { users.remove(u); }
    }
    public void removeQuery(String q) {
        synchronized (queries) { queries.remove(q); }
    }
}
```
Lock striping: partition the map's buckets into segments, each guarded by its own lock. Operations such as get then need to hold only one segment's lock, so threads working on different segments can proceed concurrently. Of course, some operations, such as clear, still need to acquire every segment lock in the container.
```java
// Striped-lock implementation of a map
public class StripedMap {
    // Synchronization policy: buckets[n] guarded by locks[n % N_LOCKS]
    private static final int N_LOCKS = 16;  // number of locks
    private final Node[] buckets;           // hash buckets
    private final Object[] locks;           // lock objects

    private static class Node {
        Node next;
        Object key;
        Object value;
    }

    public StripedMap(int numBuckets) {
        buckets = new Node[numBuckets];
        locks = new Object[N_LOCKS];
        for (int i = 0; i < N_LOCKS; i++)
            locks[i] = new Object();
    }

    private final int hash(Object key) {
        return Math.abs(key.hashCode() % buckets.length);
    }

    public Object get(Object key) {
        int hash = hash(key);
        // Acquire only the lock guarding this key's bucket
        synchronized (locks[hash % N_LOCKS]) {
            for (Node m = buckets[hash]; m != null; m = m.next)
                if (m.key.equals(key))
                    return m.value;
        }
        return null;
    }

    public void clear() {
        for (int i = 0; i < buckets.length; i++) {
            // Over the course of the loop, every stripe lock is acquired in turn
            synchronized (locks[i % N_LOCKS]) {
                buckets[i] = null;
            }
        }
    }
}
```
3. Avoid hot fields
Lock contention on a hot field is intense and causes performance problems; a typical example is a cached size counter that every insertion and removal must update.
4. Alternatives to exclusive locks
For example: read-write locks (reads can proceed in parallel with each other rather than being mutually exclusive); atomic variables; concurrent containers; immutable objects; and so on.
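A minimal sketch of the first two alternatives (the class and its methods are my own, for illustration): a ReentrantReadWriteLock lets many readers proceed together, and an AtomicLong replaces a lock entirely for a simple counter.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class NonExclusiveCounters {
    // Read-write lock: many readers may hold the read lock at once;
    // only the write lock is exclusive
    private final ReadWriteLock rw = new ReentrantReadWriteLock();
    private long value;

    // Atomic variable: lock-free alternative for a simple counter
    private final AtomicLong reads = new AtomicLong();

    public long read() {
        rw.readLock().lock();        // does not block other readers
        try {
            reads.incrementAndGet(); // no lock needed at all
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(long v) {
        rw.writeLock().lock();       // exclusive
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public long reads() { return reads.get(); }
}
```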
5. Reduce context switching
Every time a task blocks and later resumes, its thread must be suspended and rescheduled, which costs a context switch.
For example, logging: printing log messages performs I/O, so threads repeatedly block and wake, causing performance problems.
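One common remedy is to move the I/O onto a single dedicated thread so that worker threads only enqueue messages and rarely block. A minimal sketch, assuming an unbounded queue and System.out as the log sink (class and method names are my own):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Called by worker threads: cheap, no I/O on the caller's thread
    public void log(String msg) {
        queue.offer(msg);
    }

    // Starts the one background thread that performs all the slow, blocking I/O
    public void start() {
        Thread writer = new Thread(() -> {
            try {
                while (true)
                    System.out.println(queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down quietly
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    int pending() { return queue.size(); }
}
```

Because only the writer thread ever touches the output stream, worker threads no longer block on I/O or contend for the stream's lock.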
Java Concurrency Programming (4): Performance and Scalability