Java Concurrency Programming

Source: Internet
Author: User
Tags: CAS, semaphore, visibility, volatile

Who holds the lock, and what is the lock

    public class Lock {
        public synchronized void fun1() {
            // business operations
        }

        public static synchronized void fun2() {
            // business operations
        }
    }

    Lock a = new Lock();

fun1: the lock is the instance object (this); the lock is held by the thread that calls the method.
fun2: the lock is Lock.class; the lock is held by the thread that calls the method.
Once a thread holds a lock, it can enter any other block guarded by the same lock (the lock is reentrant).

Things to note about synchronized, and its disadvantages:

Things to note:
A lock is used for multithreaded concurrent access. When a thread holding a lock calls sleep(), the thread does not release the lock (or any other resource it holds).
wait: the thread releases the lock; when it wakes up it must reacquire the lock before continuing. Must be called inside the synchronized block.
notify: wakes up one of the threads waiting on this monitor. Must be called inside the synchronized block.
notifyAll: wakes up all waiting threads. Generally call notifyAll, otherwise some threads may never be woken up and appear to hang. Must be called inside the synchronized block. A minimal sketch follows below.
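
A minimal sketch of the wait/notify pattern above, assuming a hypothetical bounded buffer with an isFull() check; the wait sits in a loop inside the synchronized block:

    private final Object monitor = new Object();

    public void put(String item) throws InterruptedException {
        synchronized (monitor) {
            while (isFull()) {       // isFull() is a placeholder; re-check the condition after every wake-up
                monitor.wait();      // releases the monitor while waiting
            }
            // add item to the buffer
        }
    }

    public void get() {
        synchronized (monitor) {
            // remove data from the buffer
            monitor.notifyAll();     // wake up all waiting producers
        }
    }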

Disadvantages:
When concurrent code needs a time-out or an interruptible lock acquisition, synchronized cannot provide it; the thread simply waits until it acquires the lock and the business computation completes before exiting.

Explicit lock: ReentrantLock:

Semantically it is exactly the same as synchronized, just with more features.

Usage:

    lock.lock();       // acquire the lock
    try {
        // business operations
    } finally {
        lock.unlock(); // release the lock; as with a database connection, this must go in finally
    }

As with database connections, forgetting to release the lock increases risk.

Common API explanations:

tryLock(long timeout, TimeUnit unit)   // tries to acquire the lock; returns false if it cannot be acquired within the given time.
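
A sketch of how tryLock with a time-out might be used; the 2-second time-out and the task body are placeholders:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    ReentrantLock lock = new ReentrantLock();

    public void doWithTimeout() throws InterruptedException {
        if (lock.tryLock(2, TimeUnit.SECONDS)) {   // give up if the lock is not free within 2 seconds
            try {
                // business operations
            } finally {
                lock.unlock();
            }
        } else {
            // could not acquire the lock in time; handle the time-out here
        }
    }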


ReentrantLock's wait, notify, notifyAll:

The Condition methods await, signal and signalAll correspond to wait, notify and notifyAll of synchronized.
If you use a ReentrantLock, you cannot use the wait/notify/notifyAll methods on it; use a Condition instead.
Producer-consumer scenario with a bounded array: when the array is full the producer waits; after the consumer clears data it wakes the producer thread up.
    ReentrantLock lock = new ReentrantLock();
    Condition full = lock.newCondition();
    public void put(String str) throws InterruptedException {
        lock.lock();
        try {
            while (isFull()) {       // buffer is full
                full.await();        // release the lock and wait
            }
            // add str to the buffer
        } finally {
            lock.unlock();
        }
    }
    public void get() {
        lock.lock();
        try {
            // clear the buffer, then wake up waiting producers
            full.signalAll();
        } finally {
            lock.unlock();
        }
    }
A Condition lets you wait and signal on a specific condition, which makes for more targeted, object-oriented code.

Similar issues in databases:

Dirty reads: the first transaction reads data that a second transaction is still updating; if the UPDATE statement has not yet completed, the first transaction only sees intermediate data, not the final result. Oracle's default isolation level is Read Committed, so this problem does not occur there.
Exclusive lock: acquired when data is modified.
Shared lock: acquired by queries; many threads can hold shared locks on the same data at once. If shared locks are held and an exclusive lock is requested, the exclusive lock waits until the shared locks are released.

How Java addresses the same issues:
Dirty reads: the volatile keyword and visibility, e.g. for 64-bit long and double fields.
Exclusive locks: synchronized and ReentrantLock are exclusive locks by default.
Shared locks: the ReentrantReadWriteLock class, a read-write lock. Scenario: a system cache of constant data that is mostly read and rarely modified. A sketch follows below.
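
A minimal sketch of a read-mostly cache guarded by ReentrantReadWriteLock; the cache map and the String key/value types are assumptions for illustration:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Map<String, String> cache = new HashMap<>();

    public String get(String key) {
        rwLock.readLock().lock();          // many readers may hold this at once
        try {
            return cache.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();         // exclusive; waits for all readers to finish
        try {
            cache.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }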




Introduction to concurrency classes ----- atomic variables

boolean: AtomicBoolean
long: AtomicLong
Reference: AtomicReference
For a large amount of data in a linked structure, use a field updater instead of wrapping each node: AtomicReferenceFieldUpdater.
Case: counting the number of visits to a system. Suppose you use a long sum = 0; and a sum++ method.
From the computer's point of view, sum++ is three steps: 1. read sum (0); 2. add 1 (sum + 1); 3. write the result back to sum (sum = 1).
For concurrent access we would typically use an implicit lock (synchronized) or an explicit lock (Lock) to make this atomic; the atomic classes do it without a lock. A sketch follows below.
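
A minimal sketch of the visit counter using AtomicLong instead of a lock; the class and method names are illustrative:

    import java.util.concurrent.atomic.AtomicLong;

    public class VisitCounter {
        private final AtomicLong sum = new AtomicLong(0);

        public long increment() {
            return sum.incrementAndGet();   // atomic read-add-write, no explicit lock needed
        }

        public long current() {
            return sum.get();
        }
    }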


Introduction to concurrency classes --- CAS (compare and swap)
Pseudocode:

    addOne() {
        for (;;) {
            int old = currentValue;
            int next = old + 1;
            if (cas(old, next)) {
                return;
            }
        }
    }

    /**
     * Implemented as an atomic CPU instruction; returns true when the swap succeeds, otherwise false.
     */
    cas(int old, int next) {
        if (old == currentValue) {   // still the value we just read, so nothing has changed
            currentValue = next;
            return true;
        } else {                     // another thread modified it; return false and the caller retries the CAS
            return false;
        }
    }
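
In real Java code the same retry loop can be written with AtomicInteger.compareAndSet; a minimal sketch:

    import java.util.concurrent.atomic.AtomicInteger;

    private final AtomicInteger value = new AtomicInteger(0);

    public int addOne() {
        for (;;) {
            int old = value.get();
            int next = old + 1;
            if (value.compareAndSet(old, next)) {   // succeeds only if nobody changed the value in between
                return next;
            }
            // otherwise another thread won the race; loop and try again
        }
    }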


Introduction to concurrency classes ----- comparing CAS and lock implementations
Basic behaviour of a lock: thread A acquires the lock, thread B tries to acquire it and waits; when A releases the lock all waiting threads are woken up and B acquires the lock.
Suspending and waking threads involves operating-system context switches, from user mode to kernel mode, which is relatively slow.
With CAS the work is done directly by the JVM and CPU. But under very heavy contention, CAS is not necessarily faster than a lock because of the retry spinning; which approach wins depends on the concrete workload, and these algorithms were worked out by experts to solve specific problems.

Introduction to concurrency classes ----- concurrent collections (cache queues)

Array-based collections, e.g. ArrayList.
Combination of array and linked list: HashMap.
For faster concurrent operation the java.util.concurrent package uses simpler designs:
Bounded array: ArrayBlockingQueue.
Concurrent array-plus-linked-list map: ConcurrentHashMap.
When a map is large, adding and changing entries takes more resources. The map can be split into several smaller maps; the hash modulo picks one of them, and the lock is taken only on that smaller map. This is similar to database partitioning: separate locks that each guard only one region of the data. A sketch of the bounded queue follows below.
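
A minimal sketch of a bounded queue shared by a producer and a consumer using ArrayBlockingQueue; the capacity of 10 and the String elements are chosen just for illustration:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);   // bounded to 10 elements

    // producer thread: put() blocks while the queue is full (throws InterruptedException)
    queue.put("task-1");

    // consumer thread: take() blocks while the queue is empty (throws InterruptedException)
    String task = queue.take();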

Introduction to concurrency classes ---- tool classes
Semaphore: e.g. at most 5 permits; a business operation acquires a permit (acquire), does its work, then releases it (release). Somewhat like a connection pool, it limits the amount of concurrent work. A sketch follows after this list.
Barrier, implementation class CyclicBarrier: each thread waits when it reaches the barrier; once the last thread arrives, all of them continue (or finish) together.
Latch, CountDownLatch: waiting threads are released to run only after all the required steps have been counted down.
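
A minimal sketch of the Semaphore pattern described above, assuming 5 permits and a placeholder business operation:

    import java.util.concurrent.Semaphore;

    Semaphore permits = new Semaphore(5);   // at most 5 threads work at the same time

    public void doWork() throws InterruptedException {
        permits.acquire();        // blocks until a permit is free
        try {
            // business operation
        } finally {
            permits.release();    // always give the permit back
        }
    }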

Introduction to concurrency classes ---- the thread pool tool class
When many short-lived threads are created, the JVM spends more and more resources allocating them; a thread pool is similar to a database connection pool.
Static methods on the Executors class:
newFixedThreadPool(int nThreads) -- a fixed-size thread pool.
newCachedThreadPool() -- creates threads as the system needs them, then reuses them.
Benefits:
With a pool, your threads do not grow without bound and cause an out-of-memory error; the load stays under your control. When the system load is very high, data submitted by users backs up in your JVM, then in OS-level buffers, then in router buffers once the OS buffers are full, and finally the router times out back to the user. A sketch follows below.
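
A minimal sketch of submitting work to a fixed-size pool; the pool size of 4 and the task body are illustrative:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    ExecutorService pool = Executors.newFixedThreadPool(4);   // at most 4 worker threads

    pool.submit(() -> {
        // business operation runs on a pooled thread
    });

    pool.shutdown();   // stop accepting new tasks; let queued tasks finish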

Concurrency testing:
Garbage collection can distort your test reports; print GC activity with -verbose:gc and discard runs during which a collection happened.
Methods first run as interpreted bytecode and are compiled dynamically only after enough time; print compilation information with -XX:+PrintCompilation.
The JDK has client and server run modes; releases should definitely run in server mode, which is also better at optimizing away dead code: -server. An example invocation follows below.
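
These flags might be combined on the command line roughly like this; the MyBenchmark class name is a placeholder:

    java -server -verbose:gc -XX:+PrintCompilation MyBenchmark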

Formulas
Amdahl's law: when a fraction F of a computation must run serially, the speedup achievable with N processors is bounded by Speedup <= 1 / (F + (1 - F) / N).

Sizing a Java thread pool: based on the ratio of wait time (WT) to service time (ST). For a system with N processors, roughly N * (1 + WT / ST) threads are needed to keep the processors fully utilized. A worked example follows below.
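
A worked example of the sizing rule, with assumed numbers: on a 4-processor machine where each task waits 50 ms on I/O for every 10 ms of CPU time (WT/ST = 5):

    int processors = Runtime.getRuntime().availableProcessors();          // say 4
    double waitTime = 50.0;      // ms spent blocked per task (assumed)
    double serviceTime = 10.0;   // ms of CPU work per task (assumed)
    int poolSize = (int) (processors * (1 + waitTime / serviceTime));     // 4 * (1 + 5) = 24 threads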

Happens-before:
To improve multithreaded performance, the compiler may reorder code; the happens-before rules determine which orderings of code execution are allowed.
1. Single-thread rule: within the same thread, an earlier operation in program order happens-before a later operation. In other words, the happens-before relationships between operations in a single thread are entirely determined by source-code order.
2. Monitor lock rule: an unlock of a lock happens-before every subsequent lock of the same lock. "Subsequent" here is a relationship in time: the unlock happens when a synchronized block is exited, the lock when one is entered. All reads and writes of a shared variable must be synchronized this way to guarantee that stale data is not read; synchronizing only the reads or only the writes is not enough.
3. Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
Refer to: http://www.iteye.com/topic/260515

The built-in implicit lock: synchronized

Features: visibility, the same guarantee as volatile.

Atomicity: combines several non-atomic operations into one atomic operation.

A final variable does not need synchronization; but for an object reference, the object's internal member variables also need to be final for this to hold.

When a shared variable such as a collection is created, it needs to be locked both when it is modified and when it is read or traversed; otherwise iteration may throw ConcurrentModificationException because the collection was changed unexpectedly.

Real-world workloads are not solved well by a single computer or a single-core CPU alone: systems need to spread the load explicitly across many machines and CPUs. And to fit today's low-carbon lifestyle, we also need to improve the computing power of each single machine, which means learning to design concurrency well.

Key words:

Atomic operations: an atom cannot be divided further.

volatile: the visibility keyword.

synchronized: the built-in implicit lock
ReentrantLock: explicit lock
ReentrantReadWriteLock: read/write lock
final: unchanged after creation

JMM (Java Memory Model):

A thread operates on all variables in its own working memory; threads cannot access each other's working memory directly, and variables are passed between threads through main memory.

Common methods of Thread and Runnable

interrupt(): interrupts the thread. In Java this only sets the interrupt flag; the concrete response needs to be implemented by the business logic.
interrupted(): checks whether the thread has been interrupted, and clears the interrupt status.
isInterrupted(): checks whether the thread has been interrupted, and does not clear the interrupt status.
join(): waits for the thread to terminate.
yield(): pauses the execution of the current thread, yielding the processor and letting the JVM's thread scheduler pick again from the runnable threads. The same thread may run again immediately.
Thread states: new -> runnable -> scheduled to run -> waiting / blocked / sleeping -> terminated

Interrupt:

Blocking calls such as socket read and write do not respond well to interruption; interruption can be achieved by closing the socket (socket.close()).
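
A minimal sketch of that idea, assuming a worker thread blocked in a socket read and another thread that wants to stop it (the socket variable is a placeholder):

    import java.io.IOException;
    import java.net.Socket;

    // worker thread: blocks here until data arrives or the socket is closed
    try {
        int b = socket.getInputStream().read();
    } catch (IOException e) {
        // read() fails once the socket is closed; treat that as the stop signal
    }

    // stopping thread: Thread.interrupt() does not unblock this read, so close the socket instead
    socket.close();   // also declared to throw IOException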

The concurrency visibility keyword: volatile

When multiple threads modify the same data, a change may not be visible to the other threads immediately, because of the JMM restrictions described above.
Hardware implementation: CPU vendors provide memory barriers (fences) with which data in each CPU core's cache is synchronized. Java exposes such a mechanism through the volatile keyword.
The data in each thread's working memory is actively synchronized with main memory.
The write of a volatile field happens-before every subsequent read of the same field: see the happens-before rules above.
For example, a system thread's flag indicating whether it should keep running:
private volatile boolean running = false;
Reads and writes of 64-bit long and double values may be split into two 32-bit operations; declaring them volatile makes the JMM treat the reads and writes as atomic. Is there still such a limit on 64-bit machines? A sketch of the running-flag pattern follows below.
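
A minimal sketch of the running-flag pattern, with a hypothetical worker loop:

    public class Worker implements Runnable {
        private volatile boolean running = true;   // visible to the stopping thread without a lock

        @Override
        public void run() {
            while (running) {
                // business operation
            }
        }

        public void stop() {
            running = false;   // the write is flushed to main memory; the loop sees it on its next check
        }
    }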

