"Go" Java High concurrency Basics

Source: Internet
Author: User

Locks:
    1. Intrinsic lock (monitor lock): every Java object can act as a lock for synchronization; such locks are called intrinsic (built-in) locks. The only way to acquire an intrinsic lock is to enter the synchronized block or method it protects.
    2. Reentrancy: intrinsic locks are reentrant, so if a thread tries to acquire a lock it already holds, the request succeeds. Reentrancy means locks are acquired at the granularity of a thread, not of an individual call (see the sketch below).
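A minimal sketch of reentrancy, with illustrative names (ReentrancyDemo, increment, size, report are not from the original text): because the intrinsic lock is held per thread, a synchronized method can call another synchronized method on the same object without blocking on itself.

    // Hypothetical example: all three methods synchronize on the same object.
    public class ReentrancyDemo {
        private int count;

        public synchronized void increment() {
            count++;
        }

        public synchronized int size() {
            return count;
        }

        public synchronized String report() {
            // Re-acquires the lock this thread already owns: succeeds immediately.
            return "count=" + size();
        }

        public static void main(String[] args) {
            ReentrancyDemo d = new ReentrancyDemo();
            d.increment();
            System.out.println(d.report()); // prints count=1
        }
    }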
Conditions for using volatile (all must hold; a sketch follows this list):
    1. Writes to the variable do not depend on its current value, or only a single thread ever updates the value.
    2. The variable does not participate in invariants together with other state variables.
    3. Locking is not required for any other reason while the variable is being accessed.
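A minimal sketch of a volatile use that satisfies all three conditions, assuming a single writer thread (the class and field names are illustrative): a shutdown flag is written by one thread and read by another, its new value does not depend on its old one, and no invariant ties it to other state.

    // Hypothetical single-writer shutdown flag: a valid use of volatile.
    public class VolatileFlagDemo {
        private static volatile boolean shutdownRequested = false;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!shutdownRequested) {
                    // do some work...
                }
                System.out.println("worker observed the shutdown request");
            });
            worker.start();

            Thread.sleep(100);
            shutdownRequested = true;   // visible to the worker without locking
            worker.join();
        }
    }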


High concurrency terminology

Each term below is listed with its usual English name and a short description.

Compare and Swap (CAS)
    A CAS operation takes two values: an old value (the value expected before the operation) and a new value. During the operation it checks whether the old value is still in place; only if it has not changed is the new value swapped in, otherwise nothing is exchanged. A sketch of a CAS retry loop follows this entry.
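A minimal sketch of a CAS-based update using java.util.concurrent.atomic.AtomicInteger (the increment-with-cap logic is an illustrative choice, not from the original text): the loop re-reads the old value and retries whenever compareAndSet reports that another thread changed it first.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasDemo {
        private static final AtomicInteger counter = new AtomicInteger(0);

        // Atomically increments the counter, but never past the given cap,
        // using a classic compare-and-swap retry loop.
        static boolean incrementUpTo(int cap) {
            while (true) {
                int oldValue = counter.get();          // value expected before the operation
                if (oldValue >= cap) {
                    return false;                      // cap reached, give up
                }
                int newValue = oldValue + 1;
                if (counter.compareAndSet(oldValue, newValue)) {
                    return true;                       // old value was unchanged, swap succeeded
                }
                // Another thread changed the value first: retry with a fresh read.
            }
        }

        public static void main(String[] args) {
            incrementUpTo(10);
            System.out.println(counter.get()); // prints 1
        }
    }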

CPU Pipeline
    A CPU pipeline works like an industrial assembly line: inside the CPU, an instruction-processing pipeline is built from five or six circuit units with different functions, and each x86 instruction is split into five or six steps that those units execute in turn. This allows one instruction to complete per CPU clock cycle, which raises the CPU's throughput.

Memory Order Violation
    Memory order violations are generally caused by false sharing: multiple CPUs modify different parts of the same cache line at the same time, invalidating the copy held by one of them. When such a conflict occurs, the CPU must flush its pipeline.

Shared Variable
    A variable that can be accessed by multiple threads is called a shared variable. Shared variables include all instance fields, static fields, and array elements; they are all stored in heap memory. volatile applies only to shared variables.

Memory Barrier
    A set of processor instructions that enforce ordering constraints on memory operations.

Cache Line
    The smallest unit of storage that can be allocated in a cache. When the processor fills a cache line it loads the entire line, which requires multiple main-memory read cycles.

Atomic Operation
    An operation, or series of operations, that cannot be interrupted.

Cache Line Fill
    When the processor recognizes that the operand being read from memory is cacheable, it reads the entire cache line into the appropriate cache (L1, L2, L3, or all of them).

Cache Hit
    If, on a later access, the address still falls within a cache line that was filled earlier, the processor reads the operand from the cache instead of from memory.

Write Hit
    When the processor writes an operand back to an area of memory that is cached, it first checks whether the cache line containing that address is present. If a valid cache line exists, the processor writes the operand back to the cache instead of back to memory; this is called a write hit.


synchronized, volatile, and the java.util.concurrent package

Utility classes commonly useful in concurrent programming. The java.util.concurrent package includes a few small, standardized extensible frameworks, as well as classes that provide useful functionality and would otherwise be tedious or difficult to implement. The main components are briefly described below. See also the locks and atomic packages.

Executors

Interfaces. Executor is a simple standardized interface for defining custom thread-like subsystems, including thread pools, asynchronous I/O, and lightweight task frameworks. Depending on which concrete Executor class is used, tasks may execute in a newly created thread, in an existing task-execution thread, or in the thread calling execute(), and may execute sequentially or concurrently. ExecutorService provides a more complete asynchronous task execution framework: an ExecutorService manages the queuing and scheduling of tasks and allows controlled shutdown. The ScheduledExecutorService sub-interface and associated interfaces add support for delayed and periodic task execution. ExecutorService provides methods for arranging asynchronous execution of any function expressed as a Callable, the result-bearing analog of Runnable. A Future returns the result of a function, allows determination of whether execution has completed, and provides a means to cancel execution. A RunnableFuture is a Future that possesses a run method which, when executed, sets its result.

Implementations. The classes ThreadPoolExecutor and ScheduledThreadPoolExecutor provide tunable, flexible thread pools. The Executors class provides factory methods for the most common kinds and configurations of Executors, as well as a few utility methods for using them. Other utilities based on Executors include the concrete class FutureTask, a common extensible implementation of Future, and ExecutorCompletionService, which assists in coordinating the processing of groups of asynchronous tasks.
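A minimal sketch of submitting a Callable to an ExecutorService and retrieving its result through a Future (the summing task itself is only an illustration):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ExecutorDemo {
        public static void main(String[] args) throws Exception {
            // A fixed-size pool created through the Executors factory methods.
            ExecutorService pool = Executors.newFixedThreadPool(2);

            // A Callable is the result-bearing analog of Runnable.
            Callable<Long> sumTask = () -> {
                long sum = 0;
                for (int i = 1; i <= 1_000; i++) sum += i;
                return sum;
            };

            Future<Long> future = pool.submit(sumTask);   // runs asynchronously
            System.out.println("sum = " + future.get());  // blocks until the result is ready

            pool.shutdown(); // orderly shutdown: no new tasks accepted
        }
    }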

Queue

The java.util.concurrent.ConcurrentLinkedQueue class supplies an efficient, scalable, thread-safe, non-blocking FIFO queue. Five implementations in java.util.concurrent support the extended BlockingQueue interface, which defines blocking versions of put and take: LinkedBlockingQueue, ArrayBlockingQueue, SynchronousQueue, PriorityBlockingQueue, and DelayQueue. These different classes cover the most common usage contexts for producer-consumer, messaging, parallel tasking, and related concurrent designs. The BlockingDeque interface extends BlockingQueue to support both FIFO and LIFO (stack-based) operations; the LinkedBlockingDeque class provides an implementation.
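A minimal producer-consumer sketch built on an ArrayBlockingQueue (the message strings and queue capacity are illustrative choices): put blocks while the queue is full, and take blocks while it is empty.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BlockingQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put("message-" + i); // blocks if the queue is full
                    }
                    queue.put("DONE");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    String msg;
                    while (!(msg = queue.take()).equals("DONE")) { // blocks if the queue is empty
                        System.out.println("consumed " + msg);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }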

Timing

The TimeUnit class provides multiple granularities (including nanoseconds) for specifying and controlling time-out based operations. Most classes in the package contain operations based on time-outs in addition to indefinite waits. In all cases where time-outs are used, the time-out specifies the minimum time the method should wait before indicating that it timed out. Implementations make a "best effort" to detect time-outs as soon as possible after they occur; however, an indefinite amount of time may elapse between a time-out being detected and the thread actually executing again after the time-out. All methods that accept a time-out parameter treat values less than or equal to zero as meaning not to wait at all. To wait "forever", you can use a value of Long.MAX_VALUE.
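A minimal sketch of a time-out based operation using TimeUnit, here polling a BlockingQueue with a 500-millisecond bound (the queue and the time-out value are illustrative):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class TimeoutDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();

            // Waits at most 500 ms for an element; returns null if none arrives in time.
            String result = queue.poll(500, TimeUnit.MILLISECONDS);
            System.out.println(result == null ? "timed out" : "got " + result);
        }
    }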

Synchronizers

Four classes aid common special-purpose synchronization idioms. Semaphore is a classic concurrency tool. CountDownLatch is a very simple yet very common utility for blocking until a given number of signals, events, or conditions hold. A CyclicBarrier is a resettable multiway synchronization point useful in some styles of parallel programming. An Exchanger allows two threads to exchange objects at a rendezvous point, and is useful in several pipeline designs.
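A minimal sketch using CountDownLatch to block the main thread until a fixed number of worker signals have arrived (the number of workers and the work they do are illustrative):

    import java.util.concurrent.CountDownLatch;

    public class LatchDemo {
        public static void main(String[] args) throws InterruptedException {
            int workers = 3;
            CountDownLatch done = new CountDownLatch(workers);

            for (int i = 0; i < workers; i++) {
                final int id = i;
                new Thread(() -> {
                    System.out.println("worker " + id + " finished");
                    done.countDown(); // signal completion
                }).start();
            }

            done.await(); // blocks until the count reaches zero
            System.out.println("all workers finished");
        }
    }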

Concurrent Collection

In addition to queues, this package supplies Collection implementations designed for use in multithreaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet. When many threads are expected to access a given collection, a ConcurrentHashMap is normally preferable to a synchronized HashMap, and a ConcurrentSkipListMap is normally preferable to a synchronized TreeMap. A CopyOnWriteArrayList is preferable to a synchronized ArrayList when the expected number of reads and traversals greatly outnumbers the number of updates to the list.

The "concurrent&rdquo prefix" that is used in this package with some classes, and is a shorthand that differs from similar "synchronous" classes. For example,java.util.Hashtable and collections.synchronizedmap (New HashMap ()) are synchronous, but are ConcurrentHashMap "concurrent". Concurrent collection are thread-safe, but are not managed by a single exclusive lock. In this particular case of concurrenthashmap, it is safe to allow any number of concurrent reads, as well as a number of concurrent writes that can be adjusted. The "Sync" class is useful when you need to disallow all access to collection through a single lock, and the cost is poor scalability. In other cases where multiple threads are expected to access public collection, it is generally better to have a "concurrent" version. Non-synchronous collection is better when collection is not shared, or collection is accessible only when other locks are kept.

Most concurrent Collection implementations (including most Queues) also differ from the usual java.util conventions in that their iterators provide weakly consistent rather than fail-fast traversal. A weakly consistent iterator is thread-safe, but does not necessarily freeze the collection while iterating, so it may or may not reflect updates made since the iterator was created.
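A minimal sketch of concurrent updates to a ConcurrentHashMap (the word-count use case is illustrative): several threads may update the map at once without a single exclusion lock, and merge performs each per-key update atomically.

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ConcurrentMapDemo {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
            List<String> words = List.of("a", "b", "a", "c", "a", "b");

            // Two threads update the same map concurrently; merge() is atomic per key.
            Runnable task = () -> {
                for (String w : words) {
                    counts.merge(w, 1, Integer::sum);
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();

            System.out.println(counts); // e.g. {a=6, b=4, c=2}
        }
    }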

Memory Consistency Properties

Chapter 17 of the Java Language Specification defines the happens-before relation on memory operations such as reads and writes of shared variables. The result of a write by one thread is guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation. The synchronized and volatile constructs, as well as the Thread.start() and Thread.join() methods, can form happens-before relationships. In particular:

    • Each action in a thread happens-before every action in that thread that comes later in program order.
    • An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions of any thread subsequent to locking that monitor.
    • A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have memory consistency effects similar to entering and exiting monitors, but do not entail mutual-exclusion locking (see the sketch after this list).
    • A call to start() on a thread happens-before any action in the started thread.
    • All actions in a thread happen-before any other thread successfully returns from a join() on that thread.
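A minimal sketch of the volatile rule above (the field names are illustrative): because the write to the volatile field ready happens-before the subsequent read that observes it, the reader is guaranteed to also see the earlier ordinary write data = 42.

    public class HappensBeforeDemo {
        static int data = 0;                   // ordinary field
        static volatile boolean ready = false; // volatile field

        public static void main(String[] args) {
            Thread reader = new Thread(() -> {
                while (!ready) { /* spin until the volatile read observes the write */ }
                // The volatile write of ready happens-before this read,
                // so the earlier ordinary write to data is also visible.
                System.out.println(data); // guaranteed to print 42
            });
            reader.start();

            data = 42;      // ordinary write, ordered before the volatile write below
            ready = true;   // volatile write publishes data to the reader
        }
    }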

The methods of all classes in java.util.concurrent and its subpackages extend these guarantees to higher-level synchronization. In particular:

  • Actions in a thread prior to placing an object into any concurrent collection happen-before actions subsequent to the access or removal of that element from the collection in another thread.
  • Actions in a thread prior to the submission of a Runnable to an Executor happen-before its execution begins. The same applies to Callables submitted to an ExecutorService.
  • Actions taken by the asynchronous computation represented by a Future happen-before actions subsequent to the retrieval of the result via Future.get() in another thread.
  • Actions prior to "releasing" synchronizer methods (such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown) happen-before actions subsequent to a successful "acquiring" method (such as Lock.lock, Semaphore.acquire, Condition.await, and CountDownLatch.await) on the same synchronizer object in another thread.
  • For each pair of threads that successfully exchange objects via an Exchanger, actions prior to the exchange() in each thread happen-before those subsequent to the corresponding exchange() in the other thread.
  • Actions prior to calling CyclicBarrier.await happen-before actions performed by the barrier action, and actions performed by the barrier action happen-before actions subsequent to a successful return from the corresponding await in other threads.
Condition

Condition factors out the Object monitor methods (wait, notify, and notifyAll) into distinct objects, giving the effect of multiple wait-sets per object, by combining them with the use of arbitrary Lock implementations. Where a Lock replaces the use of synchronized methods and statements, a Condition replaces the use of the Object monitor methods.

A condition (also known as a condition queue or condition variable) provides a means for one thread to suspend execution (to "wait") until notified by another thread that some state condition may now be true. Because access to this shared state information occurs in different threads, it must be protected, so a lock of some form is associated with the condition. The key property of waiting on a condition is that it atomically releases the associated lock and suspends the current thread, just as Object.wait does.

A Condition instance is intrinsically bound to a lock. To obtain a Condition instance for a particular Lock instance, use its newCondition() method.

As an example, suppose we have a bounded buffer that supports put and take methods. If a take is attempted on an empty buffer, the thread will block until an item becomes available; if a put is attempted on a full buffer, the thread will block until a space becomes available. We would like to keep waiting put threads and take threads in separate wait-sets, so that we can use the optimization of notifying only a single thread at a time when items or spaces become available in the buffer. This can be achieved using two Condition instances.

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    class BoundedBuffer {
        final Lock lock = new ReentrantLock();
        final Condition notFull  = lock.newCondition(); // signalled when space becomes available
        final Condition notEmpty = lock.newCondition(); // signalled when an item becomes available

        final Object[] items = new Object[100];
        int putptr, takeptr, count;

        public void put(Object x) throws InterruptedException {
            lock.lock();
            try {
                while (count == items.length)
                    notFull.await();              // buffer full: wait for space
                items[putptr] = x;
                if (++putptr == items.length) putptr = 0;
                ++count;
                notEmpty.signal();                // wake one waiting taker
            } finally {
                lock.unlock();
            }
        }

        public Object take() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0)
                    notEmpty.await();             // buffer empty: wait for an item
                Object x = items[takeptr];
                if (++takeptr == items.length) takeptr = 0;
                --count;
                notFull.signal();                 // wake one waiting putter
                return x;
            } finally {
                lock.unlock();
            }
        }
    }

(The ArrayBlockingQueue class provides this functionality, so there is no reason to implement this sample class in practice.)
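A short usage sketch for the BoundedBuffer class above (the produced values are illustrative): a producer thread calls put while a consumer thread calls take, each blocking on its own condition when the buffer is full or empty.

    public class BoundedBufferDemo {
        public static void main(String[] args) throws InterruptedException {
            BoundedBuffer buffer = new BoundedBuffer();

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        buffer.put(i); // blocks on notFull if the buffer is full
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        System.out.println("took " + buffer.take()); // blocks on notEmpty if empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }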

A Condition implementation can provide behavior and semantics that differ from those of the Object monitor methods, such as guaranteed ordering of notifications, or not requiring a lock to be held when performing notifications. If an implementation provides such specialized semantics, it must document those semantics.

Note that Condition instances are just normal objects: they can themselves be used as the target of a synchronized statement, and their own monitor wait and notify methods can be invoked. Acquiring the monitor lock of a Condition instance, or using its monitor methods, has no specified relationship with acquiring the Lock associated with that Condition or with the use of its waiting and signalling methods. To avoid confusion, it is recommended that you never use Condition instances in this way, except perhaps within their own implementation.

Unless otherwise noted, passing a null value for any parameter will result in a NullPointerException being thrown.

Implementation considerations

When waiting upon a Condition, a "spurious wakeup" is permitted to occur, in general, as a concession to the underlying platform semantics. This has little practical impact on most applications, since a Condition should always be waited upon in a loop that tests the state predicate being waited for. An implementation is free to remove the possibility of spurious wakeups, but it is recommended that application programmers always assume they can occur and therefore always wait in a loop.

The three forms of condition waiting (interruptible, non-interruptible, and timed) may differ in ease of implementation on some platforms and in their performance characteristics. In particular, it may be difficult to provide these features while maintaining specific semantics such as ordering guarantees. Further, the ability to interrupt the actual suspension of a thread may not be feasible on all platforms.

Consequently, an implementation is not required to define exactly the same guarantees or semantics for all three forms of waiting, nor is it required to support interruption of the actual suspension of a thread.

An implementation is required to clearly document the semantics and guarantees provided by each of the waiting methods, and when an implementation does support interruption of thread suspension, it must obey the interruption semantics defined in this interface.

As interruption generally implies cancellation, and checks for interruption are often infrequent, an implementation may favor responding to an interrupt over normal method return. This is true even if it can be shown that the interrupt occurred after another action that may have unblocked the thread. An implementation should document this behavior.

"Go" Java High concurrency Basics

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.