Java Concurrent Notes

Source: Internet
Author: User
Tags: instance method, visibility

Why do we need parallelism?

- Business requirements: sometimes parallel computing is required by the business model itself, not to improve system performance but because the business genuinely needs multiple execution units. An HTTP server, for example, creates a new handler thread for each socket connection; letting different threads take on different business tasks also simplifies task scheduling.
- Performance: compute-intensive work in the multicore era.

Linus Torvalds once argued that parallel computing only pays off in two areas, *image processing* and *server programming*; it does have a wide range of uses in those two areas, but anywhere else, in his view, parallel computing is worth little.

Basic concepts:

- Synchronous vs. asynchronous.
- Concurrency vs. parallelism: for compute-intensive work in the multicore era there is generally no need to distinguish the two.
- Critical section: a shared resource that only one thread may use at a time.
- Blocking vs. non-blocking: terms that describe how multiple threads interact. If one thread occupies a critical-section resource, all other threads that need the resource must wait at the critical section, suspended; this is blocking. If the occupying thread never releases the resource, none of the threads blocked at the critical section can make progress. Non-blocking designs, by contrast, allow multiple threads to keep making progress toward the critical section at the same time.
- Deadlock, starvation, and livelock.
- Levels of concurrency, from weakest to strongest guarantee: blocking; then non-blocking, which subdivides into obstruction-free, lock-free, and wait-free.
- Two important laws about parallel speedup: Amdahl's law and Gustafson's law.
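The first of the two laws is easy to make concrete. A minimal sketch of Amdahl's law, assuming a parallelizable fraction `p` of the work and `n` processors (the class and method names here are illustrative):

```java
// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
// the work that can run in parallel and n is the number of processors.
public class AmdahlDemo {
    public static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        System.out.println(speedup(0.5, 4));      // half serial, 4 cores: only 1.6x
        System.out.println(speedup(0.95, 1000));  // 5% serial caps speedup below 20x
    }
}
```

With a 50% serial portion, four cores yield only a 1.6x speedup, and no matter how many cores are added, a 5% serial portion caps the speedup below 20x.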
Gustafson's law complements Amdahl's: where Amdahl's law fixes the problem size, Gustafson's lets the problem grow with the number of processors.

Threads in Java: process switching is a heavyweight operation that consumes a lot of compute resources, which is why we work with threads; Java threads map directly to operating-system threads.

```java
Thread t1 = new Thread();
t1.start();   // starts a new thread, putting it in the ready state
```

Calling `t2.run()` on `Thread t2 = new Thread()` does not open a thread: it merely executes the run() method inside the current thread. There are two ways to define the work: inherit Thread and override run() (`class MyThread extends Thread`), or pass a Runnable:

```java
new Thread(new Runnable() {
    public void run() { /* ... */ }
}).start();
```

Thread.stop() is deprecated (@Deprecated) and not recommended: it is too violent, similar to forcibly killing a Linux process with `kill -9 <pid>`.

The sleep signature:

```java
public static native void sleep(long millis) throws InterruptedException;
```

A cooperative interruption loop — note that sleep() clears the interrupt flag when it throws, so the flag must be set again after catching the exception:

```java
public void run() {
    while (true) {
        if (Thread.currentThread().isInterrupted()) {
            System.out.println("interrupted!");
            break;
        }
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            System.out.println("interrupted when sleep");
            // re-set the interrupt flag that the thrown exception cleared
            Thread.currentThread().interrupt();
        }
        Thread.yield();
    }
}
```

suspend() and resume() are likewise deprecated. Waiting for a thread to end (join()) and yielding (yield()): the essence of join() is

```java
while (isAlive()) {
    wait(0);
}
```

After the thread finishes executing, the system calls notifyAll() on the thread object; this is why you should not call wait() or notifyAll() on a Thread instance yourself.

Daemon threads silently perform systematic services in the background, such as the garbage-collection thread and the JIT compiler thread. The Java virtual machine exits naturally when only daemon threads remain.

```java
Thread t = new DaemonT();
t.setDaemon(true);
t.start();
```

Thread priority: higher-priority threads are more likely to win the competition for resources.

Basic thread synchronization: synchronized.

- Specifying a lock object: locks the given object; a thread obtains the given object's lock before entering the synchronized code.
```java
public void run() {
    for (int j = 0; j < 10000000; j++) {
        synchronized (instance) {
            i++;
        }
    }
}
```

- Acting directly on an instance method: equivalent to locking the current instance; a thread obtains the current instance's lock before entering the synchronized code.

```java
public synchronized void increase() {
    i++;
}
```

- Acting directly on a static method: equivalent to locking the current class; a thread obtains the current class's lock before entering the synchronized code.

```java
public static synchronized void increase() {
    i++;
}
```

Object.wait() / Object.notify():

```java
static final Object object = new Object();   // shared monitor

public static class T1 extends Thread {
    public void run() {
        synchronized (object) {
            System.out.println(System.currentTimeMillis() + ": T1 start!");
            try {
                System.out.println(System.currentTimeMillis() + ": T1 wait for object");
                object.wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(System.currentTimeMillis() + ": T1 end!");
        }
    }
}

public static class T2 extends Thread {
    public void run() {
        synchronized (object) {
            System.out.println(System.currentTimeMillis() + ": T2 start! notify one thread");
            object.notify();
            System.out.println(System.currentTimeMillis() + ": T2 end!");
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```

Sample output:

```
1425224592258: T1 start!
1425224592258: T1 wait for object
1425224592258: T2 start! notify one thread
1425224592258: T2 end!
1425224594258: T1 end!
```

notify() vs. notifyAll(): notify() wakes up one of the waiting threads, chosen arbitrarily; notifyAll() wakes up all the threads waiting on this monitor and lets them compete for the right to use it.

Three properties of concurrent correctness:

- Atomicity: an operation is non-interruptible; even when multiple threads execute together, an operation, once started, will not be disturbed by other threads.
- Ordering: under concurrency, instructions may execute out of program order. A single instruction executes in several pipeline stages: instruction fetch (IF), decode and register fetch (ID), execute or effective-address calculation (EX), memory access (MEM), and write-back (WB). Techniques such as *data bypassing* and *instruction reordering* keep the pipeline smooth, and reordering does not change single-thread semantics.
- Visibility: when one thread modifies the value of a shared variable, can another thread immediately see the change? Both compiler optimizations and hardware optimizations (such as write absorption and batched operations) can hide writes; at the Java virtual machine level, volatile guarantees visibility.

Happens-before rules:

- Program order rule: within a single thread, operations appear to execute in program order (as-if-serial semantics).
- Volatile rule: a write to a volatile variable happens-before subsequent reads of it; this is what guarantees volatile's visibility.
- Lock rule: unlocking a monitor happens-before the subsequent locking of the same monitor.
- Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
- A thread's start() happens-before every action of the started thread.
- All operations of a thread happen-before another thread's return from join() on it.
- A thread's interrupt() happens-before the interrupted thread's code detects the interruption.
- The end of an object's constructor happens-before the start of its finalize() method.
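The volatile rule above can be demonstrated with a small flag-publication sketch (class and field names are illustrative): the ordinary write to `payload` is made visible to the reader by the volatile write to `ready`.

```java
// The volatile write to ready happens-before the read that observes it,
// so the earlier ordinary write to payload is visible to the reader too.
public class VolatileFlag {
    private volatile boolean ready = false;
    private int payload = 0;

    public static int runDemo() {
        VolatileFlag f = new VolatileFlag();
        final int[] seen = {-1};
        Thread reader = new Thread(() -> {
            while (!f.ready) { }   // spin until the volatile write becomes visible
            seen[0] = f.payload;   // guaranteed to observe 42 (happens-before)
        });
        reader.start();
        f.payload = 42;            // ordinary write, ordered before...
        f.ready = true;            // ...the volatile write that publishes it
        try {
            reader.join();         // join() also establishes happens-before for seen
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }
}
```

Without `volatile` on `ready`, the reader could spin forever on a stale cached value.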
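The lock rule is likewise what makes the synchronized counter from the examples above correct: each unlock happens-before the next lock, so no increment is lost. A minimal runnable sketch, with illustrative names:

```java
// Two threads bump a shared counter through a synchronized method; because
// the method locks the instance, the final count is exact, never less.
public class SyncCounter {
    private int i = 0;

    public synchronized void increase() { i++; }   // locks this instance

    public static int runDemo(int perThread) {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int j = 0; j < perThread; j++) c.increase();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();  // wait for both workers to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.i;                // exactly 2 * perThread thanks to the lock
    }
}
```

Removing `synchronized` from increase() turns `i++` into a read-modify-write race, and the result may come up short.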
The concept of thread safety: when a function or library is called in a multithreaded environment, it correctly handles the shared state between the threads so that the program completes its function correctly.

ReentrantLock (re-entrant lock) is an enhancement of the synchronized keyword, and the two are now comparable in performance. ReentrantLock:

- is re-entrant: the same thread may acquire the same lock multiple times;
- can be interrupted: when a deadlock occurs, a watchdog thread can interrupt one of the waiting threads to achieve the unlock and break the deadlock;
- can be timed: acquiring with a time limit avoids deadlocks and long waits;
- can be fair: a fair lock is free of starvation, but because it must maintain first-come-first-served queueing it performs worse than a non-fair lock; do not use one without a specific requirement.

JDK concurrent containers and typical source code:

Collection wrappers — serial-style solutions for low concurrency, not high-concurrency solutions:

- Map:

```java
public static <K,V> Map<K,V> synchronizedMap(Map<K,V> m) {
    return new SynchronizedMap<>(m);
}
```

- List:

```java
public static <T> List<T> synchronizedList(List<T> list) {
    return (list instanceof RandomAccess ?
            new SynchronizedRandomAccessList<>(list) :
            new SynchronizedList<>(list));
}
```

- Set:

```java
public static <T> Set<T> synchronizedSet(Set<T> s) {
    return new SynchronizedSet<>(s);
}
```

ConcurrentHashMap: HashMap is implemented with an array, so for large-scale high concurrency the entire array can be divided into N segments (Segment), allowing up to N threads to write data at the same time, in theory an N-fold increase in efficiency (this describes the segmented design used before Java 8). In the put() method each segment has its own lock; the get() method takes no lock; the size() method must acquire all the segment locks before it can count, but size() is not a function called at high frequency.
ConcurrentHashMap is also high-performance because it does not take a lock indiscriminately: it spin-waits first and locks only when necessary.

BlockingQueue is an interface for blocking queues. It is thread-safe and not a high-performance container, but BlockingQueue is an excellent container for **sharing data between threads**. If the queue is empty, a thread that tries to read will wait until another thread writes data into the queue, at which point the reader wakes up and reads; if the queue is already full, a writing thread waits until some thread reads data and frees space in the queue. This waiting is what "blocking" refers to: the operations block the thread. BlockingQueue is therefore convenient as a producer-consumer container, via put() and take(). Implementations include ArrayBlockingQueue, LinkedBlockingQueue, and PriorityBlockingQueue.

ConcurrentLinkedQueue is a high-performance queue in the spirit of ConcurrentHashMap, built on a large number of lock-free operations internally; use offer() and poll().

BlockingQueue vs. ConcurrentLinkedQueue: in concurrent programming we sometimes need a thread-safe queue, and there are two implementation strategies: **blocking algorithms** and **non-blocking algorithms**. A blocking queue can be implemented with one lock (enqueue and dequeue share it) or with two locks (enqueue and dequeue use different ones), while a non-blocking queue can be implemented with a cyclic CAS approach.
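The blocking behavior described above can be sketched as a tiny producer-consumer pair on ArrayBlockingQueue (class and method names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// put() blocks while the bounded queue is full; take() blocks while it is
// empty. The producer feeds 1..n; the consumer sums what it takes.
public class PcDemo {
    public static int sumOf(int n) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);  // capacity 2
        final int[] sum = {0};
        Thread consumer = new Thread(() -> {
            try {
                for (int k = 0; k < n; k++) {
                    sum[0] += queue.take();   // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        try {
            for (int k = 1; k <= n; k++) {
                queue.put(k);                 // blocks while the queue is full
            }
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sum[0];                        // 1 + 2 + ... + n
    }
}
```

The capacity of 2 forces the producer to block repeatedly, exercising both blocking directions.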
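The cyclic-CAS approach mentioned last can be sketched with AtomicInteger.compareAndSet (names are illustrative): instead of blocking, the loop re-reads the current value and retries until its update wins.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A lock-free increment: read the current value, attempt compareAndSet,
// and retry if another thread changed the value in between.
public class CasLoop {
    public static int increment(AtomicInteger value) {
        int old;
        do {
            old = value.get();                        // read the current value
        } while (!value.compareAndSet(old, old + 1)); // retry on contention
        return old + 1;
    }
}
```

This is the same pattern non-blocking queues use for their head and tail pointers, just applied to a single integer.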
