Java Concurrency Basics (VI): Liveness, Performance, and Scalability

Source: Internet
Author: User

Chapter 9 of Java Concurrency in Practice is mainly about GUI programming, which rarely comes up in everyday development, so I am setting my notes for that chapter aside for now. Chapters 10 through 12 make up Part III of the book: liveness, performance, and testing. This part is heavy on theory, but I will use code to illustrate the problems wherever possible. Without further ado, on to the topic.

I. Deadlock

When I was learning Java basics I heard the teacher describe the "dining philosophers" example, but the details are easy to forget, so let me restate it here. Five philosophers go out for Chinese food and sit at a round table. There are 5 chopsticks (not 5 pairs), one placed between each pair of neighbors, and everyone needs a pair of chopsticks to eat. If every philosopher immediately grabs the chopstick on his left, waits for the chopstick on his right to become free, and refuses to give up the one he already holds, then everyone starves. In Java terms: if thread A holds lock L and wants to acquire lock M, while thread B holds lock M and tries to acquire lock L, a deadlock results. To summarize, a deadlock occurs when multiple threads wait forever because of a cyclic locking dependency, and a Java program cannot recover from a deadlock, so you must be particularly careful about this during development.

// Easy to deadlock: the two methods acquire the same two locks in opposite orders
public class LeftRightDeadlock {
    private final Object left = new Object();
    private final Object right = new Object();

    public void leftRight() {
        synchronized (left) {
            synchronized (right) {
                System.out.println("left-right");
            }
        }
    }

    public void rightLeft() {
        synchronized (right) {
            synchronized (left) {
                System.out.println("right-left");
            }
        }
    }
}

The code above is the simplest form of deadlock, called a lock-ordering deadlock. The cause: two threads try to acquire the same locks in different orders. Conversely, if all threads always acquire locks in a fixed global order, there can be no lock-ordering deadlock in the program, although enforcing that is almost impossible in real-world development.
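To see a lock-ordering deadlock actually happen, here is a small self-contained driver (my own sketch, not from the book): two threads acquire the same pair of monitors in opposite orders, and ThreadMXBean is used to confirm that they have deadlocked.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockDemo {
    private static final Object left = new Object();
    private static final Object right = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Thread 1 locks left then right; thread 2 locks right then left.
        Thread t1 = new Thread(() -> {
            synchronized (left) {
                sleep(100);                 // give t2 time to grab `right`
                synchronized (right) { }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (right) {
                sleep(100);                 // give t1 time to grab `left`
                synchronized (left) { }
            }
        });
        t1.start();
        t2.start();
        Thread.sleep(500);                  // let both threads block on each other

        // The JVM can detect monitor deadlocks at runtime, though it cannot recover.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads();
        System.out.println(ids == null ? "no deadlock" : "deadlocked threads: " + ids.length);
        System.exit(0);                     // the deadlocked threads will never finish on their own
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```

Running this prints that two threads are deadlocked; note the forced `System.exit`, since a deadlocked JVM would otherwise never terminate.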

Sometimes a deadlock is not so easy to spot. Take the following code, which transfers funds from one account to another. Before starting the transfer, it acquires the locks of both account objects to ensure the two balances are updated atomically. There seems to be nothing wrong with it, but if two threads call transferMoney at the same time, one transferring from X to Y and the other from Y to X, a deadlock can occur.

// Thread A: transferMoney(myAccount, yourAccount, 10);
// Thread B: transferMoney(yourAccount, myAccount, 20);
public void transferMoney(Account fromAccount, Account toAccount, Amount amount)
        throws InsufficientFundsException {
    synchronized (fromAccount) {
        synchronized (toAccount) {
            if (fromAccount.getBalance().compareTo(amount) < 0) {
                // the balance must not go negative
                throw new InsufficientFundsException();
            } else {
                fromAccount.debit(amount);
                toAccount.credit(amount);
            }
        }
    }
}

As mentioned earlier, lock-ordering deadlock can be avoided if threads always acquire the locks in the same order. We can derive that order from the identity hash codes of the two accounts. Modified code:

private static final Object tieLock = new Object();

public void transferMoney(final Account fromAccount, final Account toAccount, final Amount amount)
        throws InsufficientFundsException {
    class Helper {
        public void transfer() throws InsufficientFundsException {
            if (fromAccount.getBalance().compareTo(amount) < 0) {
                // the balance must not go negative
                throw new InsufficientFundsException();
            } else {
                fromAccount.debit(amount);
                toAccount.credit(amount);
            }
        }
    }

    int fromHash = System.identityHashCode(fromAccount);
    int toHash = System.identityHashCode(toAccount);

    if (fromHash < toHash) {
        synchronized (fromAccount) {
            synchronized (toAccount) {
                new Helper().transfer();
            }
        }
    } else if (fromHash > toHash) {
        synchronized (toAccount) {
            synchronized (fromAccount) {
                new Helper().transfer();
            }
        }
    } else {
        // hash collision: fall back to a global tie-breaking lock
        synchronized (tieLock) {
            synchronized (fromAccount) {
                synchronized (toAccount) {
                    new Helper().transfer();
                }
            }
        }
    }
}

This avoids the mutual-transfer deadlock described above. If the two objects happen to have the same identity hash code, an extra "tie-breaking" lock ensures that only one thread at a time acquires the two account locks in an otherwise arbitrary order. You might ask: why not just put a single lock around the whole method and skip the hash-code comparison entirely? That cannot be done, because then every transfer would be serialized and there would be no concurrency to speak of. Fortunately, System.identityHashCode collisions are very rare, so the tie-breaking lock is almost never taken, and at the cost of a little extra code the method remains usable.

A large web application may perform billions of lock acquisitions and releases per day, and a single mistake can bring the program down; sometimes even stress testing cannot uncover all the potential deadlocks in an application. Many potential deadlocks are far more obscure than the examples above (I will not give one here). So what is the best way to avoid lock-ordering deadlocks? The answer: always use open calls in your program.

What is an open call? My own understanding: when a method calls an external (alien) method, it should hold no locks; in other words, use a lock and release it as quickly as possible. This is the most important reason why simply marking whole methods synchronized is discouraged. If I need a lock, I acquire it, use it, and release it promptly; even if I will need the same lock again a few lines later, I will not sit on it for an extra second. This is only a good programming habit, but it avoids lock-ordering deadlocks as much as possible.
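A minimal sketch of an open call, loosely modeled on the book's taxi/dispatcher example (the class and method names here are my own illustration, not a fixed API): the state update is synchronized, but the call into the alien Dispatcher object happens with no lock held, so no lock ordering between Taxi and Dispatcher can ever deadlock.

```java
public class OpenCallDemo {
    static class Dispatcher {
        synchronized void notifyAvailable(Taxi t) {
            System.out.println("taxi available: " + t.id);
        }
    }

    static class Taxi {
        final String id;
        final Dispatcher dispatcher;
        private String location = "depot";

        Taxi(String id, Dispatcher d) { this.id = id; this.dispatcher = d; }

        // Open call: synchronize only the state update, then release the lock
        // before calling into the (alien) dispatcher method.
        void setLocation(String loc) {
            boolean reachedDestination;
            synchronized (this) {
                this.location = loc;
                reachedDestination = loc.equals("destination");
            }
            if (reachedDestination) {
                dispatcher.notifyAvailable(this);   // no Taxi lock held here
            }
        }
    }

    public static void main(String[] args) {
        Taxi t = new Taxi("t1", new Dispatcher());
        t.setLocation("destination");
    }
}
```

Had `setLocation` been declared `synchronized`, it would call `notifyAvailable` while holding the Taxi lock, and a Dispatcher method calling back into Taxi could then deadlock.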

II. Performance and Scalability

1. Why systems are made distributed

The performance of a program is measured by several metrics: service time, latency, throughput, efficiency, scalability, and capacity. Service time and wait time describe how fast a given task completes; throughput describes how much work can be done with a given amount of computing resources. Scalability refers to the increase in throughput or processing capacity when additional computing resources (such as CPUs, memory, storage, or I/O bandwidth) are added.

  "How fast" and "how much" are quite distinct aspects of performance, and sometimes even contradictory. The familiar three-tier MVC structure, where the tiers are independent of one another and may be handled by different systems, illustrates nicely that increased scalability often comes at a cost in performance: if all three tiers lived in a single system, the performance when handling the first request would certainly be higher than that of a design that splits the program into tiers distributed across multiple systems. (In an interview I was once asked to say specifically where the latency comes from; all I could answer was the delays from network communication, system calls, and data replication between tiers. The interviewer was not satisfied and kept pressing, so you may want to research this further yourself as interview preparation.) However, when such a single system reaches the limit of its own processing capacity, it hits a serious problem: handling a still larger volume of requests becomes very difficult. So we usually accept that each request takes longer or consumes more resources in exchange for a higher overall load. That is why systems are made distributed.

2. Costs introduced by threads

(1) Context switching

If there are more runnable threads than CPUs, the OS will eventually preempt a running thread so that other threads can use the CPU. This causes a context switch, which incurs some overhead.

(2) Lock contention

When a thread blocks waiting for a contended lock, the JVM typically suspends it and allows it to be swapped out, which also leads to context switching.
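As a side note of my own (not from the original notes), the explicit locks in java.util.concurrent.locks offer a middle ground: with a timed tryLock a thread can give up instead of being suspended indefinitely on a contended lock. A minimal sketch:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // A background thread holds the lock for 300 ms.
        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                TimeUnit.MILLISECONDS.sleep(300);
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        TimeUnit.MILLISECONDS.sleep(50);       // make sure holder owns the lock

        // Wait at most 100 ms for the lock, then back off instead of blocking.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("acquired");
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("gave up after 100 ms");  // expected path here
        }
        holder.join();
    }
}
```

With intrinsic `synchronized` locks there is no such timed alternative; the thread simply blocks until the lock is free.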

3. Reducing lock contention

As described above, serial operation hurts scalability and context switching hurts performance; lock contention causes both problems, so reducing lock contention improves both performance and scalability. In concurrent programs, the most significant threat to scalability is exclusive resource locking. How do we reduce it? As mentioned above: use open calls, release locks quickly, and reduce lock granularity through lock decomposition and lock striping. I plan to cover that topic in a separate note.
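As a small preview of lock decomposition, here is a sketch in the spirit of the book's ServerStatus example (the names are illustrative): instead of one coarse lock guarding two independent sets of state, each set gets its own lock, so threads touching users never contend with threads touching queries.

```java
import java.util.HashSet;
import java.util.Set;

public class ServerStatus {
    // One lock per independent piece of state, instead of one lock for both.
    private final Object userLock = new Object();
    private final Object queryLock = new Object();
    private final Set<String> users = new HashSet<>();
    private final Set<String> queries = new HashSet<>();

    public void addUser(String u) {
        synchronized (userLock) { users.add(u); }
    }

    public void addQuery(String q) {
        synchronized (queryLock) { queries.add(q); }
    }

    public int userCount() {
        synchronized (userLock) { return users.size(); }
    }

    public int queryCount() {
        synchronized (queryLock) { return queries.size(); }
    }

    public static void main(String[] args) {
        ServerStatus s = new ServerStatus();
        s.addUser("alice");
        s.addQuery("select 1");
        System.out.println(s.userCount() + " user, " + s.queryCount() + " query");
    }
}
```

Lock striping takes the same idea further by partitioning one logical collection across many locks, which is roughly how ConcurrentHashMap scaled in older JDKs.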

This part of the book is heavy on theory and hard to illustrate in places, so let's start with these points. Really, all of it comes down to avoiding lock contention: open calls are one way, though their effect is not as large as lock striping. I have only just started using a blog to record new knowledge, and I may often fail to capture the key points; I will try my best to avoid that and improve gradually. I envy the experts who can immediately grasp the essentials of new material; I am just a rookie trying to take one more step forward. Comments are welcome, thank you.
