Java Concurrent Programming (1): On Thread Safety


First, we need to understand what "thread safety" means.

"Thread safety" means that when multiple threads access a class, the class behaves correctly regardless of how the runtime environment schedules or interleaves those threads, and without requiring any additional synchronization or coordination in the calling code. Such a class is called thread-safe.

There are three key points here:

First, thread-safety problems arise only under multi-threaded access; a single thread has no contention and therefore no thread-safety problem.

Second, a class's thread safety must not depend on the order in which the threads happen to execute.

Third, the calling code must not need any extra synchronization or coordination: a class's thread safety cannot depend on its external callers.

Does this mean that any class used without synchronization is thread-unsafe? Not necessarily.

We know that objects come in two kinds: stateful objects (stateful beans) and stateless objects (stateless beans).

A stateful object holds data: it has instance variables that can store state, and it is not thread-safe by default.

A stateless object has no instance variables, so it cannot store data; it is effectively immutable and therefore thread-safe.

For example, in a servlet-based web service, the following servlet has no member variables and stores no data. No matter how many threads use it, each thread runs on its own Java stack, and every local variable inside the method is newly created and private to that thread. The servlet is a stateless object and is therefore thread-safe.

    @ThreadSafe
    public class StatelessFactorizer implements Servlet {
        public void service(ServletRequest req, ServletResponse resp) {
            BigInteger i = extractFromRequest(req);
            BigInteger[] factors = factor(i);
            encodeIntoResponse(resp, factors);
        }
    }

If we add a count variable to track the number of visitors, as in the code below, the servlet is accessed by multiple threads and count is shared among them. The increment count++ is not atomic: it actually executes in separate steps, roughly temp = count + 1; count = temp. Suppose count starts at 1 and threads A and B both access the servlet. If A reads count == 1, and before A writes the new value back, B also reads count == 1, then after both accesses complete, count is 2 instead of the correct 3. The servlet is therefore no longer thread-safe.
    @NotThreadSafe
    public class UnsafeCountingFactorizer implements Servlet {
        private long count = 0;

        public long getCount() { return count; }

        public void service(ServletRequest req, ServletResponse resp) {
            BigInteger i = extractFromRequest(req);
            BigInteger[] factors = factor(i);
            ++count;
            encodeIntoResponse(resp, factors);
        }
    }

An incorrect result caused by unlucky execution order is such a common headache that it has a formal name: a race condition. To avoid race conditions, the increment must be made an atomic operation. Given two operations A and B, if, from the perspective of the thread executing A, another thread executing B has either completed B entirely or not started it at all, then A and B are atomic with respect to each other. An atomic operation is one that is atomic with respect to all operations, including itself, that access the same state.
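The lost update described above can be reproduced directly. The sketch below (class and method names are illustrative, not from the article) runs four threads that each increment a plain long and an AtomicLong 100,000 times; the plain counter typically ends up below the expected 400,000, while the atomic counter is always exact:

```java
import java.util.concurrent.atomic.AtomicLong;

class LostUpdateDemo {
    static long unsafeCount = 0;                             // plain field: count++ is not atomic
    static final AtomicLong safeCount = new AtomicLong(0);   // atomic counter

    // Runs 4 threads that each increment both counters 100,000 times,
    // then returns {unsafeCount, safeCount} after all threads finish.
    static long[] run() {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int n = 0; n < 100_000; n++) {
                    unsafeCount++;               // read-modify-write: updates can be lost
                    safeCount.incrementAndGet(); // atomic read-modify-write
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            try {
                th.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return new long[] { unsafeCount, safeCount.get() };
    }
}
```

Reading unsafeCount after join() is itself safe here, because Thread.join establishes a happens-before edge between the worker threads and the caller.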

To make the increment of count atomic, we can use the AtomicLong class.

    @ThreadSafe
    public class CountingFactorizer implements Servlet {
        private final AtomicLong count = new AtomicLong(0);

        public long getCount() { return count.get(); }

        public void service(ServletRequest req, ServletResponse resp) {
            BigInteger i = extractFromRequest(req);
            BigInteger[] factors = factor(i);
            count.incrementAndGet();
            encodeIntoResponse(resp, factors);
        }
    }

When the servlet has only a single state variable, delegating that state to a thread-safe object is enough to keep the servlet thread-safe. But when more state is added, simply adding more thread-safe state variables does not guarantee the servlet's safety, as the code below shows.
    @NotThreadSafe
    public class UnsafeCachingFactorizer implements Servlet {
        private final AtomicReference<BigInteger> lastNumber =
            new AtomicReference<BigInteger>();
        private final AtomicReference<BigInteger[]> lastFactors =
            new AtomicReference<BigInteger[]>();

        public void service(ServletRequest req, ServletResponse resp) {
            BigInteger i = extractFromRequest(req);
            if (i.equals(lastNumber.get())) {
                encodeIntoResponse(resp, lastFactors.get());
            } else {
                BigInteger[] factors = factor(i);
                lastNumber.set(i);
                lastFactors.set(factors);
                encodeIntoResponse(resp, factors);
            }
        }
    }

This code is intended to add caching to the factorizer. Although each atomic reference is individually thread-safe, the class can still produce incorrect results because of race conditions.

When an object has more than one state variable and those variables are not independent of each other (for example, lastNumber and lastFactors must stay consistent with each other), the value of one variable constrains the valid values of the others. Updating one variable therefore requires updating the related variables in the same atomic operation.
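As a minimal sketch of this rule (the NumberAndFactors class is illustrative, not from the article), the two related variables can be read and written together under one lock, so no thread ever observes a mismatched pair:

```java
import java.math.BigInteger;

class NumberAndFactors {
    private BigInteger lastNumber;     // guarded by this
    private BigInteger[] lastFactors;  // guarded by this; must correspond to lastNumber

    // Both variables change in one atomic step.
    public synchronized void update(BigInteger n, BigInteger[] factors) {
        lastNumber = n;
        lastFactors = factors;
    }

    // Both variables are read in one atomic step, so the pair is always consistent.
    public synchronized BigInteger[] factorsIfCached(BigInteger n) {
        return n.equals(lastNumber) ? lastFactors.clone() : null;
    }
}
```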

Built-in lock

Java provides a built-in locking mechanism to support atomicity: the synchronized block. A synchronized block has two parts: 1) an object reference that serves as the lock, and 2) the block of code protected by that lock.
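A minimal illustration of these two parts (SharedList and its fields are illustrative names, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

class SharedList {
    // Part 1: the object whose intrinsic lock guards the state.
    private final Object lock = new Object();
    private final List<String> items = new ArrayList<>(); // guarded by lock

    public void add(String item) {
        synchronized (lock) { // Part 2: the code protected by that lock.
            items.add(item);
        }
    }

    public int size() {
        synchronized (lock) {
            return items.size();
        }
    }
}
```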

We could therefore synchronize the service method of UnsafeCachingFactorizer by declaring it public synchronized void service(ServletRequest req, ServletResponse resp). All calls to service would then execute one at a time, in serial order. But this approach is far too extreme: the responsiveness of the service becomes unacceptably low. (This is a performance problem, not a thread-safety problem.)

The built-in lock is reentrant. "Reentrancy" means that locks are acquired on a per-thread rather than per-invocation basis. One way to implement this is to associate each lock with an acquisition count and an owning thread. When the count is zero, the lock is held by no thread. When a thread acquires a previously unheld lock, the JVM records the owner and sets the count to 1. If the same thread acquires the lock again, the count is incremented; when the thread exits the synchronized block, the count is decremented. When the count reaches zero, the lock is released.
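Reentrancy can be shown with the classic subclass example (class names here are illustrative): without a reentrant lock, the call to super.doSomething() below would deadlock, because the calling thread already holds the lock on this.

```java
class Widget {
    public synchronized String doSomething() {
        return "widget";
    }
}

class LoggingWidget extends Widget {
    @Override
    public synchronized String doSomething() {
        // The lock on `this` is already held by this thread; reentrancy
        // lets the same thread acquire it again instead of blocking forever.
        return "logging: " + super.doSomething();
    }
}
```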

Because a lock makes the code it guards execute serially, locks can be used to construct protocols that guarantee exclusive access to shared state. A common locking convention is to encapsulate all mutable state inside an object and then guard it with the object's built-in lock, synchronizing every code path that accesses the mutable state so that it is never accessed concurrently. For every invariant that involves more than one variable, all of the variables it involves must be guarded by the same lock.
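A minimal sketch of this convention (the Counter class is illustrative): all mutable state lives inside one object, and every access path is guarded by that object's intrinsic lock.

```java
class Counter {
    private long value = 0; // guarded by this

    public synchronized long increment() {
        if (value == Long.MAX_VALUE)
            throw new IllegalStateException("counter overflow");
        return ++value; // safe: only ever executed while holding the lock
    }

    public synchronized long get() {
        return value;
    }
}
```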

If synchronization avoids race conditions, why not simply declare every method synchronized? Used indiscriminately, this leads to excessive synchronization in the program. Moreover, making every method synchronized is not enough: even for a class like Vector, it does not guarantee that compound operations on the vector are atomic:

    if (!vector.contains(element))
        vector.add(element);
Although contains and add are each atomic, the "put-if-absent" operation above still contains a race condition. Synchronized methods guarantee the atomicity of individual operations, but combining several operations into one compound operation requires additional locking. Also, as with synchronizing the entire service method earlier, synchronizing every method can cause liveness and performance problems.
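One way to make put-if-absent atomic is client-side locking: guard the compound check-then-act with the same lock that Vector's own synchronized methods use, namely the vector object itself. A minimal sketch (the helper class and method name are illustrative):

```java
import java.util.Vector;

class PutIfAbsentHelper {
    // Atomically adds element if it is not already present.
    // Returns true if the element was added.
    public static <E> boolean putIfAbsent(Vector<E> vector, E element) {
        synchronized (vector) { // same lock Vector's synchronized methods use
            boolean absent = !vector.contains(element);
            if (absent)
                vector.add(element);
            return absent;
        }
    }
}
```

Synchronizing on some other object here would not help: the compound operation must hold the lock that guards the vector's state.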

Fortunately, by narrowing the scope of the synchronized blocks, it is easy to keep the servlet concurrent while maintaining thread safety. Make sure the blocks are not too small, though, and do not split an operation that must be atomic across multiple synchronized blocks, because other threads could then access the shared state midway through the operation.

Below, we modify the servlet to use two separate synchronized blocks, each containing only a short section of code. One block guards the check-then-act sequence that returns the cached result; the other guards the update of the cached number and its cached factors. We also reintroduce the hit counter and add a "cache hit" counter, updating both inside the first synchronized block. Because these counters are part of the shared mutable state, synchronization must be used everywhere they are accessed. Code outside the synchronized blocks accesses only local (stack-confined) variables, which are not shared between threads and therefore need no synchronization.

    @ThreadSafe
    public class CachedFactorizer implements Servlet {
        private BigInteger lastNumber;
        private BigInteger[] lastFactors;
        private long hits;
        private long cacheHits;

        public synchronized long getHits() {
            return hits;
        }

        public synchronized double getCacheHitRatio() {
            return (double) cacheHits / (double) hits;
        }

        public void service(ServletRequest req, ServletResponse resp) {
            BigInteger i = extractFromRequest(req);
            BigInteger[] factors = null;
            synchronized (this) {
                ++hits;
                if (i.equals(lastNumber)) {
                    ++cacheHits;
                    // Clone to keep the synchronized block short and avoid
                    // publishing the internal array.
                    factors = lastFactors.clone();
                }
            }
            if (factors == null) {
                factors = factor(i);
                synchronized (this) {
                    lastNumber = i;
                    lastFactors = factors.clone();
                }
            }
            encodeIntoResponse(resp, factors);
        }
    }
