Java Programming Ideas Lessons (8): Chapter 21 - Concurrency


Sequential programming means that at any moment a program can take only one step; everything happens one thing at a time. Concurrent programming means that a program can execute several parts of itself in parallel.

21.2.1 Defining tasks

A thread can drive a task, so you need a way to describe a task; this is provided by the Runnable interface. To define a task, simply implement Runnable and write a run() method so the task can carry out your commands.
When a class is derived from Runnable, it must have a run() method, but there is nothing special about that method: it does not produce any intrinsic threading ability. To get threading behavior, you must explicitly attach the task to a thread.
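A minimal sketch of such a task, attached explicitly to a Thread (this simplified countdown is illustrative only, not the book's LiftOff example):

public class CountDownTask implements Runnable {
    private int count = 3;

    @Override
    public void run() {
        while (count-- > 0)
            System.out.println("count = " + count);   // the task's "commands"
        System.out.println("done");
    }

    public static void main(String[] args) {
        new Thread(new CountDownTask()).start();       // explicitly attach the task to a thread
        System.out.println("Waiting for the task");
    }
}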

21.2.3 Using Executors

  FixedThreadPool and CachedThreadPool

    • With a FixedThreadPool, you pay for the expensive thread allocation once, up front, and you can limit the number of threads. This saves time because you do not pay the thread-creation overhead for every task. In an event-driven system, event handlers that need threads can get them directly from the pool and be served as quickly as you like. You cannot overrun the available resources, because the number of Thread objects used by a FixedThreadPool is bounded.

Note that in any thread pool, existing threads are automatically reused whenever possible.

    • Although this book uses CachedThreadPool, you should also consider FixedThreadPool in code that spawns threads. A CachedThreadPool is a reasonable first choice: it creates as many threads as the program needs during execution and then stops creating new threads as it reuses the old ones. Switch to a FixedThreadPool only when this approach causes problems.

    • A SingleThreadExecutor is like a FixedThreadPool with a size of 1. (It also provides an important concurrency guarantee: no two tasks will run on it concurrently. This changes the locking requirements of the tasks.)
      If more than one task is submitted to a SingleThreadExecutor, the tasks are queued; each task finishes before the next one starts, and all tasks use the same thread. In the book's example you can see that the tasks run in the order they were submitted and that each completes before the next one begins. Thus a SingleThreadExecutor serializes all the tasks submitted to it and maintains its own (hidden) queue of pending tasks. A small sketch of all three executor flavors follows this list.
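A minimal sketch, assuming only the standard java.util.concurrent factory methods; the task itself is a trivial placeholder:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorChoices {
    public static void main(String[] args) {
        Runnable task = () ->
            System.out.println(Thread.currentThread().getName() + " ran a task");

        ExecutorService cached = Executors.newCachedThreadPool();     // creates threads as needed, reuses idle ones
        ExecutorService fixed  = Executors.newFixedThreadPool(3);     // at most 3 threads, allocated up front
        ExecutorService single = Executors.newSingleThreadExecutor(); // serializes all submitted tasks

        for (int i = 0; i < 5; i++) {
            cached.execute(task);
            fixed.execute(task);
            single.execute(task);
        }
        cached.shutdown();
        fixed.shutdown();
        single.shutdown();
    }
}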

21.2.4 Producing return values from tasks

  Runnable is an independent task that performs work, but it does not return a value. If you want the task to produce a value when it completes, implement the Callable interface instead of Runnable. Callable, introduced in Java SE5, is a generic with a type parameter representing the value returned from the method call() (instead of run()), and it must be invoked using an ExecutorService.submit() call.
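A minimal sketch of Callable with submit(), assuming only the standard java.util.concurrent API; the computed value is a placeholder:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newCachedThreadPool();
        Callable<Integer> task = () -> 6 * 7;            // the type parameter is the return type of call()
        Future<Integer> result = exec.submit(task);      // submit(), not execute(), starts a Callable
        System.out.println("result = " + result.get()); // get() blocks until call() has completed
        exec.shutdown();
    }
}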

21.2.9 Coding variations

Another idiom you may sometimes see is the self-managed Runnable.

This is not particularly different from inheriting from Thread, just a little less obvious. However, implementing an interface lets you also inherit from a different class, whereas extending Thread does not.

Note that the self-managed Runnable starts its thread in the constructor. This example is fairly simple and therefore probably safe, but you should be aware that starting a thread inside a constructor can be problematic: another task might begin executing before the constructor has finished, which means the task may access the object while it is in an unstable state. This is yet another reason to prefer Executor over creating Thread objects explicitly.
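A minimal sketch of the self-managed Runnable idiom described above; the countdown body is illustrative:

public class SelfManaged implements Runnable {
    private int countDown = 5;
    private final Thread t = new Thread(this);

    public SelfManaged() {
        t.start();   // starting the thread in the constructor: the task may see a partially built object
    }

    @Override
    public void run() {
        while (countDown-- > 0)
            System.out.println("countDown = " + countDown);
    }

    public static void main(String[] args) {
        new SelfManaged();
    }
}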

21.2.13 Thread Group

A thread group holds a collection of threads. The value of thread groups can be summed up by a quote from Joshua Bloch: "It is best to think of thread groups as an unsuccessful experiment, and you may simply ignore them."

If you have spent a lot of time and effort trying to find value in thread groups (as I did), you may wonder why Sun never made an official statement on the subject; the same question has been asked countless times over the years about other changes to Java. The life philosophy of Nobel laureate in economics Joseph Stiglitz can be used to explain it; it is called the theory of escalating commitment: "The cost of continuing a mistake is borne by others, while the cost of admitting a mistake is borne by yourself."

21.2.14 Catching exceptions

Because of the nature of threads, you cannot catch an exception that has escaped from a thread. Once an exception escapes from a task's run() method, it propagates out to the console unless you take special steps to capture such errant exceptions.
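One such special step, as a sketch assuming only the standard Thread and java.util.concurrent APIs: install a Thread.UncaughtExceptionHandler on each thread via a ThreadFactory, so exceptions escaping run() are still captured.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CaptureUncaught {
    public static void main(String[] args) {
        ExecutorService exec = Executors.newCachedThreadPool(r -> {
            Thread t = new Thread(r);
            t.setUncaughtExceptionHandler((thread, e) ->
                System.out.println("caught " + e + " from " + thread.getName()));
            return t;
        });
        exec.execute(() -> { throw new RuntimeException("escaped from run()"); });
        exec.shutdown();
    }
}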

21.3 Sharing restricted resources

You can think of a single-threaded program as a single entity moving through the problem space, doing only one thing at a time.

21.3.1 Improperly accessing resources

Because the canceled flag is of type boolean, it is atomic: simple operations on it, such as assignment and returning its value, cannot be interrupted partway through, so you can never see the field in an intermediate state while one of these simple operations is being performed.

Note, however, that the increment operation itself requires several steps, and the task can be suspended by the threading mechanism in the middle of an increment; that is, in Java, increment is not atomic. Therefore, even a single increment is not safe unless the task is protected.
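A small sketch of why this matters (illustrative names, not the book's example): two tasks increment the same counter, with the increment guarded by synchronized; remove the keyword and the final total may come out short, because i++ is a read-modify-write sequence.

public class UnsafeIncrement {
    private int value = 0;

    public synchronized void safeIncrement() { value++; }  // read, add, write as one guarded unit

    public static void main(String[] args) throws InterruptedException {
        UnsafeIncrement counter = new UnsafeIncrement();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++)
                counter.safeIncrement();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.value);  // always 200000; without synchronized it may be less
    }
}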

21.4 Terminating tasks

21.4.3 Interrupt

  When you call shutdownNow() on an Executor, it sends an interrupt() call to each of the threads it has started.

  By calling submit() instead of execute() on an Executor to start a task, you can hold on to that task's context. submit() returns a generic Future<?>; the key to holding this Future is that you can call cancel() on it, and thereby use it to interrupt a particular task. If you pass true to cancel(), it has permission to call interrupt() on that task's thread in order to stop it. Thus cancel() is a way to interrupt an individual thread started with an Executor.
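A minimal sketch of this; the sleeping task is an illustrative placeholder, and cancel(true) delivers interrupt() to that one task:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class CancelOneTask {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newCachedThreadPool();
        Future<?> f = exec.submit(() -> {
            try {
                TimeUnit.SECONDS.sleep(10);
            } catch (InterruptedException e) {
                System.out.println("task interrupted");
            }
        });
        TimeUnit.MILLISECONDS.sleep(100);
        f.cancel(true);     // true grants permission to call interrupt() on the task's thread
        exec.shutdown();
    }
}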

  SleepBlocked is an example of interruptible blocking, while IOBlocked and SynchronizedBlocked are uninterruptible blocking. The example with these three classes shows that I/O and waiting on a synchronized lock are not interruptible; neither I/O nor an attempt to call a synchronized method requires an InterruptedException handler.
As the output of that example shows, you can interrupt a call to sleep() (or any call that requires you to handle InterruptedException). However, you cannot interrupt a task that is trying to acquire a synchronized lock or trying to perform I/O. This is a little annoying, especially for tasks that perform I/O, because it means that I/O has the potential to lock up your multithreaded program. This is a particular concern for web-based programs.

For this class of problems there is a somewhat clumsy but sometimes effective solution: close the underlying resource on which the task is blocked:
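A sketch of that workaround (this is not the book's CloseResource.java; the port and names are illustrative): a task blocked in a socket read() is released when another thread closes the socket.

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CloseBlockedResource {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);                  // any free port
        Socket socket = new Socket("localhost", server.getLocalPort());
        InputStream in = socket.getInputStream();
        ExecutorService exec = Executors.newCachedThreadPool();
        exec.execute(() -> {
            try {
                in.read();                                          // blocks: no data will ever arrive
            } catch (IOException e) {
                System.out.println("read() released: " + e);
            }
        });
        TimeUnit.SECONDS.sleep(1);
        socket.close();                                             // closing the underlying resource unblocks read()
        exec.shutdown();
        server.close();
    }
}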

21.5 Collaboration between threads

21.5.1 wait() and notifyAll()

  wait() allows you to wait for a condition to change when changing that condition is beyond the control of the current method. Typically, the condition is changed by another task. You certainly do not want to keep testing the condition inside your task while spinning in an empty loop; this is called busy waiting and is usually a poor use of CPU cycles. Instead, wait() suspends the task while waiting for the outside world to change, and only when a notify() or notifyAll() occurs, suggesting that something of interest may have happened, does the task wake up and check for the change. Thus wait() provides a way to synchronize activities between tasks.

It is important to understand that sleep() does not release the lock when it is called, and the same is true of yield().
wait(), notify(), and notifyAll() have a rather special property: these methods are part of the base class Object, not part of Thread.
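A minimal sketch of the pattern (illustrative names): one task calls wait() inside a loop that re-tests the condition, another task changes the condition and calls notifyAll(); both are called while holding the same object's lock.

public class WaitNotify {
    private boolean ready = false;

    public synchronized void waitUntilReady() throws InterruptedException {
        while (!ready)            // always wait() inside a loop that re-tests the condition
            wait();               // releases the lock and suspends the task
    }

    public synchronized void setReady() {
        ready = true;
        notifyAll();              // wakes the tasks waiting on this object's lock
    }

    public static void main(String[] args) throws Exception {
        WaitNotify wn = new WaitNotify();
        Thread waiter = new Thread(() -> {
            try {
                wn.waitUntilReady();
                System.out.println("condition observed");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(500);
        wn.setReady();
        waiter.join();
    }
}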

A missed signal.

21.5.2 notify() vs. notifyAll()

In discussions of Java's threading mechanism there is a confusing statement: notifyAll() wakes up "all waiting tasks." Does that mean that any task in the wait() state, anywhere in the program, is awakened by any call to notifyAll()? An example shows that this is not the case: when notifyAll() is called for a particular lock, only the tasks waiting on that lock are awakened.

21.6 Deadlock

The dining philosophers problem, proposed by Edsger Dijkstra, is a classic example of deadlock.

To repair a deadlock, you must understand that deadlock can occur only when all four of the following conditions are met at the same time:

    • Mutual exclusion. At least one resource used by the tasks must not be shareable. Here, a chopstick can be used by only one philosopher at a time.

    • At least one task must be holding a resource and waiting to acquire a resource currently held by another task. That is, for deadlock to occur, a philosopher must be holding one chopstick and waiting for another.

    • A resource cannot be preemptively taken away from a task; tasks release resources only as a normal event. Our philosophers are polite and will not grab a chopstick from another philosopher.

    • A circular wait must exist: one task waits for a resource held by another task, which in turn waits for a resource held by a third, and so on, until some task waits for a resource held by the first, locking everyone up. In DeadlockingDiningPhilosophers.java, the circular wait happens because each philosopher tries to get the right chopstick first and then the left one.

To prevent deadlock, you only need to break one of these conditions. The easiest way is to break the fourth one, the circular wait.
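A minimal sketch of that idea, with illustrative names rather than the book's philosophers: impose a single global ordering on lock acquisition, so a cycle cannot form.

public class OrderedLocks {
    private static final Object LOCK_A = new Object();  // always acquired first
    private static final Object LOCK_B = new Object();  // always acquired second

    static void doWork(String name) {
        synchronized (LOCK_A) {          // every task honors the same order, so no circular wait is possible
            synchronized (LOCK_B) {
                System.out.println(name + " holds both locks");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> doWork("task-1")).start();
        new Thread(() -> doWork("task-2")).start();
    }
}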

21.7 New library components

21.7.1 CountDownLatch

Scenario: a CountDownLatch synchronizes one or more tasks by forcing them to wait until a set of operations performed by other tasks completes. That is, one or more tasks must wait until other tasks, for example the initial part of a problem, have finished.

You give a CountDownLatch object an initial count, and any task that calls await() on that object blocks until the count reaches zero. Other tasks call countDown() on the object when they finish their work, reducing the count. A CountDownLatch is designed to be used only once; the count cannot be reset. If you need a version that can reset the count, use a CyclicBarrier instead.

Tasks that call countDown() are not blocked by that call; only the call to await() blocks, until the count reaches 0.

  A typical use of CountDownLatch is to divide a problem into n independently solvable tasks and create a CountDownLatch with an initial value of n. Each task calls countDown() on the latch when it finishes. Tasks waiting for the whole problem to be solved call await() on the latch, suspending themselves until the count reaches zero.
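A minimal sketch of that typical usage, assuming only java.util.concurrent; the number of parts and the work done are placeholders:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 5;
        CountDownLatch latch = new CountDownLatch(n);
        ExecutorService exec = Executors.newCachedThreadPool();
        for (int i = 0; i < n; i++) {
            int id = i;
            exec.execute(() -> {
                System.out.println("part " + id + " done");
                latch.countDown();       // does not block
            });
        }
        latch.await();                   // blocks until the count reaches zero
        System.out.println("all parts finished");
        exec.shutdown();
    }
}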

21.7.2 CyclicBarrier

A CyclicBarrier is used when you want to create a group of tasks that perform work in parallel and then wait until they have all finished before moving on to the next step (rather like join()). It lets all the parallel tasks line up at the barrier so they can move forward in unison.

The book's example is the horse-race simulation, HorseRace.java.
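A smaller sketch of the same mechanism, assuming only java.util.concurrent (this is not HorseRace.java): three workers each finish a step, and the barrier action runs once all of them have arrived.

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BarrierDemo {
    public static void main(String[] args) {
        CyclicBarrier barrier = new CyclicBarrier(3,
            () -> System.out.println("all three reached the barrier"));
        ExecutorService exec = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 3; i++) {
            int id = i;
            exec.execute(() -> {
                try {
                    System.out.println("worker " + id + " finished its step");
                    barrier.await();     // waits until all parties have called await()
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        }
        exec.shutdown();
    }
}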

21.7.3 DelayQueue

  DelayQueue is an unbounded BlockingQueue for objects that implement the Delayed interface; an object can only be taken from the queue when its delay has expired. The queue is ordered: the object at its head is the one whose delay expired first. If no delay has expired, there is no head element, so poll() returns null (and for that reason you cannot place null in this queue). As this description suggests, DelayQueue is a variation of a priority queue.

21.7.4 PriorityBlockingQueue

This is essentially a priority queue with blocking read operations. The blocking nature of the queue provides all the necessary synchronization, so no explicit synchronization is needed: you do not have to worry about whether the queue contains elements when you read from it, because the queue simply blocks the reader when it is empty.

21.7.5 The greenhouse controller with ScheduledExecutor

The "greenhouse control system" can be viewed as a concurrency problem, with each desired greenhouse event being a task that runs at a scheduled time.
ScheduledThreadPoolExecutor solves this problem: schedule() runs a task once, while scheduleAtFixedRate() repeats the task at a specified interval. Both methods take a delay argument, so you can set up Runnable objects to be executed at some point in the future.
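A minimal sketch of those two calls, assuming only the standard ScheduledThreadPoolExecutor API; the "events" here are simple print statements rather than the book's greenhouse events:

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(2);
        scheduler.schedule(
            () -> System.out.println("run once after 1 second"),
            1, TimeUnit.SECONDS);
        scheduler.scheduleAtFixedRate(
            () -> System.out.println("repeat every 500 ms"),
            0, 500, TimeUnit.MILLISECONDS);
        TimeUnit.SECONDS.sleep(3);           // let a few repetitions run
        scheduler.shutdown();
    }
}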

21.7.6 Semaphore

21.8 Simulation

21.8.1 Bank Teller

21.8.2 Restaurant simulation

  BlockingQueue: a synchronizing queue; when a task calls take() and the queue is empty or the next element is not yet available, the call waits (blocks).

  SynchronousQueue: a blocking queue with no internal capacity, so every put() must wait for a take(), and vice versa (every take() must wait for a put()). It is as if you were handing an object to someone: there is no table to place the object on, so it only works when that person reaches out, ready to receive it. In this example, the SynchronousQueue represents the place setting in front of the diner, to enforce the idea that only one course can be served at any time.

One of the most important things to observe about this example is how using queues to communicate between tasks manages complexity. This single technique greatly simplifies concurrent programming by inverting the control: tasks do not interfere with each other directly; instead, they send objects to each other via queues. The receiving task handles each object, treating it as a message rather than having messages sent to it. If you follow this technique whenever you can, your chances of building a robust concurrent system are much better.
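A minimal sketch of that style of communication, assuming only java.util.concurrent (a SynchronousQueue would force a strict hand-off; a LinkedBlockingQueue is used here for simplicity, and the names are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoff {
    public static void main(String[] args) {
        BlockingQueue<String> courses = new LinkedBlockingQueue<>();
        ExecutorService exec = Executors.newCachedThreadPool();
        exec.execute(() -> {                       // "chef": produces objects as messages
            try {
                courses.put("soup");
                courses.put("salad");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        exec.execute(() -> {                       // "diner": blocks on take() until a course arrives
            try {
                for (int i = 0; i < 2; i++)
                    System.out.println("served: " + courses.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        exec.shutdown();
    }
}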

21.8.3 Distributing work

21.9 Performance tuning

21.9.1 Comparing mutex technologies

The hazard of "microbenchmarking": this term usually refers to performance testing of one feature in isolation, out of context. Of course, you still have to write tests to verify assertions such as "Lock is faster than synchronized," but you need to be aware of what actually happens during compilation and at run time when you write such tests.

Different compilers and run-time systems differ in this respect, so it is hard to know exactly what will happen, but we need to prevent the compiler from being able to predict the result and optimize the measured code away.

Using Lock is usually significantly more efficient than using synchronized, and the overhead of synchronized appears to vary widely, while Lock is relatively consistent.
Does this mean you should never use the synchronized keyword? There are two factors to consider:

    • First, the size of the body of the mutexed method.

    • Second, the code produced by the synchronized keyword is far more readable than the lock-try/finally-unlock idiom that Lock requires.

Code is read far more often than it is written. When programming, communicating with other people is much more important than communicating with the computer, so the readability of your code is critical. It therefore makes sense to start with the synchronized keyword and replace it with Lock objects only during performance tuning.
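A small sketch contrasting the two idioms (illustrative class and field names; in real code you would pick one mechanism for a given field, they appear together here only to compare readability):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MutexStyles {
    private int value = 0;
    private final Lock lock = new ReentrantLock();

    public synchronized int incrementSynchronized() {   // compact and readable
        return ++value;
    }

    public int incrementWithLock() {                     // explicit, but finally guarantees the unlock
        lock.lock();
        try {
            return ++value;
        } finally {
            lock.unlock();
        }
    }
}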

21.9.2 Lock-free containers

The general strategy in these lock-free containers is that modifications to the container may proceed at the same time as reads, as long as readers can only see the results of completed modifications. A modification is performed on a separate copy of a portion of the data structure (sometimes a copy of the entire structure), and this copy is invisible while the modification is in progress. Only when the modification is complete is the modified structure atomically swapped with the main data structure, and after that readers see the change.

Optimistic locking

As long as you are mostly reading from a lock-free container, it will be much faster than its synchronized counterpart, because the overhead of acquiring and releasing locks is eliminated. That remains true if you perform a small number of writes to the lock-free container, but what counts as a "small number"? That is a very interesting question.
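A minimal sketch of the optimistic approach using an Atomic class (illustrative names, assuming only java.util.concurrent.atomic): the update is computed without holding a lock, and compareAndSet() applies it only if no other task changed the value in the meantime.

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int addTen() {
        while (true) {
            int current = value.get();
            int updated = current + 10;
            if (value.compareAndSet(current, updated))  // succeeds only if value is still 'current'
                return updated;
            // otherwise another task got in first: retry with the fresh value
        }
    }

    public static void main(String[] args) {
        System.out.println(new OptimisticCounter().addTen());
    }
}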

21.11 Summary

An additional benefit of threads is that they provide lightweight execution context switches (on the order of 100 instructions) rather than heavyweight process context switches (thousands of instructions). Because all the threads in a given process share the same memory space, a lightweight context switch changes only the program's execution sequence and local variables; a process switch (a heavyweight context switch) must change the entire memory space.

Related articles:

Java Programming Ideas Lessons (6): Chapter 19 - Enumerated Types

Java Programming Ideas Lessons (7): Chapter 20 - Annotations
