Java -- Multithreaded Programming (1)

Concurrent processing by decomposing the work across multiple threads

1. Transition from single-threaded tasks to multithreaded tasks

In this chapter we deal with two types of tasks: IO-intensive and compute-intensive.

Divide and conquer

If we have hundreds of stocks to process, we could handle them one after another on a single thread, but that would be wasteful. To make the program run faster, we can split the work into multiple tasks and process them in parallel. However, we cannot split the work across too many threads, because the machine's resources are limited and every extra thread consumes additional resources.

Determining the number of threads

For a large program, we should open at least as many threads as the machine has CPU cores. In a Java program we can get this number with the following line of code:
    Runtime.getRuntime().availableProcessors();
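As a quick, self-contained sketch (not from the original article; the class name and the use of Executors.newFixedThreadPool are illustrative assumptions), the value can be read once and used to size a thread pool:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CoreCount {
        public static void main(String[] args) {
            // Number of processors available to the JVM
            int cores = Runtime.getRuntime().availableProcessors();
            System.out.println("Available cores: " + cores);

            // One thread per core is a reasonable starting point
            // for compute-intensive work
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            pool.shutdown();
        }
    }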
So the minimum number of threads is the number of CPU cores. If all the tasks are compute-intensive, this minimum is all we need; opening more threads only hurts performance, because switching between threads consumes additional resources. If the tasks are IO-intensive, we can open more threads. When a task performs an IO operation its thread blocks, and the processor immediately switches to another runnable thread. If we only have as many threads as cores, then even with tasks still waiting to run, the processor may have no runnable thread to schedule. If a task is blocked 50% of the time, the number of threads should be twice the number of cores. The smaller the blocked fraction, the more compute-intensive the work is and the fewer threads we need; the larger it is, the more IO-intensive the work is and the more threads we can open. This gives the following formula:

    number of threads = number of cores / (1 - blocking coefficient)

We can measure the blocking coefficient with a profiling tool or the Java management (java.lang.management) APIs.

Deciding how many tasks to divide into

We now know how to determine the number of threads; next we discuss how to divide the work into subtasks so that they execute concurrently as effectively as possible. The first idea is to create exactly as many subtasks as there are threads. That seems reasonable, but it is not enough, because it ignores the nature of the subtasks. For the stock-processing program, dividing into as many subtasks as threads is sufficient. But for a program that finds prime numbers it is a problem: even numbers are rejected quickly, and large candidates take much longer to check than small ones. Simply splitting the range of numbers evenly across the threads does not improve performance, because some threads finish much earlier than others and the cores are not used effectively. In other words, to split the work this way we would have to spend a lot of effort partitioning the data so that every task carries an equal load. There are two problems with that: first, it is genuinely hard to divide the work evenly; second, such a partitioning makes the program more complex. It turns out that keeping every core busy is more effective than trying to give every part an equal burden: from the processor's point of view, no core should be idle while tasks are still waiting. So instead of trying to split the work into equally loaded tasks, it is better to create many more subtasks than threads and keep the processors busy.
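The formula is easy to turn into code. The following sketch only illustrates the calculation; the blocking-coefficient values are hypothetical and would in practice come from profiling:

    public class ThreadCount {
        // threads = cores / (1 - blocking coefficient)
        // The blocking coefficient is the fraction of time a task spends
        // blocked: 0.0 for purely compute-bound work, approaching 1.0 for
        // work that is almost entirely IO.
        static int poolSize(double blockingCoefficient) {
            int cores = Runtime.getRuntime().availableProcessors();
            return (int) Math.round(cores / (1 - blockingCoefficient));
        }

        public static void main(String[] args) {
            System.out.println("Compute-bound (0.0): " + poolSize(0.0)); // = cores
            System.out.println("50% blocked   (0.5): " + poolSize(0.5)); // = 2 x cores
            System.out.println("90% blocked   (0.9): " + poolSize(0.9)); // = 10 x cores
        }
    }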
2. Ways to get efficient concurrent performance

On the one hand we want concurrency to be consistent and correct; on the other hand we want the best performance a given machine can deliver. Below we look at how to satisfy both.

If we can completely eliminate shared mutable state, we easily avoid race conditions and consistency problems. When multiple threads do not compete for mutable data, there is nothing for them to corrupt, and we do not need to worry about the order in which they execute. So, where possible, share only immutable data between threads. Otherwise, follow the principle of isolated mutability and ensure that only one thread can ever access a given mutable variable. We are not talking about synchronizing access here; we are simply ensuring that each mutable variable is visible to only one thread.

How many threads we create and how many subtasks we divide the work into both affect the performance of a concurrent program. First, to benefit from concurrency at all, we must be able to split the work into subtasks; if a significant part of the problem cannot be split, introducing concurrency will not noticeably improve performance. If the tasks are IO-intensive or perform a large amount of IO, opening more threads than there are CPU cores improves performance. For compute-intensive tasks, opening more threads than cores does not help, but we can still improve performance by opening as many threads as there are cores. The number of threads is not the only factor, though: how the work is split into tasks also affects performance.
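Putting these ideas together, here is a minimal sketch in the spirit of the stock example: the pool is sized by core count, the work is split into many more small tasks than threads, and each task touches only its own input and returns a value, so no mutable state is shared. The tickers and the fetchPrice method are hypothetical placeholders, not code from the original article:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class StockPrices {
        public static void main(String[] args) throws Exception {
            // Hypothetical tickers; the article's scenario has hundreds of stocks
            List<String> tickers = Arrays.asList("AAA", "BBB", "CCC", "DDD");

            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // One small task per stock: with more tasks than threads, an idle
            // thread always has work, so the cores stay busy. Each task reads
            // only its own (effectively final) ticker and returns a value, so
            // threads share no mutable state.
            List<Callable<Double>> tasks = new ArrayList<>();
            for (String ticker : tickers) {
                tasks.add(() -> fetchPrice(ticker));
            }

            for (Future<Double> result : pool.invokeAll(tasks)) {
                System.out.println(result.get());
            }
            pool.shutdown();
        }

        // Placeholder for a blocking IO call such as an HTTP request
        static double fetchPrice(String ticker) {
            return ticker.length() * 1.0;
        }
    }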
