.NET Interview Question Series [16] - Multithreading Concepts (1)


.NET Interview Question Series catalogue

This article mainly collects excerpts from several encyclopedia entries, outlining where processes and threads came from and why they arose.

Implementation of processes and threads at the operating system level

A brief history of operating systems

Until the mid-1950s, operating systems did not yet exist, and computers were operated by hand. The programmer loaded the punched paper tape (or cards) holding the program and data into the input device, started the input device to read the program and data into the computer's memory, and then started the program against the data using the console switches. When the run finished, the user took the results and unloaded the paper tape (or cards); only then could the next user take the machine.

Two characteristics of the manual operation mode:

(1) The user had exclusive use of the whole machine. There was no waiting for resources occupied by other users, but resource utilization was low.

(2) The CPU waited on manual operations, so it was never fully utilized.

In the late 1950s, a human-machine contradiction emerged: the slow speed of manual operation clashed sharply with the high speed of the computer. Manual operation severely impaired the utilization of system resources (dropping it to a few percent, or even lower), which was intolerable. The only solution was to remove humans from the loop and achieve automatic transition between jobs. Thus batch processing was born.

Batch processing systems (late 1950s)

The goals of batch processing were to improve system resource utilization and throughput and to automate the job workflow. An important disadvantage of batch systems is that they provide no human-computer interaction, which is inconvenient for users.

Batching means that users submit a batch of jobs to the operating system and then no longer intervene; the operating system controls the jobs so that they run automatically. An operating system using this technique is called a batch operating system. Batch operating systems are divided into single-stream (uniprogrammed) batch systems and multiprogrammed batch systems. A batch operating system is not interactive; it was proposed to improve CPU utilization.

Early batch systems were single-stream batch systems, designed to reduce manual intervention between jobs and thereby reduce the CPU's waiting time. Their defining feature is that only one job is allowed in memory at a time, i.e. only the currently running job resides in memory, and jobs are executed sequentially in FIFO order.

Because in a single-stream batch system a job occupies memory alone and monopolizes the system's resources until it finishes, the CPU can only sit in a wait state whenever the job performs I/O, so CPU utilization is low, especially for jobs with long I/O operations. To improve CPU utilization, multiprogramming was introduced on top of the single-stream batch system, producing the multiprogrammed batch system: several jobs reside in memory at once, and the order in which jobs execute has no strict correspondence to the order in which they entered memory, because the jobs share the CPU under some job-scheduling algorithm. While one job waits for I/O, the CPU is scheduled to run another job, so CPU utilization rises significantly.

In a batch system, a job can occupy the CPU for a long time; in a time-sharing system, a job can use the CPU only within one time slice at a time. Batch systems are not operating systems in the strict sense.

Multiprogramming and multiprogrammed batch systems (mid-1960s)

Multiprogramming has three running characteristics: multiple resident programs, macro-level parallelism, and micro-level serial execution. Multiprogramming was proposed to improve CPU utilization; it requires hardware support, letting the CPU direct other hardware components to work.

(1) Multiple programs: several mutually independent programs are stored in the computer's memory at once (initially called jobs, a concept that later evolved into the process);

(2) Macro-level parallelism: the programs that have entered the system are all "in progress" at the same time, that is, each has started running but none has finished;

(3) Micro-level serial execution: in fact, the programs take turns using the CPU and run alternately.

So-called multiprogramming is a computing method that allows multiple programs to enter the computer system's main memory at the same time and start running. In other words, the computer stores several (two or more) independent programs simultaneously, each somewhere between start and completion. Viewed macroscopically they are parallel: all are in progress and none has finished. Viewed microscopically they are serial: the programs take turns using the CPU and execute alternately. The basic purpose of multiprogramming is to improve CPU utilization and fully exploit the parallelism among the computer system's components. Modern computer systems all adopt multiprogramming.

For example, if a process spends 20% of its time computing on the CPU and the other 80% on I/O, then under uniprogramming CPU utilization is only 20%, because during the I/O time the CPU has nothing else to do (there is only one process).

If the operating system supports two processes, then even with two-way multiprogramming the CPU is idle only when both processes are doing I/O at the same time, so CPU utilization rises to 1 - 0.8 * 0.8 = 0.36, i.e. 36%. Likewise, running more processes at the same time raises CPU utilization further, up to a point. (PS: this example ignores the system overhead of process switching.)
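This trend is easy to reproduce: with an assumed I/O fraction p, utilization with n processes is 1 - p^n. A minimal C# sketch (the 80% figure comes from the example above; switching overhead is still ignored):

    using System;

    class CpuUtilizationDemo
    {
        static void Main()
        {
            const double ioFraction = 0.8; // assumed: each process waits on I/O 80% of the time
            for (int n = 1; n <= 10; n++)
            {
                // The CPU is idle only when all n processes do I/O at once: p^n
                double utilization = 1 - Math.Pow(ioFraction, n);
                Console.WriteLine($"{n,2} processes: {utilization:P0} CPU utilization");
            }
        }
    }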

Although the user had exclusive use of the whole machine's resources and, by controlling the program directly, could observe its execution at any moment, this mode of operation was very inefficient precisely because it monopolized the machine. So a new goal emerged: to maintain the efficiency of the computer while also making it convenient for users. In the mid-1960s, developments in computer and software technology made this pursuit possible.

Time-sharing operating system

A time-sharing operating system is one that lets a single computer serve several, dozens, or even hundreds of users simultaneously by dividing processor time among them.

To connect the computer to many terminal users, the time-sharing operating system switches the processor and memory among the terminal users' applications at fixed time intervals. Because the interval is short, each user feels as if the computer belongs exclusively to them. The distinguishing feature of a time-sharing system is that it effectively increases resource utilization.

Time-sharing systems are among the most commonly used operating systems today. Multitasking comes in preemptive and cooperative flavors. In a cooperative environment, the next process is scheduled only when the current process voluntarily gives up its time slice; in a preemptive environment, the operating system fully determines the scheduling plan and can take time slices away from a long-running process and give them to other processes.

Processes

Every program must eventually be loaded into memory to run. The advent of multiprogramming provided the prototype of the process: more than one job can reside inside the system. In a time-sharing operating system there can likewise be more than one program inside the system, and a program that exists in the computer's memory is called a process. Yet when we run a program on a modern system, we get the illusion that our program is the only one currently running.

Implementation of processes: memory allocation and scheduling are handled by the operating system. The OS manages each process through a process control block. Process isolation is achieved through virtual memory: one process cannot access the resources another process occupies. In insecure operating systems such as DOS, however, any process can access the resources of any other process.

Processes let each user feel as if the CPU belongs exclusively to them.

The process is a natural product of multiprogramming, and the purpose of multiprogramming is to improve CPU utilization (or, equivalently, system throughput).

Three views of the process model

(1) Physical view: in terms of physical memory allocation, each process occupies a region of memory; from this angle, a process really is a piece of memory space. Because at any instant a CPU can execute only one instruction, only one process is executing on the CPU at any given time, and exactly which instruction executes is specified by the physical program counter. So at the physical level, all processes share one program counter, while the CPU keeps switching among processes.

(2) Logical view: logically, each process can execute, can be suspended to let other processes execute, and can then resume. For this, a process needs some way to save its state (the context) so that it can resume from the correct location next time. Each process therefore has its own counter recording where its next instruction lies. (Logically, there can be many program counters.)

(3) Timing view: over time, each process must make forward progress. After a certain amount of time has elapsed, the process should have accomplished some work; in other words, each time the process is scheduled again, it picks up beyond its last stopping point.

In modern operating systems, process management and scheduling is one of the operating system's functions, and under multitasking it is indispensable. The operating system allocates resources to individual processes, lets processes share and exchange information, protects each process's resources from being seized by other processes, and enables synchronization between processes where necessary. To meet these requirements, the operating system assigns each process a data structure describing the process's state and the resources it owns; the operating system uses this structure to control the process's execution.

How to implement a process

(1) Physical basis: the physical basis of a process is the program, and programs run on a computer, so the computer must first solve the process's storage problem: allocate memory so the process has somewhere to live. Because multiple processes may coexist, we must also consider how to let multiple processes share the same physical memory without conflict. The OS solves this through memory management (virtual memory and process isolation).

(2) Process switching: processes actually run on the CPU, so how to switch the CPU among multiple processes is also a problem. The OS addresses this through process scheduling: deciding when to let which process use the CPU, while ensuring that no in-progress data is lost during the switch.

Virtual memory and process isolation

Virtual memory is a memory-management technique in computer systems. It lets an application assume it has contiguous available memory (a contiguous, complete address space), when in fact that memory is usually split into multiple physical fragments, with some parts temporarily stored on external disk storage and swapped in when needed. Most current operating systems use virtual memory: "virtual memory" in the Windows family, "swap space" in Linux, and so on.

Programs execute through memory, and if the running programs consume a great deal of memory, or there are many of them, memory gets exhausted. To compensate, Windows uses virtual memory: a combination of the computer's RAM and temporary space on the hard disk. When RAM runs low, Windows moves data from RAM into a space called the paging file; moving data into the paging file frees up RAM to finish the work at hand. In general, the more RAM a computer has, the faster programs run. If the computer slows down for lack of available RAM, you can try to compensate by increasing virtual memory; however, reading data from RAM is much faster than reading it from the hard disk, so expanding RAM capacity (adding a memory stick) is the best choice.

Virtual memory is a portion of hard-disk space that Windows uses as memory. On the hard disk it is actually one huge file named Pagefile.sys, which is normally hidden; you must turn off the protection of operating-system files in Explorer to see it. Because of this file name, virtual memory is sometimes called the "paging file".
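The gap between a process's virtual address space and the RAM actually backing it can be observed from .NET's Process class; a small sketch:

    using System;
    using System.Diagnostics;

    class MemoryInfoDemo
    {
        static void Main()
        {
            var self = Process.GetCurrentProcess();
            // Virtual size: address space reserved/committed by the process,
            // which can far exceed the physical RAM actually backing it.
            Console.WriteLine($"Virtual memory: {self.VirtualMemorySize64 / (1024 * 1024)} MB");
            // Working set: the portion currently resident in physical RAM.
            Console.WriteLine($"Working set:    {self.WorkingSet64 / (1024 * 1024)} MB");
            // Paged memory: committed memory that the paging file can back.
            Console.WriteLine($"Paged memory:   {self.PagedMemorySize64 / (1024 * 1024)} MB");
        }
    }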

Process isolation is a set of hardware and software techniques designed to keep processes in an operating system from interfering with each other, i.e. to prevent process A from writing into process B. Process isolation is implemented using virtual memory: process A's virtual address space is distinct from process B's, which prevents process A from writing data into process B.

The security of process isolation can be straightforwardly achieved by prohibiting access to other processes' memory. By contrast, some unsafe operating systems, such as DOS, allow any process to write to the memory of any other process.
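One way to make isolation tangible (a hypothetical demo, not from the source): run two instances of the sketch below. Each instance stores its own PID in the same static variable, and even if the printed virtual addresses happen to coincide, the values differ, because equal virtual addresses map to different physical pages in different processes. It must be compiled with unsafe blocks enabled, and address space layout randomization may make the printed addresses differ between runs.

    using System;
    using System.Diagnostics;

    class IsolationDemo
    {
        static int secret;

        static unsafe void Main()
        {
            secret = Process.GetCurrentProcess().Id; // each instance writes its own PID
            fixed (int* p = &secret)
            {
                Console.WriteLine($"PID {secret}: &secret = 0x{(ulong)p:X}, value = {secret}");
            }
            Console.ReadLine(); // keep this instance alive while you start the other
        }
    }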

Process scheduling

In a multi-process environment, although conceptually more than one process executes at the same time, on a single CPU only one process can be executing at any instant while the others are not. So the question arises: how do we decide which process executes at a given moment and which do not? This is an important part of process management: process scheduling.

Process scheduling is an important part of operating-system process management; its task is to select the next process to run. A clock interrupt is one possible trigger for scheduling. When scheduling finishes and the selected process is not the currently running one, the CPU performs a context switch.

We must address the following issues:

    1. Use some scheduling algorithm to arrange for multiple processes to execute in turn.
    2. For a process that is currently suspended, make it possible to resume running with the state it had before it was suspended, so that the process itself never senses it was ever interrupted; this is the context switch.

Basic scheduling algorithms

Program tasks generally fall into three types: CPU-bound (compute-intensive), I/O-bound, and balanced (half compute, half I/O). Different types call for different scheduling goals: for I/O-bound tasks, response time matters most; for CPU-bound tasks, turnaround time matters most; for balanced tasks, scheduling should balance response against turnaround. Overall, the goals of process scheduling are to achieve minimal average response time and maximal system throughput, to keep the system's components busy, and to provide a mechanism that appears fair.

First come, first served (FCFS) algorithm

First come, first served (FCFS) is the most common algorithm; it embodies the human sense of fairness. Its advantage is that it is simple and easy to implement; its disadvantage is that short jobs can become very slow because long jobs happen to sit ahead of them, which also makes for a poor interactive experience. For example, when queuing at a counter, your business might take only a few minutes, but the person ahead of you has something complicated that takes an hour; you must wait a long time behind them, and you think: if everyone took turns doing ten minutes of business at a time, how much better that would be! Hence the time-slice round-robin algorithm.
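A toy simulation of that queue (job lengths invented for illustration) shows how FCFS punishes short jobs stuck behind a long one:

    using System;

    class FcfsDemo
    {
        static void Main()
        {
            int[] burstMinutes = { 60, 3, 2 }; // the one-hour customer arrived first
            int clock = 0, totalWait = 0;
            for (int i = 0; i < burstMinutes.Length; i++)
            {
                Console.WriteLine($"Job {i}: waits {clock} min, then runs {burstMinutes[i]} min");
                totalWait += clock;
                clock += burstMinutes[i];
            }
            Console.WriteLine($"Average wait: {(double)totalWait / burstMinutes.Length:F1} min");
        }
    }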

Time-slice round-robin algorithm

Round-robin is an improvement on FCFS whose main purpose is to improve the response time of short programs; it switches processes periodically. The crux of round-robin is the choice of time slice, which must weigh many factors: if the running processes are long, the time slice may need to be shorter; if the number of processes is small, the time slice can be longer. The choice of time slice is therefore a comprehensive trade-off, balancing all parties' interests and making an appropriate compromise.

However, the system response time of round-robin is not always shorter than that of FCFS. Round-robin treats everyone alike, while real life follows a "let some get rich first" route. For example, suppose there are 30 tasks, one of which needs only 1 second of execution while the other 29 each need 30 seconds. If, for some reason, the 1-second task is queued behind the other 29, it must wait 29 seconds before it runs (assuming a 1-second time slice). The response time and interactive experience of this task thus become very poor. Hence the shortest-task-first algorithm was proposed.
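That 30-task example can be replayed with a tiny round-robin simulator (the workload numbers are the paragraph's own; everything else is a sketch): with a 1-second quantum and the short task enqueued last, it first sits through one slice from each of the 29 long tasks.

    using System;
    using System.Collections.Generic;

    class RoundRobinDemo
    {
        static void Main()
        {
            const int quantum = 1; // seconds
            var queue = new Queue<(string Name, int Remaining)>();
            for (int i = 1; i <= 29; i++) queue.Enqueue(($"long-{i}", 30));
            queue.Enqueue(("short", 1)); // the unlucky short task joins last

            int clock = 0;
            while (queue.Count > 0)
            {
                var (name, remaining) = queue.Dequeue();
                int slice = Math.Min(quantum, remaining);
                clock += slice;
                remaining -= slice;
                if (remaining > 0)
                    queue.Enqueue((name, remaining)); // back to the tail of the queue
                else if (name == "short")
                    Console.WriteLine($"short task finished at t = {clock}s (it needed only 1s of CPU)");
            }
        }
    }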

Shortest-task-first algorithm

The core of shortest-task-first is that tasks are no longer all equal: they have priorities. Specifically, short tasks get higher priority than long ones, and we always run the highest-priority task first (which can lead to starvation).

Shortest-task-first comes in two flavors: non-preemptive and preemptive. In the non-preemptive version, only when the task running on the CPU ends or blocks do we select, from the candidates, the task with the shortest execution time. In the preemptive version, every newly arriving task triggers a check of all tasks, including the one running on the CPU, to see which is shortest.
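A toy non-preemptive version (assuming all candidates have already arrived): pick the shortest burst first. On the same invented workload as the FCFS sketch, the average wait drops from 41 minutes to just over 2:

    using System;
    using System.Linq;

    class SjfDemo
    {
        static void Main()
        {
            int[] bursts = { 60, 3, 2 };        // same jobs as the FCFS example
            int clock = 0, totalWait = 0;
            foreach (int b in bursts.OrderBy(b => b)) // shortest first
            {
                totalWait += clock;
                clock += b;
            }
            // FCFS averaged 41 min on this workload; SJF cuts that sharply.
            Console.WriteLine($"Average wait: {(double)totalWait / bursts.Length:F1} min");
        }
    }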

Because shortest-task-first always runs the program needing the least execution time, its average response time is the best of the algorithms above, which is its advantage. But it also has drawbacks: first, it can deny long tasks CPU time forever, causing "starvation"; second, how do we know in advance how long each process will run? To address the first drawback, the priority scheduling algorithm was proposed; the second can be tackled with heuristic estimation, something many AI techniques can now do.

Priority scheduling algorithm

Priority scheduling gives each process a priority, and whenever a process switch is needed, the highest-priority process is chosen to run. This way, if a long process is given a high priority, it will no longer "starve". In fact, shortest-task-first is itself a form of priority scheduling, one that simply gives shorter processes higher priority.

The advantage of this algorithm is that important processes can be given high priority to guarantee they get CPU time. It has two disadvantages: first, low-priority processes may "starve"; second, response time is not guaranteed. The first disadvantage can be addressed by adjusting priorities dynamically: for example, the longer a process waits, the higher its priority rises, until it exceeds the other processes' priorities and obtains CPU time. The second could seemingly be fixed by setting a process's priority to the highest, but even then response time is not guaranteed if everyone sets their processes to the highest priority.
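At the application level, .NET only lets you hint priorities to the OS scheduler; a small sketch (the OS may still adjust effective priorities dynamically, e.g. to counter starvation as described above):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PriorityDemo
    {
        static void Main()
        {
            // Process-level priority class (Windows): Idle .. RealTime.
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;

            var worker = new Thread(() => Console.WriteLine("low-priority work"));
            worker.Priority = ThreadPriority.Lowest; // a hint, not a guarantee
            worker.Start();
            worker.Join();
        }
    }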

Hybrid scheduling algorithm

Each of the preceding algorithms has shortcomings, so can an algorithm mix their advantages and discard their drawbacks? This is the hybrid scheduling algorithm. It divides all processes into several large classes, each class having a priority. If two processes belong to different classes, the one in the higher-priority class runs first; if they belong to the same class, round-robin is used between them.

Saving process state

When a context switch occurs, the information the process exchanges with the CPU (stored in the CPU's registers) must be saved, because it will soon be overwritten by the next process interacting with the CPU. This data is typically saved in a data structure called the process control block (PCB). In general, it includes: registers, the program counter, the status word, the stack pointer, the priority, the process ID, the creation time, the CPU time consumed, the various handles currently held, and so on.

The process's own in-memory data does not need to be saved, because it lives in a fixed region of memory that process isolation already keeps safe. When the process resumes running, it only needs to return to its own space to find all of last time's data.
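The fields listed above map naturally onto a data structure; a rough illustrative sketch (names invented for exposition, not any real kernel's layout):

    using System;

    // What a process control block (PCB) roughly records.
    enum ProcessState { Ready, Running, Blocked }

    class ProcessControlBlock
    {
        public int ProcessId;
        public ProcessState State;        // status word / scheduling state
        public ulong ProgramCounter;      // where to resume execution
        public ulong[] Registers;         // saved general-purpose registers
        public ulong StackPointer;
        public int Priority;
        public DateTime CreationTime;
        public TimeSpan CpuTimeConsumed;
        public IntPtr[] OpenHandles;      // files, sockets, ... held by the process
    }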

Context switching

Simply put, in a context switch the currently running process stores its data into its process control block, the scheduler selects the next process via the scheduling algorithm, the selected (currently sleeping) process retrieves its previous working state from its own process control block, and then it resumes work.

Context switches occur in three situations: interrupt handling, multitasking, and user-initiated switching. In interrupt handling, another program's behavior interrupts the currently running program: when the CPU receives an interrupt request, it context-switches between the running program and the program that raised the request. In multitasking, the CPU switches back and forth among different programs, each of which gets a time slice, with a context switch between slices. User-initiated switching is the user's own doing, such as tabbing out of a game to look at a website.

Context switches are usually computationally expensive: they cost considerable processor time, which can hurt performance. Threads have context switches too.
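A rough, heavily machine-dependent way to feel that cost (a hypothetical probe, not a proper benchmark): force two threads to hand control back and forth, so each round trip involves roughly two switches, then divide total time by the number of switches.

    using System;
    using System.Diagnostics;
    using System.Threading;

    class SwitchCostProbe
    {
        static readonly AutoResetEvent Ping = new AutoResetEvent(false);
        static readonly AutoResetEvent Pong = new AutoResetEvent(false);
        const int Rounds = 100_000;

        static void Main()
        {
            var partner = new Thread(() =>
            {
                for (int i = 0; i < Rounds; i++) { Ping.WaitOne(); Pong.Set(); }
            });
            partner.Start();

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < Rounds; i++) { Ping.Set(); Pong.WaitOne(); } // 2 switches per round
            sw.Stop();
            partner.Join();
            Console.WriteLine($"~{sw.Elapsed.TotalMilliseconds * 1000 / (2.0 * Rounds):F1} microseconds per switch");
        }
    }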

Inter-process communication (IPC)

Inter-process communication (IPC) refers to techniques for passing data or signals between two or more processes or threads. Each process has its own portion of system resources, isolated from the others; IPC exists precisely so that different processes can access each other's resources and coordinate their work. In a typical example, two applications using IPC are classified as client and server: the client process requests data, and the server replies to the client's request. Some applications are both server and client, as is common in distributed computing. The communicating processes may run on the same computer or on different computers connected by a network.

Pipes

In Unix-like operating systems (and some extensions, such as Windows), the pipeline is the original software pipeline: a set of processes chained together by their standard streams, so that each process's output (stdout) feeds directly into the next process's input (stdin). The space a pipe occupies can be either memory or disk.

To create a pipe, a process only needs to invoke the pipe-creation system call (system API). What the call does is carve out a space on some storage medium, granting one process the right to write to it and another process the right to read from it.

For a similar communication mechanism in C#, see this article: http://www.cnblogs.com/yukaizhao/archive/2011/08/04/system-io-pipes.html
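As a sketch of the mechanism just described, .NET's System.IO.Pipes exposes exactly this create-then-grant pattern. Normally the client end's handle would be passed to a child process; here both ends live in one process for brevity:

    using System;
    using System.IO;
    using System.IO.Pipes;

    class PipeDemo
    {
        static void Main()
        {
            using var server = new AnonymousPipeServerStream(PipeDirection.Out);
            using var client = new AnonymousPipeClientStream(PipeDirection.In, server.ClientSafePipeHandle);

            using var writer = new StreamWriter(server) { AutoFlush = true };
            using var reader = new StreamReader(client);

            writer.WriteLine("hello through the pipe"); // the writing end...
            Console.WriteLine(reader.ReadLine());       // ...and the reading end
        }
    }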

Sockets

Sockets are powerful: they support different layers and different applications, and they work across networks. To communicate with sockets, both parties create a socket, one acting as the server side and the other as the client side. The server first creates a server socket and then listens on it, waiting for remote connection requests. The client creates its own socket and sends a connection request to the server. After the server socket accepts the connection, a new socket is created on the server machine for that client, forming a point-to-point communication channel with the remote client socket. From then on, client and server can communicate directly over this socket channel using calls such as send and recv.
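A minimal loopback sketch of that pattern with .NET's TcpListener/TcpClient (port and payload are arbitrary):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class SocketDemo
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Loopback, 0); // any free port
            listener.Start();                                      // the listening server socket
            int port = ((IPEndPoint)listener.LocalEndpoint).Port;

            using var client = new TcpClient();
            client.Connect(IPAddress.Loopback, port);              // client's connection request
            using TcpClient serverSide = listener.AcceptTcpClient(); // new per-client socket

            byte[] data = Encoding.UTF8.GetBytes("ping");
            client.GetStream().Write(data, 0, data.Length);        // like send()
            var buf = new byte[4];
            int n = serverSide.GetStream().Read(buf, 0, buf.Length); // like recv()
            Console.WriteLine(Encoding.UTF8.GetString(buf, 0, n));
            listener.Stop();
        }
    }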

Signals and semaphores

A signal resembles a telegram in everyday life: to send someone a telegram, you compose the message and hand it, together with the recipient's details, to the telegraph company. The telegraph company delivers the telegram to the post office near the recipient, which notifies the recipient to come collect it. The sender needs no prior acquaintance with the recipient and no coordination. If the receiving party chooses not to respond to the signal, the OS terminates it.

Inside a computer, a signal is a kernel object, or rather a kernel data structure. The sender fills in the structure's contents, indicates the signal's target process, and issues a specific software interrupt (this is the act of sending the telegram). When the OS receives that interrupt request, it knows some process has sent a signal, looks up the receiver in the corresponding kernel data structure, and notifies it. The notified process then handles the signal accordingly.
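The closest everyday analog in .NET is intercepting Ctrl+C (SIGINT): the OS delivers the signal, and if no handler claims it, the default action terminates the process, as described above. A small sketch:

    using System;
    using System.Threading;

    class SignalDemo
    {
        static void Main()
        {
            Console.CancelKeyPress += (_, e) =>
            {
                e.Cancel = true; // handle the signal instead of dying
                Console.WriteLine("caught Ctrl+C, shutting down gracefully");
                Environment.Exit(0);
            };
            Console.WriteLine("press Ctrl+C...");
            Thread.Sleep(Timeout.Infinite);
        }
    }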

The semaphore derives from railroad operation: on a single-track railway, only one train may run on the track at any time, and the system managing the track is the semaphore. A train must wait until the signal indicates the track is clear before entering. When the train enters, the signal changes to the forbidden state, preventing other trains from entering at the same time; when the train has left the single-track section, the signal must be switched back to the allowed state. This is easy to relate to the locks we use all the time in actual development.

In a computer, a semaphore is simply an integer. A process proceeds when the semaphore is 1 and sets it to 0 to prevent other processes from proceeding at the same time; when the process finishes its task, it sets the semaphore back to 1, allowing other processes to execute. From this we can also see that a semaphore is not only a communication mechanism but a synchronization mechanism.

In the system, each process is given a semaphore representing its current state; a process not yet cleared to proceed is forced to stop at a particular checkpoint, waiting for the signal that lets it continue. If the semaphore may be an arbitrary integer, it is called a counting semaphore (or general semaphore); if it only takes the binary values 0 and 1, it is called a binary semaphore. In Linux systems, binary semaphores are also known as mutexes.

A counting semaphore supports two operations, historically named V (also signal()) and P (also wait()). The V operation increases the semaphore's value S, and the P operation decreases it.

Mode of operation:

    1. Initialize the semaphore with a non-negative integer value.
    2. Running P (wait()) decreases the semaphore's value S. A process trying to enter a critical section must first run P (wait()): if the decrement drives S negative, the process blocks and cannot continue; as long as S stays non-negative, the process may enter the critical section. (Through this mechanism, the semaphore controls how many processes may be inside the critical section at once.)
    3. Running V (signal()) increases the semaphore's value S. A process leaving the critical section runs V (signal()); once S is non-negative again, a previously blocked process is allowed to enter the critical section.

C#'s implementations of the semaphore concept are the Mutex and Semaphore classes.
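A minimal sketch with SemaphoreSlim (counts and delays are arbitrary): the semaphore is initialized to 2, so at most two of the five workers are inside the critical section at once; Wait plays the role of P and Release the role of V.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class SemaphoreDemo
    {
        static readonly SemaphoreSlim Gate = new SemaphoreSlim(2); // initial count 2

        static async Task Main()
        {
            var tasks = new Task[5];
            for (int i = 0; i < tasks.Length; i++)
            {
                int id = i;
                tasks[i] = Task.Run(async () =>
                {
                    await Gate.WaitAsync();        // P: decrement, block at 0
                    try
                    {
                        Console.WriteLine($"worker {id} entered");
                        await Task.Delay(200);     // simulated work
                    }
                    finally
                    {
                        Gate.Release();            // V: increment, wake a waiter
                    }
                });
            }
            await Task.WhenAll(tasks);
        }
    }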

Shared memory

Two processes jointly own the same piece of memory, and anything in it is accessible to both. To communicate via shared memory, process A first creates a region of memory for communication, and process B maps that region into its own (virtual) address space. Thereafter, when process A reads and writes the shared region within its own address space, it is in effect communicating with process B.
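In .NET, this maps onto memory-mapped files; a minimal sketch (the map name and size are arbitrary, both "processes" are simulated in one program for brevity, and named maps require Windows on .NET Core):

    using System;
    using System.IO.MemoryMappedFiles;

    class SharedMemoryDemo
    {
        static void Main()
        {
            // "Process A" creates the named shared region.
            using var mmf = MemoryMappedFile.CreateNew("demo-shared-region", 1024);

            using (var writerView = mmf.CreateViewAccessor())
                writerView.Write(0, 42);                // A writes into the region

            // "Process B" maps the same region by name.
            using var opened = MemoryMappedFile.OpenExisting("demo-shared-region");
            using var readerView = opened.CreateViewAccessor();
            Console.WriteLine(readerView.ReadInt32(0)); // B reads 42
        }
    }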

Message queues

A message queue is a list of messages with a head and a tail: new messages are placed at the tail, and messages are read starting from the head.

So it looks just like a pipe: one end reads, one end writes? Yes, it looks like a pipe, but it is not one:

(1) A message queue has no fixed reader or writer process; any process may read or write, whereas a pipe must specify who reads and who writes;

(2) A message queue supports multiple processes at once; multiple processes can read and write the queue, i.e. many-to-many (sketched in code below), whereas pipes are point-to-point;

(3) A message queue is implemented only in memory, whereas a pipe can also be implemented on disk;
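The many-to-many discipline from point (2) can be sketched in-process with System.Threading.Channels (real OS message queues are kernel objects; this only models the head/tail read-write pattern):

    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    class MessageQueueSketch
    {
        static async Task Main()
        {
            var queue = Channel.CreateUnbounded<string>();

            // Several writers and several readers share one queue: many-to-many.
            var writers = new Task[2];
            for (int w = 0; w < writers.Length; w++)
            {
                int id = w;
                writers[id] = Task.Run(async () =>
                {
                    for (int i = 0; i < 3; i++)
                        await queue.Writer.WriteAsync($"writer {id}, message {i}"); // enqueue at the tail
                });
            }
            var readers = new Task[2];
            for (int r = 0; r < readers.Length; r++)
            {
                int id = r;
                readers[id] = Task.Run(async () =>
                {
                    await foreach (var msg in queue.Reader.ReadAllAsync())           // dequeue from the head
                        Console.WriteLine($"reader {id} got: {msg}");
                });
            }
            await Task.WhenAll(writers);
            queue.Writer.Complete(); // no more messages; readers drain and finish
            await Task.WhenAll(readers);
        }
    }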

Threads

A thread is a virtualization of the CPU.

Microsoft decided to run each instance of an application in its own process. A process is the collection of resources an application uses. When a program runs in a process, it is as if it were locked into a sealed room with everything it needs. Different sealed rooms have nothing to do with each other, and one process dying will not crash the whole system. A process has its own virtual address space, ensuring that the code it uses cannot be accessed by other processes. And when a process stops responding, the system keeps working, and another process can be used to kill the unresponsive one.

Doesn't that sound like the problem is solved? Yet although applications and the operating system achieve isolation and protection through processes, processes still share one resource: the CPU. If the machine has only one CPU, then when an application enters an infinite loop, that only CPU is kept busy spinning and cannot attend to other applications; it is effectively locked up. Users then find every application unresponsive no matter where they click. To solve this problem of the CPU being indivisible, threads were born.

Windows uses threads to virtualize the CPU. Threads give each process (which has at least one thread) a "clone" of the CPU (called a logical CPU, the real one being the physical CPU), so that when one thread enters an infinite loop, other threads can still keep running.

The benefits of threading are as follows:

1. Many modern large programs (such as Word) do many things at once. When we use Microsoft Word, we actually have multiple threads open: one responsible for display, one for receiving input, and one for timed saves to disk. These threads cooperate so that input and display feel simultaneous, without typing a few characters and then waiting a moment for them to appear on screen; meanwhile, Word also quietly autosaves at intervals. This requires threads to work in parallel or under mutual exclusion, and decomposing the work into threads undoubtedly simplifies the programming model.

2. Threads are cheaper to create and destroy than processes, because threads are lighter-weight.

3. Threading improves performance. Although threads are parallel at the macro level, they are serial at the micro level (via time slices), so threads cannot raise performance from the CPU's own standpoint. But if some threads are blocked waiting for resources (such as I/O or user input), multithreading lets the other threads in the process keep executing instead of the whole process blocking, which raises CPU utilization and thereby performance (just as in multiprogramming); see the sketch after this list.

4. On multi-CPU or multi-core machines, threads are parallel not only at the macro level but at the micro level too.
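As promised after point 3, a minimal C# sketch of that effect: one thread blocks on simulated I/O while another keeps computing, so the process as a whole stays busy.

    using System;
    using System.Threading;

    class ThreadDemo
    {
        static void Main()
        {
            var io = new Thread(() =>
            {
                Thread.Sleep(1000);                // simulated blocking I/O
                Console.WriteLine("I/O thread finished");
            });
            var compute = new Thread(() =>
            {
                long sum = 0;
                for (int i = 0; i < 100_000_000; i++) sum += i; // CPU-bound work
                Console.WriteLine($"compute thread finished: {sum}");
            });
            io.Start();
            compute.Start(); // runs while the I/O thread is blocked
            io.Join();
            compute.Join();
        }
    }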

In the threading model, a process contains at least one thread and possibly many. Threads and processes are both similar and different.

Similarities:

    1. A thread can be seen as a lightweight process; hence, once processes are in use, introducing threads does not by itself improve raw performance.
    2. Different threads, like different processes, can communicate with each other.
    3. Both threads and processes have priorities and require system scheduling.
    4. Both threads and processes have states.

Differences:

    1. The task of the thread is to virtualize the CPU, so that the CPU is no longer a mutually exclusive component that all processes must share; the task of the process is to improve the efficiency of CPU usage.
    2. The thread is the basic unit of scheduling and dispatching, while the process is the basic unit of resource ownership. Within one process, switching threads does not cause a process switch; switching from a thread in one process to a thread in another does.
    3. A process can contain multiple threads and, in .NET, has at least one foreground thread; see the sketch after this list.
    4. Processes are isolated from each other through virtual memory, whereas the threads in one process share all the process's resources.
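Point 3's foreground/background distinction is .NET-specific and easy to demonstrate (a small sketch, as referenced above): the process exits when its last foreground thread ends, and background threads are simply killed at that moment.

    using System;
    using System.Threading;

    class ForegroundBackgroundDemo
    {
        static void Main()
        {
            var bg = new Thread(() =>
            {
                Thread.Sleep(5000);
                Console.WriteLine("never printed if Main exits first");
            });
            bg.IsBackground = true; // background: does not keep the process alive
            bg.Start();
            Console.WriteLine("Main (a foreground thread) is exiting; the process ends now");
        }
    }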

As with processes, threads also require scheduling, communication, and synchronization.

References

http://www.cnblogs.com/edisonchou/p/5022508.html

https://zh.wikipedia.org/wiki/%E8%A1%8C%E7%A8%8B%E9%96%93%E9%80%9A%E8%A8%8A

http://www.cnblogs.com/edisonchou/p/5037403.html

http://www.cnblogs.com/CareySon/archive/2012/05/04/ProcessAndThread.html

Baidu Encyclopedia

