First, processes
The core concept in operating systems is the process, and the most important problem in distributed systems is interprocess communication.
A process is "an instance of a program in execution" and is the entity to which the system allocates resources. Creating a process requires assigning it a complete, independent address space.

Process switching happens only in kernel mode and takes two steps: (1) switch the page global directory to install the new address space; (2) switch the kernel-mode stack and the hardware context. Put another way: (1) save the CPU context (register values, program counter, stack pointer); (2) update the memory management unit (MMU) registers; (3) flush the translation lookaside buffer (TLB), i.e. mark the address-translation cache entries as invalid.
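A minimal sketch of the point above (my own example, assuming a Unix-like system where sh is available): each process gets its own independent address space, so the parent can only hand data to a child explicitly, for example through the environment, pipes, or stdin/stdout.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	counter := 42 // lives only in the parent's address space

	// The child is a separate process with its own address space; it cannot
	// see `counter` directly, so the value is passed explicitly via the environment.
	cmd := exec.Command("sh", "-c", "echo child sees COUNTER=$COUNTER")
	cmd.Env = append(os.Environ(), fmt.Sprintf("COUNTER=%d", counter))
	cmd.Stdout = os.Stdout

	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to run child process:", err)
	}
}
```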
Second, threads
The definition in the book: a thread is an execution stream within a process that executes its own program code independently.
Wikipedia: a thread is the smallest unit of execution that the operating system can schedule.
The thread context typically contains only the CPU context plus a little thread-management information. The cost of creating a thread is dominated by allocating memory for the thread's stack, which is not very expensive. A thread context switch happens when two threads need to synchronize, for example on entering a shared data segment; the switch only has to store the CPU register values and then reload the registers with the values previously saved for the thread being switched to.
The main disadvantage of user-level threads is that a blocking system call immediately blocks the entire process the thread belongs to. Kernel-level threads, on the other hand, make a thread context switch as expensive as a process switch, so the compromise is the lightweight process (LWP). In Linux, a thread group is essentially a set of lightweight processes used to implement a multithreaded application. My understanding is that a process can contain user threads, lightweight processes, and kernel threads.
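A small sketch of the "shared data segment" idea from the paragraphs above (my own example, with goroutines standing in for threads): two concurrent execution streams update the same variable, so access to it must be synchronized.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // entering the shared data segment
				counter++
				mu.Unlock()
			}
		}()
	}

	wg.Wait()
	fmt.Println("counter =", counter) // always 2000 thanks to the mutex
}
```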
Few languages support lightweight processes at the language level; Stackless Python and Erlang do, Java does not.
Third, coroutines
What is the definition of a coroutine? Outskirts and Xu Xiwei only say that it is a lightweight thread and that a single process can easily create hundreds of thousands of coroutines. After studying the question more carefully, I personally feel much of this is myth-making. Wikipedia quotes Knuth's Fundamental Algorithms (The Art of Computer Programming, Volume 1): "subroutines are a special case of coroutines." And what is a subroutine? A subroutine (English: subroutine, procedure, function, routine, method, subprogram) is just a function! So coroutines are nothing magical; they are a more general kind of program component, and given a large enough memory space, how many functions you create is entirely up to you.
A coroutine can transfer control to other coroutines through yield. Transferring execution via yield is not a caller/callee relationship; the two sides are symmetric and equal. The first call to a coroutine starts at its initial entry point; on every subsequent call it resumes just after the point where it last yielded. The lifetimes of subroutines follow LIFO order (the last subroutine called is the first to return); by contrast, the lifetime of a coroutine is determined entirely by how it is used.
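Go has no yield keyword, but a rough simulation of this symmetric transfer of control (my own sketch, not from any of the cited sources) can be built from two goroutines and unbuffered channels, where a channel send/receive pair plays the role of yield: each side hands control to the other and later resumes right after the point where it stopped.

```go
package main

import "fmt"

func main() {
	ping := make(chan int)
	pong := make(chan int)
	done := make(chan struct{})

	go func() { // "coroutine" A
		for i := 0; i < 3; i++ {
			fmt.Println("A yields", i)
			ping <- i // transfer control to B
			<-pong    // resume here when B transfers control back
		}
		close(ping)
		close(done)
	}()

	go func() { // "coroutine" B
		for v := range ping {
			fmt.Println("B got", v, "and yields back")
			pong <- v // transfer control back to A
		}
	}()

	<-done
}
```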
The difference between threads and coroutines:
Once you have created a thread you cannot decide when it gets a time slice or when it gives the time slice up; that is left to the kernel. With a coroutine, the writer controls when switches happen, and each switch is cheap. The operating system has no scheduling rights over coroutines because switching them never enters the kernel, which is precisely why people use them and why they exist. I think the definition from Lai Yonghao and dccmx is relatively accurate: a coroutine is a user-mode lightweight thread. (http://blog.dccmx.com/2011/04/coroutine-concept/)
Why use coroutines:

Coroutines help to implement:
- State machines: implement a state machine inside a single subroutine, where the state is determined by the coroutine's current entry/exit point; this can produce more readable code.
- Actor model: a concurrent actor model, as used for example in computer games. Each actor has its own coroutine (which logically separates the code), but they voluntarily hand control to a central scheduler that executes each actor's coroutine in turn (a form of cooperative multitasking).
- Generators: they make input/output and general traversal of data structures easier (see the sketch after this list).
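A generator sketch along those lines (my own example): since Go has no yield, a goroutine writing to a channel is the usual stand-in. Here a toy binary tree is traversed and each value is "yielded" lazily to the consumer.

```go
package main

import "fmt"

type node struct {
	value       int
	left, right *node
}

// walk does an in-order traversal and "yields" each value onto out.
func walk(n *node, out chan<- int) {
	if n == nil {
		return
	}
	walk(n.left, out)
	out <- n.value
	walk(n.right, out)
}

// values returns a channel that acts as the generator's output stream.
func values(root *node) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		walk(root, out)
	}()
	return out
}

func main() {
	root := &node{2, &node{1, nil, nil}, &node{3, nil, nil}}
	for v := range values(root) { // consume the generated sequence lazily
		fmt.Println(v)
	}
}
```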
Outskirts also summarizes coroutine support in common languages and platforms, which can serve as a reference, though it deserves a closer investigation.
Fourth, goroutines in Go
Goroutines in Go are generally considered the Go language's implementation of coroutines. "Go Language Programming" says a goroutine is a lightweight thread, i.e. a coroutine (page 90 of the original book). In Chapter 9, Advanced Topics, the author mentions again that "fundamentally speaking, a goroutine is the Go-language version of a coroutine" (page 204 of the original book). But Rob Pike, one of Go's authors, does not put it that way:
"A goroutine is a go function or method that runs concurrently with other goroutines in the same address space. A running program consists of one or more goroutine. it differs from thread, association, process, and so on. It is a goroutine. ”
As for the implementation, the gccgo compiler branch multiplexes goroutines onto pthreads, while 6g (6g/8g/5g are the compilers for the 64-bit, 32-bit, and ARM architectures respectively) relies on the Go runtime's own scheduler.
An InfoQ article about this feature also says: goroutines are a feature of the Go language runtime, not functionality provided by the operating system, and goroutines are not implemented with threads. See pkg/runtime/proc.c in the Go source.
Lao Zhao's view is that goroutines take functionality that used to live in class libraries and build it into the language.
Concurrency problems with goroutines: goroutines run in shared memory, a network of communicating goroutines can deadlock, multithreading bugs are hard to debug, and so on. A good rule is recommended: do not communicate by sharing memory; instead, share memory by communicating.
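A sketch of that rule (my own example): instead of having many goroutines lock a shared map, a single owner goroutine holds the data and the others talk to it over channels, so the memory is shared by communicating.

```go
package main

import "fmt"

func main() {
	type deposit struct {
		account string
		amount  int
	}

	deposits := make(chan deposit)
	balances := make(chan map[string]int)

	go func() { // sole owner of the balances map; no locks needed
		b := map[string]int{}
		for d := range deposits {
			b[d.account] += d.amount
		}
		balances <- b // hand the final result back over a channel
	}()

	for i := 0; i < 5; i++ {
		deposits <- deposit{account: "alice", amount: 10}
	}
	close(deposits)

	fmt.Println(<-balances) // map[alice:50]
}
```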
The difference between parallelism and concurrency:
Parallelism refers to the running state of a program: at least two threads must be executing simultaneously to count as parallelism. Concurrency refers to the logical structure of a program: it only requires that two or more threads are still in progress (started but not yet finished). Simply put, parallelism requires multiple cores or processors, while concurrency does not. (http://stackoverflow.com/questions/1050222/concurrency-vs-parallelism-what-is-the-difference)
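The distinction in Go terms (my own sketch, not taken from the cited answer): the goroutines below are concurrent by construction, but whether they also run in parallel depends on GOMAXPROCS and the number of available CPUs; forcing GOMAXPROCS to 1 keeps the program concurrent while ruling out parallelism.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// With GOMAXPROCS(1) the program stays concurrent but cannot be parallel:
	// only one goroutine executes at any given instant.
	runtime.GOMAXPROCS(1)
	fmt.Println("CPUs:", runtime.NumCPU(), "GOMAXPROCS:", runtime.GOMAXPROCS(0))

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Println("worker", id, "is in progress") // concurrency: all four are "in progress"
		}(i)
	}
	wg.Wait()
}
```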