Process and thread and handle

Statement 1: A process is a single running activity of a program with certain independent functions, performed on some data set; it is an independent unit used by the system to allocate and schedule resources.

A thread is an entity within a process and the basic unit of CPU scheduling and dispatching. It is smaller than a process and can run independently. A thread itself owns essentially no system resources, only the few things essential for running (such as a program counter, a set of registers, and a stack); however, it shares all the resources of its process with the other threads of the same process.

One thread can create and cancel another thread, and multiple threads within the same process can execute concurrently.
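As an illustration, here is a minimal sketch using POSIX threads (an assumption; the statements above are not tied to any particular API). The two worker threads share the process's global data, each has its own stack, and the main thread creates the others:

```c
/* Minimal POSIX threads sketch: threads share the process's data segment
 * but each has its own stack and register state.
 * Compile with: gcc demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;               /* in the process's data segment, visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    int local = 0;                    /* lives on this thread's private stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);    /* shared data needs synchronization */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* one thread creates others */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* 200000: both threads updated it */
    return 0;
}
```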

Statement 2: Both processes and threads are basic units of program execution as seen by the operating system; the system uses these units to provide concurrency to applications. The differences between a process and a thread are as follows:

In short, a program has at least one process, and a process has at least one thread.

Threads are divided at a finer granularity than processes, which gives multithreaded programs a higher degree of concurrency.

In addition, a process has an independent memory space during execution, while the threads of a process share its memory, which greatly improves program running efficiency.

A thread's execution also differs from a process's. Each independent process has a program entry point, a sequential execution order, and an exit point. A thread, however, cannot execute independently: it must exist within an application, and the application provides the execution control for its multiple threads.

Logically, multithreading means that several parts of an application can execute simultaneously. However, the operating system does not treat multiple threads as multiple independent applications when it performs process scheduling, management, and resource allocation. This is an important difference between processes and threads.

Statement 3: The coexistence of multiple threads within an application is a basic feature and an important hallmark of modern operating systems. Readers who have used a UNIX operating system are familiar with processes: in UNIX, each running application corresponds to a process identifier in the operating system kernel, and the operating system schedules the application and allocates system resources based on that identifier. So what is the difference between processes and threads?


A process is a concept originally defined to denote the basic execution unit of an application in memory in a multi-user, multi-tasking operating system environment such as UNIX. Taking UNIX as an example, the process is a basic component of the UNIX operating environment and the basic unit of system resource allocation. Almost all user management and resource allocation work in UNIX is carried out through the operating system's control of application processes.

Source programs written in C, C++, Java, and other languages are compiled by the corresponding compiler into executable files, which are then handed to the computer's processor to run. An application in the executing state is called a process. From the user's perspective, a process is one execution of an application; from the operating system kernel's perspective, a process represents the basic unit to which the operating system allocates memory, CPU time slices, and other resources, and it is the runtime environment provided for the running program. The difference between a process and an application is that an application is stored as a static file on the hard disk or other storage of a computer system, whereas a process is a dynamic resource-management entity maintained by the operating system at run time. The main features of an application process in a multitasking environment include:

● A process has an initial entry point in memory when it begins executing, and it keeps an independent memory address space throughout its lifetime;

● The lifecycle states of a process include created, ready, running, blocked, and terminated;

● A process's execution can be divided into user mode and kernel mode according to the instructions it issues to the CPU: in user mode the process executes application instructions, and in kernel mode it executes operating system instructions.

During UNIX startup, the system automatically creates swapper, init, and other system processes to manage memory resources and schedule user processes. In a UNIX environment, every process, whether created by the operating system or by an application, has a unique process identifier (PID).
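As a small illustration (a sketch assuming a POSIX system; the names are mine), the following program shows that a newly created process receives its own unique PID and can see its parent's PID:

```c
/* Every process, whether created by the system or by an application,
 * carries its own unique process identifier. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    printf("parent PID = %d, its parent = %d\n", (int)getpid(), (int)getppid());
    pid_t pid = fork();               /* create a child process */
    if (pid == 0) {
        /* child: receives a brand-new, unique PID */
        printf("child  PID = %d, parent = %d\n", (int)getpid(), (int)getppid());
        _exit(0);
    }
    waitpid(pid, NULL, 0);            /* parent waits for the child to finish */
    return 0;
}
```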

Statement 4: During execution, an application has an initial entry point address in memory, a code execution sequence, and an exit point address in memory that marks the end of the process. At every point during the process's execution there is a unique processor instruction corresponding to a memory unit address.

A thread, as defined in Java, likewise has a memory entry point address, an exit point address, and a code sequence that executes in order. The main difference between a process and a thread, however, is that a thread cannot execute independently: it must run inside an active application process. A thread can therefore be defined as a concurrent, sequential flow of code within a program.

The UNIX and Microsoft Windows operating systems support concurrent execution of multiple users and multiple processes, while the Java language supports concurrent execution of multiple threads within an application process. Multithreading means that multiple logical units of an application can execute concurrently; it does not mean that multiple user processes are executing, and the operating system does not allocate independent system resources to each thread as it would to an independent process. A process can create child processes, and a child process has executable code and data memory separate from its parent. Within the process that represents an application, by contrast, multiple threads share the data memory space, but each thread must have its own execution stack and execution context.
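The article's examples use Java, but the same distinction can be shown with a short C sketch (an illustration using fork and POSIX threads, assuming a POSIX system): a child process gets its own copy of the data space, while a thread shares it.

```c
/* Contrast: a child process works on its own copy of the variable,
 * while a thread writes the variable shared with its process. */
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;

void *thread_body(void *arg) {
    (void)arg;
    value = 42;                       /* thread writes the shared variable directly */
    return NULL;
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) { value = 7; _exit(0); }   /* child changes only its own copy */
    waitpid(pid, NULL, 0);
    printf("after child process: value = %d\n", value);   /* still 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread:        value = %d\n", value);   /* now 42 */
    return 0;
}
```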

Because of these differences, a thread is also called a lightweight process (LWP). Different threads can cooperate on tasks and exchange data, which makes their consumption of system resources very cheap.

Threads must be supported by the operating system, and not every type of computer supports multithreaded applications. The Java programming language integrates thread support with the language's runtime environment to provide multitasking concurrency. It is like a person doing housework: put the clothes in the washing machine to wash automatically, put the rice in the rice cooker and start it, and then begin preparing the dishes. By the time the dishes are ready, the rice is cooked and the clothes are washed.

Note that using multithreading in an application does not by itself increase the CPU's data processing capacity. Only when a Java program is divided into multiple concurrently executing threads on a multi-CPU computer or under a networked computing architecture, so that the Java virtual machine can run different threads on different processors, can the application's execution efficiency actually be improved.
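In that spirit, a program may first check how many processors are actually available before deciding how many CPU-bound threads to start. A minimal sketch, assuming a Linux/glibc system (the one-thread-per-processor heuristic in the comment is my assumption, not a claim from the article):

```c
/* Query the number of processors currently online. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cpus = sysconf(_SC_NPROCESSORS_ONLN);   /* processors currently available */
    printf("online processors: %ld\n", cpus);
    /* A common heuristic: start roughly one CPU-bound worker thread per processor. */
    return 0;
}
```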

The so-called handle is really just a piece of data, essentially an integer value.

A handle is a unique integer that Windows uses to identify an object created or used by an application. Windows uses various kinds of handles to identify application instances, windows, controls, bitmaps, GDI objects, and so on. A Windows handle is somewhat like a file handle in C.

From this definition we can see that a handle is an identifier used to mark an object or item. It is like a person's name: everyone has one, and different people have different names, although someone may happen to share yours. In terms of data type it was originally just a 16-bit unsigned integer (on modern Windows it is a pointer-sized value). An application almost always obtains a handle by calling a Windows function, and other Windows functions can then use that handle to refer to the corresponding object.
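For example, the following minimal Win32 sketch (the file name is illustrative) obtains a file handle from one API call and passes that opaque value to other API calls that operate on the same object:

```c
/* Obtain a handle from CreateFileA, use it with WriteFile,
 * then return it to the system with CloseHandle. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HANDLE hFile = CreateFileA("example.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }
    DWORD written = 0;
    WriteFile(hFile, "hello", 5, &written, NULL);   /* the handle identifies the file */
    CloseHandle(hFile);                             /* give the handle back to the system */
    return 0;
}
```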

To better understand handles, think of a handle as a pointer to a pointer. We know that a pointer is a memory address. After an application starts, the objects that make up the program reside in memory. Naively, it would seem that once we know an object's starting address we can use that address to access the object at any time. But that assumption is wrong: Windows is a virtual-memory-based operating system, and in this environment the Windows memory manager often moves objects around in memory to satisfy the memory needs of the various applications. When an object is moved, its address changes. If the address keeps changing, where should we look for the object?

To solve this problem, the Windows operating system sets aside some internal storage for each application specifically to record the changing memory addresses of that application's objects, and the address of this storage location itself never changes. After the Windows memory manager moves an object in memory, it reports the object's new address to this storage location. In this way, we only need to remember the handle's address to find out, indirectly, where the object currently is in memory. This storage is assigned by the system when the object is loaded and is released back to the system when the object is unloaded.

Handle address (stable) → records the object's address in memory (unstable) → actual object
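A toy model in C (not real Windows code, just an illustration of the indirection in the diagram above): the handle is a stable index into a table, the table entry holds the object's current address, and the "memory manager" is free to relocate the object as long as it updates the entry.

```c
/* Toy handle table: the program keeps only the stable handle,
 * while the object's actual address may change behind it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_HANDLES 16
static void *handle_table[MAX_HANDLES];      /* stable slots, like the OS-kept addresses */

int alloc_handle(void *object) {             /* hand out a small integer as the handle */
    for (int h = 0; h < MAX_HANDLES; h++)
        if (handle_table[h] == NULL) { handle_table[h] = object; return h; }
    return -1;
}

void *deref(int handle) { return handle_table[handle]; }

int main(void) {
    char *obj = strdup("movable object");
    int h = alloc_handle(obj);               /* the program keeps only this handle */

    /* "memory manager" relocates the object; only the table entry changes */
    char *moved = strdup(obj);
    free(obj);
    handle_table[h] = moved;

    printf("via handle %d: %s\n", h, (char *)deref(h));   /* still reachable */
    free(moved);
    return 0;
}
```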

In essence: Windows programs do not use physical addresses to identify a memory block, file, task, or dynamically loaded module. Instead, the Windows API assigns a well-defined handle to each such item, returns the handle to the application, and the application thereafter operates on the item through the handle.

Note, however, that each time a program is restarted, the system cannot guarantee that the handles assigned to it are the same as before; in the vast majority of cases they are indeed different. If we compare starting and running an application to going to a cinema to watch a movie, the handles the system assigns to the application are always different, just as the tickets the cinema sells us are never the same ones.

A thread is an instruction execution sequence within a program. The Win32 platform supports multithreaded programs and allows a program to contain multiple threads. On a single-CPU system, the system allocates CPU time slices to threads according to a scheduling algorithm, so the threads actually execute in an interleaved, time-sliced manner. On a Windows NT system with multiple CPUs, different threads of the same program can be assigned to different CPUs for execution. Because all the threads of a program run in the same address space, issues such as memory sharing, communication, and synchronization between threads must be dealt with, and this is one of the difficulties of multithreaded programming.
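A minimal Win32 sketch of that synchronization concern (the counter and function names are illustrative): two threads share the process's address space, so access to the shared counter is serialized with a critical section.

```c
/* Two Win32 threads updating shared data under a critical section. */
#include <windows.h>
#include <stdio.h>

static LONG counter = 0;
static CRITICAL_SECTION cs;

DWORD WINAPI worker(LPVOID param) {
    (void)param;
    for (int i = 0; i < 100000; i++) {
        EnterCriticalSection(&cs);
        counter++;                    /* shared memory: must be protected */
        LeaveCriticalSection(&cs);
    }
    return 0;
}

int main(void) {
    InitializeCriticalSection(&cs);
    HANDLE threads[2];
    threads[0] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    threads[1] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, threads, TRUE, INFINITE);
    printf("counter = %ld\n", counter);
    CloseHandle(threads[0]);
    CloseHandle(threads[1]);
    DeleteCriticalSection(&cs);
    return 0;
}
```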

A thread, also known as a lightweight process (LWP), is, in computer science terminology, the scheduling unit of a running program.

A thread is an entity within a process; a process can have multiple threads, and every thread must have a parent process. A thread owns no system resources of its own, only the data structures required for running, and it shares all the resources of the process with the other threads of its parent process. Threads can create and cancel other threads, so that a program can execute concurrently. In general, a thread has three basic states: ready, blocked, and running.

In a multiprocessor system, different threads can run on different processors at the same time, even when they belong to the same process. Most operating systems that support multiple processors provide programming interfaces that let a process control the affinity between its threads and the individual processors.
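One such interface, shown as a minimal Win32 sketch (on Linux the analogous calls would be sched_setaffinity or pthread_setaffinity_np), pins the calling thread to the first processor:

```c
/* Restrict the current thread to processor 0 via its affinity mask. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Bit 0 set = allow this thread to run only on the first processor. */
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 0x1);
    if (previous == 0)
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
    else
        printf("previous affinity mask: 0x%llx\n", (unsigned long long)previous);
    return 0;
}
```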

A process is one run of a program on a data set (note: the same program may belong to multiple processes). It is an independent unit used by the operating system to allocate and schedule resources. Processes can be roughly divided into system processes (including general Windows programs and service processes) and user processes.

Processes and threads in Linux

An executable file consists of instructions and data. A process is an instance of an executable file running on a computer with specific input data; if the same executable file operates on different input data, those are two different processes.
A thread is one execution path within a process. It has its own stack and CPU register state, and it shares all the resources of the process it belongs to, including open files, page tables (and therefore the entire user-mode address space), signal handlers, and dynamically allocated memory. The relationship between a thread and a process is: a thread belongs to a process and runs within that process's address space; threads created by the same process share the same physical memory space; and when a process exits, all the threads it created are forcibly terminated and cleaned up.
Linux adopts a thread model implemented largely outside the kernel: one kernel task (a lightweight process) corresponds to one thread, and thread scheduling is equivalent to process scheduling and is handled by the kernel, while other operations, such as thread cancellation and inter-thread synchronization, are carried out by the user-space thread library. We can therefore regard a process as a group of threads that share the same thread group ID (tgid); this tgid is the process ID of the group, and each individual thread's ID is the LWP number shown by the ps command.
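A Linux-specific sketch of this (gettid is invoked through syscall here because older glibc versions do not wrap it): both threads report the same tgid via getpid, but different LWP ids, matching what `ps -eLf` shows in its LWP column.

```c
/* Same process (tgid) for all threads, a distinct LWP id per thread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *show_ids(void *name) {
    printf("%s: tgid (getpid) = %d, lwp (gettid) = %ld\n",
           (const char *)name, (int)getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void) {
    pthread_t t;
    show_ids("main thread");
    pthread_create(&t, NULL, show_ids, "worker thread");
    pthread_join(t, NULL);
    return 0;
}
```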
For convenience, from now on we use the term task to refer to both processes and threads; whenever we mention a task we mean either, unless the difference between threads and processes is being emphasized. A task's life cycle starts at fork and ends when it disappears from the process table. A process includes text, data, stack, and shared memory segments.

Reprint notice: this article is from http://www.cnitblog.com/Patrick/archive/2006/12/23/20997.html
