Operating system processes, threads

Tags: message queue, mutex, semaphore

1. The five process states and the state transition diagram

The five states: New, Ready, Running, Blocked, Exit (terminated)

Problems with the five-state model and their solution:

When many processes compete for resources, memory can become insufficient and the ready queue can fill up. Because the CPU is much faster than I/O, many processes sit in the blocked state while the CPU is idle, so CPU utilization is low. The workaround is to swap part of a blocked process out of memory, its program and data but not its PCB, in order to free memory for other new processes.

2. Process suspension: a process is swapped out to external storage and its state changes to suspended

Reasons for suspending a process:

    • The system load is heavy and memory is tight, so some processes are swapped out to let other processes run first
    • Many processes are blocked and the CPU is idle
    • The user or the operating system requests that certain processes be suspended

Characteristics of a suspended process:

    • It cannot be executed immediately
    • Suspension and blocking are independent: a suspended process may also be blocked, but even when the event it is waiting for occurs, it still cannot execute
    • A process leaves the suspended state only through an explicit activation (unsuspend) operation

State transition diagram with process suspension

3. Process scheduling and concurrency

The process is the entity and foundation of concurrency; scheduling is the mechanism that implements concurrency.

Originally the object of scheduling was the process, but modern operating systems introduce the concept of a thread: the process becomes the unit of resource ownership and management, and the thread becomes the unit of scheduling. Although the scheduled object has changed, the scheduling strategies and methods are essentially unchanged, and some small operating systems have no notion of threads at all, so their scheduled objects are still processes.

Process scheduling means that the system, according to some policy, selects an appropriate process from all the ready processes and lets the processor run it.

Principles to consider when choosing a process scheduling algorithm:

    • Resource utilization (mainly CPU utilization)

CPU utilization = CPU busy (effective) time / total CPU time

Total CPU time = CPU busy time + CPU idle (waiting) time

    • Throughput: the number of jobs processed per unit of time
    • Fairness: ensure that every process can be scheduled and none starves
    • Response time: for an interactive process, the time interval from submitting a request to the processor until the system produces a response is called the response time.

Making the response time of interactive users as short as possible, or handling real-time tasks as quickly as possible, is an important index of scheduling performance in time-sharing and real-time systems.

    • Turnaround time: the time interval from when a job is submitted to the system until the job completes is called the turnaround time.

Scheduling algorithms:

    • First come, first served (FCFS)
    • Shortest job first (SJF)
    • Priority scheduling (preemptive or non-preemptive, static or dynamic priority)
    • Round-robin time-slice scheduling (RR)
    • Real operating systems combine the algorithms above, for example priority scheduling based on time slices, falling back to FCFS when priorities are equal. A minimal round-robin sketch follows this list.
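
To make round robin concrete, here is a minimal simulation sketch in C; the process names, burst times, and the 2-unit time quantum are illustrative assumptions, not values from the original text.

    /* Minimal round-robin scheduling simulation (illustrative values). */
    #include <stdio.h>

    #define NPROC   3
    #define QUANTUM 2                          /* length of one time slice */

    int main(void) {
        const char *name[NPROC] = {"P1", "P2", "P3"};
        int remaining[NPROC]    = {5, 3, 4};   /* CPU time still needed */
        int clock = 0, done = 0;

        while (done < NPROC) {
            for (int i = 0; i < NPROC; i++) {
                if (remaining[i] == 0)
                    continue;                  /* this process already finished */
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                printf("t=%2d: run %s for %d unit(s)\n", clock, name[i], slice);
                clock += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    printf("t=%2d: %s finished (turnaround time %d)\n",
                           clock, name[i], clock);
                    done++;
                }
            }
        }
        return 0;
    }

Each ready process runs for at most one quantum before the CPU moves on to the next, which is why round robin keeps response times short for interactive processes.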

4. The difference between a process and a thread

Process: a process is a program in execution and is the independent unit of resource allocation in the system. Processes are independent of each other. Threads of the same process share its data segment (global variables), but each thread has its own program counter and stack, which hold the context of that thread's execution.

Thread: a thread is part of a process and is the basic unit of CPU scheduling and dispatch. It is a unit smaller than a process that can run independently. A thread owns essentially no system resources of its own, only what is essential for running (a program counter, a set of registers, and a stack), but it shares all of the resources owned by its process with the other threads of that process. Each thread has its own stack.
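
As an illustration of this difference (a minimal sketch using POSIX threads; the thread count, loop count, and variable names are assumptions for the example), both threads below share the process's global variable, while each thread's local counter lives on its own stack:

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;                 /* data segment: shared by all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        int local = 0;                      /* on this thread's own stack */
        for (int i = 0; i < 1000; i++) {
            local++;
            pthread_mutex_lock(&lock);      /* shared data needs mutual exclusion */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        printf("thread %ld: local = %d\n", (long)(size_t)arg, local);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1);
        pthread_create(&t2, NULL, worker, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);   /* 2000 */
        return 0;
    }

Each thread prints local = 1000 from its private stack, but shared_counter reaches 2000 because the global variable is shared by both threads of the process (compile with -pthread).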

5. How processes communicate
  1. Pipe: a pipe is a half-duplex communication mechanism; data can flow in only one direction, and it can be used only between processes with a kinship relationship, which usually means a parent-child relationship (a minimal pipe sketch follows this list).
  2. Named pipe (FIFO): a named pipe is also half-duplex, but it allows communication between unrelated processes.
  3. Message queue: a message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the limitations of signals, which carry little information, and of pipes, which carry only unformatted byte streams and have a limited buffer size.
  4. Shared memory: shared memory maps a region of memory so that it can be accessed by other processes. It is created by one process but can be accessed by many. Shared memory is the fastest form of IPC and was designed precisely because the other IPC mechanisms are inefficient. It is often used together with other mechanisms, such as semaphores, to achieve synchronization as well as communication between processes.
  5. Socket: sockets are also an inter-process communication mechanism; unlike the others, they can be used for communication between processes on different machines.
  6. Signal: a signal is a relatively sophisticated communication mechanism used to notify the receiving process that some event has occurred.
  7. Semaphore: a semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism that prevents one process from accessing a shared resource while another is using it, so it is mainly a means of synchronization between processes and between threads of the same process.
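
As a concrete example of the first mechanism (a minimal sketch; the message text is an assumption for illustration), a parent process writes into a pipe and its child reads from it:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                           /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == 0) {                      /* child: reads from the pipe */
            close(fd[1]);
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            close(fd[0]);
            _exit(0);
        }
        close(fd[0]);                        /* parent: writes into the pipe */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }

The data flows in one direction only (here from parent to child), and the pipe works only because the child inherited the descriptors from its parent, which is the kinship restriction described in item 1.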
6. Thread Synchronization

Issues that thread synchronization must address:

    1. The order in which events occur: for example, event B must happen only after event A has happened
    2. Shared resource access: access must be mutually exclusive, so only one thread accesses the resource at a time
    3. The producer-consumer problem

How to achieve thread synchronization (a minimal producer-consumer sketch follows this list):

    1. Mutex (semaphore, mutex variable)
    2. Read/write lock
    3. Condition variable
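
To show how a mutex and a condition variable cooperate on the producer-consumer problem, here is a minimal one-slot sketch (the item count and the single-slot buffer are assumptions chosen to keep the example short):

    #include <pthread.h>
    #include <stdio.h>

    #define ITEMS 5

    int slot;                                /* one-slot "buffer" */
    int full = 0;                            /* 1 while the slot holds an item */
    pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    void *producer(void *arg) {
        (void)arg;
        for (int i = 1; i <= ITEMS; i++) {
            pthread_mutex_lock(&mtx);
            while (full)                     /* wait until the slot is empty */
                pthread_cond_wait(&cond, &mtx);
            slot = i;
            full = 1;
            printf("produced %d\n", i);
            pthread_cond_signal(&cond);      /* wake the consumer */
            pthread_mutex_unlock(&mtx);
        }
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            pthread_mutex_lock(&mtx);
            while (!full)                    /* wait until the slot is full */
                pthread_cond_wait(&cond, &mtx);
            printf("consumed %d\n", slot);
            full = 0;
            pthread_cond_signal(&cond);      /* wake the producer */
            pthread_mutex_unlock(&mtx);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The mutex enforces exclusive access to the shared slot, and the condition variable expresses the ordering constraint: an item can be consumed only after it has been produced.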
7. Deadlock

Deadlock: a situation in which two or more threads or processes wait for one another because they are competing for resources. Without outside intervention none of them can make progress; the system is then said to be in a deadlock state (or to have produced a deadlock), and the processes or threads that wait forever are called deadlocked processes or threads.

Causes of deadlocks:

(1) Insufficient system resources; (2) an unsuitable order of progress among the processes; (3) improper allocation of resources.

The four necessary conditions for deadlock: whenever the system is deadlocked, all four conditions hold; if any one of them is not satisfied, no deadlock can occur.

(1) Mutual exclusion: a resource can be used by only one process (or thread) at a time

(2) Hold and wait: when a process is blocked while requesting resources, it keeps holding the resources it has already acquired

(3) No preemption: resources a process has already acquired cannot be forcibly taken away before it has finished using them

(4) Circular wait: several processes form a circular chain in which each waits for a resource held by the next
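
For illustration (a minimal sketch, not taken from the original text), the two threads below acquire two mutexes in opposite orders; once each thread holds its first lock, the circular wait of condition (4) appears and the program hangs. Having every thread take the locks in the same global order breaks that condition and prevents the deadlock:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

    void *thread_a(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m1);
        sleep(1);                    /* make the bad interleaving likely */
        pthread_mutex_lock(&m2);     /* blocks forever: B already holds m2 */
        puts("A got both locks");
        pthread_mutex_unlock(&m2);
        pthread_mutex_unlock(&m1);
        return NULL;
    }

    void *thread_b(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m2);     /* opposite order -> circular wait    */
        sleep(1);                    /* fix: lock m1 before m2 here too    */
        pthread_mutex_lock(&m1);     /* blocks forever: A already holds m1 */
        puts("B got both locks");
        pthread_mutex_unlock(&m1);
        pthread_mutex_unlock(&m2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);       /* with the bad ordering, this never returns */
        pthread_join(b, NULL);
        return 0;
    }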

Four ways to handle deadlocks:

(1) Allocate resources carefully so that deadlock is avoided (avoidance strategy)

(2) Prevent deadlocks by breaking one of the four necessary conditions

There are two approaches: one is that when a process's resource request cannot be satisfied, it must release the resources it already holds; the other is based on priority: if the resource requested by a process is held by another process and the requesting process has the higher priority, the process holding the resource can be forced to give it up.

(3) Detect deadlocks and recover from them

(4) Ignore the problem (deadlock is rarely a practical concern on a PC)

Reference: http://www.cnblogs.com/simonhaninmelbourne/archive/2012/11/24/2786215.html

8. User mode and kernel mode of a process

A process's code generally includes both user-mode code and kernel-mode code. When the process executes user-mode code, it is said to be in user mode; when it enters the kernel because of a system call, an exception, or a peripheral interrupt and executes kernel code, it is said to be in kernel mode.

When the kernel creates a process, it also creates the process control block (PCB) and the process's kernel stack. A process therefore has both a user stack and a kernel stack: the kernel stack lies in the kernel address space, and the user stack lies in the user address space. The CPU's stack pointer register points to a different stack depending on the mode in which the process is running.
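
As a small illustration (a sketch, not from the original text), the program below runs in user mode until it issues the write() system call; that call traps into the kernel, which executes on the process's kernel stack and then returns control, and the return value, to user mode:

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "entering the kernel via write()\n";

        /* pure user-mode work: no kernel involvement, user stack in use */
        size_t len = strlen(msg);

        /* system call: the CPU switches to kernel mode, the kernel runs on
           this process's kernel stack to perform the I/O, then the process
           resumes in user mode with the call's return value */
        ssize_t n = write(STDOUT_FILENO, msg, len);

        return n == (ssize_t)len ? 0 : 1;
    }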

Reasons to set two stacks for a process:

    • Security: if there were only one stack, a user could modify the stack contents to break through kernel protection
    • Kernel code and data are shared by all processes; without a separate kernel stack for each process, the kernel could not maintain a separate execution context for each process while running that shared code
