RHCA Study Notes: RH442 Unit 8, Processes and Scheduling

Source: Internet
Author: User
Tags: valgrind

UNIT 8 Processes and the Scheduler

Learning goals:
A. The relationship between the CPU cache and service time.
B. Analyzing how an application uses the CPU cache (CPU utilization vs. CPU cache hit ratio).
C. Preemption (priority-based preemption).
D. Scheduling and sorting processes by priority.
E. Monitoring the performance of CPUs and processes.

8.1 Characterizing process states

View process status: ps axo pid,comm,stat --sort=-stat

TASK_INTERRUPTIBLE (interruptible sleep): the process is sleeping while it waits for some condition to be met (such as the completion of an I/O request). Once the condition is met, the kernel sets the process state back to runnable. A process in this state can also be woken early by a signal.
TASK_UNINTERRUPTIBLE (uninterruptible sleep): used when a process must wait without interference, typically for an event that should occur quickly. A process in this state does not respond to signals.
TASK_RUNNING (runnable): the process is executable; it is either executing on a CPU or waiting in the run queue.
TASK_STOPPED (stopped): the process has stopped executing or has been suspended; it is neither running nor runnable.
TASK_ZOMBIE (zombie): the process has exited, but its PID still exists so that the parent process can be informed of its exit status. The parent releases all of the process's remaining resources (including the PID) once it has collected that status.

Note: every process is in exactly one of these five states at any time.

Related terms:
Process: an executing program together with the resources it holds (code segment, data segment, address space, open files, signals, and one or more threads).
Thread: the active object within a process. Each thread has its own program counter, stack, and set of registers.

State transitions (text version of the diagram):
- The parent process creates a new child process, which enters TASK_RUNNING (ready but not running).
- The scheduler puts a ready process into execution (TASK_RUNNING, running) according to the scheduling algorithm.
- A running process may be preempted by a higher-priority process, returning it to the ready state.
- To wait for a specific event, a running process sleeps on a wait queue (TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE).
- When the awaited event occurs, the process is woken and placed back on the run queue.
- When execution completes, the process exits to TASK_ZOMBIE; its PID remains until the parent process releases it.


(Process state transition diagram, described in text above.)

8.2 Getting ready to run

How a program runs:
A. The program must first read its data into the CPU cache.
   - Cache hit ratio: the fraction of the data the CPU accesses that is already in the cache.
   - Cache miss ratio: the fraction of the data the CPU accesses that is not in the cache.
B. Moving data from memory to the cache:
   - Cache line fill: on a cache miss, the data is read from disk into main memory and then from main memory into the CPU cache.
C. Moving data from the cache to memory:
   - Write-through: on a write hit, the new content is written to both the cache and main memory, so the two always stay consistent.
   - Write-back: writes update only the cache; a modified line is written back to main memory later, when it is evicted.
D. Read/write cache coherence: cache snooping over the high-speed interconnect between CPUs in SMP/NUMA architectures keeps the per-CPU caches consistent.

8.3 Types of CPU cache

A. The effect of the CPU cache type on service time. Terms:
   - Address mapping: how main-memory blocks are mapped to cache blocks. Because the cache is much smaller than main memory, several main-memory blocks map onto each cache block.
   - The usual mapping schemes are direct mapped, fully associative, and set associative.
B. Multi-level caches (L1, L2, and sometimes L3). L1 may hold only instructions, only data, or both; L2 and L3 are often shared by multiple CPUs.
C. Viewing cache information:
   getconf -a | grep -i cache
   x86info -c
   Cache information also appears in /var/log/dmesg.

8.4 Locality of reference

A. Cache unit stride: an application that reads and writes data sequentially can reuse data already loaded into the cache, reducing service time.
B. Reading data non-sequentially gains nothing from the cache; some processor instructions bypass the cache entirely.
C. Analyzing cache usage:
   valgrind --tool=cachegrind program_name
   cachegrind reports statistics for the I1, D1, and L2 caches; a program runs considerably slower under valgrind.

8.5 Improving locality of reference
1. Manually optimize the code.
   Determine whether the working data structures fit the cache:
   A. Unroll loops.
   B. Restrict the scope of if/loop structures.
2. Use compiler optimization options.
   A. Compilers are conservative by default.
   B. Optimization is a compromise between compile time, run time, and code size. For example, with gcc:
      -O1  reduce code size and execution time
      -O2  reduce the space required at run time and increase speed
      -O3  additionally inline functions and reallocate registers

8.6 Multitasking and the run queue

A. Each CPU has two run queues: active and expired.
B. A process joins the active queue when its state is TASK_RUNNING.
C. The first process in the active queue, which is sorted by priority, is placed on the CPU; the current process runs until it is preempted.
D. After a process is preempted, it is moved to the expired queue.
E. When no processes remain in the active queue, the active and expired queues are swapped (so that timeslices can be recalculated).

Related commands:
grep 'CONFIG.*SMP' /boot/config-*       check the kernel's SMP configuration
grep CONFIG_NR_CPUS /boot/config-*      see the maximum number of CPUs supported

8.7 Preempting the current process

A. Standard preemption rules; a process is preempted when:
   a. the CPU receives a hardware interrupt;
   b. the process waits on an I/O request;
   c. the process voluntarily gives up the CPU by calling sched_yield();
   d. the scheduling algorithm decides the process should be preempted.
   Note: Linux provides the sched_yield() system call as a mechanism for explicitly giving processor time to other waiting processes.
B. View a process's scheduling policy and priority:
   chrt -p PID
   ps axo pid,comm,rtprio,policy
   top
C. The init process starts with the SCHED_OTHER policy. Every process inherits the scheduling policy and priority of its parent when it is created.
View the init process's scheduling policy and priority:
chrt -p `pidof init`   (or: chrt -p 1)

8.8 Sorting the run queue

A. Each process carries a scheduling policy and a priority:
   a. Static priority (1-99): the real-time policies SCHED_FIFO and SCHED_RR.
   b. Static priority 0 with dynamic priority (100-139): the normal policies SCHED_OTHER and SCHED_BATCH.
B. SCHED_FIFO: first in, first out.
   Note: not timeslice-based. Once a process at this level is runnable, it keeps running until it is preempted or explicitly releases the processor.
   a. A simple policy: only the standard preemption rules apply.
   b. When requeued, the process is placed at the front of the queue for its priority.
C. SCHED_RR:
   a. Roughly the same as SCHED_FIFO, except that it is timeslice-based. Note: a SCHED_RR process runs until it exhausts its pre-allocated timeslice.
   b. Higher-priority processes are allocated longer timeslices.
   c. When a process exhausts its timeslice, it is preempted.
   d. When requeued, the process is placed at the back of the queue for its priority.
D. SCHED_OTHER:
   a. Each time the process is preempted, a new priority value is calculated for it.
   b. Priority values range from 100 to 139 (corresponding to nice values -20 to +19).
PS:
a. Real-time priorities range from 0 to MAX_RT_PRIO - 1. By default MAX_RT_PRIO is 100, so real-time priorities are 0 to 99. The nice values of SCHED_OTHER processes share this value space above the real-time range, from MAX_RT_PRIO to MAX_RT_PRIO + 40; that is, by default the nice values -20 to +19 map directly onto 100 to 139.
b. SCHED_FIFO and SCHED_RR implement static priorities, and the kernel does not compute dynamic priorities for real-time processes.
This guarantees that a real-time process at a given priority can always preempt a process at a lower priority.

8.9 The SCHED_OTHER scheduling algorithm

A. A process's priority changes in the following situations:
   a. When two processes have the same priority, they are round-robined every 20 seconds to prevent CPU starvation.
   b. When a process consumes too much CPU time, its priority is penalized by 5 when it is preempted.
B. The time an interactive task spends waiting for I/O requests:
   a. The scheduling algorithm tracks the time a process spends waiting for I/O requests and computes its average sleep time.
   b. A higher average sleep time indicates an interactive process:
      (1) an interactive process is reinserted into the active queue;
      (2) otherwise, the process's priority is penalized by 5 and it is moved to the expired queue.

8.10 Tuning scheduler policy

A. SCHED_FIFO:  chrt -f [1-99] /path/to/prog ARGUMENTS
B. SCHED_RR:    chrt -r [1-99] /path/to/prog ARGUMENTS
C. SCHED_OTHER: nice, renice

8.11 Viewing CPU performance data

A. Load average: the average length of the run queue.
   a. Only tasks in the TASK_RUNNING and TASK_UNINTERRUPTIBLE states are counted.
   Related commands: sar -q 1 2; top; w; uptime
B. CPU utilization:
   mpstat 1 2
   sar -P ALL 1 2
   iostat -c 1 2
   cat /proc/stat
   View the CPU utilization of a specific thread with ps:
   ps -amo user,pid,tid,psr,pcpu,pri,vsz,rss,stat,time,comm

