Reprint: http://blog.csdn.net/a_ran/article/details/43759729
Context switching in thread scheduling
What is a context switch?
If the main thread is the only runnable thread, it is almost never scheduled out. If, on the other hand, there are more runnable threads than CPUs, the operating system will eventually preempt a running thread so that another thread can use the CPU. This causes a context switch: the execution context of the currently running thread is saved, and the execution context of the newly scheduled thread is restored as the current context.
Switching contexts has a cost, and thread scheduling requires access to data structures shared by the operating system and the JVM. The application, the operating system, and the JVM all share the same set of CPUs, so the more CPU cycles are consumed in JVM and operating system code, the fewer cycles are left for the application. The overhead of a context switch is not limited to the JVM and the operating system, however. When a new thread is switched in, the data it needs is probably not in the processor's local cache, so a context switch causes a burst of cache misses, and the thread runs more slowly the first time it is scheduled. This is why the scheduler gives each runnable thread a minimum execution quantum even when many other threads are waiting: it amortizes the cost of the context switch over a longer stretch of uninterrupted execution, improving overall throughput at the expense of responsiveness.
When a thread blocks while waiting for a contended lock, the JVM usually suspends it and allows it to be swapped out. If threads block frequently, they cannot use their full scheduling quantum. The more blocking a program does (blocking I/O, waiting for contended locks, waiting on condition variables), the more context switches it incurs compared with a CPU-bound program, which increases scheduling overhead and reduces throughput. (Non-blocking algorithms also help reduce context switches; see the sketch below.)
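To make that last point concrete, here is a minimal sketch, not a benchmark (the thread and iteration counts are arbitrary choices): a counter updated with a C11 atomic never suspends the calling thread, whereas the mutex-protected version can block under contention and be switched out.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITER    1000000

static atomic_long lock_free_counter;                 /* updated without ever blocking      */
static long locked_counter;                           /* protected by a contended mutex     */
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *lock_free_worker(void *arg)
{
    for (int i = 0; i < NITER; i++)
        atomic_fetch_add(&lock_free_counter, 1);      /* no suspension, no forced switch    */
    return NULL;
}

static void *locked_worker(void *arg)
{
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&counter_lock);            /* may block and be swapped out       */
        locked_counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, lock_free_worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, locked_worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    printf("lock-free: %ld, locked: %ld\n", (long)lock_free_counter, locked_counter);
    return 0;
}

Note that on Linux an uncontended pthread mutex does not enter the kernel either; the difference only shows up when the lock is actually contended and threads have to be suspended.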
The actual cost of a context switch varies from platform to platform, but a good rule of thumb is that on most general-purpose processors a context switch costs the equivalent of 5,000-10,000 clock cycles, i.e. a few microseconds.
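One rough way to check that figure on your own machine is a pipe ping-pong between a parent and a child process: each round trip forces at least two switches. This is only a sketch under simplifying assumptions: it also measures pipe and system-call overhead, and the number is only meaningful when both processes are pinned to the same CPU (for example with taskset -c 0).

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <time.h>

#define ROUNDS 100000

int main(void)
{
    int ping[2], pong[2];
    char buf = 'x';

    if (pipe(ping) < 0 || pipe(pong) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {
        /* child: echo every byte back, forcing a switch per round trip */
        for (int i = 0; i < ROUNDS; i++) {
            read(ping[0], &buf, 1);
            write(pong[1], &buf, 1);
        }
        _exit(0);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ROUNDS; i++) {
        write(ping[1], &buf, 1);
        read(pong[0], &buf, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    waitpid(pid, NULL, 0);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    /* each round trip contains at least two switches plus pipe overhead */
    printf("about %.0f ns per round trip (>= 2 context switches)\n", ns / ROUNDS);
    return 0;
}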
The vmstat command on UNIX systems and the perfmon tool on Windows can report the number of context switches and the percentage of execution time spent in the kernel. A high kernel occupancy (over 10%) usually indicates frequent scheduling activity, often caused by blocking I/O or lock contention.
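On Linux you can also get the counts per process from inside the program: getrusage() reports how many voluntary context switches (the process blocked) and involuntary ones (the process was preempted) have accumulated. A small sketch:

#include <stdio.h>
#include <sys/resource.h>
#include <sched.h>

/* Print how many voluntary (blocking) and involuntary (preemption)
 * context switches the calling process has accumulated so far. */
static void report_switches(const char *tag)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("%s: voluntary=%ld involuntary=%ld\n",
               tag, ru.ru_nvcsw, ru.ru_nivcsw);
}

int main(void)
{
    report_switches("start");
    for (int i = 0; i < 1000; i++)
        sched_yield();              /* each yield may be counted as a voluntary switch */
    report_switches("after sched_yield loop");
    return 0;
}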
The Linux kernel provides three scheduling policies:
1. SCHED_OTHER: the default time-sharing policy.
2. SCHED_FIFO: a real-time, first-in-first-out policy. Once a SCHED_FIFO task gets the CPU it keeps running until a task with a higher priority arrives or it gives up the CPU itself.
3. SCHED_RR: a real-time, round-robin policy. When a task's time slice is exhausted, it is given a new time slice and placed at the end of the ready queue, which guarantees that all SCHED_RR tasks of the same priority get scheduled.

Setting thread priorities on Linux
First, the following two functions return the highest and lowest priority a thread can be given under a policy; the policy argument is one of the three macros above:
int sched_get_priority_max(int policy);
int sched_get_priority_min(int policy);
SCHED_OTHER does not support priorities (its only static priority is 0). SCHED_FIFO and SCHED_RR do: their priorities range from 1 (lowest) to 99 (highest), and the larger the number, the higher the priority.
The priority is set and read through the following two functions:
int pthread_attr_setschedparam(pthread_attr_t *attr, const struct sched_param *param);
int pthread_attr_getschedparam(const pthread_attr_t *attr, struct sched_param *param);

param.sched_priority = 51;   /* set the priority */
When the system creates a thread, its default scheduling policy is SCHED_OTHER. To change a thread's scheduling policy, use the following function:
int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
The param argument above uses the following data structure:
struct sched_param {
    int __sched_priority;    /* the thread priority to set; use the portable name sched_priority in code */
};
The following test program shows which priorities the local system supports for each policy:
#include <stdio.h>
#include <pthread.h>
#include <sched.h>
#include <assert.h>

static int get_thread_policy(pthread_attr_t *attr)
{
    int policy;
    int rs = pthread_attr_getschedpolicy(attr, &policy);
    assert(rs == 0);
    switch (policy) {
    case SCHED_FIFO:
        printf("policy = SCHED_FIFO\n");
        break;
    case SCHED_RR:
        printf("policy = SCHED_RR\n");
        break;
    case SCHED_OTHER:
        printf("policy = SCHED_OTHER\n");
        break;
    default:
        printf("policy = UNKNOWN\n");
        break;
    }
    return policy;
}

static void show_thread_priority(pthread_attr_t *attr, int policy)
{
    int priority = sched_get_priority_max(policy);
    assert(priority != -1);
    printf("max_priority = %d\n", priority);
    priority = sched_get_priority_min(policy);
    assert(priority != -1);
    printf("min_priority = %d\n", priority);
}

static int get_thread_priority(pthread_attr_t *attr)
{
    struct sched_param param;
    int rs = pthread_attr_getschedparam(attr, &param);
    assert(rs == 0);
    printf("priority = %d\n", param.sched_priority);
    return param.sched_priority;
}

static void set_thread_policy(pthread_attr_t *attr, int policy)
{
    int rs = pthread_attr_setschedpolicy(attr, policy);
    assert(rs == 0);
    get_thread_policy(attr);
}

int main(void)
{
    pthread_attr_t attr;
    int rs;

    rs = pthread_attr_init(&attr);
    assert(rs == 0);

    int policy = get_thread_policy(&attr);
    printf("Show current configuration of priority\n");
    show_thread_priority(&attr, policy);
    printf("Show SCHED_FIFO of priority\n");
    show_thread_priority(&attr, SCHED_FIFO);
    printf("Show SCHED_RR of priority\n");
    show_thread_priority(&attr, SCHED_RR);
    printf("Show priority of the current thread\n");
    get_thread_priority(&attr);

    printf("Set thread policy\n");
    printf("Set SCHED_FIFO policy\n");
    set_thread_policy(&attr, SCHED_FIFO);
    printf("Set SCHED_RR policy\n");
    set_thread_policy(&attr, SCHED_RR);
    printf("Restore current policy\n");
    set_thread_policy(&attr, policy);

    rs = pthread_attr_destroy(&attr);
    assert(rs == 0);
    return 0;
}
Here is the output of the test program:
policy = SCHED_OTHER
Show current configuration of priority
max_priority = 0
min_priority = 0
Show SCHED_FIFO of priority
max_priority = 99
min_priority = 1
Show SCHED_RR of priority
max_priority = 99
min_priority = 1
Show priority of the current thread
priority = 0
Set thread policy
Set SCHED_FIFO policy
policy = SCHED_FIFO
Set SCHED_RR policy
policy = SCHED_RR
Restore current policy
policy = SCHED_OTHER
Next we test two of these policies, SCHED_OTHER and SCHED_RR, together with the question of priority: does a higher priority guarantee that a thread runs first?
The test program below creates three threads: one keeps the default SCHED_OTHER policy it was created with, and the other two have their scheduling policy set to SCHED_RR. My Linux kernel version is 2.6.31. SCHED_RR schedules threads by time slice: once a thread's time slice is used up, it stops running no matter how high its priority is and returns to the ready queue to wait for its next slice. How long does a time slice last? In chapter 7 of Understanding the Linux Kernel, on process scheduling, the answer is that Linux takes an empirical approach: choose a slice as long as possible while still keeping response time good. The book gives no concrete value, and it may depend on the CPU (or on there being several CPUs). The sketch below shows one way to ask the kernel for the actual SCHED_RR quantum; the full test program follows it.
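A minimal sketch (it must be run as root to switch policies; the priority value 51 matches the one used in the test program but is otherwise an arbitrary choice): the calling process switches itself to SCHED_RR and asks the kernel for its round-robin quantum via sched_rr_get_interval().

#include <stdio.h>
#include <sched.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 51 };
    struct timespec quantum;

    /* Switch the calling process to SCHED_RR (needs root / CAP_SYS_NICE). */
    if (sched_setscheduler(0, SCHED_RR, &param) != 0) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }

    /* Ask the kernel how long the SCHED_RR time slice actually is. */
    if (sched_rr_get_interval(0, &quantum) == 0)
        printf("SCHED_RR quantum: %ld.%09ld s\n",
               (long)quantum.tv_sec, quantum.tv_nsec);
    return 0;
}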
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <pthread.h>

void *thread1(void *arg)
{
    sleep(1);
    int i, j;
    int policy;
    struct sched_param param;

    pthread_getschedparam(pthread_self(), &policy, &param);
    if (policy == SCHED_OTHER)
        printf("SCHED_OTHER\n");
    if (policy == SCHED_RR)
        printf("SCHED_RR 1\n");
    if (policy == SCHED_FIFO)
        printf("SCHED_FIFO\n");

    for (i = 1; i < 10; i++) {
        for (j = 1; j < 5000000; j++) {
            /* busy loop to burn CPU time */
        }
        printf("Thread 1\n");
    }
    printf("Pthread 1 exit\n");
    return NULL;
}

void *thread2(void *arg)
{
    sleep(1);
    int i, j;
    int policy;
    struct sched_param param;

    pthread_getschedparam(pthread_self(), &policy, &param);
    if (policy == SCHED_OTHER)
        printf("SCHED_OTHER\n");
    if (policy == SCHED_RR)
        printf("SCHED_RR\n");
    if (policy == SCHED_FIFO)
        printf("SCHED_FIFO\n");

    for (i = 1; i < 10; i++) {
        for (j = 1; j < 5000000; j++) {
            /* busy loop */
        }
        printf("Thread 2\n");
    }
    printf("Pthread 2 exit\n");
    return NULL;
}

void *thread3(void *arg)
{
    sleep(1);
    int i, j;
    int policy;
    struct sched_param param;

    pthread_getschedparam(pthread_self(), &policy, &param);
    if (policy == SCHED_OTHER)
        printf("SCHED_OTHER\n");
    if (policy == SCHED_RR)
        printf("SCHED_RR\n");
    if (policy == SCHED_FIFO)
        printf("SCHED_FIFO\n");

    for (i = 1; i < 10; i++) {
        for (j = 1; j < 5000000; j++) {
            /* busy loop */
        }
        printf("Thread 3\n");
    }
    printf("Pthread 3 exit\n");
    return NULL;
}

int main(void)
{
    int i = getuid();
    if (i == 0)
        printf("The current user is root\n");
    else
        printf("The current user is not root\n");

    pthread_t ppid1, ppid2, ppid3;
    struct sched_param param;
    pthread_attr_t attr, attr1, attr2;

    pthread_attr_init(&attr1);
    pthread_attr_init(&attr);
    pthread_attr_init(&attr2);

    param.sched_priority = 51;
    pthread_attr_setschedpolicy(&attr2, SCHED_RR);
    pthread_attr_setschedparam(&attr2, &param);
    /* without this call the new thread inherits the creator's policy and ignores attr2 */
    pthread_attr_setinheritsched(&attr2, PTHREAD_EXPLICIT_SCHED);

    param.sched_priority = 21;
    pthread_attr_setschedpolicy(&attr1, SCHED_RR);
    pthread_attr_setschedparam(&attr1, &param);
    pthread_attr_setinheritsched(&attr1, PTHREAD_EXPLICIT_SCHED);

    pthread_create(&ppid3, &attr, thread3, NULL);
    pthread_create(&ppid2, &attr1, thread2, NULL);
    pthread_create(&ppid1, &attr2, thread1, NULL);

    pthread_join(ppid3, NULL);
    pthread_join(ppid2, NULL);
    pthread_join(ppid1, NULL);

    pthread_attr_destroy(&attr2);
    pthread_attr_destroy(&attr1);
    return 0;
}
The following is the result of one run of the program:
sudo ./prio_test
The current user is root
SCHED_OTHER
SCHED_RR
SCHED_RR 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Pthread 1 exit
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Pthread 2 exit
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Pthread 3 exit
Here we can see that because thread 3's scheduling policy is SCHED_OTHER while threads 1 and 2 use SCHED_RR, thread 3 is preempted by threads 1 and 2. And because thread 1's priority (51) is higher than thread 2's (21), thread 1 runs before thread 2, although part of thread 2's code still ran before thread 1.
I used to think that a higher-priority thread was bound to run first. That understanding is one-sided, and on an SMP machine the uncertainty only increases.
In ordinary process scheduling, the CPU computes the time slice from the process priority, so a higher-priority process is not necessarily guaranteed to run first; compared with a lower-priority process, it usually just gets a longer time slice. If you really need one thread to run before another, you have to use thread-synchronization mechanisms such as semaphores or condition variables, as in the sketch below, rather than relying on priority alone.
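For example, a minimal sketch using a POSIX semaphore (the thread names are made up for illustration): the second thread cannot pass sem_wait() until the first thread has posted, so the ordering holds regardless of scheduling policy or priority.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t first_done;          /* signalled when the "first" thread finishes its step */

static void *must_run_first(void *arg)
{
    printf("first thread does its work\n");
    sem_post(&first_done);        /* let the second thread proceed */
    return NULL;
}

static void *must_run_second(void *arg)
{
    sem_wait(&first_done);        /* blocks until the first thread has posted */
    printf("second thread runs only afterwards\n");
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    sem_init(&first_done, 0, 0);
    pthread_create(&t2, NULL, must_run_second, NULL);
    pthread_create(&t1, NULL, must_run_first, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&first_done);
    return 0;
}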
However, the run also shows that threads 1 and 2, whose policy is SCHED_RR, do preempt thread 3, whose policy is SCHED_OTHER. That is understandable, since SCHED_RR is a real-time scheduling policy.
A real-time process is replaced by another process only when one of the following events occurs:
(1) The process is preempted by another real-time process with a higher real-time priority.
(2) The process performs a blocking operation and goes to sleep.
(3) The process stops (enters the TASK_STOPPED or TASK_TRACED state) or is killed.
(4) The process voluntarily gives up the CPU by calling sched_yield().
(5) The process is a time-slice-based real-time process (SCHED_RR) and its time slice has run out.
For a time-slice-based real-time process, priority does not really decide which process runs next; it changes the length of the process's basic time slice. Round-robin, time-slice-based scheduling therefore cannot guarantee that the higher-priority process runs first.
Below is the result of another run:
sudo ./prio_test
The current user is root
SCHED_OTHER
SCHED_RR 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Thread 1
Pthread 1 exit
SCHED_RR
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Thread 2
Pthread 2 exit
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Thread 3
Pthread 3 exit
As you can see, the higher-priority thread is not guaranteed to run first every time.