Basic thread knowledge

Basic thread knowledge

1. What are the differences and connections between processes and threads?
Each process requires at least one thread.
A process consists of two parts: a process kernel object and an address space. A thread also consists of two parts: a thread kernel object, which the operating system uses to manage the thread, and a thread stack, which holds all of the function parameters and local variables the thread uses as it executes code.
A process is not active by itself. A process never executes anything; it is just a container for threads. A thread is always created in the context of a process, and its entire life cycle is spent inside that process.
If multiple threads run in a single process, they share one address space. The threads can execute the same code and operate on the same data, and they can also share kernel object handles, because the handle table belongs to the process rather than to each thread.
A process uses far more system resources than a thread. A thread has only one kernel object and one stack, and very little bookkeeping is kept for it, so it requires little memory. Therefore, you should generally try to solve programming problems by adding threads and avoid creating new processes, although many programs are better designed with multiple processes.

2. How to use the _beginthreadex function?
Its usage mirrors that of CreateThread, but the parameter and return types differ, so the values must be cast when calling it: the thread routine returns unsigned and uses the __stdcall calling convention, and the return value is a uintptr_t that must be cast to HANDLE before being passed to Win32 functions, as sketched below.
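A minimal sketch of creating a worker thread with _beginthreadex; the routine name WorkerThread and the value it receives are made up for illustration.

#include <windows.h>
#include <process.h>    // _beginthreadex
#include <cstdio>

// Thread routine for _beginthreadex: unsigned return type, __stdcall convention.
unsigned __stdcall WorkerThread(void* pArg)
{
    int n = *(int*)pArg;
    printf("worker received %d\n", n);
    return 0;                                // becomes the thread's exit code
}

int main()
{
    int value = 42;
    unsigned threadId = 0;
    // _beginthreadex returns a uintptr_t; cast it to HANDLE for the Win32 APIs.
    HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, WorkerThread, &value, 0, &threadId);
    if (hThread == NULL)
        return 1;
    WaitForSingleObject(hThread, INFINITE);  // wait for the worker to finish
    CloseHandle(hThread);                    // release the thread kernel object
    return 0;
}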

3. How to use the CreateThread function?
When CreateThread is called, the system creates a thread kernel object. This kernel object is not the thread itself but a small data structure the operating system uses to manage the thread. When you use this function, you must eventually call CloseHandle to close the thread handle once you no longer need to access the thread kernel object. Also, because some C/C++ run-time library functions can leak memory in threads created with CreateThread, you should avoid it in C/C++ code and prefer _beginthreadex.
Parameter description (a hedged usage sketch follows the list):
lpThreadAttributes: pass NULL to give the thread default security attributes. If you want child processes to be able to inherit the handle of this thread object, pass a SECURITY_ATTRIBUTES structure whose bInheritHandle member is initialized to TRUE.
dwStackSize: specifies how much address space the thread's stack gets. If it is not 0, the function reserves and commits that much memory for the thread's stack. If it is 0, CreateThread reserves a region whose size is taken from the /STACK linker switch information embedded in the .exe file.
lpStartAddress: the address of the thread function.
lpParameter: the parameter passed to the thread function.
dwCreationFlags: if 0, the thread is scheduled immediately after creation; if CREATE_SUSPENDED is passed, the system fully initializes the thread and then suspends it.
lpThreadId: receives the ID the system assigns to the new thread.
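A hedged sketch of these parameters in use; the thread routine ThreadFunc and the values passed are illustrative only.

#include <windows.h>
#include <cstdio>

// Thread routine for CreateThread: DWORD return value, WINAPI calling convention.
DWORD WINAPI ThreadFunc(LPVOID lpParameter)
{
    printf("thread %lu started\n", GetCurrentThreadId());
    return 0;                      // the thread's exit code
}

int main()
{
    DWORD threadId = 0;
    HANDLE hThread = CreateThread(
        NULL,          // lpThreadAttributes: default security, not inheritable
        0,             // dwStackSize: use the /STACK size from the .exe
        ThreadFunc,    // lpStartAddress
        NULL,          // lpParameter
        0,             // dwCreationFlags: schedulable immediately (not CREATE_SUSPENDED)
        &threadId);    // lpThreadId
    if (hThread == NULL)
        return 1;
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);          // the kernel object is freed once its usage count reaches zero
    return 0;
}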

4. How to terminate the running of a thread?
(1) Return from the thread function (recommended).
This is the only way to guarantee that all of the thread's resources are cleaned up correctly.
If the thread function returns, the following are ensured:
All C++ objects created in the thread function are destroyed correctly through their destructors.
The operating system correctly releases the memory used by the thread's stack.
The system sets the thread's exit code to the return value of the thread function.
The system decrements the usage count of the thread kernel object.
(2) Call the ExitThread function (not recommended).
This function terminates the thread and causes the operating system to clean up all operating-system resources used by the thread, but C/C++ resources (such as C++ class objects) are not destroyed.
(3) Call the TerminateThread function (avoid this).
TerminateThread can kill any thread, and the usage count of the thread's kernel object is decremented. TerminateThread is asynchronous: if you need to know for certain that the thread has terminated, you must call WaitForSingleObject or a similar function. When a thread dies by returning or by calling ExitThread, its stack is released as well; with TerminateThread, however, the system does not release the thread's stack until the process that owns the thread terminates.
(4) Terminate the process that contains the thread (avoid this).
Because the entire process is being shut down, all resources used by the process are cleaned up. It is as if TerminateThread were called on every remaining thread, which means proper application cleanup does not happen: C++ object destructors are not called and data is not flushed to disk.
Once a thread is no longer running, there is not much any other thread in the system can do with that thread's handle. However, other threads can call GetExitCodeThread to check whether the thread identified by hThread has terminated and, if it has, retrieve its exit code, as sketched below.
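A hedged sketch of polling a thread's status with GetExitCodeThread, assuming <windows.h> is included; hThread is a handle obtained from CreateThread or _beginthreadex, and the helper name is made up.

// Returns TRUE and fills *pExitCode if the thread has terminated,
// FALSE if it is still running or the query failed.
BOOL HasThreadFinished(HANDLE hThread, DWORD* pExitCode)
{
    DWORD code = 0;
    if (!GetExitCodeThread(hThread, &code))
        return FALSE;              // invalid handle or insufficient access
    if (code == STILL_ACTIVE)
        return FALSE;              // the thread has not terminated yet
    *pExitCode = code;             // return value of the thread function or ExitThread code
    return TRUE;
}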

5. Why not use the _beginthread and _endthread functions?
Compared with _beginthreadex, _beginthread has fewer parameters and more restrictions: the thread cannot be created suspended and its thread ID cannot be obtained. _endthread takes no parameter, so the thread's exit code is always 0. In addition, _endthread closes the thread handle internally, so once the thread has exited its handle can no longer be used reliably.

6. How to reference the kernel object of a process or thread?
HANDLE GetCurrentProcess();
HANDLE GetCurrentThread();
Both functions return a pseudo-handle to the kernel object of the calling process or calling thread. A pseudo-handle is only meaningful in the calling process or thread and cannot be used by other threads or processes. These functions do not create new entries in the process's handle table, and calling them has no effect on the usage count of the process or thread kernel object. If you call CloseHandle and pass a pseudo-handle as the parameter, CloseHandle ignores the call and returns FALSE.
DWORD GetCurrentProcessId();
DWORD GetCurrentThreadId();
These two functions enable the thread to query the unique ID of its process or its own unique ID.

7. How can I convert a pseudo handle to a real handle?
HANDLE hProcessFalse = NULL;
HANDLE hProcessTrue = NULL;
HANDLE hThreadFalse = NULL;
HANDLE hThreadTrue = NULL;

hProcessFalse = GetCurrentProcess();
hThreadFalse = GetCurrentThread();
Obtain a real thread handle:
DuplicateHandle(hProcessFalse, hThreadFalse, hProcessFalse, &hThreadTrue,
                0, FALSE, DUPLICATE_SAME_ACCESS);
Obtain a real process handle:
DuplicateHandle(hProcessFalse, hProcessFalse, hProcessFalse, &hProcessTrue,
                0, FALSE, DUPLICATE_SAME_ACCESS);
Because DuplicateHandle increments the usage count of the specified kernel object, when you are finished with the duplicated handle you should pass it to CloseHandle to decrement the object's usage count.

8. What is the maximum number of threads that can be created in a process?
The maximum number of threads depends on how much virtual memory is available to the process. By default, each thread gets 1 MB of stack space, so at most roughly 2,028 threads can be created in the default 2 GB user address space. If you reduce the default stack size, you can create more threads.

Thread Scheduling, priority, and affinity
1. How to pause and resume the running of threads?
The thread kernel object contains a value that specifies the thread's suspend count. When CreateProcess or CreateThread is called, the thread kernel object is created and its suspend count is initialized to 1; because thread initialization takes time, the thread must not be scheduled before the system is fully prepared. After the thread is fully initialized, CreateProcess or CreateThread checks whether the CREATE_SUSPENDED flag was passed. If it was, the function returns and the new thread is left in the suspended state. If it was not, the function decrements the thread's suspend count to 0. When a thread's suspend count is 0, the thread is schedulable unless it is waiting for something else to happen. Creating a thread in the suspended state lets you change the thread's environment (such as its priority) before it has had a chance to execute any code. Once the environment has been changed, you must make the thread schedulable. The method is as follows:
hThread = CreateThread(......, CREATE_SUSPENDED, ......);
Or
bCreate = CreateProcess(......, CREATE_SUSPENDED, ......, &procInfo);
if (bCreate != FALSE)
{
    hThread = procInfo.hThread;
}
......
ResumeThread(hThread);
CloseHandle(hThread);
If ResumeThread succeeds, it returns the thread's previous suspend count; otherwise it returns 0xFFFFFFFF.
A single thread can be suspended several times: if a thread is suspended three times, it must be resumed three times before it becomes schedulable. Besides creating a thread with CREATE_SUSPENDED, you can also suspend a running thread by calling SuspendThread; any thread can call this function to suspend another thread (as long as it has the thread's handle). A thread can suspend itself, but it cannot resume itself. Like ResumeThread, SuspendThread returns the thread's previous suspend count; a thread can be suspended at most MAXIMUM_SUSPEND_COUNT times. SuspendThread is asynchronous with respect to kernel-mode execution, but the thread's user-mode code does not execute until the thread is resumed. You must be careful when calling SuspendThread, because you cannot know what the thread is doing at the moment it is suspended. SuspendThread is safe only if you know exactly what the target thread is (or what it is doing) and you take strong measures to avoid the problems or deadlocks that suspending it can cause.

2. Can I pause and resume the process?
Windows has no concept of suspending or resuming a process, because processes are never scheduled to receive CPU time. However, Windows does allow one process to suspend all the threads in another process, provided that the suspending process is a debugger; in particular, it must call functions such as WaitForDebugEvent and ContinueDebugEvent. Because of race conditions, Windows provides no other way to suspend all the threads in a process.

3. How to use the Sleep function?
Sleep makes the calling thread unschedulable for roughly the specified number of milliseconds. Windows is not a real-time operating system: the thread will probably wake up around the requested time, but whether it does depends on what else is going on in the system.
You can call Sleep and pass INFINITE for the dwMilliseconds parameter, which tells the system never to schedule the thread again. This is not worth doing; it is better to let the thread exit so that its stack and kernel object can be reclaimed. You can also pass 0 to Sleep, which tells the system that the calling thread gives up the rest of its time slice and forces the system to schedule another thread. However, the system may reschedule the thread that just called Sleep; this happens when there are no other schedulable threads of the same priority. A small sketch follows.
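An illustrative sketch, assuming <windows.h> is included, of giving up the rest of the time slice while polling a made-up flag set by another thread:

volatile BOOL g_fWorkDone = FALSE;   // hypothetical flag written by another thread

void WaitForWork()
{
    while (g_fWorkDone == FALSE)
    {
        Sleep(0);    // give up the remainder of this time slice so that
                     // other threads of the same priority can run
    }
}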

4. How to switch to another thread?
The system provides the SwitchToThread function. When it is called, the system checks whether there is another thread that is starved for CPU time. If there is not, SwitchToThread returns immediately. If there is, SwitchToThread schedules that thread (whose priority may be lower than that of the caller); the starved thread gets to run for one time slice, after which the scheduler operates as usual. This function lets a thread that wants a resource force a lower-priority thread that currently holds the resource to give it up. If no other thread can be run when SwitchToThread is called, the function returns FALSE; otherwise it returns a nonzero value. Calling SwitchToThread is similar to calling Sleep(0); the difference is that SwitchToThread allows lower-priority threads to run, whereas Sleep(0) reschedules the calling thread immediately even if lower-priority threads are starved for CPU time.

5. How to get the thread running time?
(1) Obtain an approximate running time:
DWORD dwStartTime = 0;
DWORD dwEndTime = 0;
DWORD dwRunTime = 0;
dwStartTime = GetTickCount();
......
dwEndTime = GetTickCount();
dwRunTime = dwEndTime - dwStartTime;
(2) Call the GetThreadTimes function (a sketch follows):
Parameter description:
hThread: the thread handle.
lpCreationTime: the creation time, expressed in Greenwich Mean Time.
lpExitTime: the exit time, expressed in Greenwich Mean Time; if the thread is still running, the exit time is undefined.
lpKernelTime: the amount of CPU time the thread has spent executing operating-system code.
lpUserTime: the amount of CPU time the thread has spent executing application code.
GetProcessTimes is a similar function that applies to all the threads in a process (even threads that have already terminated): the kernel time it returns is the total time all of the process's threads have spent executing kernel code, and likewise for the user time. GetThreadTimes and GetProcessTimes do not work on Windows 98; on Windows 98 there is no reliable mechanism for determining how much CPU time a thread or process has consumed.
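A hedged sketch of reading a thread's kernel and user time with GetThreadTimes; the FILETIME values are in 100-nanosecond units, and the conversion helper below is made up for illustration.

#include <windows.h>
#include <cstdio>

// Convert a FILETIME (100-nanosecond units) to milliseconds.
static ULONGLONG FileTimeToMs(const FILETIME& ft)
{
    ULARGE_INTEGER v;
    v.LowPart = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart / 10000;       // 10,000 x 100 ns = 1 ms
}

void PrintThreadTimes(HANDLE hThread)
{
    FILETIME ftCreation, ftExit, ftKernel, ftUser;
    if (GetThreadTimes(hThread, &ftCreation, &ftExit, &ftKernel, &ftUser))
    {
        printf("kernel: %llu ms, user: %llu ms\n",
               FileTimeToMs(ftKernel), FileTimeToMs(ftUser));
    }
}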

6. What are the priority types of processes?
Priority class identifiers and descriptions:
Real-time (REALTIME_PRIORITY_CLASS): the threads must respond immediately to events and execute time-critical tasks; they run ahead of operating-system components.
High (HIGH_PRIORITY_CLASS): the threads must respond immediately to events and execute time-critical tasks.
Above normal (ABOVE_NORMAL_PRIORITY_CLASS): runs between the normal and high priority classes (Windows 2000).
Normal (NORMAL_PRIORITY_CLASS): no special scheduling requirements.
Below normal (BELOW_NORMAL_PRIORITY_CLASS): runs between the normal and idle priority classes (Windows 2000).
Idle (IDLE_PRIORITY_CLASS): runs when the system is idle.
Setting method:
BOOL SetPriorityClass(HANDLE hProcess, DWORD dwPriorityClass);
DWORD GetPriorityClass(HANDLE hProcess);
When you start a program from the command shell, its starting priority class is normal. If you start it with the start command, you can use a switch to set the application's initial priority class, for example:
C:\> start /low calc.exe
The start command also recognizes the /belownormal, /normal, /abovenormal, /high, and /realtime switches. A sketch of changing the priority class from code follows.
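A hedged sketch, assuming <windows.h> is included, of temporarily raising the calling process's priority class; the choice of HIGH_PRIORITY_CLASS is only an example.

// Remember the current priority class, raise it for time-critical work, then restore it.
DWORD dwOldClass = GetPriorityClass(GetCurrentProcess());
SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
// ... perform the time-critical work here ...
SetPriorityClass(GetCurrentProcess(), dwOldClass);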

7. What are the relative priorities of threads?
Relative priority identifiers and descriptions:
Time-critical (THREAD_PRIORITY_TIME_CRITICAL): the thread runs at priority 31 for the real-time priority class and at priority 15 for the other priority classes.
Highest (THREAD_PRIORITY_HIGHEST): the thread runs two levels above normal.
Above normal (THREAD_PRIORITY_ABOVE_NORMAL): the thread runs one level above normal.
Normal (THREAD_PRIORITY_NORMAL): the thread runs at the normal level for its process's priority class.
Below normal (THREAD_PRIORITY_BELOW_NORMAL): the thread runs one level below normal.
Lowest (THREAD_PRIORITY_LOWEST): the thread runs two levels below normal.
Idle (THREAD_PRIORITY_IDLE): the thread runs at priority 16 for the real-time priority class and at priority 1 for the other priority classes.
Setting method (a sketch follows):
BOOL SetThreadPriority(HANDLE hThread, int nPriority);
int GetThreadPriority(HANDLE hThread);
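A hedged sketch, assuming <windows.h> is included; hBackgroundThread is a made-up handle to a worker thread you already own.

// Run a background worker below normal so it does not compete with more important threads.
SetThreadPriority(hBackgroundThread, THREAD_PRIORITY_BELOW_NORMAL);
int nPriority = GetThreadPriority(hBackgroundThread);   // now THREAD_PRIORITY_BELOW_NORMAL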

8. How can we prevent the system from dynamically increasing the priority level of threads?
The system sometimes boosts a thread's priority level, for example to respond to window messages or I/O events such as disk reads. Also, when the system notices that a thread has been starved of CPU time for about 3 to 4 seconds, it dynamically boosts the starved thread's priority to 15 and lets it run for twice its normal quantum; when that time is up, the thread's priority immediately drops back to its base priority. The following functions can be used to control this behavior:
BOOL SetProcessPriorityBoost(HANDLE hProcess, BOOL bDisableBoost);
BOOL GetProcessPriorityBoost(HANDLE hProcess, PBOOL pbDisableBoost);
BOOL SetThreadPriorityBoost(HANDLE hThread, BOOL bDisableBoost);
BOOL GetThreadPriorityBoost(HANDLE hThread, PBOOL pbDisableBoost);
SetProcessPriorityBoost tells the system to enable or disable priority boosting for all the threads in a process, while SetThreadPriorityBoost enables or disables it for an individual thread. Windows 98 does not provide useful implementations of these four functions. A sketch follows.
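A hedged sketch, assuming <windows.h> is included, of disabling the boost for the calling thread and reading the setting back:

// Passing TRUE as the second argument disables dynamic priority boosting.
SetThreadPriorityBoost(GetCurrentThread(), TRUE);

BOOL bDisabled = FALSE;
GetThreadPriorityBoost(GetCurrentThread(), &bDisabled);   // bDisabled is now TRUE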

Thread Synchronization in user mode
1. Why is thread synchronization needed even for a single statement?
When programming in a high-level language, we tend to assume that a statement is the smallest atomic unit of access and that the CPU will not run another thread in the middle of it. This is wrong: even a very simple high-level statement may be compiled into several machine instructions. Therefore thread synchronization must still be considered, and no thread should modify a shared variable with a plain C statement, as the sketch below illustrates.
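A minimal sketch of the problem and of the interlocked fix; g_count and the loop bound are made up.

#include <windows.h>

LONG g_count = 0;                        // shared by several threads

DWORD WINAPI Worker(LPVOID lpParam)
{
    for (int i = 0; i < 100000; i++)
    {
        // g_count++;                    // NOT safe: the load, add, and store can interleave
        InterlockedIncrement(&g_count);  // safe: the increment is performed atomically
    }
    return 0;
}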

2. What are the interlocked functions?
(1) LONG InterlockedExchangeAdd(LPLONG Addend, LONG Increment);
Addend is the address of a LONG variable, and Increment is the value (which may be negative) to add to the variable that Addend points to. The point of this function is that the addition is performed as an atomic operation.
(2) LONG InterlockedExchange(LPLONG Target, LONG Value);
Replaces the value pointed to by the first parameter with the value of the second parameter. The return value is the original value.
(3) PVOID InterlockedExchangePointer(PVOID* Target, PVOID Value);
Replaces the value pointed to by the first parameter with the value of the second parameter. The return value is the original value.
(4) LONG InterlockedCompareExchange(LPLONG Destination, LONG Exchange, LONG Comperand);
If the value pointed to by the first parameter equals the third parameter, it is replaced with the second parameter. The return value is the original value.
(5) PVOID InterlockedCompareExchangePointer(PVOID* Destination, PVOID Exchange, PVOID Comperand);
If the value pointed to by the first parameter equals the third parameter, it is replaced with the second parameter. The return value is the original value.
A usage sketch follows.
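A hedged sketch that uses InterlockedCompareExchange to claim a one-shot initialization flag; the flag name and the work inside the branch are made up, and <windows.h> is assumed to be included.

LONG g_initState = 0;   // 0 = not initialized, 1 = initialized

void EnsureInitialized()
{
    // Atomically: if g_initState equals 0, set it to 1; the old value is returned.
    if (InterlockedCompareExchange(&g_initState, 1, 0) == 0)
    {
        // Only the first thread to get here performs the one-time setup.
        // ... initialization work goes here ...
    }
}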

3. Why shouldn't a spinlock be used on a single-CPU machine?
Example:
LONG g_lResourceInUse = FALSE;     // LONG so it can be passed to the interlocked functions
......
void ThreadFunc1()
{
    LONG lResourceInUse = FALSE;
    while (1)
    {
        // Atomically set the flag to TRUE and fetch its previous value.
        lResourceInUse = InterlockedExchange(&g_lResourceInUse, TRUE);
        if (lResourceInUse == FALSE)
        {
            break;     // the flag was FALSE, so this thread now owns the resource
        }
        Sleep(0);
    }
    ......             // access the shared resource
    InterlockedExchange(&g_lResourceInUse, FALSE);   // release the resource
}
First, spinlocks waste CPU time: the CPU must keep comparing two values until one of them is "miraculously" changed by another thread. On a single-CPU machine the spinning thread and the thread that owns the resource cannot run at the same time, so spinning only delays the owner from releasing the resource. In addition, all threads that use the spinlock should run at the same priority, and you should call SetProcessPriorityBoost or SetThreadPriorityBoost to disable dynamic priority boosting; otherwise the lower-priority thread that owns the resource may never get scheduled and so never release it.

4. How to use volatile to declare variables?
If you access the shared variable through its address, such as &g_resource, volatile is not needed: when a variable's address is passed to a function, the function must read the value from memory, and the optimizer cannot interfere. If you use the variable directly, however, you must declare it with the volatile type qualifier. It tells the compiler that the variable may be modified by something other than the application itself, such as the operating system, hardware, or a concurrently executing thread. The volatile qualifier tells the compiler not to optimize accesses to the variable and always to reload its value from memory; otherwise the compiler may keep the value in a CPU register and operate only on the register, so a thread spinning on the variable would loop forever and never wake up. A small sketch follows.
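An illustrative sketch of the difference, using a made-up flag that another thread eventually sets to TRUE (<windows.h> assumed):

volatile BOOL g_fFinished = FALSE;   // volatile: always re-read from memory

void WaitUntilFinished()
{
    while (g_fFinished == FALSE)     // without volatile, the compiler could cache
    {                                // g_fFinished in a register and spin forever
        Sleep(0);
    }
}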

5. How to use critical sections to synchronize threads?
If you need a small section of code to execute atomically, the simple interlocked functions are no longer enough; you must use a critical section. Be careful, however: it is easy to get stuck, because no timeout can be specified while waiting to enter a critical section. A critical section works by attaching a flag to the shared resource, much like the occupied/vacant sign on a toilet door. The flag is a CRITICAL_SECTION variable, and it must be initialized before any thread uses it; there are two ways to initialize it, the InitializeCriticalSection function and the InitializeCriticalSectionAndSpinCount function. In every thread function that uses the shared resource, call EnterCriticalSection (or TryEnterCriticalSection) before the protected code and LeaveCriticalSection after it. When no thread needs the shared resource any longer, call DeleteCriticalSection to clean up the flag. Example:
const int MAX_TIMES = 1000;
int g_intIndex = 0;
DWORD g_dwTimes[MAX_TIMES];
CRITICAL_SECTION g_cs;

void Init()
{
    ......
    InitializeCriticalSection(&g_cs);
    ......
}

DWORD WINAPI FirstThread(PVOID lpParam)
{
    while (g_intIndex < MAX_TIMES)
    {
        EnterCriticalSection(&g_cs);
        g_dwTimes[g_intIndex] = GetTickCount();
        g_intIndex++;
        LeaveCriticalSection(&g_cs);
    }
    return 0;
}

DWORD WINAPI SecondThread(PVOID lpParam)
{
    while (g_intIndex < MAX_TIMES)
    {
        EnterCriticalSection(&g_cs);
        g_intIndex++;
        g_dwTimes[g_intIndex - 1] = GetTickCount();
        LeaveCriticalSection(&g_cs);
    }
    return 0;
}

void Close()
{
    ......
    DeleteCriticalSection(&g_cs);
    ......
}
Note the following tips when using critical sections:
(1) Use one CRITICAL_SECTION variable per shared resource.
That way, while the current thread holds one resource, other threads can still use the other resources.
EnterCriticalSection(&g_cs);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_intArray[intLoop] = 0;
    g_uintArray[intLoop] = 0;
}
LeaveCriticalSection(&g_cs);
Changed to:
EnterCriticalSection(&g_csInt);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_intArray[intLoop] = 0;
}
LeaveCriticalSection(&g_csInt);
EnterCriticalSection(&g_csUint);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_uintArray[intLoop] = 0;
}
LeaveCriticalSection(&g_csUint);
(2) When accessing multiple resources at the same time, always request them in the same order.
This avoids deadlock; the order in which the critical sections are left does not matter.
Thread1:
EnterCriticalSection(&g_csInt);
EnterCriticalSection(&g_csUint);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_uintArray[intLoop] = g_intArray[intLoop];
}
LeaveCriticalSection(&g_csInt);
LeaveCriticalSection(&g_csUint);
Thread2:
EnterCriticalSection(&g_csUint);
EnterCriticalSection(&g_csInt);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_uintArray[intLoop] = g_intArray[intLoop];
}
LeaveCriticalSection(&g_csInt);
LeaveCriticalSection(&g_csUint);
Changed to:
Thread1:
EnterCriticalSection(&g_csInt);
EnterCriticalSection(&g_csUint);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_uintArray[intLoop] = g_intArray[intLoop];
}
LeaveCriticalSection(&g_csInt);
LeaveCriticalSection(&g_csUint);
Thread2:
EnterCriticalSection(&g_csInt);
EnterCriticalSection(&g_csUint);
for (int intLoop = 0; intLoop < 100; intLoop++)
{
    g_uintArray[intLoop] = g_intArray[intLoop];
}
LeaveCriticalSection(&g_csInt);
LeaveCriticalSection(&g_csUint);
(3) Do not hold a critical section for a long time.
EnterCriticalSection(&g_cs);
SendMessage(hWnd, WM_SOMEMSG, &g_s, 0);
LeaveCriticalSection(&g_cs);
Changed to:
EnterCriticalSection(&g_cs);
sTemp = g_s;
LeaveCriticalSection(&g_cs);
SendMessage(hWnd, WM_SOMEMSG, &sTemp, 0);

6. What is the difference between InitializeCriticalSection and InitializeCriticalSectionAndSpinCount?
InitializeCriticalSection has no return value (it returns VOID) and does not create the event kernel object up front, which saves system resources. However, if two or more threads contend for the critical section, the system must then create the necessary event kernel object and may be unable to, in which case EnterCriticalSection raises an EXCEPTION_INVALID_HANDLE exception. This error is rare. If you want to be prepared for it, you have two options. One is to use structured exception handling to trap the error; when it occurs, either skip accessing the resource protected by the critical section, or wait for some memory to become available and then call EnterCriticalSection again.
The other option is to use InitializeCriticalSectionAndSpinCount. Its second parameter, dwSpinCount, is the number of times the spin loop should iterate while the thread tries to acquire the resource before it goes into a wait; the value can be any number between 0 and 0x00FFFFFF. If the function is called on a single-processor machine, the parameter is ignored and treated as 0. When you create a critical section with InitializeCriticalSectionAndSpinCount and set the high bit of the dwSpinCount parameter, the function pre-allocates the event kernel object and associates it with the critical section during initialization; if the event cannot be created, the function returns FALSE, which your code can handle. If the event is created successfully, EnterCriticalSection will always work and never raise an exception. (Always pre-allocating the event kernel object wastes system resources, so pre-allocate it only when your code cannot tolerate EnterCriticalSection failing, when you are sure there will be contention, or when the process is expected to run in a very low-memory environment.) A sketch follows.
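A hedged sketch of initializing a critical section with a spin count; the spin count of 4000 is an arbitrary example, and the high bit is set as described above to request pre-allocation of the event kernel object.

CRITICAL_SECTION g_cs;

// Spin about 4000 times before waiting; setting the high bit of dwSpinCount
// asks the system to pre-allocate the event kernel object during initialization.
if (!InitializeCriticalSectionAndSpinCount(&g_cs, 0x80000000 | 4000))
{
    // The event kernel object could not be created (insufficient resources).
}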

7. What is the difference between TryEnterCriticalSection and EnterCriticalSection?
EnterCriticalSection may place the calling thread in a wait state, and the thread might not be rescheduled for a long time; in a poorly written application it might never get CPU time again. TryEnterCriticalSection never lets the calling thread wait: its return value indicates whether the calling thread gained access to the resource. If TryEnterCriticalSection finds that the resource is already being accessed by another thread, it returns FALSE; in all other cases it returns TRUE. With this function a thread can quickly check whether it can access a shared resource and, if it cannot, continue doing other work instead of waiting. If TryEnterCriticalSection returns TRUE, the CRITICAL_SECTION's member variables have been updated, so the thread must still call LeaveCriticalSection when it is done. Windows 98 has no usable implementation of TryEnterCriticalSection. A sketch follows.
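A hedged sketch of the non-blocking pattern; g_cs is assumed to be an initialized CRITICAL_SECTION, and the "other work" is a placeholder.

if (TryEnterCriticalSection(&g_cs))
{
    // This thread owns the critical section: touch the shared data, then release it.
    // ... access the shared resource ...
    LeaveCriticalSection(&g_cs);
}
else
{
    // The resource is busy: do some other work instead of blocking.
}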
