Solaris 2.4 Multithreaded Programming Guide, 4: Operating System Programming

Source: Internet
Author: User
Sender: McCartney (coolcat), board: Unix
Mailing site: BBS shuimu Tsinghua station (Sun May 17 16:31:05 1998)
 
4. Operating System Programming
This chapter discusses how multithreaded programming interacts with the operating system and how the operating system has changed to support multithreading.

Process creation: changes made for multithreading
Alarms, interval timers, and profiling
Nonlocal jumps: setjmp(3C) and longjmp(3C)
Resource limits
LWPs and scheduling classes
Extending traditional signals
I/O issues
 
4.1 Process creation: changes made for multithreading
 
4.1.1 Duplicating parent threads

fork(2)
With the fork(2) and fork1(2) functions you can choose between duplicating all of the parent's threads in the child, or duplicating only the thread that called fork.
The fork() function duplicates the address space along with all of the threads (and LWPs) in the child. This is useful, for example, when the child process never calls exec(2) but does use its copy of the parent's address space.
To see why duplicating every thread matters, consider a thread in the parent process, other than the one that called fork(), that holds a mutex lock. The mutex is duplicated into the child process, but the thread that would unlock it is not. So any thread in the child that tries to lock the mutex waits forever. To avoid this situation, use fork() to duplicate all of the threads in the process.
Note: when one thread calls fork(), threads that are blocked in an interruptible system call return EINTR.
fork1(2)
The fork1(2) function duplicates the complete address space in the child, but duplicates only the calling thread. This is useful when the child process calls exec() immediately after the fork; in that case the child has no need for a duplicate of any thread other than the one that called fork1(2).
Do not call any library functions between fork1() and exec(): a library function might use a lock that was held by some other thread in the parent at the time of the fork.
 
* Cautions for fork(2) and fork1(2)
For both fork() and fork1(), be careful about global state after the call.
For example, if one thread is reading a file serially and another thread in the process successfully calls fork(), each process then has a thread reading the same file. Because the file pointer is shared between the two processes, the parent gets some of the data and the child gets the rest.
Also for both fork() and fork1(), beware of locks that end up used by both the parent and the child. This can happen only when the memory holding the lock is shared between the processes (mapped with MAP_SHARED through mmap(2)).
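As a sketch of the fork-then-exec pattern recommended above: the helper below (the name `spawn_and_wait` is invented for illustration) uses POSIX fork(2), whose duplicate-only-the-calling-thread semantics match Solaris fork1(), and has the child call exec immediately, doing no library work in between.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork, have the child exec immediately, and return the child's exit
 * status.  On Solaris, fork1() has the same "duplicate only the calling
 * thread" semantics that POSIX fork() has here. */
int spawn_and_wait(const char *msg) {
    pid_t pid = fork();
    if (pid == -1)
        return -1;
    if (pid == 0) {
        /* Child: exec at once; between fork and exec, avoid library
         * calls that might need locks held by other parent threads. */
        execlp("echo", "echo", msg, (char *)0);
        _exit(127);            /* reached only if exec fails */
    }
    int status;
    waitpid(pid, &status, 0);  /* parent: reap the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The `_exit()` on the failure path avoids running stdio cleanup handlers in the child's copy of the address space.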
 
vfork(2)
vfork(2) is like fork1() in that only the calling thread is copied into the child process.
Note: the child must not change memory before it calls exec(2). Remember that vfork() hands the parent's address space over to the child; the parent regains its address space after the child calls exec() or exits. It is essential that the child not change the state of the parent.
For example, it is dangerous to create new threads between vfork() and exec().
 
4.1.2 Executing files and terminating processes

exec(2) and exit(2)
exec(2) and exit(2) behave as in single-threaded processes, except that they destroy all the threads in the address space. Both calls block until all of the execution resources (and so all active threads) are destroyed.
When exec() rebuilds the process, it creates a single LWP, and the process starts executing the program from this initial thread. As usual, if the initial thread returns, it calls exit() and the whole process is destroyed.
When all of the threads in a process exit, the process itself exits with a status of zero.
 
4.2 Alarms, interval timers, and profiling

Each LWP has a unique real-time interval timer and alarm that a thread bound to the LWP can use. The timer or alarm delivers one signal to the thread when it expires.
Each LWP also has a virtual-time or profiling interval timer that a thread bound to the LWP can use. When such a timer expires, either SIGVTALRM or SIGPROF, as appropriate, is sent to the LWP that owns the timer.
You can profile each LWP with profil(2), giving each LWP its own buffer or sharing one buffer between LWPs. The profiling data is updated at each clock tick during LWP user time. The profiling state is inherited when an LWP is created.
 
4.3 Nonlocal jumps: setjmp(3C) and longjmp(3C)

The scope of setjmp() and longjmp() is limited to one thread, which is fine most of the time. It does mean, however, that a thread that handles a signal can perform a longjmp() only if the matching setjmp() was executed in the same thread.
 
4.4 Resource limits

Resource limits are set on the process as a whole; the resource use of every thread in the process counts toward them. When a thread exceeds a soft resource limit, it is sent the appropriate signal. The total resource use of the process is available through getrusage(3B).
 
4.5 LWPs and scheduling classes

The Solaris kernel has three classes of process scheduling. The highest-priority class is realtime (RT). Next is system; the system class cannot be applied to a user process. The lowest-priority class is timeshare (TS), which is also the default.
Scheduling class is maintained for each LWP. When a process is created, the initial LWP inherits the scheduling class and priority of the creating LWP. As more LWPs are created to run unbound threads, they also inherit this scheduling class and priority. All unbound threads in a process therefore have the same scheduling class and priority.
Each scheduling class maps the priority of an LWP to an overall dispatching priority, according to the configured priority scheme of that class.
Bound threads have the scheduling class and priority of the LWPs they are bound to. Each bound thread in a process can have a unique scheduling class and priority that is visible to the kernel; the system schedules bound threads by their LWPs.
The scheduling class is set with priocntl(2). Its first two arguments determine whether just the calling LWP, or all the LWPs of one or more processes, are affected. The third argument is a command, which can be one of the following.
· PC_GETCID -- get the class ID and class attributes of a specific class, given its name
· PC_GETCLINFO -- get the class name and class attributes, given a class ID
· PC_GETPARMS -- get the class identifier and the class-specific scheduling parameters of a process, an LWP within a process, or a group of processes
· PC_SETPARMS -- set the class identifier and the class-specific scheduling parameters of a process, an LWP within a process, or a group of processes
The use of priocntl() is restricted to bound threads. To affect the priority of an unbound thread, use thr_setprio(3T).
 
4.5.1 Timeshare scheduling

Timeshare scheduling distributes the processing resource fairly among processes. Other parts of the kernel can monopolize the processor only for short intervals, so users do not perceive a lengthened response time.
A priocntl(2) call sets the nice(2) level of one or more processes; it affects the nice level of all the timeshare-class LWPs in the process. The nice level supported for an ordinary user's process ranges from 0 to +20, and for a superuser's process from -20 to +20. The lower the value, the higher the priority.
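To make the nice-level arithmetic concrete, here is a small sketch using the POSIX nice(2) interface (the helper name `raise_nice` is invented for illustration). Raising the nice value, that is, lowering priority, needs no privilege, which is why the example only moves in that direction; lowering the value requires superuser privilege, as the text says.

```c
#include <errno.h>
#include <unistd.h>

/* Ask to raise the nice level by `incr` and return the change that was
 * actually applied.  nice() returns the NEW nice value, so the delta is
 * computed from two calls; because -1 is a legal nice value, errors must
 * be detected through errno rather than the return value alone. */
int raise_nice(int incr) {
    errno = 0;
    int before = nice(0);            /* nice(0) just reads the level */
    if (before == -1 && errno != 0)
        return -1;
    int after = nice(incr);          /* unprivileged: may only raise */
    if (after == -1 && errno != 0)
        return -1;
    return after - before;
}
```

Note that the change is one-way for ordinary users: once raised, the level cannot be lowered again without privilege.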
The dispatch priority of a timeshare LWP is calculated from its recent CPU use and its nice level. The nice level indicates the relative priority of the processes to the timeshare scheduler. An LWP with a greater nice value gets a smaller, but nonzero, share of the processor. An LWP that has received a larger amount of processing is given lower priority than one that has received less.
 
4.5.2 Realtime scheduling

The realtime class can be applied to a whole process or to one or more threads in a process; this requires superuser privilege. Unlike the nice(2) level of the timeshare class, LWPs placed in the realtime class can be assigned priorities individually or jointly. A priocntl(2) call affects the attributes of all the realtime LWPs in the process.
The scheduler always dispatches the highest-priority realtime LWP. A higher-priority realtime LWP, when it becomes runnable, preempts a lower-priority one, and a preempted LWP is placed at the head of the queue for its priority level. A realtime (RT) LWP retains control of the processor until it blocks, is preempted, or has its realtime priority changed. LWPs in the RT class have absolute priority over processes in the TS class.
A new LWP inherits the scheduling class of the parent process or LWP. An RT-class LWP inherits the parent's time slice, whether finite or infinite. An LWP with a finite time slice runs until it terminates, blocks (for example, to wait for an I/O event), is preempted by a higher-priority runnable realtime process, or uses up its time slice. An LWP with an infinite time slice never stops for the fourth reason (its time slice cannot run out).
 
4.5.3 LWP scheduling and thread binding
The thread library automatically adjusts the number of LWPs in the pool used to run unbound threads. Its goals are:
· To prevent the program from being blocked by a lack of unblocked LWPs.
For example, if there are more runnable unbound threads than LWPs and all of the active threads block in the kernel in indefinite waits, the process cannot make progress until one of the waiting threads returns.
· To use the LWPs efficiently.
For example, if the thread library created one LWP for every thread, many LWPs would usually be idle, and the operating system would be overloaded by the resource demands of the unused LWPs.
Keep in mind that it is the LWPs that are time-sliced, not the threads. This means that when there is only one LWP, there is no time slicing within the process: a thread keeps the LWP until it blocks (through synchronization with other threads), is preempted, or finishes running.
You can assign priorities to threads with thr_setprio(3T): a lower-priority unbound thread is assigned to an LWP only when no higher-priority unbound thread is runnable. Bound threads, of course, do not take part in this competition, because they have their own LWPs.
Bind threads to LWPs when you need precise control over what is scheduled. Such control is not possible when many unbound threads compete for an LWP.
Realtime threads are useful for responding quickly to external events. Consider a thread used for mouse tracking: it must respond to mouse events promptly. Binding the thread to an LWP guarantees that an LWP is available when it is needed; giving that LWP the realtime scheduling class lets it respond to mouse events quickly.
 
4.5.4 SIGWAITING: creating LWPs for waiting threads
The thread library usually ensures that there are enough LWPs in its pool for the program to make progress. When all of the LWPs in the process are blocked in indefinite waits, the operating system sends the process a new signal, SIGWAITING. This signal is handled by the thread library. If the process contains a thread that is waiting to run, a new LWP is created and assigned to an appropriate waiting thread for execution.
The SIGWAITING mechanism does not ensure that an additional LWP is created when one or more threads are compute bound and another thread becomes runnable: a compute-bound thread can prevent multiple runnable threads from starting because of a shortage of LWPs. This can be remedied by calling thr_setconcurrency(3T) or by using the THR_NEW_LWP flag in thr_create(3T).
 
4.5.5 Aging LWPs

When the number of active threads is reduced, some LWPs in the pool are no longer needed. When there are more LWPs than active threads, the thread library destroys the unneeded ones. The library ages the LWPs: when an LWP goes unused for a "long" time (currently set to five minutes), it is deleted.
 
4.6 Extending traditional signals

The traditional UNIX signal model is extended to threads in a fairly natural way. The traditional signal mechanisms within a process (signal(2), sigaction(2), and so on) are preserved.
When a signal handler is marked SIG_DFL or SIG_IGN, the action taken on receipt of the signal (exit, core dump, stop, continue, or ignore) is carried out on the receiving process as a whole, affecting all of its threads. For more information on these signals, see signal(5).
Each thread has its own signal mask. This lets a thread block some signals while it uses memory or other state that is also used by a signal handler. All threads in a process share the set of signal handlers established by sigaction(2) and its variants, as usual.
A thread in one process cannot send a signal to a specific thread in another process. A signal sent by kill(2) or sigsend(2) is addressed to a process as a whole and is handled by any one of the receptive threads in that process.
Unbound threads cannot use alternate signal stacks. A bound thread can, because its state is associated with the execution resource. An alternate signal stack must be declared and enabled through sigaction(2) and sigaltstack(2), respectively.
An application can implement per-thread signal handlers on top of the per-process signal handlers. One way is to use the identity of the calling thread as an index into a table of per-thread handlers; the process-wide signal handler dispatches through this table. Note that there is no thread zero.
Signals fall into two categories: traps and exceptions (synchronous signals), and interrupts (asynchronous signals).
As in traditional UNIX, if a signal is already pending (waiting to be received), additional occurrences of that signal have no further effect: a pending signal is represented by a bit, not by a counter.
As in single-threaded processes, when a thread receives a signal while blocked in a system call, the thread returns early, either with the EINTR error code or, in the case of I/O calls, with fewer bytes transferred than requested.
Of particular importance to multithreaded programming is the effect of signals on cond_wait(3T). This call normally returns because of a cond_signal(3T) or cond_broadcast(3T) from another thread, but if the waiting thread receives a UNIX signal, the call returns with the EINTR error code. See "Interrupted waits on condition variables" for more information.
 
4.6.1 Synchronous signals

Traps (such as SIGILL, SIGFPE, and SIGSEGV) result from a thread's own operations, such as dividing by zero or explicitly sending itself a signal. A trap is handled only by the thread that caused it. Several threads in a process can generate and handle the same type of trap simultaneously.
Extending the idea of signals to individual threads is easy for synchronous signals: the thread that caused the problem handles the signal. However, if the thread has not arranged to deal with the problem, for example by establishing a signal handler with sigaction(2), the whole process is terminated.
Because a synchronous signal usually means something is seriously wrong with the whole process, not just with one thread, terminating the process is often a reasonable choice.
 
4.6.2 Asynchronous signals

Interrupts (such as SIGINT and SIGIO) are asynchronous with respect to any thread; they result from some action outside the process. They can be signals sent explicitly by threads in other processes, or they can represent external actions such as a user typing Control-C. Handling asynchronous signals is considerably more complicated than handling synchronous ones.
An interrupt can be handled by any thread whose signal mask allows it. When more than one thread is able to receive the interrupt, only one of them is chosen.
When several occurrences of the same signal are sent to a process, each occurrence can be handled by a different thread, as long as threads whose signal masks permit the signal are available. When all threads have the signal masked, the signal stays pending until some thread unmasks it and handles it.
 
4.6.3 Continuation semantics

Continuation semantics are the traditional way of dealing with signals. The idea is that when the signal handler returns, control resumes where the program was when it was interrupted. This is well suited to the asynchronous signals of single-threaded processes, as shown in Example 4-1. It is also used as the exception-handling mechanism in some programming languages, such as PL/1.
 
Code example 4-1 Continuation semantics
#include <stdio.h>
#include <signal.h>

unsigned int nestcount;

unsigned int A(int i, int j) {
    nestcount++;
    if (i == 0)
        return (j + 1);
    else if (j == 0)
        return (A(i - 1, 1));
    else
        return (A(i - 1, A(i, j - 1)));
}

void sig(int i) {
    printf("nestcount = %d\n", nestcount);
}

main() {
    sigset(SIGINT, sig);
    A(4, 4);
}
 
4.6.4 New operations on signals

Several new signal operations for multithreaded programming have been added to the operating system.

thr_sigsetmask(3T)
thr_sigsetmask(3T) does for a thread what sigprocmask(2) does for a process: it sets the thread's signal mask. When a new thread is created, its initial signal mask is inherited from its creator.
Avoid using sigprocmask() in multithreaded programs, because it sets the signal mask of the LWP, and the set of threads affected by that can change over time.
Unlike sigprocmask(), thr_sigsetmask() is a relatively inexpensive call, because it does not usually generate a system call.
thr_kill(3T)
thr_kill(3T) is the thread analog of kill(2): it sends a signal to a specific thread.
This is, of course, different from sending a signal to a process. A signal sent to a process can be handled by any of its threads; a signal sent with thr_kill() can be handled only by the specified thread.
Note that you can use thr_kill() only to send signals to threads in the current process. This is because the thread identifier is local in scope; it is not possible to name a thread in any other process.
sigwait(2)
sigwait(2) causes the calling thread to wait until any of the signals identified by its set argument is delivered to the thread. While the thread is waiting, the signals identified by set are unmasked, but the original mask is restored when the call returns.
Use sigwait() to separate threads from asynchronous signals. You can create one thread that listens for asynchronous signals while all the other threads are created with those signals masked.
When a signal is delivered, sigwait() clears the pending signal and returns its number. Many threads can call sigwait() at the same time, but for each signal that is received only one thread returns.
With sigwait() you can treat asynchronous signals synchronously: a thread that calls sigwait() does nothing until a signal arrives, whereupon the call returns. By ensuring that all threads (including the caller of sigwait()) mask the signal, you can be sure that it is handled safely by only the intended thread.
Usually you use sigwait() to create one or more threads that wait for signals. Because sigwait() can retrieve even masked signals, be sure that no other thread is receptive to such a signal; otherwise the signal could be delivered to that thread by accident. When a signal arrives, a thread returns from sigwait(), handles the signal, and then waits for further signals. The signal-handling thread is not restricted to using Async-Safe functions; it can synchronize with other threads in the usual way. (The Async-Safe category is defined in "MT interface safety levels".)
---------------------------------------
Note: sigwait() should never be used with synchronous signals
---------------------------------------
sigtimedwait(2)
sigtimedwait(2) is similar to sigwait(2), except that it fails and returns an error when no signal is received within the indicated amount of time.
 
4.6.5 Thread-directed signals

The UNIX signal mechanism is extended with the notion of thread-directed signals. These are just like ordinary asynchronous signals, except that they are sent to a particular thread instead of to a process.
Waiting for signals in a separate thread can be safer and easier than installing a signal handler and processing them there. A better way to deal with asynchronous signals is to handle them synchronously: by calling sigwait(2), a thread can wait until a signal occurs.

Code example 4-2 Asynchronous signals and sigwait(2)
#include <stdio.h>
#include <signal.h>
#include <thread.h>

main() {
    sigset_t set;
    void runA(void);

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    thr_sigsetmask(SIG_BLOCK, &set, NULL);
    thr_create(NULL, 0, runA, NULL, THR_DETACHED, NULL);
    while (1) {
        sigwait(&set);
        printf("nestcount = %d\n", nestcount);
    }
}

void runA() {
    A(4, 4);
    exit(0);
}
This example modifies Example 4-1: the main routine masks the SIGINT signal, creates a child thread that calls the function A of the previous example, and then calls sigwait() to handle SIGINT.
Note that the signal is masked in the compute thread because the compute thread inherits its signal mask from the main thread. The main thread is protected from SIGINT while, and only while, it is not blocked inside sigwait().
Also note that there is never any danger of having system calls interrupted when you use sigwait().
 
4.6.6 Completion semantics

Another way to deal with signals is with completion semantics. Use completion semantics when a signal indicates that something so catastrophic has happened that there is no reason for the current block of code to continue. The remainder of the block is abandoned and the signal handler runs instead; in other words, the handler completes the block.
In Example 4-3, the block in question is the body of the then part of the if statement. The call to sigsetjmp(3C) saves the current register state of the program in jbuf and returns 0, whereupon the block is executed.
 
Code example 4-3 Completion semantics
#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

sigjmp_buf jbuf;

void mult_divide(void) {
    int a, b, c, d;
    void problem();

    sigset(SIGFPE, problem);
    while (1) {
        if (sigsetjmp(jbuf, 1) == 0) {
            printf("Three numbers, please:\n");
            scanf("%d %d %d", &a, &b, &c);
            d = a * b / c;
            printf("%d * %d / %d = %d\n", a, b, c, d);
        }
    }
}

void problem(int sig) {
    printf("Couldn't deal with them, try again\n");
    siglongjmp(jbuf, 1);
}
When a SIGFPE (floating-point exception) occurs, the signal handler is invoked.
The handler calls siglongjmp(3C), which restores the register state saved in jbuf, causing the program to return from sigsetjmp() again (the saved registers include the program counter and the stack pointer).
This time, however, sigsetjmp() returns the second argument of siglongjmp(), which is 1. Notice that the block is skipped over, to be executed only during the next iteration of the while loop.
Note that you can use sigsetjmp(3C) and siglongjmp(3C) in multithreaded programs, but be careful that a thread never does a siglongjmp() using the result of another thread's sigsetjmp(). Also, sigsetjmp() and siglongjmp() save and restore the signal mask, while setjmp(3C) and longjmp(3C) do not. Use sigsetjmp() and siglongjmp() when you work with signal handlers.
Completion semantics are often used to deal with exceptions; in particular, the Ada language uses this model.
--------------------------------------
Note: sigwait(2) should never be used with synchronous signals.
--------------------------------------
 
4.6.7 Signal handlers and async safety

There is a concept similar to thread safety: async safety. Async-safe operations are guaranteed not to interfere with operations that they interrupt.
The problem of async safety arises when the actions of a signal handler conflict with the operation being interrupted. For example, suppose a program is in the middle of a call to printf(3S) and a signal occurs whose handler also calls printf(): the output of the two printf() calls would be intertwined. To avoid this, the handler must not call printf() when printf() might be interrupted by a signal.
This problem cannot be solved with synchronization primitives, because any attempted synchronization between the signal handler and the interrupted operation leads immediately to deadlock. Suppose that printf() protects itself with a mutex lock, and that a thread that is inside a call to printf(), and so holds the lock, is interrupted by a signal. If the handler (called by the same thread that is still inside printf()) also calls printf(), the thread already holding the lock tries to take it again, and instant deadlock results.
To avoid interference between the handler and the interrupted operation, either ensure that the situation never arises (for example, by masking the relevant signals at critical moments) or use only async-safe operations inside signal handlers.
Because setting a thread's signal mask is an inexpensive user-level operation, you can inexpensively make sections of code async-safe.
 
4.6.8 Interrupted waits on condition variables

When a signal is delivered to a thread while the thread is waiting on a condition variable, the old convention (assuming that the process is not killed) is that the interrupted call returns EINTR.
The ideal new condition would be that when cond_wait(3T) or cond_timedwait(3T) returns, the mutex lock has been reacquired.
This is what Solaris threads do: when a thread blocked in cond_wait() or cond_timedwait() receives an unmasked, caught signal, the signal handler is invoked and the call returns EINTR with the mutex locked.
This means that the mutex lock is held inside the signal handler, because the handler may have to clean up after the thread. See Example 4-4.
Code example 4-4 Condition variables and interrupted waits
int sig_catcher() {
    sigset_t set;
    void hdlr();

    mutex_lock(&mut);

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    thr_sigsetmask(SIG_UNBLOCK, &set, 0);

    if (cond_wait(&cond, &mut) == EINTR) {
        /* signal occurred and lock is held */
        cleanup();
        mutex_unlock(&mut);
        return (0);
    }
    normal_processing();
    mutex_unlock(&mut);
    return (1);
}

void hdlr() {
    /* lock is held in the handler */
    ...
}
Assume that the SIGINT signal is blocked at the entry to sig_catcher() and that hdlr() has been established (with a call to sigaction(2)) as the handler for SIGINT.
When an unmasked, caught instance of SIGINT is delivered to the thread while it is blocked in cond_wait(), the thread first reacquires the lock on the mutex, then calls hdlr(), and then returns EINTR from cond_wait().
Note that specifying the SA_RESTART flag in sigaction() has no effect here: cond_wait(3T) is not a system call and is not automatically restarted. A thread blocked in cond_wait() always returns EINTR when a caught signal arrives.
 
4.7 I/O issues

One of the advantages of multithreaded programming is its I/O performance. The traditional UNIX API gave the programmer little assistance in this area: you either used the facilities of the file system or bypassed the file system entirely.
This section shows how threads give you more flexibility through I/O concurrency and multibuffering. It also discusses the similarities and differences between synchronous I/O (with threads) and asynchronous I/O (with or without threads).
 
4.7.1 I/O as a remote procedure call

In the traditional UNIX model, I/O appears to be synchronous, as if you were placing a remote procedure call (RPC) to the I/O device. Once the call returns, the I/O has completed, or at least it appears to have completed (a write request, for example, might merely move the data into a buffer in the operating system).
The advantage of this model is that it is easy to understand, because programmers are very familiar with the concept of procedure calls.
An alternative approach, not found in traditional UNIX systems, is the asynchronous model, in which an I/O request merely starts an operation; the program must later discover whether the operation has completed.
This approach is not as simple as the synchronous model, but it has the advantage of allowing concurrent I/O and processing even in traditional, single-threaded processes.
 
4.7.2 Tamed asynchrony

You can get most of the benefits of asynchronous I/O by using synchronous I/O in a multithreaded program. With asynchronous I/O you would issue a request and later check whether it has completed; instead, you can have a separate thread perform the I/O synchronously. The main thread can then check, perhaps by calling thr_join(3T), whether the operation has completed.
 
4.7.3 Asynchronous I/O

In most situations there is no need for asynchronous I/O, since its effects can be achieved with threads, each doing synchronous I/O. In a few situations, however, threads cannot achieve what asynchronous I/O can.
The most straightforward example is writing to a tape drive so that the drive streams. Streaming keeps the data flowing to the tape in a continuous stream, which prevents the drive from stopping while it is running at high speed.
To achieve this, the tape driver in the kernel must queue a write request while it responds to the interrupt indicating that the previous write operation has completed.
Threads cannot guarantee that asynchronous writes are queued in order, because the order in which threads execute is indeterminate. Specifying the order of writes to the tape, for instance, is not possible with threads.
 
* Asynchronous I/O operations
#include <sys/asynch.h>

int aioread(int fildes, char *bufp, int bufs, off_t offset,
    int whence, aio_result_t *resultp);
int aiowrite(int fildes, const char *bufp, int bufs,
    off_t offset, int whence, aio_result_t *resultp);
aio_result_t *aiowait(const struct timeval *timeout);
int aiocancel(aio_result_t *resultp);

aioread(3) and aiowrite(3) have the same form as pread(2) and pwrite(2), except for the additional last argument. Calling aioread() or aiowrite() initiates (or queues) an I/O operation. The call returns without blocking, and the status of the call is returned in the structure pointed to by resultp. This structure has type aio_result_t and contains:
int aio_return;
int aio_errno;
When a call fails immediately, the failure code can be found in aio_errno. Otherwise, this field contains AIO_INPROGRESS, which means that the operation has been successfully queued.
You can wait for an outstanding asynchronous I/O operation to complete by calling aiowait(3). It returns a pointer to the aio_result_t structure that was supplied with the original aioread(3) or aiowrite(3) call. This time aio_return contains the value that read(2) or write(2) would have returned if one of them had been called instead of the asynchronous version; aio_errno contains the error code, if any.
aiowait() takes a timeout argument, which indicates how long the caller is willing to wait. As usual, a NULL pointer means that the caller is willing to wait indefinitely, and a pointer to a structure containing a zero value means that the caller is unwilling to wait at all.
You might start an asynchronous I/O operation, do some work, and then call aiowait() to wait for the request to complete. Or you can use SIGIO to be notified asynchronously when the operation completes.
Finally, a pending asynchronous I/O operation can be cancelled by calling aiocancel(). This routine takes the address of the result area as an argument; the result area identifies which operation is to be cancelled.
 
4.7.4 Shared I/O and the new I/O system calls

When multiple threads perform I/O operations concurrently with the same file descriptor, you might discover that the traditional UNIX I/O interface is not thread-safe. The problem occurs with nonsequential I/O, which uses the lseek(2) system call to set the file offset for subsequent read(2) and write(2) calls. When two or more threads use lseek(2) on the same file descriptor, a conflict results.
To avoid this conflict, use the new pread(2) and pwrite(2) system calls.
#include <sys/types.h>
#include <unistd.h>

ssize_t pread(int fildes, void *buf, size_t nbyte, off_t offset);
ssize_t pwrite(int fildes, void *buf, size_t nbyte, off_t offset);

These behave just like read(2) and write(2), except for the additional argument, the file offset. With this argument you specify the offset without using lseek(2), so multiple threads can operate safely on the same file descriptor.
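A short sketch exercising pread(2)/pwrite(2) as just described: each call carries its own offset, so no shared file pointer (and no lseek(2)) is involved. The path argument and the helper name are illustrative only.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write two regions at explicit offsets, then read one back without
 * ever touching the shared file offset.  Returns 0 on success. */
int positioned_io_demo(const char *path) {
    char in[6] = {0};
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd == -1)
        return -1;
    /* Two writes at explicit offsets; order does not matter because
     * neither call moves a shared file pointer. */
    pwrite(fd, "world", 5, 6);
    pwrite(fd, "hello", 5, 0);
    /* Read back the second region without seeking. */
    ssize_t n = pread(fd, in, 5, 6);
    close(fd);
    unlink(path);
    return (n == 5 && strcmp(in, "world") == 0) ? 0 : -1;
}
```

Two threads could issue these calls concurrently on one descriptor with no lseek(2) race.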
 
4.7.5 Alternatives to getc(3S) and putc(3S)

A problem occurs with standard I/O. Programmers are accustomed to the getc(3S) and putc(3S) routines being very quick -- they are implemented as macros. Because of this, they can be used in the inner loop of a program with no worry about efficiency.
However, when they are made thread-safe they suddenly become more expensive: they now require (at least) two internal subroutine calls, to lock and unlock a mutex. To get around this problem, alternative versions of these routines are supplied: getc_unlocked(3S) and putc_unlocked(3S).
These routines do not acquire the mutex lock, so they are as quick as the original, nonthread-safe versions of getc(3S) and putc(3S). To use them in a thread-safe way, however, you must surround the series of uses with explicit calls to flockfile(3S) and funlockfile(3S), which lock and unlock the mutex that protects the standard I/O stream. These two calls are placed outside the loop, while the getc_unlocked() or putc_unlocked() calls go inside the loop.
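The lock-outside-the-loop pattern can be sketched as follows (the helper name `count_lines` is invented for illustration):

```c
#include <stdio.h>

/* Count the newlines in a stream using the lock-once pattern: take the
 * stream's mutex with flockfile(3S), run the fast unlocked variant
 * getc_unlocked(3S) in the inner loop, then release the lock. */
long count_lines(FILE *fp) {
    long lines = 0;
    int c;

    flockfile(fp);                        /* one lock for the whole loop */
    while ((c = getc_unlocked(fp)) != EOF)
        if (c == '\n')
            lines++;
    funlockfile(fp);                      /* matching unlock */
    return lines;
}
```

While the lock is held, other threads that use the locked getc(3S)/putc(3S) on the same stream simply block until funlockfile() runs, so the stream's contents are never interleaved.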
 
--
※Source: · bbs.net.tsinghua.edu.cn · [from: sys11.cic. Tsing]
