UNIX OS kernel structure report

Source: Internet
Author: User


1. Consider the following program:

main()
{
    int i;

    for (i = 0; i < 3; i++)
        fork();
}

How many processes does this program create? Draw the parent-child relationships as a process family tree.

Solution: Seven new processes are created, so eight run in total counting the original. Each pass through the loop doubles the number of processes (1 → 2 → 4 → 8), because every existing process executes fork() once per iteration. The process family tree, with Pn labelled by the loop index i at which it was created:

P0
├─ P1 (i=0)
│  ├─ P3 (i=1)
│  │  └─ P7 (i=2)
│  └─ P5 (i=2)
├─ P2 (i=1)
│  └─ P6 (i=2)
└─ P4 (i=2)

2. UNIX systems use a "least recently used" (LRU) policy to manage the data buffer pool. If the kernel instead used a FIFO policy to manage the pool, what would be the main functional differences in the buffer-allocation algorithm getblk?

Solution: getblk is the algorithm that allocates a buffer to a disk block. Whether the pool is managed with LRU or FIFO, the difference lies mainly in how buffers are taken from, and returned to, the free list.

With LRU, getblk always removes a buffer from the head of the free list. When a buffer is released, it is placed at the tail of the free list if its contents are valid, and at the head if its contents are invalid or "old". If getblk finds the requested block already on a hash queue and the buffer is free, it removes that buffer from the free list too, and on release the buffer again goes to the tail. Thus the buffer that has gone unused the longest is always the next to be reallocated, which respects processes' locality of reference and avoids repeatedly re-reading frequently used blocks.

With FIFO, the free list behaves as a simple first-in, first-out queue: getblk takes a buffer from the head, and after use the buffer is hung at the tail, regardless of how recently or how often its contents were used; a buffer's position in the queue never reflects reuse. If many processes are doing I/O, and one process calls getblk repeatedly for the same block, the buffers it uses frequently are recycled to other processes just as quickly as the ones it never touches. The result is frequent reallocation and re-reading of hot blocks, which makes the system inefficient.

3. Does a reference count of 2 on an entry in the system open-file table indicate that two processes share its read/write offset? Why?

Solution: Not necessarily.

When a Unix process opens a file, the kernel locates the file's in-core inode, checks the open permissions, and then allocates an entry in the system file table. The file table entry contains a pointer to the in-core inode of the open file, and an offset field recording where the kernel expects the next read or write to occur. The kernel also allocates an entry in the process's user file descriptor table and returns its index; the file descriptor thus refers to an entry in the global file table.

Each open call allocates a fresh entry in both the user file descriptor table and the system file table. In the in-core inode table, by contrast, there is exactly one entry per file, no matter how many times the file is opened.

A reference count of 2 on a file table entry means that two user file descriptor entries point to it; that is, two file descriptors share its read/write offset.

But the two descriptors need not belong to different processes. For example, if a process copies an existing descriptor with the dup system call, dup returns a new descriptor and increments the reference count of the corresponding file table entry. The new descriptor points to the same file table entry as the old one, whose reference count is now 2, yet only one process is involved.

4. In a directory structure with variable-length entries, each directory entry may have a different length. Using pseudocode, design brief algorithms for directory entry allocation (dir_get) and directory entry release (dir_release).

Solution:

#define MAXNAMLEN 255

struct direct {
    long  d_ino;                  /* inode number of this entry */
    short d_reclen;               /* total length of this record */
    short d_namlen;               /* length of the entry name */
    char  d_name[MAXNAMLEN + 1];  /* name string; +1 for the terminator */
};

struct direct *p = (struct direct *)malloc(sizeof(struct direct) * 100);

Algorithm: dir_get
Input: path name of the new entry (e.g. /lw/unix/works)
Output: success (new entry created) or failure
{
    if (a path name is given)
        namei(path name);              /* resolve the parent directory */
    split off the last component as the entry name;
    allocate a struct direct newdir;
    newdir.d_ino    = inode number assigned to the new entry;
    newdir.d_namlen = length of the entry name;
    newdir.d_reclen = sizeof(long) + 2 * sizeof(short) + newdir.d_namlen + 1;
    copy the entry name into newdir.d_name;
    append newdir after the last entry of the parent directory;
    advance the directory's tail pointer by newdir.d_reclen;
    free the temporary newdir;
    if (anything failed)
        return -1;
    return 0;
}

Algorithm: dir_release
Input: path name of the entry to delete
Output: success (entry removed) or failure
{
    if (a path name is given)
        namei(path name);              /* resolve the parent directory */
    split off the last component as the entry name;
    locate the parent directory's inode and point d at the matching entry;
    save the entry's length d.d_reclen;
    clear the entry;
    merge the freed space with a neighbouring entry, e.g. add the saved
        d_reclen to the preceding entry's d_reclen
        (or shift the following entries forward to close the gap);
    if (succeeded)
        return 0;
    return -1;
}

5. Below is a program that catches the "death of child" soft interrupt signal:

#include <signal.h>

main()
{
    extern catcher();

    signal(SIGCLD, catcher);
    if (fork() == 0)
        exit(0);
    pause();    /* suspend execution until a signal is received */
}

catcher()
{
    printf("parent caught sig\n");
    signal(SIGCLD, catcher);    /* re-arm the catcher */
}

Like many signal-catching routines, the program re-arms the signal catcher after the signal is received. What are the possible outcomes of running this program?

Solution: A soft interrupt signal notifies a process that an asynchronous event has occurred. The signal in question here is the one associated with process termination: SIGCLD is sent to the parent when a child process exits. The kernel checks for and handles signals only when a process is about to return from kernel mode to user mode.

The semantics of SIGCLD are "death of a child". When the signal is caught via signal(), the parent process is expected to call wait to clean up the child's zombie process-table entry. (signal() can be understood as recording, in the signal fields of the process table entry, which signals the process wants to catch.)

There are two possible outcomes when the program runs.

The program first uses signal() to arrange to catch the "death of child" signal (printing "parent caught sig" on receipt), then calls fork() to create a child; from that point on, the child runs concurrently with the parent.

1. If the parent runs first, it executes pause() and is suspended waiting for a signal. The child then executes exit(), becomes a zombie, and causes SIGCLD to be sent to the parent. The parent catches the signal, runs catcher() (printing the message and re-arming the handler), pause() returns, and the program finishes.

2. If the child runs first, it calls exit() and becomes a zombie before the parent reaches pause(). The parent receives the "death of child" signal on its next return to user mode, executes catcher() and re-arms the handler, and only then calls pause() and is suspended. Since no further "death of child" signal will ever arrive, the parent hangs forever.

6. The kernel tries to wake up all processes sleeping on the event that a disk read or write completes. What happens if no such sleeping process exists at that moment?

Solution: The kernel reduces the frequency of disk access by maintaining a pool of internal data buffers, called the buffer cache, which holds the contents of recently used disk blocks. When an I/O operation completes, the disk controller interrupts the processor, and the disk interrupt handler wakes any processes sleeping on the buffer. If no such sleeping process exists, the wakeup simply has no effect: no process is made runnable, and the kernel releases the buffer onto the free list so that other processes can acquire it later.

When releasing a buffer, the kernel wakes up both the processes waiting for the event "any buffer becomes free" and the processes waiting for the event "this particular buffer becomes free".

7. The terminal driver buffers and processes data I/O between processes and the terminal through the line discipline module. What would happen if this I/O were instead buffered through the data buffer cache rather than the line discipline?

Solution: The job of the line discipline is to interpret input and output data. In canonical (standard) mode it converts the data to a standard form, handling erase and kill characters, echoing, and line delimiters; in raw mode it merely passes data between the process and the terminal without conversion. The data buffer cache, by contrast, is designed for block devices: it caches the contents of recently used disk blocks in large fixed-size buffers so that repeated accesses are satisfied from memory rather than the disk, which makes transfers fast.

Therefore, if the line discipline were replaced by the buffer cache, data transfer between process and terminal might become faster, but the input and output would no longer be interpreted: the data delivered would not be in canonical form (no erase/kill processing, no echoing), so it could not be used directly and would have to be re-interpreted by the process before use.

8. Suppose a "fair-share scheduling" policy is used to build a scheduling system that is part real-time and part time-shared, so that it can serve both fast-response (real-time) processes and ordinary time-sliced processes. Outline the main design ideas.

Solution: Round-robin time-slice scheduling achieves a degree of fairness, so processes of equal priority are scheduled round-robin. To let the system respond promptly to urgent events, real-time processes are given a higher priority than ordinary ones; for example, ordinary processes run at priority 0 and real-time processes at priority 5. Whenever a real-time process is runnable, the scheduler switches to it and runs it until the event is handled; it then checks again for any process above the ordinary priority, and only when none exists do the ordinary processes share the CPU by round-robin time slices.

