Reprinted article: Learn the operating system from HelloWorld

Source: Internet
Author: User

Reprint address: https://my.oschina.net/hosee/blog/673628

This article systematically links the main operating-system knowledge points so they are easy to review. It is aimed at readers who already have some operating-system background; it does not explain each algorithm in detail, but aims to tie the topics together.

Use an example to link all the knowledge points:

I wrote a C program:

#include <stdio.h>

int main(void)
{
    puts("Hello World!\n");
    return 0;
}

The purpose is to display "Hello World" on the screen.

So what does the operating system do when running this C language program?

1. First, the program must be started: the user tells the operating system to execute it.

How to tell:

  • Double-click the program
  • Command Line input command
  • ......

2. After the operating system receives the user's request, it looks up information about the program on disk using the file name the user provided. It then checks whether the file is executable; if so, the operating system determines the location of the code and data inside the executable from the program header information and computes the corresponding disk block addresses.

A file system is a software used in the operating system to manage information resources in a unified manner. It manages file storage, retrieval, and updates, provides secure and reliable sharing and protection measures, and is convenient for users.

Files classified by nature and purpose: ordinary files, directory files, special (device) files, pipe files, sockets.

File storage media: disks (including SSD), tapes, discs, USB disks ......

A physical block is an independent unit for storing, transmitting, and allocating information. Storage devices are divided into physical blocks of the same size, with uniform numbers.

One request to access the disk:

  • Seek: move the head to the specified track.
  • Rotational delay: wait for the specified sector to rotate under the head.
  • Data transfer: the actual transfer of data between the disk and memory.

An SSD spends no time on seek or rotational delay, which is why it is fast.
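As a rough sketch, the time for one disk request can be estimated by adding the three components above. All the drive parameters used below (4 ms seek, 7200 RPM, 100 MB/s transfer) are illustrative assumptions, not real drive specifications:

```c
/* Rough model of one disk request: seek + rotational delay + data transfer.
   All numbers passed in are illustrative assumptions, not real drive specs. */
double disk_access_ms(double seek_ms, double rpm, double bytes,
                      double mb_per_s) {
    double rotational_ms = 0.5 * 60000.0 / rpm;           /* average: half a revolution */
    double transfer_ms   = bytes / (mb_per_s * 1e6) * 1000.0;
    return seek_ms + rotational_ms + transfer_ms;
}
```

For a 4 KB read with these numbers, seek and rotation dominate (about 8 ms) while the transfer itself takes well under a tenth of a millisecond, which is exactly why disk scheduling focuses on reducing seeks.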

File control block: The data structure set for file management to save all relevant information required for file management.

Common attributes: file name, file number, file size, file address, creation time, last modification time, last access time, protection, password, creator, current owner, file type, shared count, various flags.

File directory: manages the metadata of each file in a unified manner to support file name-to-file physical address conversion. Organizes the management information of all files to form a file directory.

Directory files: stores the file directories on disks as files.

Directory entries: the basic units that make up the file directory. A directory entry can be an FCB, in which case the directory is an ordered collection of file control blocks.

Physical Structure of a file: storage method of the file on the storage media.

Physical Structure:

1. Continuous Structure: The file information is stored in several consecutive physical blocks.

FCB stores the start block number and length of the file block.

Advantages: supports both sequential and random access; requires the fewest seeks and the least seek time; multiple blocks can be read at once, and retrieving a single block is easy.

Disadvantages: the file cannot grow dynamically, insertion and deletion are awkward, and external fragmentation arises (compaction can reclaim it).

2. Link Structure: information of a file is stored in several discontinuous physical blocks. Each block is connected by a pointer. The previous physical block points to the next physical block.

FCB only needs to save the starting block number

Advantage: Improves disk space utilization, facilitates file insertion and deletion, and facilitates file dynamic expansion.

Disadvantages: access is slow and random access is impractical; reliability problems (e.g. a damaged pointer); more seeks and more seek time; the link pointers themselves consume space.

A variant of the linked structure is the File Allocation Table (FAT), the structure used by early Windows and by USB flash drives.

FAT stores all the link pointers. Each physical block corresponds to a row of FAT.

0 marks a free physical block, and -1 marks the last block of a file.

The starting block number of the file is stored in the FCB of the file.

3. Index structure: the information of a file is stored in several discontinuous physical blocks. The system creates a dedicated data structure for each file, the index table, which stores the block numbers of those physical blocks.

The index table is an array of disk block addresses, where entry I points to the block I of the file.

FCB stores the index table location with a field.

Advantages of the index structure: it keeps the advantages of the linked structure while fixing its drawbacks. It supports both sequential and random access, allows dynamic file growth and easy insertion and deletion, and makes full use of the disk.

Disadvantage: the index table itself adds overhead, since accessing it requires extra disk reads, increasing seek count and seek time.

If the index table is very large, it may itself need multiple physical blocks; this leads to multi-level indexes and combined (hybrid) indexes.

Multi-level index:

UNIX three-level index structure:

Accessing a file: file name -> file directory -> FCB -> disk
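The multi-level index lookup can be sketched as deciding which level of the index a given logical block lives at. The figures below (10 direct pointers, 256 pointers per index block) are assumptions for illustration, not fixed UNIX values:

```c
/* Which level of a classic UNIX-style inode index holds a logical block.
   NDIRECT and PTRS are assumed values for illustration. */
enum { NDIRECT = 10, PTRS = 256 };  /* e.g. 256 pointers per 1 KB index block */

int index_level(long blk) {
    if (blk < NDIRECT) return 0;              /* direct pointer in the FCB */
    blk -= NDIRECT;
    if (blk < PTRS) return 1;                 /* single indirect */
    blk -= PTRS;
    if (blk < (long)PTRS * PTRS) return 2;    /* double indirect */
    return 3;                                 /* triple indirect */
}
```

Small files are served entirely by direct pointers, so for them the index costs nothing extra; only large files pay for the deeper levels.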

Improve file system performance:

Disk scheduling: when multiple disk access requests are waiting, adjust their service order according to some policy, reducing the average disk service time while remaining fair and efficient.

Disk scheduling algorithms: first-come first-served (FCFS), shortest seek time first (SSTF), SCAN (the elevator algorithm), and rotational scheduling.
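As a sketch of one common policy, shortest seek time first (SSTF) always serves the pending request nearest the current head position. The function name and the 64-request cap are assumptions for illustration:

```c
#include <stdlib.h>

/* SSTF sketch: repeatedly serve the pending request closest to the current
   head position. Returns total head movement in tracks. n <= 64 assumed. */
int sstf_total_movement(int head, const int *req, int n) {
    int served[64] = {0};
    int total = 0;
    for (int k = 0; k < n; k++) {
        int best = -1, bestdist = 0;
        for (int i = 0; i < n; i++) {
            if (served[i]) continue;
            int d = abs(req[i] - head);
            if (best < 0 || d < bestdist) { best = i; bestdist = d; }
        }
        served[best] = 1;
        total += bestdist;      /* move the head to the chosen request */
        head = req[best];
    }
    return total;
}
```

SSTF minimizes each individual seek but can starve requests far from the head, which is why SCAN-style policies exist.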

3. To execute the helloworld program, the operating system creates a new process and maps the program's executable file into the process structure, indicating that the process will execute this program.

A process is a running activity of a program with independent functions. It is an independent unit for the system to allocate and schedule resources.

The PCB (process control block) is the special data structure the operating system uses to manage and control processes. Processes and PCBs correspond one to one.

PCB includes:

Process description: process identifier (unique), process name, User Identifier, process group relationship

Process control information: priority, code execution entry address, program disk address, running statistics (execution time, page scheduling), inter-process synchronization and communication, process queue pointer, the message queue pointer of a process.

Resources and usage: state of the virtual address space, open file list

CPU context: register values (general registers, program counter PC, program status word PSW, stack pointer), pointer to the process page table.

Process table: the collection of all PCBs.
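A toy C rendering of such a PCB, with fields taken from the lists above. All names, sizes, and the make_pcb helper are illustrative; real kernel structures (e.g. Linux's task_struct) are far larger:

```c
/* Toy PCB mirroring the fields listed above; names and sizes are illustrative. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int           pid;          /* unique process identifier */
    char          name[16];     /* process name */
    proc_state    state;
    int           priority;
    unsigned long pc;           /* saved program counter */
    unsigned long regs[8];      /* saved general registers */
    unsigned long sp;           /* saved stack pointer */
    void         *page_table;   /* pointer to the process page table */
    struct pcb   *next;         /* queue link: ready/waiting lists are PCB lists */
} pcb;

/* Illustrative helper: a fresh PCB starts in the NEW state, as noted below. */
pcb make_pcb(int pid) {
    pcb p = {0};
    p.pid = pid;
    p.state = NEW;
    return p;
}
```

The next pointer is what lets the scheduler string PCBs into the ready and waiting queues described below.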

Process status:

The operating system maintains one or more queues for each process state (ready, waiting, ...). The queue elements are PCBs; as a process's state changes, its PCB moves from one queue to another.

Process Creation:

  • Assign a unique identifier and PCB to the new process
  • Allocate address space for processes
  • Initialize PCB (set the default value, for example, the status is NEW ......)
  • Set the corresponding queue pointer (for example, add the new process to the ready queue linked list)

The operating system allocates an address space for each process.

Because of performance and overhead, the thread concept is introduced.

Thread overhead is small: creating a new thread takes less time, switching between threads takes less time, and threads can communicate without involving the kernel (threads of the same process share memory and open files).

A thread is a running entity in a process and the CPU scheduling unit.

4. The operating system hands control to the scheduler. If the scheduler selects the helloworld process, the operating system sets up the CPU context for it and jumps to the program's entry point; from the next instruction cycle on, the CPU executes the helloworld program.

CPU scheduling: select a process from the ready queue according to some scheduling algorithm and grant it the CPU. If no process is ready, the system schedules the idle process.

CPU scheduling must answer three questions: the scheduling algorithm, the scheduling timing, and the scheduling (context-switch) process.

Scheduling algorithms:

CPU occupation can be preemptive or non-preemptive.

  • First-come, first-served (FCFS)
  • Shortest job first (SJF)
  • Shortest remaining time next (SRTN)
  • Highest response ratio next (HRRN)
  • Multi-level feedback queue (Feedback)
  • Highest priority scheduling
  • Round robin: improves the average response time of short tasks; each process is allocated a time slice in turn.
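The round-robin idea can be sketched as a small simulation. The function name, burst times, and quantum below are assumptions for illustration:

```c
/* Round-robin sketch: each runnable process gets up to one quantum per pass.
   burst[] holds remaining CPU time per process and is consumed in place.
   Returns the time at which process idx finishes, or -1 if idx is invalid. */
int rr_finish_time(int *burst, int n, int quantum, int idx) {
    int remaining = n, t = 0, finish = -1;
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] <= 0) continue;
            int run = burst[i] < quantum ? burst[i] : quantum;
            burst[i] -= run;
            t += run;                                /* process i runs for 'run' ticks */
            if (burst[i] == 0) {
                remaining--;
                if (i == idx) finish = t;
            }
        }
    }
    return finish;
}
```

With bursts {3, 5, 2} and a quantum of 2, the short job (process 2) finishes at time 6, well before the long job at time 10, which is exactly the "short tasks respond faster" property claimed above.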


5. When the first instruction of the helloworld program is executed, a page fault occurs. (Code and data must be in memory before the program can run; when the first instruction executes, they have not been read in yet, so the hardware raises a page-fault exception and hands control to the operating system.)

6. The operating system manages the memory of the computer system. (Assuming page-based management) the operating system allocates a page of physical memory, reads the helloworld program from disk into memory using the disk block addresses computed earlier, and then resumes the program. If the program is large, one page is not enough, and many page faults occur as more of the program is read from disk into memory.

We already know that each process has its own process address space, and the process must be loaded into the memory before it can run. Therefore, how to mount the process address space into the memory is a problem that must be solved.

The above description shows that the address of the process address space in the process is not the final physical address.

Therefore, address relocation (address translation from process addresses to physical addresses) is required before a process can be loaded and run.

We call the process's addresses logical/relative/virtual addresses, and the addresses of memory cells physical/absolute/real addresses. Obviously, only physical addresses can be addressed directly.

Static and Dynamic address relocation

Static relocation: when the user program is loaded into memory, all logical addresses are converted to physical addresses at once. The drawback is that the program's position in memory can no longer change.

Dynamic relocation: when the program is loaded into memory, its addresses remain logical. Address translation happens during execution, conversion by conversion, with the MMU (memory management unit) providing hardware support to make relocation fast.

Now that the program can be loaded into the memory, how can we efficiently allocate the memory to a process?

Memory Allocation Algorithm:

  • First fit (the first hole that is large enough)
  • Next fit (start searching from where the last search left off)
  • Best fit (search the whole free list for the smallest hole that suffices)
  • Worst fit (always allocate from the largest hole that suffices)
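The first and third policies can be sketched over a simple table of free-hole sizes. The function names are assumptions, and splitting/merging of holes is omitted:

```c
/* First fit: return the index of the first hole large enough, or -1. */
int first_fit(const int *size, int n, int request) {
    for (int i = 0; i < n; i++)
        if (size[i] >= request) return i;
    return -1;
}

/* Best fit: return the index of the smallest hole that still fits, or -1. */
int best_fit(const int *size, int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (size[i] >= request && (best < 0 || size[i] < size[best]))
            best = i;
    return best;
}
```

On the hole list {100, 500, 200, 300, 600}, a request of 212 is served from the 500-unit hole by first fit but from the tighter 300-unit hole by best fit, which is the whole difference between the two policies.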

When memory is freed, it must be reclaimed and merged with adjacent free areas.

A classic memory allocation scheme: the buddy system

Memory is divided by powers of two, forming several free-block lists. To serve a request, the lists are searched for the smallest block that fits.
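A sketch of the two bits of arithmetic the buddy system relies on: rounding a request up to a power of two, and finding a block's buddy by flipping one address bit (function names are illustrative):

```c
/* Round a request size up to the next power of two (buddy block sizes). */
unsigned round_up_pow2(unsigned n) {
    unsigned p = 1;
    while (p < n) p <<= 1;
    return p;
}

/* The buddy of a block at 'offset' differs from it in exactly the bit
   equal to the block size, so XOR finds it in one step. */
unsigned buddy_of(unsigned offset, unsigned block_size) {
    return offset ^ block_size;
}
```

This XOR trick is what makes merging cheap: when a block is freed, its buddy's address is computed directly, and if the buddy is also free the two merge into one block of twice the size.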

Basic Memory Management Solution

The process enters the continuous area of the memory:

Single contiguous area: low memory utilization.

Fixed partitions: wasted space.

Variable partitions: the leftover pieces produce many external fragments; the remedy is compaction (moving programs so that all the small free areas merge into one large free area).

The process enters the memory discontinuous area:

Page Storage Management:

The process address space is divided into equal-sized pieces called pages. Physical memory is divided into page frames of the same size.

Memory allocation policy: frames are allocated according to the number of pages the process needs. Pages that are logically adjacent need not be physically adjacent.

 

The page table records the mapping from logical page numbers to page frame numbers.

Each process has its own page table, stored in memory; the start address of the page table is kept in a register.

Address translation in page-based storage management: the CPU takes the logical address and automatically splits it into a page number and an in-page offset. The page number is used to look up the page table and obtain the frame number, which is concatenated with the in-page offset to form the physical address.
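This translation can be sketched in a few lines of C, assuming 4 KB pages and a flat single-level page table (both assumptions for illustration):

```c
enum { PAGE_SIZE = 4096 };   /* assumed page size for this sketch */

/* Split a logical address into (page number, offset) and translate it
   through a tiny page table: page_table[i] holds the frame number of page i. */
unsigned long translate(unsigned long vaddr, const unsigned *page_table) {
    unsigned long page   = vaddr / PAGE_SIZE;   /* high bits: page number */
    unsigned long offset = vaddr % PAGE_SIZE;   /* low bits: in-page offset */
    return (unsigned long)page_table[page] * PAGE_SIZE + offset;
}
```

Because the page size is a power of two, the division and modulo compile down to a shift and a mask, which is why hardware can do this split "automatically" on every access.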

Segmented Storage Management:

The user process address is divided into several segments based on the logic of the program. Each segment has a segment name.

The memory space is dynamically divided into unequal regions.

Memory Allocation Policy: segments are allocated. Each segment occupies continuous space in the memory, but the segments can be non-adjacent.

Unlike paging, the split into segment number and in-segment offset cannot be done automatically; it must be given explicitly.

Like paging, a table, the segment table, records the mapping.

The address translation process is similar to paging: after the CPU obtains the logical address, it queries the segment table with the segment number to get the segment's start address in memory, then adds the in-segment offset to compute the physical address.

Segment-and-page storage management:

User processes are divided into segments; memory is divided into frames, the same as in page-based storage management.

Segment-and-page storage management requires both segment tables and page tables.

The segment table records the start address and length of the page table of each segment.

The page tables are the same as in pure page-based storage management.

A process is divided into several segments and needs one segment table; each segment is allocated by paging and has its own page table. So a process corresponds to one segment table and multiple page tables.

Summary:

What should I do when the memory is insufficient?

Memory "expansion" techniques: memory compaction (variable partitions), overlays, swapping (temporarily moving some processes to external storage), and virtual memory.

Virtual storage technology: when a process runs, only part of it is loaded into memory; the rest stays on disk. When an instruction to execute or data to access is not in memory, the operating system automatically brings it in from disk.

Combining memory with disk yields a larger "memory": virtual memory.

The combination of virtual storage technology and page-based storage management produces virtual page-based storage management.

Basic Idea of virtual page-based storage management:

Before a process starts running, not all of its pages are loaded; one page or even zero pages are loaded, and further pages are brought in dynamically as the process needs them. When memory is full and a new page must be loaded, a replacement algorithm chooses a page in memory to evict.

Because page tables become too large, multi-level page tables are introduced.

With the traditional translation path, virtual address -> page table lookup -> frame number -> physical address, each process needs its own page table, and the page tables occupy a lot of space.

One solution starts from the physical address instead: the whole system keeps a single inverted page table (possible because the physical address space is fixed), in which each entry records which virtual page of which process maps to that frame.

However, a problem remains. With multi-level page tables, every translation requires several memory accesses to walk the tables, and because the CPU is much faster than memory, the CPU's speed is not fully utilized.

To solve this, exploiting the locality of program references, a translation lookaside buffer (TLB, the "fast table") is introduced to speed up address translation.

The TLB is built from cache (associative memory), so it can be searched very quickly, in parallel.

The address translation process after the quick table is introduced:

Virtual page number -> TLB lookup (entries compared in parallel)

On a hit, the page-table entry is found immediately.

On a miss, the virtual page number is used to walk the page table.

If the valid bit is 1, the page is already in memory.

If it is 0, a page fault is raised.

On a page fault, the operating system must bring the page into memory; if memory is full, some pages must first be moved out to external storage.

So what are the replacement policies? Common ones include optimal (OPT), first-in first-out (FIFO), not recently used (NRU), and least recently used (LRU).
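As a concrete example of one common policy, FIFO replacement can be sketched by counting page faults for a reference string (LRU and OPT differ only in how the victim page is chosen; the function name and the 16-frame cap are assumptions):

```c
/* Count page faults under FIFO replacement with 'nframes' frames (<= 16). */
int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16], count = 0, faults = 0, next = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < count; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (count < nframes)
            frames[count++] = refs[i];          /* free frame available */
        else {
            frames[next] = refs[i];             /* evict the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}
```

On the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 FIFO incurs 9 faults with 3 frames but 10 faults with 4 frames: more memory, more faults. This is Belady's anomaly, which LRU does not exhibit.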

7. The helloworld program executes the puts function, which issues a system call to write a string to the display.

8. Because puts performs a system call, control passes to the operating system. The operating system finds the display device the string should go to. A device is generally controlled by a process, so the operating system sends the string to be written to that process.

CPU and I/O:

Reduce or hide the speed gap -> buffering

Keep the CPU from waiting on I/O -> asynchronous I/O

Free the CPU from I/O work -> DMA, channels

9. The device-controlling process tells the window system that it wants to display a string. The window system checks that this is a legal operation, converts the string to pixels, and writes the pixels to the device's frame buffer.

10. Video hardware converts pixels into a set of control/data signals acceptable to the display.

11. The monitor interprets the signals and drives the LCD panel.

12. You can see hello world on the monitor.

 

The process above shows the CPU alternating between user programs and operating-system code. When the CPU runs operating-system code, we say it is in kernel mode; when it runs user programs, it is in user mode.

The conversion between the two CPU states:

Switching from user mode to kernel mode can only happen through an interrupt, an exception, or a trap instruction (a special instruction that provides the interface through which user programs call operating-system functions/services, such as int, trap, syscall, sysenter/sysexit).

Switching from kernel mode back to user mode is done by setting the program status word (PSW).

The interrupt/exception mechanism is the driving force of the operating system. We can say that the operating system is the Interrupt Drive.

Interrupt/exception concept: the CPU reaction to a system event.

The CPU suspends the program being executed, saves its context, and automatically transfers to the routine that handles the event. When handling completes, it returns to the breakpoint and resumes the interrupted program.

The difference between an interrupt and an exception: an interrupt is caused by events outside the CPU, while an exception is generated by the running program itself.

When does the CPU respond to interrupts? At the end of each instruction, the CPU scans the interrupt register to check whether an interrupt is pending. If so, the interrupt hardware places the content of the interrupt trigger into the corresponding position of the PSW as the interrupt code, and the interrupt handler is located by looking up the interrupt vector table.

In addition, process mutex synchronization also exists.

Process mutual exclusion: each process needs shared resources (variables, files, etc.) that can only be used by one process at a time, so processes compete for them. This competitive relationship is called process mutual exclusion.

Process mutex software solution:

Dekker algorithm:

P process:

pturn = true;
while (qturn)
{
    if (turn == 2)
    {
        pturn = false;
        while (turn == 2);
        pturn = true;
    }
}

critical section
turn = 2;
pturn = false;

Q process:

qturn = true;
while (pturn)
{
    if (turn == 1)
    {
        qturn = false;
        while (turn == 1);
        qturn = true;
    }
}

critical section
turn = 1;
qturn = false;

pturn and qturn indicate that the corresponding process wants to enter the critical section. If both processes want to enter, turn decides which one yields the CPU, achieving mutual exclusion.

Peterson algorithm:

Overcomes the forced-alternation drawback of Dekker's algorithm.

i denotes the process number.

Process i:
......
enter_region(i);
critical section
leave_region(i);
......

int turn;                  // whose turn is it to wait
int interested[N];         // N = 2; all false initially, true means the process wants to enter
void enter_region(int process)   // the two processes are numbered 0 and 1
{
    int other;             // number of the other process
    other = 1 - process;
    interested[process] = true;
    turn = process;
    while (turn == process && interested[other] == true);
}
void leave_region(int process)
{
    interested[process] = false;
}

Note the turn variable: if both processes want to enter the critical section, the later assignment to turn overwrites the earlier one, so turn ends up holding the number of the process that asked last.

In Peterson's algorithm, a process that wants to enter first sets its interest and then checks the while condition: if turn equals its own number (meaning it was the last to ask) and the other process is also interested (interested[other] == true), it waits in the loop. If the other process is not interested, there is no need to wait.

Java Implementation of the Peterson algorithm:

public class Peterson implements Runnable {

    private static final boolean[] in = {false, false};
    private static volatile int turn = -1;

    public static void main(String[] args) {
        new Thread(new Peterson(0), "Thread-0").start();
        new Thread(new Peterson(1), "Thread-1").start();
    }

    private final int id;

    public Peterson(int i) {
        id = i;
    }

    private int other() {
        return id == 0 ? 1 : 0;
    }

    @Override
    public void run() {
        in[id] = true;
        turn = other();
        while (in[other()] && turn == other()) {
            System.out.println("[" + id + "] - Waiting...");
        }
        System.out.println("[" + id + "] - Working ("
                + ((!in[other()]) ? "other done" : "my turn") + ")");
        in[id] = false;
    }
}

Process synchronization: events occurring in multiple processes have a temporal ordering, and the processes must cooperate to complete a task.

Solution:

  • Semaphores (though semaphore programming is error-prone)
  • Monitors: synchronized in Java
  • Inter-process communication (IPC): semaphores can only pass small amounts of information, not bulk data, and monitors do not work across multiprocessors. The basic IPC methods are: 1. message passing 2. shared memory 3. pipes 4. sockets 5. remote procedure call
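One of these methods, the pipe, can be sketched with POSIX calls. The function name and message text are illustrative, and error handling is minimal:

```c
#include <stddef.h>
#include <unistd.h>
#include <sys/wait.h>

/* Minimal pipe IPC sketch: a child process writes a message, the parent
   reads it back. Returns the number of bytes read, or -1 on error. */
int pipe_roundtrip(char *buf, size_t n) {
    int fd[2];
    ssize_t got;
    if (pipe(fd) != 0) return -1;
    if (fork() == 0) {            /* child: keep only the write end */
        close(fd[0]);
        write(fd[1], "hello from child", 16);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                 /* parent: keep only the read end */
    got = read(fd[0], buf, n - 1);
    if (got < 0) got = 0;
    buf[got] = '\0';
    close(fd[0]);
    wait(NULL);
    return (int)got;
}
```

Each side closes the pipe end it does not use; this is what lets read() see end-of-file once the writer finishes, rather than blocking forever.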

There are also deadlocks:

Necessary conditions for deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.

Resource distribution chart: A directed graph is used to describe the status of system resources and processes.


If there is no loop in the resource allocation diagram, there is no deadlock in the system. If there is a loop in the diagram, there may be a deadlock.

If each resource class contains only one resource instance, the loop is a sufficient condition for deadlock.

Deadlock prevention: design the system so that at least one of the necessary conditions can never hold.

Deadlock avoidance: the banker's algorithm keeps the system in a safe state, guaranteeing that deadlock never occurs.

Roughly, the banker's algorithm means the amount promised to any single customer never exceeds the bank's total funds, although the total promised to all customers may exceed it; a request is granted only if, afterwards, some order still exists in which every customer's maximum demand can be satisfied.

Deadlock detection and recovery: deadlocks are allowed to occur, but the operating system continually checks for them; once one is found, special measures remove it at minimal cost and restore normal operation.
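The safety check at the heart of the banker's algorithm can be sketched for a single resource type (the function name and the 16-process cap are assumptions; need[i] is each process's maximum remaining demand):

```c
/* Banker's-algorithm safety check, single resource type:
   is there an order in which every process can run to completion?
   need[i] = max[i] - alloc[i]; avail is the currently free amount. */
int is_safe(const int *alloc, const int *need, int n, int avail) {
    int done[16] = {0}, finished = 0;   /* n <= 16 assumed */
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < n; i++) {
            if (!done[i] && need[i] <= avail) {
                avail += alloc[i];      /* process i can finish and release */
                done[i] = 1;
                finished++;
                progress = 1;
            }
        }
    }
    return finished == n;               /* safe iff everyone can finish */
}
```

A request is granted only if the state after granting it would still pass this check; otherwise the requester is made to wait.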

Let's summarize the operation of the HelloWorld program again.

When we run the HelloWorld program, the operating system looks up the file directory by file name, finds the FCB, and locates the file on disk via the physical address information in the FCB.

How does the FCB yield a file's physical address? That depends on the file's physical structure: contiguous, linked, or indexed. The information the FCB stores differs for each structure.

Once the physical address is known, reading from disk involves seek, rotational delay, and data transfer. To read files from disk efficiently, different disk scheduling algorithms can be used, such as first-come first-served, shortest seek time first, SCAN, and rotational scheduling.

After obtaining the file, the operating system creates a new process to execute the program. Processes correspond one to one with PCBs, which store all the process's management information. The system allocates an address space to each process; its addresses are virtual.

Once the process is set up, it waits for the CPU to schedule it. CPU scheduling algorithms include first-come first-served, shortest job first, shortest remaining time next, highest response ratio next, and round robin.

When the CPU schedules this program and tries to run its first instruction, a page fault occurs because the code and data have not yet been read into memory; for a large program, one page is not enough and page faults occur repeatedly. A process must be in memory to run, and its virtual addresses must be converted into physical addresses by address relocation. The conversion depends on the memory management scheme: page-based, segment-based, or combined segment-and-page storage management. Adding virtual storage technology gives virtual page-based storage management; because of virtual storage, when memory is full some pages must be moved out to external storage, using a replacement algorithm such as optimal page replacement, first-in first-out, not recently used, or least recently used.

Now the process is loaded into memory. To allocate memory to it efficiently, the allocation algorithms are first fit, next fit, best fit, and worst fit. If memory is full at this point, the replacement algorithms just mentioned come into play.

At this point the CPU has successfully run the program, and the string to display is handed to the display-device process. Finally, a series of hardware steps makes the monitor display HelloWorld.

 


 
