Operating System Basics

Classification of operating systems:

Batch processing operating system, time-sharing operating system, RTOS, network operating system, distributed operating system, personal computer operating system.

Batch processing operating system:

Advantages: resource sharing, automatic job scheduling, improved resource utilization and system throughput.

Disadvantages: no interactivity; longer turnaround time.

Issues a multiprogrammed batch system must address: synchronization and mutual exclusion, memory size and usage efficiency, memory protection.

Time-sharing system: an online, multi-user, interactive operating system built on interrupt techniques and time-slice rotation.

Advantages: good human-machine interaction, sharing of the host, and independence of users from one another.

Real-time operating system: an online system in which external requests must be completed within a specified time.

Features: Limited waiting

Limited response

User Control

High reliability

Strong error handling capability

Considerations: real-time clock management, continuous human-machine dialogue, overload protection, and redundancy measures for high reliability and security.

General-purpose operating system:

Combines the batch-processing, time-sharing, and real-time processing functions in a single system.

PC operating system:

Online, interactive, single-user systems, with Windows and Linux as the main examples.

Network operating system:

Characteristics:

A community of interconnected, physically dispersed computers.

Autonomy: each computer has its own operating system, works independently, and cooperates with the others under network protocols.

The systems are interconnected by communication facilities (hardware and software).

Through these communication facilities the systems exchange information, share resources, interoperate, and process tasks cooperatively.

Distributed System:

Characteristics:

Functional distribution

Robustness

High reliability

Operating system functions:

Processor management,

Storage management (memory allocation, memory protection, memory expansion),

Device management (channels, controllers, device allocation and management, device-independence management),

Information management (file system management),

User interface management

Operating System User interface:

Command interface, program interface, graphical interface

Channel:

Used to control data transfer between I/O devices and memory; a channel operates independently of the CPU, enabling CPU and I/O operations to proceed in parallel.

Interrupt:

On receiving an external interrupt request signal, the CPU suspends its current work, saves the context, switches to handling the interrupt event, and after completion returns to the breakpoint to continue the original work.

Steps: interrupt request, interrupt response, context saving, interrupt handling, interrupt return.

Computer Hardware composition :

processor, memory, I/O devices, peripherals

Several main registers:

Address Register

Data registers

Program counter

Instruction register

Program Status Word PSW

Interrupt context-save registers

Procedure Call Stack

Access speed sequencing:

Register > Cache > Main memory > Disk cache > Hard disk > Optical disc / floppy disk

Operating system boot order:

Power on, reset/boot signal, system bootstrap, boot code read into memory and executed, operating system loaded, hardware and system initialization.

System calls:

Device Management

File Management

Process Management

Process Communication

Storage Management

Thread Management

Trap handler: the mechanism through which programs invoke operating-system services is called the trap mechanism.

Trap instruction: the instruction that causes a processor trap in order to perform a system call is called a trap instruction.

Features of the sequential program:

Sequential, closed, reproducible

Characteristics of its execution environment: independence, randomness, resource sharing

Concurrent Program execution:

Overlap in Program execution time

Process:

A process is one execution of a program on a data set, and it is the basic unit of resource allocation.

The difference between a process and a program:

A process is one execution of a program; it has a life cycle and is a dynamic concept.

A program is a static concept; it can be preserved long-term as software or data.

A process is an independent unit that can run concurrently with other processes.

A process is the basic unit that competes for computer resources and the basic unit of resource scheduling.

Different processes can contain (execute) the same program.

The difference between a job and a process:

A job is the task entity that a user submits to the computer.

A job can contain multiple processes

The concept of the job is mainly used in batch processing systems

Process Description:

1. Process control block (PCB) — a minimal struct sketch follows this list:

(1) PCB description information: process identifier, user identifier, family relationships

(2) PCB control information: current process state, priority, program entry point, timing information, communication information

(3) Resource management information: memory, swap/overlay information, shared program-segment information, I/O devices, file pointers, etc.

(4) CPU context save area

2. Program section

3. Data set
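To make the PCB fields above concrete, here is a minimal, illustrative sketch of a PCB-like structure in C. The field names and sizes are assumptions made for this sketch, not the layout of any real kernel.

```c
/* Illustrative PCB-like structure; all field names are assumptions. */
typedef enum { PROC_NEW, PROC_READY, PROC_RUNNING, PROC_WAITING, PROC_TERMINATED } proc_state_t;

typedef struct pcb {
    /* (1) Description information */
    int          pid;            /* process identifier              */
    int          uid;            /* user identifier                 */
    struct pcb  *parent;         /* family relationship             */

    /* (2) Control information */
    proc_state_t state;          /* current state                   */
    int          priority;       /* scheduling priority             */
    void       (*entry)(void);   /* program entry point             */
    long         cpu_time_used;  /* timing information              */

    /* (3) Resource management information */
    void        *mem_base;       /* memory allocated to the process */
    long         mem_size;
    int          open_files[16]; /* file descriptors / pointers     */

    /* (4) CPU context save area */
    long         saved_regs[32]; /* registers saved on a switch     */
    void        *saved_pc;       /* saved program counter           */

    struct pcb  *next;           /* link for ready/wait queues      */
} pcb_t;
```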

Process Status:

Running, ready, waiting (blocked)

Process state transitions:
Running → Waiting: the process waits for an event, e.g., waiting for an I/O operation to complete
Waiting → Ready: the awaited event has occurred, e.g., the I/O has completed
Running → Ready: the time slice expires
New → Ready: a newly created process enters the ready state
Ready → Running: when the processor is free, the dispatcher selects a process from the ready queue to use the CPU

Process Control:

The system uses program segments with specific functions to create and destroy processes and to carry out process state transitions, so as to achieve efficient concurrent execution and coordination of multiple processes and to realize resource sharing.
Primitives:

Program segments that execute in the system (kernel) state and are executed indivisibly are called primitives.
The primitives used for process control are:

Create primitive, destroy (revoke) primitive, block primitive, wakeup primitive.
How a process is created:

Created uniformly by a system program module

Created by the parent process, via the process-creation system call

Create(NAME, PRIORITY, START-ADDR)

In the Unix system: fork()
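As a minimal illustration of process creation with fork(), the following sketch creates a child process; the parent waits for the child and each prints its identifiers. It is a generic Unix example, not code from the original notes.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* create a child process             */
    if (pid < 0) {                 /* fork failed                        */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {         /* child: fork() returned 0           */
        printf("child  pid=%d, parent=%d\n", getpid(), getppid());
        _exit(0);
    } else {                       /* parent: fork() returned child pid  */
        printf("parent pid=%d, child=%d\n", getpid(), pid);
        wait(NULL);                /* wait for the child to terminate    */
    }
    return 0;
}
```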
Process revocation:

1. The process has completed its required functions and terminates normally.

2. Abnormal termination due to some kind of error.

3. An ancestor process requests the revocation of a child process.

In a typical operating system, the system call for terminating a process is kill (in Unix, exit()). If the process being destroyed has child processes of its own, the destroy primitive first destroys the PCBs of those child processes and releases their resources, and then destroys the current process's PCB and releases its resources.

Process mutual exclusion:

Causes: resource sharing, process cooperation

Critical resource: a resource that only one process at a time is allowed to use

Critical section: the program segment in which a process accesses a critical resource

Indirect constraint: mutually exclusive processes that share a resource cannot access the critical resource at the same time

Direct constraint: cooperating processes constrain one another directly by having to wait for each other's messages (synchronization)

Process synchronization:
In an asynchronous environment, a group of concurrent, cooperating processes are subject to direct constraints: they wait for and send messages to one another so that each process proceeds at its own speed in a coordinated way. This is called inter-process synchronization.
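To make the P/V discipline behind mutual exclusion and synchronization concrete, here is a minimal sketch that uses a POSIX semaphore with an initial value of 1 to protect a counter shared by two threads (compile with -pthread). It illustrates the same idea at thread level; the counter and loop count are made-up details.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;          /* counting semaphore used as a lock (value 1) */
static long  shared_counter; /* the critical resource                       */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);    /* P operation: enter the critical section */
        shared_counter++;    /* critical section                        */
        sem_post(&mutex);    /* V operation: leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);           /* not shared across processes, value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", shared_counter);
    sem_destroy(&mutex);
    return 0;
}
```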


Process communication mode:

1. Pipe: a pipe is a half-duplex communication mechanism; data can flow in only one direction, and it can only be used between processes with an affinity, which usually means a parent-child relationship (a minimal pipe sketch follows this list).
2. Named pipe (FIFO): also half-duplex, but it allows communication between unrelated processes.
3. Semaphore: a semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to prevent one process from accessing a shared resource while another process is using it, so it mainly serves as a means of synchronization between processes and between threads within the same process.
4. Message queue: a message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the drawbacks of signals carrying little information, pipes carrying only unformatted byte streams, and limited buffer sizes.
5. Signal: a signal is a relatively sophisticated communication mechanism used to notify the receiving process that some event has occurred.
6. Shared memory: shared memory is a region of memory created by one process and mapped so that other processes can access it. It is the fastest IPC mechanism and was designed to compensate for the lower efficiency of the other IPC mechanisms. It is often used together with other mechanisms, such as semaphores, to achieve synchronization as well as communication between processes.
7. Socket: a socket is also an inter-process communication mechanism; unlike the others, it can be used for communication between processes on different machines.
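A minimal sketch of the unnamed-pipe mechanism from item 1 above: the parent writes a short message into the pipe and the related child process reads it, showing the one-way, parent-child-only nature of an unnamed pipe. The message text is illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];                       /* fds[0] = read end, fds[1] = write end */
    char buf[64];

    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                /* child: reads from the pipe            */
        close(fds[1]);                /* close unused write end                */
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        close(fds[0]);
        _exit(0);
    }

    /* parent: writes into the pipe (data flows one way only) */
    close(fds[0]);                    /* close unused read end                 */
    const char *msg = "hello through the pipe";
    write(fds[1], msg, strlen(msg));
    close(fds[1]);
    wait(NULL);
    return 0;
}
```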

Deadlock:

A deadlock is a state in which a group of concurrent processes each wait for resources held by the others, and none will release its own resources until it obtains the ones it is waiting for. As a result, every process wants resources it cannot get, and none of them can make any further progress.

Four necessary conditions for deadlock generation:

Mutual exclusion, non-preemption (resources cannot be taken away), request and hold, circular wait (illustrated in the sketch below)
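As a hedged illustration of how "request and hold" plus "circular wait" arise, the sketch below has two threads acquire two mutexes in opposite order; run as written it will usually hang, and acquiring the locks in the same global order in both threads removes the circular wait. This is a generic illustration, not an algorithm from the notes.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);      /* holds A ...                            */
    sleep(1);                         /* ... long enough for thread2 to grab B  */
    pthread_mutex_lock(&lock_b);      /* ... then requests B: may block forever */
    puts("thread1 got both locks");
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_b);      /* holds B ...                            */
    sleep(1);
    pthread_mutex_lock(&lock_a);      /* ... then requests A: circular wait     */
    puts("thread2 got both locks");
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);           /* with the opposite lock order above,    */
    pthread_join(t2, NULL);           /* this program will usually hang here    */
    return 0;
}
```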

To break a deadlock:

Deprivation of resources

Kill process

Processor scheduling mechanism:

Job scheduling: high-level scheduling, or macro scheduling

Swapping scheduling: intermediate (medium-level) scheduling

Process scheduling: low-level scheduling

Thread Scheduling:

Turnaround time: the time from the moment a job is submitted to the moment it completes.

It includes waiting time and execution time.

The functions of process scheduling:
① Use PCBs to record the execution status of all processes in the system
② According to a given scheduling algorithm, select a process in the ready state and assign the processor to it (this is the most important function)
③ Perform the process context switch
Causes of process scheduling:
1. The executing process finishes. If a new ready process is not selected for execution, processor resources are wasted.
2. The executing process calls a blocking primitive on itself and enters a sleeping or waiting state.
3. The executing process performs a P operation and is blocked because resources are insufficient, or performs a V operation that wakes up processes in the queue waiting for the resource.
4. The executing process is blocked after issuing an I/O request.
5. In a time-sharing system, the process's time slice is used up.
6. A system call made from a user program completes and control returns to user mode; the system-level work is done, so a new user process can be selected for execution.
The causes above trigger process scheduling when the CPU is allocated in a non-preemptive manner. When the CPU is allocated preemptively, there is one more cause:
7. A process in the ready queue acquires a priority higher than that of the currently executing process; this also triggers process scheduling.
Preemptive (deprivation) method: as soon as the ready queue contains a process with a higher priority than the current process, a process dispatch occurs immediately and the processor is handed over (a small decision sketch follows).
Non-preemptive (non-deprivation) method: even if the ready queue contains a process with a higher priority than the currently executing process, the current process keeps the processor until it blocks (by calling a primitive or waiting for I/O) or its time slice is exhausted.
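A minimal sketch of the preemptive rule just described, assuming a hypothetical pcb_t structure in which a larger number means higher priority: the dispatcher preempts whenever the best ready process outranks the running one.

```c
/* Hypothetical helper: decide whether the running process should be
 * preempted under priority-based preemptive scheduling.
 * 'pcb_t' and its fields are assumptions for this sketch. */
typedef struct { int pid; int priority; } pcb_t;

int should_preempt(const pcb_t *running, const pcb_t *highest_ready) {
    if (highest_ready == NULL)
        return 0;                                        /* nothing waiting: keep running */
    return highest_ready->priority > running->priority;  /* larger = higher priority      */
}
```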


Process Scheduling Performance Evaluation:

1. Process scheduling performance is an important index for measuring operating system performance.

2. Measurement or simulation of the system can be used to evaluate the performance of a scheduling algorithm.

Scheduling algorithm:

1. FCFS: first come, first served

2. RR: round robin (time-slice rotation)

3. Multilevel feedback queue

4. Priority scheduling

5. Shortest job first

6. Highest response ratio next: response ratio R = (waiting time w + execution time t) / execution time t (see the sketch after this list)
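A small sketch of the response-ratio formula R = (w + t) / t from item 6: for each waiting job, compute the ratio from its waiting time and estimated execution time, then dispatch the job with the highest ratio. The job data is made up for illustration.

```c
#include <stdio.h>

/* Highest Response Ratio Next: R = (waiting time + service time) / service time */
typedef struct { const char *name; double wait; double service; } job_t;

int main(void) {
    job_t jobs[] = {                 /* illustrative data, not from the notes */
        { "J1", 9.0, 3.0 },          /* R = (9+3)/3 = 4.0 */
        { "J2", 2.0, 1.0 },          /* R = (2+1)/1 = 3.0 */
        { "J3", 8.0, 8.0 },          /* R = (8+8)/8 = 2.0 */
    };
    int n = sizeof(jobs) / sizeof(jobs[0]);
    int best = 0;
    double best_r = 0.0;

    for (int i = 0; i < n; i++) {
        double r = (jobs[i].wait + jobs[i].service) / jobs[i].service;
        printf("%s: response ratio = %.2f\n", jobs[i].name, r);
        if (r > best_r) { best_r = r; best = i; }
    }
    printf("dispatch %s next (highest response ratio %.2f)\n", jobs[best].name, best_r);
    return 0;
}
```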

Memory Classification:

Internal memory (main memory), external memory (secondary storage)

Virtual Storage :

A technique that provides the user with a memory whose structure and capacity are not constrained by physical main memory is called virtual memory (virtual storage). Virtual memory relies on a large amount of external memory as its material basis.

Program Address:

The address (logical address) the user works with when programming; the basic unit of the virtual address space can be the same as the basic unit of memory.


Program Address space:

Logical address space, virtual address space: the collection of a user program's addresses is called the logical address space. Its addressing always starts from 0, and it can be a one-dimensional linear space or a multi-dimensional space.


Physical Address:

Divide memory into storage units of equal size and number each unit; this number is called the memory address (also the physical address, absolute address, or real address).
Each storage unit holds 8 bits and is called a byte.


Physical Address space:

The collection of physical addresses is called the physical address space (the main-memory address space); it is a one-dimensional linear space.

Three ways to address mappings:

1. The address mapping is fixed at programming or compile time

2. Static address mapping (static relocation)

3. Dynamic address mapping (dynamic relocation) — a small sketch follows this list
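A minimal sketch of dynamic address mapping with a base (relocation) register and a limit register, as in item 3: the logical address is bounds-checked and then added to the base at access time. The register values and structure name are assumptions for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* Dynamic address mapping: physical = base + logical, with a bounds check. */
typedef struct { unsigned long base; unsigned long limit; } relocation_regs_t;

unsigned long map_address(relocation_regs_t regs, unsigned long logical) {
    if (logical >= regs.limit) {              /* address out-of-bounds protection */
        fprintf(stderr, "address %lu exceeds limit %lu\n", logical, regs.limit);
        exit(EXIT_FAILURE);
    }
    return regs.base + logical;               /* relocation done at access time   */
}

int main(void) {
    relocation_regs_t regs = { 0x40000, 0x1000 };   /* illustrative values */
    printf("logical 0x123 -> physical 0x%lx\n", map_address(regs, 0x123));
    return 0;
}
```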

Memory Management Policy:

1. Allocation structures

2. Placement policy

3. Swap policy

4. Fetch (load-in) policy

5. Reclamation policy

Storage protection Policy:

(1) Upper and lower bounds protection

(2) Protection key

(3) Bounds registers combined with the CPU's user-mode / kernel-mode operation

Partition Storage Management:

1. Fixed partition

2. Dynamic partitioning

Dynamic Partitioning Policy:

First-fit method

Best-fit method

Worst-fit method (a placement sketch follows this list)
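A small sketch of the first-fit and best-fit placement rules over an illustrative free-block table (the block addresses and sizes are made up); worst fit would simply pick the largest block that still fits.

```c
#include <stdio.h>

typedef struct { unsigned long start; unsigned long size; } free_block_t;

/* First fit: return the index of the first free block large enough, or -1. */
int first_fit(const free_block_t *blocks, int n, unsigned long request) {
    for (int i = 0; i < n; i++)
        if (blocks[i].size >= request)
            return i;
    return -1;
}

/* Best fit: return the index of the smallest free block that still fits, or -1. */
int best_fit(const free_block_t *blocks, int n, unsigned long request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (blocks[i].size >= request &&
            (best == -1 || blocks[i].size < blocks[best].size))
            best = i;
    return best;
}

int main(void) {
    free_block_t blocks[] = { {0, 100}, {300, 60}, {500, 200} };  /* illustrative */
    unsigned long request = 50;
    printf("first fit -> block %d\n", first_fit(blocks, 3, request));  /* 0 */
    printf("best fit  -> block %d\n", best_fit(blocks, 3, request));   /* 1 */
    return 0;
}
```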

Advantages and disadvantages of partitioned storage management
Advantages:
1. It lets multiple jobs or processes share memory, which facilitates multiprogramming and thus improves the system's resource utilization.
2. It requires little hardware support, and the management algorithms are simple and easy to implement.
Disadvantages:
1. Memory utilization is still not high.
2. The size of a job or process is limited by the size of the partition, unless overlay and swapping techniques are also used.
3. Information sharing between partitions cannot be realized.

Coverage technology:

Requires the programmer to provide an explicit overlay structure; that is, the programmer must divide the program into sections and specify the order in which they execute and overwrite one another. The operating system then performs the overlaying of program segments according to the overlay structure the programmer provides.


Switching technology:

Swapping is a memory-expansion technique that writes part of the programs or data in memory out to a swap area on external memory, and later swaps specified programs or data from the swap area back into memory so they can execute.


Unlike overlaying, swapping does not require the programmer to supply an overlay structure for the sections of a program. Swapping takes place mainly between processes or jobs, whereas overlaying takes place mainly within a single job or process, and a segment can only overwrite segments that are unrelated to it.
The swap process consists of two operations: swap-out and swap-in.

How free storage is recorded:

Bitmap method and Linked list method

Dynamic page management is divided into:

Demand paging and pre-paging.

Jitter Phenomenon:

If the replacement algorithm is chosen poorly, a page that was just evicted from memory may immediately need to be brought back in, and a page that was just brought in may immediately be evicted again, over and over. Page scheduling then becomes so frequent that most of the time is spent moving pages back and forth between main memory and secondary storage; this is the thrashing (jitter) phenomenon.

Commonly used page replacement algorithms (they decide which resident page to evict when a page must be brought in from external memory):

Random replacement

Clock (rotation) method

First-in, first-out (FIFO)

Not recently used (NRU)

Least recently used (LRU)

Belady phenomenon:

For a job or process, one would expect that the more page frames it is allocated (the closer to the number it actually needs), the fewer page faults it incurs. With the FIFO algorithm, however, allocating more page frames can sometimes produce more page faults. This counterintuitive behavior is the Belady anomaly.
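To see the Belady anomaly concretely, the sketch below counts FIFO page faults for the classic reference string 1 2 3 4 1 2 5 1 2 3 4 5: with 3 frames FIFO incurs 9 faults, but with 4 frames it incurs 10. The simulation is a generic illustration, not code from the notes.

```c
#include <stdio.h>

/* Count page faults for FIFO replacement with a given number of frames. */
static int fifo_faults(const int *refs, int n, int frames) {
    int frame[16];                  /* resident pages (frames <= 16 here)      */
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < frames) {
            frame[used++] = refs[i];         /* a free frame is available      */
        } else {
            frame[next] = refs[i];           /* evict the oldest resident page */
            next = (next + 1) % frames;
        }
    }
    return faults;
}

int main(void) {
    /* Classic reference string that exhibits the Belady anomaly under FIFO. */
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof(refs) / sizeof(refs[0]);

    printf("FIFO, 3 frames: %d page faults\n", fifo_faults(refs, n, 3)); /* 9  */
    printf("FIFO, 4 frames: %d page faults\n", fifo_faults(refs, n, 4)); /* 10 */
    return 0;
}
```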

Page management can provide two ways of protecting memory:

1. Address out-of-bounds protection

2. Protection of the access modes to memory information through control bits in the page table.

Advantages and disadvantages of page-style management

Advantages:
1. Because a job's or process's program segments and data need not be stored contiguously in memory, paging effectively solves the fragmentation problem.
2. Dynamic (demand) paging provides a virtual-memory implementation that manages internal and external memory in a unified way, greatly increasing the memory space available to users. This not only improves main-memory utilization but also facilitates organizing multiprogrammed execution.
Disadvantages:
1. It requires corresponding hardware support: for example, the address-translation mechanism, page-fault generation, and victim-page selection all need hardware support, which raises the cost of the machine.
2. It increases system overhead, for example page-fault handling.
3. With demand paging, a poorly chosen replacement algorithm can cause thrashing.
4. Although external fragmentation is eliminated, part of the last page of each job or process remains unused.

Segment vs. page-style comparison:

Protection of segments:

1. Address out-of-bounds protection

2. Access-mode control protection

Advantages and disadvantages of segment management
Advantages
Provides a virtual-memory implementation with unified management of internal and external storage.
The segment can grow dynamically as needed.
Facilitates sharing of information segments with full logic functionality.
Easy to implement dynamic linking.
Disadvantages
Additional hardware support is required.
Handling fragmentation is tricky.
It adds some difficulty and overhead to system management.
The length of each segment is limited by the size of the available memory area.
Choosing an unsuitable eviction algorithm may cause thrashing.

Combined segment-and-page (segmented paging) memory management combines the benefits of segmentation and paging.
