Operating System Basics


1 Operating System Overview

Operating system: the collection of programs that controls and manages the hardware and software resources of the entire computer system, reasonably organizes and schedules the computer's work and the allocation of resources, and provides users and other software with a convenient interface and environment.

Features of the operating system: concurrency, sharing, virtuality, asynchrony.

Concurrency: Two or more events occur within the same time interval.

Sharing: resource sharing. There are two ways to share: mutually exclusive sharing (only one process is allowed to access the resource during a period of time; another process may access it only after the current process finishes and releases it; such a resource is called a critical resource or exclusive resource) and simultaneous access (multiple processes are allowed to access the resource "simultaneously" over a period of time, in the macroscopic sense).

Virtuality: a physical entity is turned into a number of logical counterparts. For example, a single physical processor can appear as several virtual processors (on a single core this is achieved by time slicing).

Asynchrony: in a multiprogramming environment processes are allowed to execute concurrently, but because resources are limited, each process advances at an unpredictable speed.

Goals and functions of the operating system: it acts as the manager of computer system resources and as the interface to the user (command interface, program interface).

Development and classification of operating systems: the manual operation stage (resources were used exclusively, and loading and running a program required manual intervention), the offline input/output stage, the batch processing stage (single-job batch processing, multiprogrammed batch processing), time-sharing operating systems, real-time operating systems, network operating systems, distributed operating systems, and personal computer operating systems.

Kernel: the lowest-level software configured on the computer, including the modules most tightly tied to the hardware (clock management, interrupt handling, and so on) and the most frequently run programs (process management, device management).

Primitive: a small program that performs a low-level operation, is close to the hardware, is atomic (its execution cannot be interrupted), and is short and frequently invoked.

User mode and kernel mode: instructions executed by user programs are user-mode instructions; instructions executed by kernel programs are kernel-mode instructions. The two terms also describe the working state of the CPU.

2 Process Management

1 Process and Thread Concepts

Process: a process is the running activity of a process entity; it is the independent unit the system uses for resource allocation and scheduling. Creating a process essentially means creating the PCB of the process image, and removing a process essentially means removing its PCB. The process image is static; the process is dynamic.

PCB: process control block. For a program (including its data) that takes part in concurrent execution to run independently, it must be given a dedicated data structure, called the PCB. The system uses the PCB to describe the basic condition and running state of the process.

Process image: also known as the process entity; it consists of the program segment, the related data segment, and the PCB.

Characteristics of a process:

Dynamic: a process is an execution of a program; it goes through creation, activity, pause, and termination.

Concurrency: multiple process entities reside in memory at the same time and can run concurrently over a period of time.

Independence: the process entity is a basic unit that runs independently, obtains resources independently, and is scheduled independently.

Asynchrony: processes constrain one another, so execution is intermittent; each process advances at its own unpredictable speed.

Structure: each process is given a PCB. Structurally, a process entity consists of a program segment, a data segment, and a PCB.

States of a process: running, ready (the process has obtained all resources except the processor), blocked (waiting), created, and terminated.

Process Control :

Creating a process: (1) assign a unique process identification number and request a blank PCB; (2) allocate resources, i.e., allocate memory for the new process's program, data, and user stack; (3) initialize the PCB; (4) if the ready queue can accommodate the new process, insert it into the ready queue to wait to be scheduled.

Terminating a process (normal end, abnormal end, or external intervention can terminate a process): (1) find the PCB from the process identification number and read out the process state; (2) if the terminated process is currently executing, stop it immediately and give the processor to another process; (3) if the process has child processes, all of its child processes should also be terminated; (4) return all of the process's resources to its parent process or to the system; (5) remove the PCB.

Process blocking and wakeup (blocking is an active action of the process itself, so only a running process, one that holds the CPU, can put itself into the blocked state):

Blocking: (1) find the PCB corresponding to the process identification number; (2) if the process is running, save its context, change its state to blocked, and stop running; (3) insert the PCB into the waiting queue of the corresponding event.

Wakeup: (1) find the PCB of the corresponding process in the waiting queue of the event; (2) remove it from the waiting queue and set its state to ready; (3) insert the PCB into the ready queue, where it waits for the scheduler to dispatch it.

Process switching: the processor changes from running one process to running another. (1) Save the processor context, including the program counter and other registers; (2) update the PCB; (3) move the PCB into the appropriate queue; (4) select another process to execute and update its PCB; (5) update the memory-management data structures; (6) restore the processor context.

Contents of the PCB: process description information, process control and management information, the resource allocation list, and processor-related information.
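
As an illustration only, a PCB can be sketched as a C struct along the following lines; the field names and types below are assumptions made for the sketch, not any real kernel's definitions.

    /* A simplified, hypothetical PCB layout for illustration only. */
    #include <stdio.h>

    typedef enum { CREATED, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

    struct pcb {
        int            pid;           /* process description: unique identifier     */
        proc_state_t   state;         /* control and management: current state      */
        int            priority;      /* control and management: scheduling info    */
        void          *memory_map;    /* resource allocation: memory regions        */
        void          *open_files;    /* resource allocation: open files            */
        unsigned long  pc;            /* processor-related: saved program counter   */
        unsigned long  regs[16];      /* processor-related: saved general registers */
        struct pcb    *next;          /* link field used by ready/waiting queues    */
    };

    int main(void) {
        struct pcb p = { .pid = 1, .state = READY, .priority = 5 };
        printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
        return 0;
    }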

Process communication: the exchange of information between processes. There are three advanced communication methods: shared memory, message passing, and pipe communication.
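
A minimal sketch of pipe communication (POSIX C, assuming a Unix-like system): the parent writes a message and the child reads it; the message text is made up for the example.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                        /* fd[0] = read end, fd[1] = write end */
        char buf[64];

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                /* child: reads from the pipe */
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            close(fd[0]);
            return 0;
        }
        /* parent: writes into the pipe */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }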

Threads: intuitively, a thread is a lightweight process, the basic unit of CPU execution, and the smallest unit of the program execution flow. A thread is an entity within a process and is the basic unit of scheduling and dispatching by the system. A thread does not own system resources; it holds only the few resources needed to run, but it can share all the other resources owned by its process with the other threads of that same process. Once threads are introduced, the meaning of "process" changes: the process is the allocation unit for all system resources except the CPU, and the thread is the allocation unit of the processor.
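
A minimal sketch of threads sharing their process's resources, using POSIX threads (an assumed API, since the text names no library): each thread has its own stack and registers, but both read the same process-wide variable.

    #include <stdio.h>
    #include <pthread.h>

    static const char *shared_name = "process-wide data";   /* visible to every thread */

    static void *worker(void *arg) {
        /* each thread has its own stack, but shares the process's address
           space, open files, and other resources */
        printf("thread %ld sees: %s\n", (long)arg, shared_name);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }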

Comparison of threads and processes:

Scheduling: in a traditional operating system, the process is the basic unit that both owns resources and is independently scheduled; in an operating system with threads, the thread is the basic unit of independent scheduling. Switching between threads of the same process does not cause a process switch; switching to a thread in a different process does cause a process switch.

Owning resources: whether in a traditional operating system or in one that introduces threads, the process is the basic unit of resource ownership. However, threads can access the system resources of the process they belong to.

Concurrency: in an operating system with threads, not only can processes execute concurrently, but multiple threads within the same process can also execute concurrently.

System overhead: the overhead of creating, destroying, and switching processes is much larger than that of threads. Synchronization and communication between threads are also much easier to implement.

2 Processor Scheduling

Scheduling: allocating the processor, that is, selecting a process from the ready queue according to some algorithm (fair and efficient) and assigning the processor to it to run, thereby achieving concurrent execution of processes. It is a core issue in operating system design.

Scheduling levels: a job usually goes through three levels of scheduling from start to finish: job scheduling (also called high-level scheduling), intermediate scheduling (also called memory scheduling), and process scheduling (also called low-level scheduling).

Process scheduling methods: when a process is executing on the processor and a more important or urgent process needs to be handled (that is, a higher-priority process enters the ready queue), how should the processor be allocated? There are two methods: non-preemptive (non-deprivation) and preemptive (deprivation).

Non-preemptive (non-deprivation) scheduling: once the CPU is assigned to a process, the process keeps the CPU until it finishes execution or transitions to the waiting state.

Preemptive (deprivation) scheduling: if a more important or urgent process needs the CPU, the currently executing process is stopped immediately and the processor is assigned to the more important or urgent process.
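
A toy simulation of the non-preemptive case, for contrast (the process names, burst times, and priorities are made-up values): the highest-priority ready process is chosen and then runs to completion before the next choice is made.

    #include <stdio.h>

    /* toy non-preemptive priority scheduler: at each step, pick the
       highest-priority remaining process and let it run to completion */
    struct proc { const char *name; int burst; int priority; int done; };

    int main(void) {
        struct proc p[] = { {"P1", 5, 2, 0}, {"P2", 3, 1, 0}, {"P3", 8, 3, 0} };
        int n = 3, finished = 0, clock = 0;

        while (finished < n) {
            int best = -1;
            for (int i = 0; i < n; i++)             /* smaller number = higher priority */
                if (!p[i].done && (best < 0 || p[i].priority < p[best].priority))
                    best = i;
            printf("t=%2d: run %s for %d units\n", clock, p[best].name, p[best].burst);
            clock += p[best].burst;                 /* runs to completion: non-preemptive */
            p[best].done = 1;
            finished++;
        }
        printf("all processes finished at t=%d\n", clock);
        return 0;
    }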

3 Process Synchronization

Critical resource: a resource that only one process is allowed to use at a time. In each process, the code that accesses the critical resource is called the critical section. Access to a critical resource is divided into four parts (a mutex-based sketch follows this list):

Entry section: check whether the critical section may be entered; if so, set the flag indicating that the critical section is being accessed, to prevent other processes from entering the critical section at the same time.

Critical section: The code that accesses critical resources in a process is also called a critical segment.

Exit section: clears the flag that indicates the critical section is being accessed.

Remainder section: the rest of the code.
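
The four sections above map naturally onto a lock-based sketch; the one below uses a POSIX mutex (an assumed mechanism, since the text fixes none) and a made-up shared counter as the critical resource.

    #include <stdio.h>
    #include <pthread.h>

    static int shared = 0;                               /* the critical resource */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void *task(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&m);      /* entry section: wait, then mark it busy      */
            shared++;                    /* critical section: access the resource       */
            pthread_mutex_unlock(&m);    /* exit section: clear the "being accessed" flag */
            /* remainder section: everything else the process does */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, task, NULL);
        pthread_create(&b, NULL, task, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %d\n", shared);                 /* 200000: no update is lost */
        return 0;
    }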

Synchronization: also called the direct constraint relationship. Two or more processes cooperating to complete a task must coordinate their order of work at certain points, waiting for each other and passing information; the constraint relationship that arises from this is synchronization.

Mutual exclusion: also called the indirect constraint relationship. When one process enters the critical section to use a critical resource, another process that wants the same resource must wait; only after the process occupying the critical resource exits the critical section can the other process access it.

Primitive: a sequence of operations that completes a function and cannot be interrupted during execution.

Semaphore: used to solve mutual exclusion and synchronization problems; it can only be accessed through the standard primitives wait (P) and signal (V). There are integer semaphores and record-type semaphores. Semaphores can be used to implement synchronization between processes, mutual exclusion between processes, and precedence (precursor) relationships.
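
As one concrete use of a semaphore for synchronization, the sketch below (POSIX unnamed semaphores, assuming Linux) makes the consumer thread wait until the producer has posted; the variable names and value are made up.

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    static sem_t ready;          /* counting semaphore, initial value 0 */
    static int data = 0;

    static void *producer(void *arg) {
        data = 42;               /* produce the value first             */
        sem_post(&ready);        /* V (signal): mark the data as ready  */
        return NULL;
    }

    static void *consumer(void *arg) {
        sem_wait(&ready);        /* P (wait): blocks until the producer posts */
        printf("consumed %d\n", data);
        return NULL;
    }

    int main(void) {
        sem_init(&ready, 0, 0);  /* shared between threads, initial count 0 */
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&ready);
        return 0;
    }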

Monitor: a software module consisting of a set of shared data and the operations defined on that data; these operations initialize and change the data and synchronize the processes that use it.

4 Deadlock

Deadlock: a standoff in which multiple processes wait for each other because they compete for resources; without external intervention, none of them can proceed.

Necessary conditions for deadlock: the mutual exclusion condition, the non-preemption (non-deprivation) condition, the hold-and-wait (request-and-hold) condition (a process holds at least one resource while requesting a resource held by another process; the requesting process is blocked but keeps the resources it already holds), and the circular wait condition (there is a circular chain of waiting processes, in which the resources each process holds are requested by the next process in the chain).

Banker's algorithm: description of the data structures, description of the banker's algorithm, and the safety algorithm.
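
A compact sketch of the safety-algorithm part only; the Allocation, Need, and Available data are made-up example values, not anything taken from the text.

    #include <stdio.h>
    #include <stdbool.h>

    #define P 5   /* number of processes      */
    #define R 3   /* number of resource types */

    /* made-up example state: what each process holds, what it may still need,
       and what the system currently has available */
    int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
    int need[P][R]  = { {7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1} };
    int avail[R]    = { 3, 3, 2 };

    int main(void) {
        int  work[R];
        bool finish[P] = { false };
        int  done = 0;
        for (int j = 0; j < R; j++) work[j] = avail[j];

        while (done < P) {
            bool progressed = false;
            for (int i = 0; i < P; i++) {
                if (finish[i]) continue;
                bool can_run = true;                     /* need[i] <= work ?         */
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {                           /* pretend P_i finishes and  */
                    for (int j = 0; j < R; j++)          /* releases what it holds    */
                        work[j] += alloc[i][j];
                    finish[i] = true;
                    printf("P%d can run to completion\n", i);
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) { printf("state is UNSAFE\n"); return 1; }
        }
        printf("state is SAFE\n");
        return 0;
    }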

3 Memory Management

1 Concept of memory management

Memory management: the operating system's partitioning and dynamic allocation of memory.

Three ways of linking: static linking, dynamic linking at load time, and dynamic linking at run time.

Three ways of loading into memory: absolute loading, relocatable loading (static relocation), and dynamic run-time loading (dynamic relocation).

Swapping and overlaying: swapping is performed mainly between different processes, while overlaying is used within the same program or process.

Contiguous memory allocation: user processes are stored contiguously in main memory. Methods: single contiguous allocation (memory is divided into a system area and a user area), fixed partition allocation, and dynamic partition allocation.

Non-contiguous memory allocation: allows a program to be loaded into non-adjacent memory partitions in a scattered manner. Depending on whether the partition size is fixed, it is divided into paging storage management and segmentation storage management.

Basic paging concepts: the blocks of a process are called pages, the blocks of memory are called page frames, and blocks of external storage are simply called blocks. When a process executes it requests main-memory space, and each page is allocated a page frame in main memory, which produces a one-to-one correspondence between pages and page frames.

Conversion of logical address A to physical address E (under paging management): let the page size be L (a C sketch follows the four steps).

1 Compute the page number P = A / L and the page offset W = A % L;

2 Compare the page number P with the page-table length M; if P >= M, an out-of-bounds interrupt is raised, otherwise continue;

3 Compute the page-table entry address as the page-table start address F plus P times the page-table entry size; the content of that entry is the physical block number B;

4 Compute the physical address E = B * L + W.
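
A small sketch of the four steps above in C; the page size, page-table contents, and the example address are made-up values.

    #include <stdio.h>

    #define PAGE_SIZE 1024                     /* L: bytes per page (example value) */

    /* made-up page table: index = page number P, stored value = block number B */
    int page_table[]   = { 5, 9, 7, 2 };
    int page_table_len = 4;                    /* M */

    /* returns the physical address E for logical address A, or -1 on a fault */
    long translate(long A) {
        long P = A / PAGE_SIZE;                /* step 1: page number                */
        long W = A % PAGE_SIZE;                /* step 1: offset within the page     */
        if (P >= page_table_len) return -1;    /* step 2: out-of-bounds check        */
        long B = page_table[P];                /* step 3: entry holds block number B */
        return B * PAGE_SIZE + W;              /* step 4: E = B * L + W              */
    }

    int main(void) {
        long A = 2100;                         /* page 2, offset 52 */
        printf("logical %ld -> physical %ld\n", A, translate(A));   /* 2100 -> 7220 */
        return 0;
    }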

Basic segmentation management: the logical address space is divided according to the natural segments in the user process. Each segment is numbered starting from 0; addresses within a segment must be contiguous, but the segments themselves need not be contiguous with one another.

2 Virtual Memory Management

Locality principle: temporal locality (a location that has just been accessed is likely to be accessed again soon) and spatial locality (storage locations near a recently accessed location are likely to be accessed soon).

Characteristics of virtual memory: multiplicity (a job may be loaded into memory in several parts over its run), swappability (parts of a job may be swapped in and out while it runs), and virtuality (memory capacity is logically expanded).

Ways of implementing virtual memory: demand paging storage management, demand segmentation storage management, and demand segment-paging storage management.

Demand paging management: the page-table mechanism, the page-fault interrupt mechanism, and the address translation mechanism.

Page replacement algorithms: OPT, FIFO, LRU, CLOCK.
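
As an illustration, a toy FIFO simulation over a made-up reference string and three frames; OPT, LRU, and CLOCK differ only in how the victim frame is chosen.

    #include <stdio.h>

    int main(void) {
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 };   /* made-up reference string */
        int n = sizeof(refs) / sizeof(refs[0]);
        int frames[3] = { -1, -1, -1 };                  /* 3 empty page frames      */
        int next = 0, faults = 0;                        /* next = FIFO victim index */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < 3; f++)
                if (frames[f] == refs[i]) hit = 1;
            if (!hit) {                                  /* page fault: evict oldest */
                frames[next] = refs[i];
                next = (next + 1) % 3;
                faults++;
            }
            printf("ref %d -> %s\n", refs[i], hit ? "hit" : "fault");
        }
        printf("total page faults: %d\n", faults);
        return 0;
    }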

4 File Management

The information for all files is stored in the directory structure.

The operating system maintains a system-wide open-file table.

Each process maintains its own per-process open-file table.

The logical structure of a file: the organization of the file as seen from the user's point of view; files are divided into unstructured files (stream files) and structured files (record-based files).

The physical structure of a file: seen from the implementation point of view, also called the storage structure of the file; it refers to how the file is organized on external storage.

FCB: file control block, a data structure used to hold the various information needed to control a file. An ordered collection of FCBs is called a file directory, and one FCB is one directory entry. In the scheme the text describes, each directory entry consists only of a file name and a pointer to the index node corresponding to the file.

File sharing: (1) sharing based on index nodes (hard links); (2) sharing using symbolic links.

File system hierarchy: user interface, file directory system, access control module, logical file system and file buffer, physical file system, allocation module, and device management program module.

Directory implementation: (1) linear list; (2) hash table.

File implementation: (1) file allocation methods (how disk blocks are allocated to files; there are three ways: contiguous allocation, linked allocation, and indexed allocation); (2) file storage-space management (dividing and initializing the file storage space, and managing the free blocks; there are four methods: the free-table method, the free linked-list method, the bitmap method, and the grouped-linking method).
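
A minimal sketch of the bitmap method for free-block management (the disk size and block numbers are made-up values): bit i is 1 when block i is allocated and 0 when it is free.

    #include <stdio.h>
    #include <stdint.h>

    #define NBLOCKS 64
    static uint8_t bitmap[NBLOCKS / 8];          /* one bit per disk block, 0 = free */

    static void set_bit(int b)   { bitmap[b / 8] |=  (uint8_t)(1u << (b % 8)); }
    static void clear_bit(int b) { bitmap[b / 8] &= (uint8_t)~(1u << (b % 8)); }
    static int  test_bit(int b)  { return (bitmap[b / 8] >> (b % 8)) & 1; }

    /* find a free block, mark it allocated, and return its number (-1 if full) */
    static int alloc_block(void) {
        for (int b = 0; b < NBLOCKS; b++)
            if (!test_bit(b)) { set_bit(b); return b; }
        return -1;
    }

    int main(void) {
        int a = alloc_block();
        int b = alloc_block();
        printf("allocated blocks %d and %d\n", a, b);    /* 0 and 1 on a fresh bitmap  */
        clear_bit(a);                                    /* free the first block again */
        printf("block %d free again: %s\n", a, test_bit(a) ? "no" : "yes");
        return 0;
    }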

Disk read/write time: seek time (the time for the head to move to the target track), rotational latency (the time for the target sector of the track to rotate under the head), and transfer time (the time to actually read or write the data).

Disk scheduling algorithms (these determine the seek time): first-come-first-served (FCFS), shortest seek time first (SSTF), the scan algorithm (SCAN), and the circular scan algorithm (C-SCAN).
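
A toy SSTF simulation (the request queue and starting head position are made-up values): at each step the pending request closest to the current head position is served next.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 };   /* made-up request queue  */
        int n = sizeof(req) / sizeof(req[0]);
        int served[8] = { 0 };
        int head = 53, total = 0;                            /* made-up start position */

        for (int k = 0; k < n; k++) {
            int best = -1;
            for (int i = 0; i < n; i++)                      /* pick the closest pending track */
                if (!served[i] && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                    best = i;
            total += abs(req[best] - head);
            head = req[best];
            served[best] = 1;
            printf("serve track %d (head movement so far: %d)\n", head, total);
        }
        printf("total head movement: %d\n", total);
        return 0;
    }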

Bootloader: the initialization program that must run when the computer starts.

Boot control block: Includes information required by the system to boot the operating system from this partition.

Partition control block: Contains partition details.

5 Input and Output Management

I/O control modes: programmed (direct program) control, the interrupt-driven mode, the DMA mode (direct memory access; the entire block transfer is done under the control of the DMA controller), and the channel control mode (a channel is a processor dedicated to input and output).

I/O scheduling: determining a good order in which to execute I/O requests.

Disk cache: unlike the small high-speed cache that sits between the CPU and memory in the usual sense, the disk cache uses part of the memory space to temporarily hold a series of block information read from the disk.

Buffering techniques: four kinds: single buffer, double buffer, buffer pool, and circular buffer.
