Basic computer and network knowledge


Process: one execution of a program. The operating system describes and manages each process through its process control block (PCB).

Difference between a program and a process: a program is a static sequence of instructions; a process is a dynamic execution of a program together with the set of resources allocated for that execution.

 

Process state transitions and control:

  • Basic process states:

    Three-state model: ready, running, and blocked. Five-state model: compared with the three-state model, it introduces the new and terminated states; some models further add suspended (pending) states and the transitions between them.

Process control is generally completed by the operating system.

 

Process mutual exclusion, synchronization, and P/V operations:

  • Inter-process synchronization: cooperating processes in an asynchronous environment must proceed in a certain order.
  • Mutual exclusion: competition among processes for shared resources.
  • Critical resources: resources that can be used by only one process at a time.
  • Critical section management: the critical section is the code that operates on a critical resource. Principles: enter freely when the section is idle, wait when it is busy, bounded waiting, and yield the processor while waiting.
  • Semaphore mechanism (proposed by the Dutch computer scientist Dijkstra)
  • Integer semaphores and PV operations:

    Semaphore: an integer variable that is given different values depending on the object it controls.

    • Public semaphores: used for mutual exclusion between processes. The initial value is 1 or the number of resources.
    • Private semaphores: used for process synchronization. The initial value is 0 or a positive integer.

    Physical meaning of a semaphore S: S >= 0 means S is the number of available resources; S < 0 means its absolute value is the number of processes waiting for the resource in the blocked queue.

    PV operations are the common method for achieving synchronization and mutual exclusion between processes; they are the primitives of low-level communication. P means requesting (applying for) a resource, and V means releasing a resource. The producer/consumer sketch after this list shows these operations implemented with POSIX semaphores.

    P operation definition: P(S): S = S - 1; if S < 0, the calling process is blocked and placed on the waiting queue of S.

    V operation definition: V(S): S = S + 1; if S <= 0, one process is woken from the waiting queue of S.

  • Mutual exclusion between processes using PV operations:

    Set the initial value of the mutex semaphore to 1; each process then executes P(mutex) before entering its critical section and V(mutex) after leaving it.

  • Process synchronization using PV:

    Process synchronization is the mutual constraint that arises from cooperation between processes. To synchronize processes, a semaphore can be associated with a message: when the semaphore is zero, the expected message has not yet been produced; when it is non-zero, the expected message already exists. If a semaphore S represents a message, a process can use the P operation to test whether the message has arrived and the V operation to announce that the message is ready.

    Example: producer and consumer

    The semaphore Bufempty indicates the number of empty slots in the buffer, and Buffull indicates the number of filled slots; their initial values are 1 and 0 respectively (a single-slot buffer), as sketched below.
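
A minimal C sketch of the producer/consumer example above, using POSIX semaphores: sem_wait plays the role of the P operation and sem_post the role of the V operation, and the single-slot buffer with Bufempty = 1 and Buffull = 0 follows the description in the text. The item values and the shared variable name are made up for illustration; with a multi-slot buffer and several producers or consumers, a mutex semaphore initialized to 1 would additionally bracket the buffer access as P(mutex) ... V(mutex).

    /* Producer/consumer sketch with POSIX semaphores.
     * sem_wait() acts as the P operation, sem_post() as the V operation.
     * Single-slot buffer: bufempty starts at 1, buffull starts at 0. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t bufempty;          /* number of empty buffer slots  */
    static sem_t buffull;           /* number of filled buffer slots */
    static int buffer;              /* the shared single-slot buffer */

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= 5; i++) {
            sem_wait(&bufempty);    /* P(bufempty): wait for an empty slot */
            buffer = i;             /* put an item into the buffer         */
            sem_post(&buffull);     /* V(buffull): signal a filled slot    */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        for (int i = 1; i <= 5; i++) {
            sem_wait(&buffull);     /* P(buffull): wait for a filled slot  */
            printf("consumed %d\n", buffer);
            sem_post(&bufempty);    /* V(bufempty): signal an empty slot   */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&bufempty, 0, 1);  /* initial value 1: buffer starts empty  */
        sem_init(&buffull, 0, 0);   /* initial value 0: nothing produced yet */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }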

     

Process management: inter-process communication:

  • Process Communication:

    Process communication is the exchange of information between processes and covers both control information and data. The exchange of control information is called low-level communication; process synchronization and mutual exclusion via semaphores belong to it. The exchange of data is called high-level (advanced) communication and is implemented mainly through shared memory, message-passing systems, and pipes. High-level communication can be direct or indirect.

  • Monitors

    Brinch Hansen and Hoare proposed another synchronization mechanism: the monitor.

    A monitor consists of shared data, a set of operations on that data that a group of concurrent processes may invoke, and initialization code. (The shared data together with the operations on it constitutes the monitor.) A process may call a monitor procedure whenever necessary, but at any time only one process can be active inside the monitor; the others must wait. A monitor thus provides a mechanism that lets multiple processes share an abstract data type safely and efficiently.

    Each monitor has a name; its general structure is sketched below.
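
C has no built-in monitor construct, so the following is only a minimal sketch that emulates the structure described above with a pthread mutex: shared data, the operations on it, and the initialization code are grouped together, and the mutex guarantees that at most one thread is active inside the "monitor" at a time. The counter example itself is invented for illustration; a fuller emulation would also use condition variables (pthread_cond_t) for waiting inside the monitor.

    /* Sketch of a monitor emulated in C: shared data + operations + init,
     * with a mutex ensuring only one thread is inside at any time. */
    #include <pthread.h>

    /* --- monitor "counter" --------------------------------------- */
    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static int count;                        /* shared data          */

    void counter_init(void)                  /* initialization code  */
    {
        count = 0;
    }

    void counter_increment(void)             /* operation on the data */
    {
        pthread_mutex_lock(&monitor_lock);   /* enter the monitor     */
        count = count + 1;
        pthread_mutex_unlock(&monitor_lock); /* leave the monitor     */
    }

    int counter_value(void)                  /* operation on the data */
    {
        pthread_mutex_lock(&monitor_lock);
        int v = count;
        pthread_mutex_unlock(&monitor_lock);
        return v;
    }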

Process Scheduling and deadlock:

  • Process scheduling: processor scheduling. Its main function is to determine when the processor is allocated and to which process. (In general, from submission to completion a job goes through high-level, medium-level, and low-level scheduling.)
  • Scheduling Methods and algorithms:
    • Scheduling method: how the CPU is allocated when a process with a higher priority arrives. There are two kinds of scheduling methods: preemptive (deprivation) and non-preemptive (non-deprivation).
    • Scheduling algorithms commonly used: first-come first-served, round-robin (time slice), priority scheduling, and multi-level feedback queue scheduling.
  • Deadlock: two or more processes wait indefinitely for resources that are already held by the others.
  • Causes and necessary conditions of deadlock:

    Causes: competition for resources and an improper order of process advancement.

    Necessary conditions: mutual exclusion, hold and wait (request and hold), no preemption, and circular wait.

    Strategies: the ostrich strategy (ignore the problem);

    Prevention (statically break one of the four necessary conditions);

    Avoidance (allocate resources carefully and avoid deadlock dynamically);

    Detection and recovery (when deadlock occurs, the system detects it and then breaks it). A lock-ordering sketch follows this list.
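
As an illustration of the prevention strategy, the pthread sketch below breaks the circular-wait condition by making every thread acquire the two locks in the same fixed order; the resource names and thread bodies are invented for the example.

    /* Deadlock prevention sketch: break the circular-wait condition by
     * always acquiring locks in one fixed global order (A before B). */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t resource_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t resource_b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        /* If one thread locked B first while another locked A first, each
         * could wait forever for the other (circular wait). A fixed
         * acquisition order makes such a cycle impossible. */
        pthread_mutex_lock(&resource_a);
        pthread_mutex_lock(&resource_b);
        printf("%s holds both resources\n", (const char *)arg);
        pthread_mutex_unlock(&resource_b);
        pthread_mutex_unlock(&resource_a);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "thread 1");
        pthread_create(&t2, NULL, worker, "thread 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }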

Thread:

A thread is an entity within a process and the basic unit that the system schedules and dispatches independently. In an operating system that supports threads, a process usually contains several threads. A thread owns only the resources essential for its own execution and shares all other resources of the process with the other threads of the same process. Threads have many of the characteristics of traditional processes and are therefore called lightweight processes, while traditional processes are called heavyweight processes.
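
A minimal pthread sketch of the point above: the two threads below belong to the same process and therefore see the same global variable. The variable name and the increment work are arbitrary.

    /* Two threads of one process sharing the same global data. */
    #include <pthread.h>
    #include <stdio.h>

    static int shared = 0;                                 /* owned by the process   */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* shared by its threads  */

    static void *add_one(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&m);
        shared++;                    /* both threads update the same variable */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, add_one, NULL);
        pthread_create(&t2, NULL, add_one, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);   /* prints 2 */
        return 0;
    }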

Storage Management

Memory is a key resource of a computer and the place where running programs and data reside. The main tasks of memory management are to improve the utilization of main memory, to expand main memory, and to protect the information in main memory effectively. (The object of storage management is primary memory, i.e. main memory.)

Concept of Storage Management:

  • Logical address: after a user program is compiled, the target program is addressed starting from 0 as the base address. It is also called the relative address.
  • Physical address: the address of a storage unit in main memory, also known as the absolute address.
  • Storage space:
    • Address Space: a set of logical addresses.
    • Storage space: a set of physical addresses

Address relocation:

The process of converting a logical address to a physical address.

  • Static relocation: addresses are bound when the program is compiled or loaded into memory (relocation at load time).
  • Dynamic relocation: addresses are translated while the program runs (relocation at run time); see the sketch below.
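
A minimal sketch of dynamic relocation under contiguous allocation, assuming a base/limit register pair; the register values are hypothetical. At run time the hardware adds the base register to every logical address and traps if the address exceeds the limit, which is also the basis of the storage protection discussed below.

    /* Dynamic relocation sketch: physical = base + logical,
     * with a limit check for storage protection. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned base_reg  = 0x4000;  /* hypothetical load address of the program  */
    static unsigned limit_reg = 0x1000;  /* hypothetical size of the program's region */

    unsigned relocate(unsigned logical)
    {
        if (logical >= limit_reg) {          /* out-of-bounds address: trap */
            fprintf(stderr, "address %#x out of bounds\n", logical);
            exit(1);
        }
        return base_reg + logical;           /* done by hardware at run time */
    }

    int main(void)
    {
        printf("logical %#x -> physical %#x\n", 0x0123u, relocate(0x0123u));
        return 0;
    }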

Storage management functions: allocation and reclamation of main memory (to improve its utilization), storage protection, and expansion of main memory.

Storage management methods: partitioned storage, paged storage, segmented storage, segment-paged storage, and virtual storage.
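
To make paged storage concrete, here is a minimal sketch of how a logical address is split into a page number and an offset and translated through a page table; the page size and page-table contents are hypothetical.

    /* Paged storage sketch: logical address -> (page number, offset)
     * -> physical address via a page table. Page size 4 KiB (hypothetical). */
    #include <stdio.h>

    #define PAGE_SIZE 4096u

    /* Hypothetical page table: page_table[p] is the frame holding page p. */
    static unsigned page_table[] = { 5, 9, 2, 7 };

    unsigned translate(unsigned logical)
    {
        unsigned page   = logical / PAGE_SIZE;   /* high part: page number   */
        unsigned offset = logical % PAGE_SIZE;   /* low part: offset in page */
        unsigned frame  = page_table[page];      /* look up the frame number */
        return frame * PAGE_SIZE + offset;       /* physical address         */
    }

    int main(void)
    {
        unsigned logical = 1 * PAGE_SIZE + 100;  /* page 1, offset 100 */
        printf("logical %u -> physical %u\n", logical, translate(logical));
        return 0;
    }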

Storage protection: in a multiprogramming system, the operating system and many user programs reside in memory at the same time. To keep the system running normally and to prevent the programs in memory from interfering with one another, the programs and data in memory must be protected.

  • Prevent out-of-bounds addresses
    Every address generated by a process must be checked; when an address is out of bounds, an interrupt is raised and the operating system handles it accordingly.
  • Prevent unauthorized operations
    A user may read and write information in his own region;
    Information in public areas, or information shared with authorization, may be read but not modified;
    Information not authorized for use may be neither read nor written.
  • Storage protection is generally implemented by hardware, supplemented by software, because a pure software implementation has high overhead and would slow the system down by a large factor. When an out-of-bounds access or an illegal operation occurs, the hardware raises an interrupt and control passes to the operating system for handling.

 

Device management: management of all input and output devices in a computer system.

I. Objectives of device management:

(1) Allow users to use external devices conveniently, and control devices to complete users' input and output requests.

(2) Improve the parallel working capability of the system and the efficiency of device use.

(3) Improve the reliability and security of peripheral devices and the system so that the system can work normally.

II. Device management functions

Device management should have the following functions: Allocation and recovery of devices, startup of peripheral devices, drive scheduling of disks, interrupt handling of external devices, and realization of Virtual Devices.

Allocation of peripheral devices

  • Device categories: devices can be divided into two classes, exclusive devices and shared devices. A virtual device uses a shared device to simulate an exclusive device.
  • Absolute number and relative number of a device: the system assigns each device a number for identification, called the device's "absolute number". Users, however, do not use absolute numbers: when applying for a device, a user can only request a device type. To distinguish devices of the same type, the "relative device number" is used.

III. Device allocation

When a user applies for a device, the system selects an idle device of the requested type according to the request and the current allocation state and assigns it to the applicant, so the application is independent of any particular physical device. The system maintains a "device class table" and a "device table" to record how devices are allocated.

Drive scheduling of disks

The position of any piece of data on a disk is determined by three parameters: the cylinder number, the head number, and the sector number. An access on a moving-arm disk usually consists of three parts: first, the head is moved to the corresponding cylinder; this time is called the seek time. Once the head reaches the specified cylinder, it waits until the sector to be accessed rotates under the read/write head; this is called the rotational delay. The time actually spent transferring data is the transfer time. The time for one disk access is the sum of the three, of which the seek time is usually the longest.

Arm movement (seek) scheduling

The following arm shift scheduling algorithms can be used.

1. First-come, first-served (FCFS) algorithm

2. Shortest seek time first (SSTF) algorithm

3. SCAN (elevator) algorithm: starting from the current position of the arm, always select the request closest to the head along the current direction of arm movement, and reverse the direction only when there are no more requests ahead (see the sketch below).
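
A minimal sketch of the selection step of the SCAN (elevator) algorithm: from the current head position it picks the nearest pending cylinder in the current direction and reverses only when nothing lies ahead. The request queue and starting position are hypothetical.

    /* SCAN (elevator) disk scheduling sketch: pick the next cylinder to
     * service along the current direction, reversing only when no pending
     * request lies ahead. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Returns the index of the next request to service, or -1 when all are
     * done. pos is the current cylinder; *dir is +1 when the arm moves
     * toward higher cylinders, -1 toward lower ones. */
    static int next_request(const int req[], const int done[], int n,
                            int pos, int *dir)
    {
        for (int pass = 0; pass < 2; pass++) {
            int best = -1;
            for (int i = 0; i < n; i++) {
                if (done[i]) continue;
                int ahead = (*dir > 0) ? (req[i] >= pos) : (req[i] <= pos);
                if (!ahead) continue;
                if (best < 0 || abs(req[i] - pos) < abs(req[best] - pos))
                    best = i;
            }
            if (best >= 0) return best;
            *dir = -*dir;                /* nothing ahead: reverse direction */
        }
        return -1;                       /* no pending requests left */
    }

    int main(void)
    {
        int req[] = { 55, 58, 39, 18, 90, 160, 150, 38, 184 };  /* hypothetical queue */
        int done[9] = { 0 };
        int pos = 100, dir = +1, i;
        while ((i = next_request(req, done, 9, pos, &dir)) >= 0) {
            printf("service cylinder %d\n", req[i]);
            pos = req[i];
            done[i] = 1;
        }
        return 0;
    }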

Rotational delay scheduling

When the head reaches a certain cylinder, there may be several pending requests on it. These requests are usually serviced in the order in which the requested sectors rotate past the head.

Device startup and I/O interrupt handling

  • Channel: a channel is equivalent to a single-purpose processor that controls I/O operations in place of the CPU and is dedicated to data input and output, so that I/O operations can proceed in parallel with the CPU. Channels are the basis of parallel computation and transfer. In a system equipped with channels, the host can be connected to several channels, a channel to several controllers, and a controller to several devices of the same type; some devices (fast devices such as disks) are often connected to several controllers, and the controllers to several channels, in a cross-connected fashion. (The channel is programmed separately, with its own command language.)
  • Starting peripheral devices: the channel has its own command system, including read, write, control, transfer, end, and no-op commands. Once the CPU issues the "start I/O" instruction, the channel can work independently of the CPU and execute the channel program composed of channel command words (CCW) to complete the I/O.

The process of starting and controlling peripheral devices to complete I/O is as follows:

(1) construct a channel program based on I/O requests.

(2) The central processor issues the "start I/O" command, and the channel executes the commands in the channel program one by one to implement I/O.

(3) After I/O is completed, the channel uses the interrupt mechanism to report the execution to the central processor.

  • Interrupt handling

The interrupt handler is responsible for processing the interrupts sent by the channel, handling both normal completion of the input/output and error conditions.

The channel status word (CSW) records the status of the channel, Controller, and device. When the central processor receives an interruption from the channel, it uses the channel status word to determine whether the input and output are normal.

Virtual devices: exclusive devices are not conducive to improving system efficiency. Two measures can be taken: offline peripheral operation and online simultaneous peripheral operation (SPOOLing).

  • Offline peripheral device operation: offline peripheral operation uses peripheral computers to copy information from slow, exclusive input devices to disk and to copy information from disk to exclusive output devices; when a job is executed, it exchanges information only with the fast shared disk. The peripheral computer works independently of the host and is not under its control, hence the name "offline peripheral operation". Although offline peripheral operation improves system efficiency, it brings new problems: the extra peripheral computer increases cost, more manual work is required of the operator, and job turnaround time increases.
  • Online simultaneous peripheral operation: this is also known as virtual device technology. The core idea is to simulate exclusive devices on a shared device (usually a disk), turning a low-speed exclusive device into several virtual devices that can be used in parallel. The operating system provides two programs, a "pre-input" program and a "slow output" program, and sets aside disk areas called "wells". Because the pre-input and slow output programs run under the control of the main computer, the technique is called "simultaneous peripheral operation on-line" (SPOOL or SPOOLing); some systems call it "pseudo-offline operation". When a user job enters the system, the pre-input program of the SPOOLing system transfers the job's information from the exclusive input device to a designated disk area called the input well. While the job runs, it reads its data directly from the input well; when it needs to output data, it writes the data to another designated disk area called the output well. After the job finishes, the slow output program sends the data in the output well to the exclusive output device in sequence. The programs that read from the input well and write to the output well are collectively called "well management" programs. A SPOOLing system is built on the interrupt mechanism and channels and consists of three programs: the pre-input program, the well management program, and the slow output program.
  • DMA technology: DMA is short for Direct Memory Access. It allows data to be read and written directly between external devices and memory without CPU intervention; the whole transfer is performed under the control of a "DMA controller". Apart from a little processing at the beginning and end of the transfer, the CPU can do other work during the transfer, so the CPU and the input/output operations proceed in parallel most of the time, greatly improving the efficiency of the whole computer system.

 

File Management:

File Type:

File structure and organization:

  • Logical structure: structured record files and unstructured stream (byte-stream) files
  • Physical structure: contiguous structure, linked structure, indexed structure, and indexed structures spanning multiple physical blocks

File Access methods: sequential access and Random Access

File storage device (free space) management: bitmap, free-block table, and linked (chained) methods

File control block (FCB) and file directory: the FCB is the unique identifier of a file in the system; a file directory is an ordered collection of file control blocks.

File Usage: Current Directory (working directory)

 

Job Management:

Jobs and the job management mechanism: a job is the total amount of work the system does to complete a user's computing task (or transaction processing task). It consists of three parts: the program, the data, and the job control statements. Job management is the group of programs the operating system uses to manage the entry, execution, and withdrawal of jobs. The JCB (job control block) is the unique identifier of a job.

Job status and transitions:

A job typically passes through the submit, held (backlog), execution, and completion states; job scheduling drives the transition from the held state to the execution state.

Job Scheduling and common Scheduling Methods:

Job scheduling moves a job from the held (backlog) state to the execution state. Common scheduling methods include first-come first-served, highest response ratio next, priority scheduling, and balanced scheduling.
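
To make the "highest response ratio" method concrete, the sketch below computes the response ratio R = (waiting time + required service time) / required service time for each waiting job and selects the job with the largest R; the job names and times are invented.

    /* Highest-response-ratio-next sketch: pick the job with the largest
     * response ratio R = (waiting time + service time) / service time. */
    #include <stdio.h>

    struct job { const char *name; double wait; double service; };

    int pick_hrrn(const struct job jobs[], int n)
    {
        int best = 0;
        double best_r = 0.0;
        for (int i = 0; i < n; i++) {
            double r = (jobs[i].wait + jobs[i].service) / jobs[i].service;
            if (i == 0 || r > best_r) { best = i; best_r = r; }
        }
        return best;
    }

    int main(void)
    {
        struct job jobs[] = {            /* hypothetical backlog of jobs */
            { "J1", 20.0, 10.0 },        /* R = 3.0 */
            { "J2", 15.0, 30.0 },        /* R = 1.5 */
            { "J3",  5.0,  1.0 },        /* R = 6.0 -> scheduled next */
        };
        printf("next job: %s\n", jobs[pick_hrrn(jobs, 3)].name);
        return 0;
    }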

User Interface:

 

Network Operating System:

Network operating system (NOS): the collection of software and related rules that enables the computers in a network to share network resources conveniently and effectively and that provides users with various services.

Features of the network operating system:

Hardware independence, multi-user support, network utilities and management, support for multiple clients, directory services, and value-added services.

Classification of network operating systems:

Centralized mode, client/server (C/S) mode, and peer-to-peer mode.
