Operating System Notes

1. Features of the operating system: concurrency, sharing, asynchrony, and virtualization.

2. Privileged and non-privileged instructions:

(1) Privileged instructions. Instructions that can be executed only in kernel mode are privileged instructions. They are generally executed only by the operating system, not by ordinary user programs.

(2) Non-privileged instructions. Instructions that can be executed in both kernel mode and user mode are non-privileged instructions.

3. Processor states: kernel mode and user mode, identified by a bit in the Program Status Word (PSW).

(1) Kernel mode: also known as system mode or supervisor mode; the state in which the operating system runs.

(2) User mode: the state in which ordinary user programs run.

(3) State transitions: the only way to switch from user mode to kernel mode is through an interrupt, and the new PSW loaded when the interrupt occurs indicates kernel mode. The transition from kernel mode to user mode can be achieved by modifying the PSW.

4. Advantages of multiprogramming over uniprogramming: utilization of devices, memory, and the processor is increased.

5. A process is a running activity of a program with certain independent functions, operating on a data set. A process is dynamic in nature.

6. Process states: running, ready, and waiting (also called the blocked, suspended, or sleeping state; the process is waiting for an event to occur).

7. The process control block (PCB) is a data structure that identifies a process and contains all the information the system needs to manage it: process identifier (process number), user ID, process state, scheduling parameters, context (register) information, family links, program address, currently open files, message queue pointer, resource usage, and process queue pointers.
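
As a concrete illustration, here is a minimal C sketch of what such a process control block might look like; the field names and sizes are hypothetical simplifications, not taken from any real kernel:

```c
/* Illustrative sketch of a process control block (PCB); all names and sizes
 * are hypothetical and simplified, not from any real operating system. */
#include <stdint.h>

enum proc_state { PROC_RUNNING, PROC_READY, PROC_WAITING };

struct pcb {
    int              pid;            /* process identifier (process number) */
    int              uid;            /* user identifier */
    enum proc_state  state;          /* current process state */
    int              priority;       /* scheduling parameter */
    uint64_t         saved_regs[32]; /* context information saved on a switch */
    struct pcb      *parent;         /* family links: parent ... */
    struct pcb      *first_child;    /* ... and first child */
    void            *program_start;  /* program address */
    int              open_files[16]; /* currently open file descriptors */
    void            *msg_queue;      /* message queue pointer */
    long             cpu_time_used;  /* resource usage accounting */
    struct pcb      *queue_next;     /* link for the ready/waiting queues */
};
```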

8. A process consists of two parts, the process control block and the program, where the program includes code and data. The process control block resides in operating system space, and the program belongs to user space.

9. The process's program (code and data) is called the process image.

10. Overhead generally refers to the time and space spent running operating system programs and managing the system.

11. From the operating system's perspective, processes are divided into system processes and user processes. System processes belong to the OS (daemon processes are an example) and generally have higher priority than user processes. User processes run applications.

12. A thread, also known as a lightweight process, is a relatively independent execution flow within a process. A process can contain multiple threads that execute the same or different code segments of the same program and share the data area and the heap. It is generally considered that the process is the unit of resource allocation, while the thread is the unit of CPU scheduling.

13. Threads have the following advantages over the process:

(1) Fast context switching: switching from one thread to another within the same process only changes the registers and the stack; the address space, including program and data, is unchanged.

(2) Low system overhead: creating a thread requires less work than creating a process, and for a client request, a server that dynamically creates a thread responds faster than one that dynamically creates a process.

(3) Easy communication: because multiple threads in the same process share the address space, data written by one thread can be read directly by the other threads in the process, which is convenient and fast.

14. A thread control block (TCB) is a data structure that identifies a thread and contains all the information the system needs to manage it: thread identifier, thread state, scheduling parameters, context (general-purpose registers, instruction register, user stack pointer), and link pointers. The thread control block may be part of operating system space, or it may belong to user process space.

15. Threads are implemented in two ways: user-level threads implemented in user mode, and kernel-level threads implemented in kernel mode.

16. Reasons for introducing multithreaded programming:

(1) Some applications have several internal control flows that are cooperative in nature and require shared memory. Modeling such problems with multiple threads is easy and yields the most natural solution.

(2) In applications that require multiple control flows, multithreading has a greater speed advantage than multiple processes. Statistical tests show that thread creation is about 100 times faster than process creation, and the speed of switching between threads and switching between processes also differs by an order of magnitude.

(3) Using multiple threads can improve parallelism between the processor and devices. With a single control flow, the process blocks in the kernel when it initiates device I/O, and the rest of the process's code cannot be executed during that time.

(4) In a hardware environment with multiple processors, multiple threads can execute in parallel, both improving resource utilization and speeding up the process.

Of course, it should be remembered that multithreading has its preconditions: multiple threads in the same process share the same code and data, and they are either cooperative (executing different parts of the code) or isomorphic (executing the same code).

17. In Linux, processes and threads have a uniform representation within the system: a thread is a process that shares the address space of its parent process. fork creates a separate address space with a completely new context for the child process, while clone does not produce a new address space; the child shares the address space of the parent process. clone gives the application fine-grained control over which components are shared.
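
A minimal sketch of that contrast on Linux, using the glibc fork() and clone() wrappers (the counter variable and stack size are illustrative; error handling is omitted for brevity):

```c
/* Contrast fork() and clone(): fork() gives the child a separate copy-on-write
 * address space; clone() with CLONE_VM makes the child share the parent's
 * address space, which is how thread libraries build on the kernel's tasks. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_counter = 0;   /* shared with a CLONE_VM child, copied by fork() */

static int child_fn(void *arg)
{
    shared_counter++;            /* modifies the parent's variable (shared VM) */
    return 0;
}

int main(void)
{
    /* fork(): new address space, copy-on-write copy of the parent's data. */
    pid_t pid = fork();
    if (pid == 0) {
        shared_counter++;        /* changes only the child's private copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork()  child: shared_counter = %d\n", shared_counter);  /* 0 */

    /* clone() with CLONE_VM: the child runs child_fn on its own stack but
     * shares the parent's memory, so its increment is visible here. */
    char *stack = malloc(64 * 1024);
    pid_t tid = clone(child_fn, stack + 64 * 1024, CLONE_VM | SIGCHLD, NULL);
    waitpid(tid, NULL, 0);
    printf("after clone() child: shared_counter = %d\n", shared_counter);  /* 1 */

    free(stack);
    return 0;
}
```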

18. In Windows 2000/XP, threads are also managed as objects, and each process contains at least one thread, which is the basic unit of kernel scheduling. Each thread has two stacks: a system (kernel) stack and a user stack. The process terminates when all threads in the process have terminated.

19. Interrupts can be divided into two main categories: forced interrupts and voluntary interrupts.

20. The threads visible to the operating system are kernel-level threads; if threads are implemented at the user level, the entity the operating system actually schedules is still the process.

21. Processor (process) scheduling algorithms: first-come first-served (FCFS), shortest job first (SJF), shortest remaining time next (SRTN), highest response ratio next (HRN), highest priority first (HPF), round robin (RR), multilevel queue scheduling (MLQ), and multilevel feedback queue scheduling (FB).
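
A minimal C sketch of the round-robin (RR) policy from that list: jobs take turns receiving a fixed time quantum until each one's remaining CPU demand reaches zero. The job set and quantum are made-up illustrative values.

```c
/* Tiny round-robin (RR) scheduling simulation.  The job set and quantum
 * are illustrative values, not from the notes. */
#include <stdio.h>

int main(void)
{
    int remaining[] = {5, 3, 8};          /* remaining CPU time of jobs A, B, C */
    const int n = 3, quantum = 2;
    int clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                  /* job already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;
            remaining[i] -= slice;
            printf("t=%2d: job %c ran %d unit(s)%s\n", clock, 'A' + i, slice,
                   remaining[i] == 0 ? " and finished" : "");
            if (remaining[i] == 0)
                done++;
        }
    }
    return 0;
}
```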

22. Real-time scheduling algorithms: earliest deadline first (EDF) and rate monotonic scheduling (RMS).
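
For RMS, a commonly cited sufficient (not necessary) schedulability test is the Liu and Layland utilization bound U = sum(C_i/T_i) <= n(2^(1/n) - 1). A small C sketch of the check, with made-up task parameters:

```c
/* RMS schedulability check using the Liu-Layland bound (sufficient test).
 * Task execution times and periods are illustrative values. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double C[] = {1.0, 2.0, 3.0};   /* worst-case execution times */
    double T[] = {4.0, 10.0, 20.0}; /* periods (deadlines = periods) */
    int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.780 for n = 3 */
    printf("U = %.3f, bound = %.3f -> %s\n", U, bound,
           U <= bound ? "schedulable under RMS" : "test inconclusive");
    return 0;
}
```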

23. An error that arises when two processes manipulate the same variable simultaneously depends on the relative progress (speed) of the processes and is known as a time-dependent error.

24. Shared variables: variables accessed by two or more processes, also known as public variables.

25. The program segment that accesses shared variables is called a critical section, also known as a critical segment.

26. A resource that is only allowed to be used by one process at a time is called a critical resource.

27. Two or more processes must not be inside critical sections for the same set of shared variables at the same time, because time-dependent errors could then occur; this requirement is known as process mutual exclusion.

28. Critical section management should satisfy three correctness principles: mutual exclusion, progress, and bounded waiting.

29. Waiting without entering the waiting (blocked) state is called busy waiting. Busy waiting wastes processor resources and is therefore inefficient.

30. Implementation of process mutual exclusion:

(1) Software methods: Dekker's algorithm, Peterson's algorithm, Lamport's bakery algorithm, the Eisenberg-McGuire algorithm.

(2) Hardware methods: the test-and-set instruction, the swap instruction, and disabling interrupts (a test-and-set spinlock sketch follows this list).
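
A minimal sketch of the test-and-set approach using C11 atomics, where atomic_flag_test_and_set stands in for the hardware instruction; this is a busy-waiting spin lock (compare notes 29 and 53), and the function names are illustrative:

```c
/* Busy-waiting mutual exclusion built on a test-and-set primitive, expressed
 * with C11 atomics.  atomic_flag_test_and_set plays the role of the hardware
 * test-and-set instruction mentioned above. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void enter_critical_section(void)
{
    /* Keep testing-and-setting until the previous value was clear,
     * i.e. until we are the one who acquired the lock. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait (spin) */
}

void leave_critical_section(void)
{
    atomic_flag_clear(&lock);   /* release so another thread can acquire */
}
```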

31. In a set of processes, in order to coordinate their relative progress, a process may need to wait or to be woken up at certain points; this mutually restrictive relationship between processes is called process synchronization, or synchronization for short. Clearly, process synchronization occurs only between logically related processes, whereas process mutual exclusion can occur between any two processes.

32. If a set of processes cannot complete their work correctly when run in isolation but can do so when run concurrently, cooperating with one another, this phenomenon is called process cooperation.

33. The tools used to implement inter-process synchronization are called synchronization mechanisms, also known as synchronization facilities.

34. Synchronization mechanisms: semaphores with P/V operations, conditional critical regions, and monitors. These are suitable for single-processor systems and for multiprocessor systems with shared memory.

35. A program that executes without interruption is called a primitive. The P and V operations are defined as primitives. Because the P and V operation code is very short and executes quickly, they are usually implemented by disabling interrupts.
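
User-level code cannot disable interrupts, so here is a minimal sketch of blocking P and V operations for a counting semaphore built on a POSIX mutex and condition variable; the struct and function names are illustrative, not a standard API:

```c
/* Illustrative counting semaphore with blocking P and V operations, built on
 * a POSIX mutex and condition variable (the mutex protects the counter in
 * place of disabling interrupts). */
#include <pthread.h>

typedef struct {
    int             value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} semaphore_t;

void semaphore_init(semaphore_t *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void P(semaphore_t *s)                     /* wait / down */
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                  /* block instead of busy waiting */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void V(semaphore_t *s)                     /* signal / up */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);      /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}
```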

36. Synchronization mechanisms for distributed systems: communicating sequential processes (CSP), rendezvous, distributed processes, and remote procedure call (RPC).

37. Mutual exclusion, synchronization, and information exchange between processes are collectively referred to as inter-process communication (IPC).

38. There are two main modes of process communication: shared memory and message passing. Message passing is divided into direct and indirect forms. The direct form can be symmetric or asymmetric, and buffered or unbuffered. The indirect form is also called the mailbox scheme.

39. Linux process communication: (1) shared memory: creating a shared memory segment, attaching it, and detaching it; (2) signals, pipes, and sockets.
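
A minimal sketch of those shared-memory steps using the System V API on Linux (shmget to create, shmat to attach, shmdt to detach); the IPC key and segment size are arbitrary illustrative values:

```c
/* Minimal System V shared memory example: create, attach, use, detach, remove.
 * The key and segment size are arbitrary illustrative values. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x1234;                               /* arbitrary IPC key */
    int shmid = shmget(key, 4096, IPC_CREAT | 0600);  /* create (establish) segment */
    if (shmid < 0) { perror("shmget"); return 1; }

    char *mem = shmat(shmid, NULL, 0);                /* attach to our address space */
    if (mem == (char *)-1) { perror("shmat"); return 1; }

    strcpy(mem, "hello from shared memory");          /* visible to any attached process */
    printf("%s\n", mem);

    shmdt(mem);                                       /* detach (separate) */
    shmctl(shmid, IPC_RMID, NULL);                    /* remove the segment */
    return 0;
}
```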

40. The concurrency control mechanisms of Windows 2000/XP are for threads, not processes: semaphores, mutexes, events, and critical sections. WaitForSingleObject and WaitForMultipleObjects can be used with the semaphore, mutex, and event synchronization objects.
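
A minimal Windows C sketch of one of those objects: a semaphore created with CreateSemaphore, waited on with WaitForSingleObject, and released with ReleaseSemaphore. The initial and maximum counts are illustrative values.

```c
/* Minimal Windows sketch: wait on and release a semaphore synchronization
 * object.  Initial/maximum counts are illustrative. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Create a semaphore with initial count 1 and maximum count 1. */
    HANDLE sem = CreateSemaphoreA(NULL, 1, 1, NULL);
    if (sem == NULL) {
        printf("CreateSemaphore failed: %lu\n", GetLastError());
        return 1;
    }

    /* Wait (decrement); INFINITE blocks until the object is signaled. */
    if (WaitForSingleObject(sem, INFINITE) == WAIT_OBJECT_0) {
        /* ... use the guarded resource here ... */
        ReleaseSemaphore(sem, 1, NULL);   /* increment the count again */
    }

    CloseHandle(sem);
    return 0;
}
```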

41. If each process in a set of processes waits for a resource that can only ever be released by another process in the same set, the situation is known as a process deadlock, or simply deadlock.

42. Causes of deadlock: competition for resources, process communication, and other reasons.

43. Necessary conditions for deadlock: mutual exclusion (resource exclusiveness), no preemption, hold and wait, and circular wait. Deadlock occurs if and only if all four conditions hold, so breaking any one of them prevents deadlock.

44. Deadlock theorem: a necessary and sufficient condition for state S to be a deadlock state is that the resource allocation graph of S cannot be completely reduced.

45. A more general definition of deadlock: each process in a set of processes waits indefinitely for an event that can only be triggered by another process in the same set. Communication deadlocks cannot be handled with the prevention strategies (static) or avoidance strategies (dynamic) described earlier; a timeout technique can be used instead.

46. When waiting time has a noticeable impact on a process's progress and response, it is called process starvation. When starvation reaches the point where completing the task is no longer of practical significance, the process is said to be starved to death. Starvation that occurs under busy-waiting conditions is called livelock.

47. Most systems handle deadlock with the "ostrich algorithm", that is, they simply ignore it. The system designed and implemented by Dijkstra employed the banker's algorithm to avoid deadlock.
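
A minimal C sketch of the safety check at the heart of the banker's algorithm: the state is safe if there is an order in which every process's remaining need can be met from the currently available resources plus what earlier-finishing processes return. The matrices below are illustrative values.

```c
/* Banker's algorithm safety check for an illustrative system of
 * 5 processes and 3 resource types. */
#include <stdbool.h>
#include <stdio.h>

#define NPROC 5
#define NRES  3

int main(void)
{
    int available[NRES]         = {3, 3, 2};
    int allocation[NPROC][NRES] = {{0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}};
    int need[NPROC][NRES]       = {{7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1}};

    int  work[NRES];
    bool finished[NPROC] = {false};
    for (int j = 0; j < NRES; j++)
        work[j] = available[j];

    int done = 0, progressed = 1;
    while (progressed) {
        progressed = 0;
        for (int i = 0; i < NPROC; i++) {
            if (finished[i])
                continue;
            bool can_run = true;
            for (int j = 0; j < NRES; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                        /* P_i can finish ... */
                for (int j = 0; j < NRES; j++)
                    work[j] += allocation[i][j];  /* ... and returns its resources */
                finished[i] = true;
                printf("P%d can finish\n", i);
                done++;
                progressed = 1;
            }
        }
    }
    printf(done == NPROC ? "state is safe\n" : "state is unsafe\n");
    return 0;
}
```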

48. Virtual storage management is based on the principle of locality, which includes temporal locality (loops) and spatial locality (sequential program execution).

49. Page replacement algorithms, also known as eviction algorithms: optimal (OPT), first-in first-out (FIFO), least recently used (LRU), not used recently (NUR), least frequently used (LFU), most frequently used (MFU), second chance, clock, and improved clock.
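
A minimal sketch of the clock algorithm from that list: each frame has a use (reference) bit; on a fault the hand advances, clearing set bits, and the first frame whose bit is already clear is evicted. The reference string and frame count are illustrative.

```c
/* Simple clock (second-chance) page replacement simulation.  The reference
 * string and number of frames are illustrative values. */
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[] = {1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4};
    int nrefs = sizeof refs / sizeof refs[0];

    int frame[NFRAMES], use_bit[NFRAMES];
    int hand = 0, faults = 0;
    for (int i = 0; i < NFRAMES; i++) { frame[i] = -1; use_bit[i] = 0; }

    for (int r = 0; r < nrefs; r++) {
        int page = refs[r], hit = 0;
        for (int i = 0; i < NFRAMES; i++)
            if (frame[i] == page) { use_bit[i] = 1; hit = 1; break; }
        if (hit)
            continue;
        faults++;
        /* Advance the hand, giving a second chance to frames whose use bit
         * is set, and replace the first frame whose bit is clear. */
        while (use_bit[hand]) {
            use_bit[hand] = 0;
            hand = (hand + 1) % NFRAMES;
        }
        frame[hand] = page;
        use_bit[hand] = 1;
        hand = (hand + 1) % NFRAMES;
    }
    printf("page faults: %d of %d references\n", faults, nrefs);
    return 0;
}
```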

50. Disk arm scheduling algorithms: first-come first-served (FCFS), shortest seek time first (SSTF, which can discriminate against tracks far from the head), SCAN, LOOK (the elevator algorithm), circular SCAN (C-SCAN), circular LOOK (C-LOOK), N-step SCAN, and FSCAN (frozen scan).
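
A minimal sketch of SSTF from that list, which always serves the pending request closest to the current head position (and therefore can starve distant tracks). The request queue and initial head position are illustrative values.

```c
/* Shortest-seek-time-first (SSTF) disk arm scheduling simulation.
 * The request queue and initial head position are illustrative. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int requests[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof requests / sizeof requests[0];
    int served[8] = {0};
    int head = 53, total_movement = 0;

    for (int count = 0; count < n; count++) {
        int best = -1, best_dist = 0;
        /* Pick the unserved request with the smallest seek distance. */
        for (int i = 0; i < n; i++) {
            if (served[i])
                continue;
            int dist = abs(requests[i] - head);
            if (best < 0 || dist < best_dist) { best = i; best_dist = dist; }
        }
        served[best] = 1;
        total_movement += best_dist;
        head = requests[best];
        printf("serve cylinder %d (moved %d)\n", head, best_dist);
    }
    printf("total head movement: %d cylinders\n", total_movement);
    return 0;
}
```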

51. The technique used to cope with mismatched data arrival and departure speeds is called buffering.

52. Each computer in a network OS can run a different OS, while the computers in a distributed OS run the same OS. A network OS communicates through shared files, whereas a distributed OS communicates by message passing. A network OS mainly provides communication and information/resource sharing services, while the main goals of a distributed OS are computational speedup and system reliability.

53. A lock that uses busy waiting is called a spin lock.

54. Data storage speed has difficulty keeping up with the peak computing speed of microprocessors; this is the "memory wall" performance bottleneck problem.
