Frequently Asked Operating System Interview Questions

Source: Internet
Author: User
Tags: message queue, mutex, semaphore, socket

Part One: Operating System

1. What are the states of a process, what does the state-transition diagram look like, and which events cause each transition?

2. The difference between a process and a thread.

A process is the operating system's unit of resource allocation, while a thread is its unit of execution. A process has its own address space; in protected mode, one process crashing does not affect other processes. A thread is just one execution path within a process: threads have their own stacks and local variables, but there is no separate address space between threads, so one thread dying kills the whole process. Multi-process programs are therefore more robust than multithreaded ones, but process switching costs more resources and is less efficient. For concurrent operations that must run simultaneously and share variables, however, only threads can be used, not processes.
Why do we still need threads when we already have processes? With multiprogramming, several programs can be loaded into memory at once and run concurrently under the operating system's scheduling, which greatly improves CPU utilization. The process was proposed to realize multiprogramming on the CPU, but it still has two main defects:

1. A process can only do one thing at a time; if you want it to do two or more things at once, the process is powerless.

2. If a process blocks during execution, for example while waiting for input, the whole process hangs; even code inside the process that does not depend on the input cannot proceed.

From a logical point of view, multithreading means that within one application, multiple parts can execute concurrently. To obtain concurrency we invented the process, and then went further and invented the thread: the thread belongs to the process and provides one more layer of concurrency abstraction on top of it. The differences between processes and threads are:

1) The process is the unit of the operating system resource allocation, and the thread is the unit that the operating system executes.

2) A process has its own address space; in protected mode, a crashed process does not affect other processes, while a thread is just one execution path within a process. Threads have their own stacks and local variables, but there is no separate address space between threads, and when one thread dies the whole process dies, so a multi-process program is more robust than a multithreaded one.

3) A thread is an entity within a process and the basic unit of CPU scheduling and dispatch. A thread owns almost no system resources of its own, only the few essentials needed to run (a program counter, a set of registers, and a stack), but it shares all the resources owned by the process with the other threads of that process. Switching between processes consumes more resources and is less efficient, so multithreading is more efficient than using multiple processes.

4) One thread can create and destroy another thread, and multiple threads within the same process can execute concurrently.

3. The main ways in which processes communicate (IPC).

# Pipe: A pipe is a half-duplex communication mechanism: data flows in only one direction, and it can only be used between related processes. "Related" usually means a parent-child relationship.
# Named pipe (FIFO): A named pipe is also half-duplex, but it allows communication between unrelated processes.
# Semaphore: A semaphore is a counter used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to stop one process from accessing a shared resource while another process is using it, so it mainly serves as a means of synchronization between processes and between threads of the same process.
# Message queue: A message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the drawbacks of signals (which carry little information), pipes (which carry only unformatted byte streams), and limited buffer sizes.
# Signal: A signal is a relatively sophisticated communication mechanism used to notify the receiving process that some event has occurred.
# Shared memory: Shared memory is a region of memory created by one process and mapped so that other processes can access it. It is the fastest IPC mechanism, designed specifically to work around the inefficiency of the other IPC mechanisms. It is often combined with another mechanism, such as a semaphore, to achieve synchronization as well as communication.
# Socket: A socket is also an inter-process communication mechanism; unlike the others, it can be used between processes on different machines.

4. The main ways to synchronize threads. (You must be able to write the producer-consumer problem and understand it completely.)

Threads may share resources with other threads, such as memory, files, and databases. Conflicts arise when multiple threads read and write the same shared resource at the same time, so a thread "synchronization" mechanism must be introduced.

1 Critical section: serializes access by multiple threads to a public resource or a piece of code; fast, and suitable for controlling data access.
2 Mutex: designed to coordinate exclusive access to a shared resource.
3 Semaphore: designed to control access to a resource with a limited number of users.
4 Event: used to notify a thread that some event has occurred, starting its successor task.

Critical sections are efficient but cannot be used across processes. A mutex can be named, which means it can be used across processes, but creating a mutex consumes more resources and is slower than a critical section. A semaphore handles the mutual-exclusion problem for multiple threads over multiple resources; it corresponds to the P/V operations in operating-system theory and specifies the maximum number of threads that may access the shared resource concurrently. It acts more like a resource counter; a mutex is in effect a semaphore with count = 1.

Linux also has read-write locks, which suit cases where reads far outnumber writes.

Read and write locks are similar to mutexes, but read-write locks allow for higher parallelism. The mutex is either locked or unlocked, and only one thread can lock it at a time.

5. How threads are implemented (i.e., the difference between user threads and kernel threads).

Kernel threads: created and destroyed by the operating system kernel. The kernel maintains the context of processes and threads and performs thread switching. If a kernel thread blocks on an I/O operation, the other threads in the process are not affected. Windows NT and 2000/XP support kernel threads.

User threads: created and managed by the application using a thread library, without involving the operating system core. They need no user/kernel mode switches, so they are fast. However, the kernel is unaware that multiple threads exist, so when one thread blocks, the whole process (including all of its threads) blocks. And because processor time slices are allocated with the process as the basic unit, each thread receives relatively little execution time.

6. The difference between user mode and kernel mode.

These terms refer to the CPU's operating modes. The highest privilege level, ring 0, is regarded as kernel mode; the lowest, ring 3, is usually regarded as user mode; rings 1 and 2 are rarely used.

User mode (ring 3): code running in user mode is subject to many processor checks. It may only access the virtual addresses of pages mapped by page-table entries in its own address space, and may directly access only the I/O ports permitted by the I/O permission bitmap in the task state segment (TSS).

Kernel mode (ring 0): in the processor's storage-protection scheme, kernel mode (also called privileged mode, as opposed to user mode) is the mode in which the operating system core runs. Code running in kernel mode can access system memory and external devices without restriction.
There are 3 ways to switch from user mode to kernel mode:
1) System call: the only way a user-mode process actively switches to kernel mode; the process requests a service provided by the operating system through a system call. The system-call mechanism is built on an interrupt that the operating system exposes specifically to user programs, such as the int 80h interrupt on Linux.
2) Exception: while the CPU is executing a user-mode program, an unforeseen exception occurs (a page fault, for example); execution switches to the kernel code that handles the exception, and so enters kernel mode.
3) Peripheral interrupt: when a peripheral device completes an operation the user requested, it sends an interrupt signal to the CPU, which suspends the instruction it was about to execute and runs the handler for that interrupt instead. If the interrupted code was a user-mode program, this naturally switches from user mode to kernel mode: for example, when a disk read or write completes, the system switches to the disk interrupt handler to perform the follow-up work.
These three are the most important ways the system moves from user mode to kernel mode at run time. The system call is initiated actively by the user process, while exceptions and peripheral interrupts are passive. When a process runs, the values of the CPU registers, the process state, and the stack contents are called the process context; a switch from user mode to kernel mode typically costs more than 100 CPU clock cycles.

7. The difference between the user stack and the kernel stack.

When the kernel creates a process, it creates a process control block and the process's own stacks at the same time. A process has two stacks, a user stack and a kernel (system) stack: the user stack lives in user address space and the kernel stack in kernel address space.

Kernel stack and user stack differences

1. The kernel stack is the stack the system uses while running in kernel mode, and the user stack is the stack it uses while running in user mode.

When a process traps into kernel mode (because of an interrupt or system call), the system saves some user-mode state on the kernel stack; when it returns to user mode, that information is restored from the kernel stack and the original program resumes.
The user stack is the stack the process creates in user space; ordinary function calls, for example, use the user stack.

2. The kernel stack is a fixed area belonging to operating system space. It is used to save the interrupted context and to hold the parameters, return values, and so on of calls between operating system subroutines.

The user stack is part of the user process's space; it holds the parameters, return values, and so on passed between the user process's subroutines.

3. On 32-bit Windows each process has a 4 GB address space; the user portion occupies the low part of it and the system (kernel) portion the high part. If user code wants to access the system part directly, a special mechanism is required.

Why do I need to set up two different stacks?

Sharing reason:

Kernel code and data are shared by all processes; if each process were not given its own kernel stack, the kernel could not keep separate execution state for different processes.

Security reasons:

If there were only one stack, a user could modify the stack contents and break through the kernel's security.


8. Memory pools, process pools, and thread pools. (A C++ programmer must master these.)

Rationale:

1. A thread's lifetime divides into three parts: creation time, execution time, and destruction time. A thread pool avoids frequently creating and destroying threads, reducing those costs and improving efficiency.

2. Pre-creation. The thread pool exists to reduce the overhead of the threads themselves, and it does so with the pre-creation technique: immediately after the application starts, a certain number of threads (N1) are created and placed in an idle queue. These threads are suspended; they consume no CPU, only a small amount of memory. When a task arrives, the pool picks an idle thread and runs the task in it.

Implementation:

- Thread pool manager: creates and manages the thread pool and its worker threads.
- Worker thread: the thread that actually executes tasks.
- Task interface: although thread pools mostly serve network servers, abstracting the work threads perform into a task interface keeps the thread pool independent of any particular task.
- Task queue: a concrete data structure, such as a queue or linked list, that holds tasks awaiting execution.

Limitations: the thread pool reduces thread overhead on the premise that this overhead is not negligible compared with the cost of executing the task itself. If it is negligible, the benefit of a thread pool is not obvious; for FTP or Telnet servers, for example, transferring a file is usually a long and expensive operation, so a thread pool may not be ideal and a "create on demand, destroy when done" strategy can be chosen instead.

In summary, a thread pool usually suits the following situations:
(1) Tasks arrive frequently and each task is short.
(2) Real-time requirements are high: if a thread is created only after a task is accepted, the deadline may be missed, so threads must be pre-created in a pool.
(3) The server must often face sudden bursts of requests, such as a Web server during a major event (a big football match, say). With the traditional approach the server would have to constantly create and destroy threads; a dynamic thread pool avoids that.
memory pool:
