High-Frequency Operating System Interview Questions

Source: Internet
Author: User
Tags: message queue, mutex, semaphore, socket

One: Operating System

1. What are the states of a process, what does the state transition diagram look like, and which events cause each transition?

2. The difference between process and thread.

A process is the operating system's unit of resource allocation, while a thread is its unit of execution. A process has its own address space, and in protected mode a crashing process does not affect other processes; a thread is just one execution path within a process. A thread has its own stack and local variables, but threads share the process's address space, so one thread dying kills the whole process. Multi-process programs are therefore more robust than multithreaded ones, but switching between processes consumes more resources and is less efficient. For concurrent operations that must run simultaneously and share variables, however, only threads will do; separate processes cannot share variables directly.

Why do we need threads when we already have processes? Multiprogramming lets us load several programs into memory at once and run them concurrently under the operating system's control, which greatly improves CPU utilization. The process was invented to make multiprogramming possible, but it still has two main shortcomings:

1. A process can only do one thing at a time; if you want to do two or more things simultaneously, a single process cannot help.

2. If a process blocks during execution, for example while waiting for input, the entire process hangs, and even code in the process that does not depend on that input cannot run.

From a logical point of view, multithreading means multiple parts of an application can execute concurrently. To get concurrency we invented the process, then went further and invented the thread: a thread belongs to a process and provides one more layer of concurrency inside it. The differences between processes and threads:

1. The process is the operating system's unit of resource allocation; the thread is its unit of execution.

2. A process has its own address space, and in protected mode a crashing process does not affect other processes; a thread is just one execution path within a process. Threads have their own stacks and local variables, but no separate address spaces, and a dying thread kills the whole process, so multi-process programs are more robust than multithreaded ones.

3. The thread is an entity within the process and the basic unit of CPU scheduling and dispatch. A thread owns essentially no system resources of its own, only the few essential for running (a program counter, a set of registers, and a stack), but it shares all the resources owned by the process with the other threads of that process. Switching between processes is expensive and inefficient, so multithreading is more efficient than using multiple processes.

4. A thread can create and cancel other threads, and multiple threads in the same process can execute concurrently.

3. The main mechanisms of inter-process communication.

# Pipe: a pipe is a half-duplex means of communication; data flows in one direction only, and a pipe can only be used between related processes. "Related" usually means a parent-child relationship.
# Named pipe (FIFO): also half-duplex, but it allows communication between unrelated processes.
# Semaphore: a semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to prevent one process from accessing a shared resource while other processes are accessing it, so it serves mainly as a means of synchronization between processes and between threads within a process.
# Message queue: a message queue is a list of messages stored in the kernel and identified by a message-queue identifier. It overcomes the drawbacks that signals carry little information and that pipes carry only unformatted byte streams with limited buffer sizes.
# Signal: a signal is a relatively sophisticated means of communication used to notify the receiving process that some event has occurred.
# Shared memory: shared memory is a region of memory created by one process that other processes can map and access. It is the fastest form of IPC and was designed specifically around the inefficiency of the other mechanisms. It is often used together with other mechanisms, such as semaphores, to synchronize as well as communicate between processes.
# Socket: the socket is also an inter-process communication mechanism; unlike the others, it can be used between processes on different machines.

4. Several ways of thread synchronization. (Be sure to work through the producer-consumer problem until you fully understand it.)

Threads may share resources with other threads, such as memory, files, and databases. Conflicts can occur when multiple threads read and write the same shared resource at the same time, so we need to introduce thread "synchronization" mechanisms:

1. Critical section: serializes multithreaded access to a public resource or a section of code; suitable for controlling access to data.
2. Mutex: designed to coordinate exclusive access to a shared resource.
3. Semaphore: designed to control access to a resource with a limited number of users.
4. Event: used to notify a thread that some event has occurred, starting a successor task.

Critical sections are highly efficient but cannot be used across processes. A mutex can be named, so it can be used across processes, but creating a mutex requires more resources and is less efficient than a critical section. A semaphore handles mutual exclusion among multiple threads over multiple resources; it corresponds to the P/V operations in operating-system theory and specifies the maximum number of threads that may access the shared resource simultaneously. It is more like a resource counter, and a mutex is in fact a semaphore with count = 1.

Linux also has read-write locks, which suit workloads where reads are much more frequent than writes.

A read-write lock is similar to a mutex, but it allows higher parallelism: a mutex is either locked or unlocked and only one thread can hold it at a time, while a read-write lock lets any number of readers hold it simultaneously as long as no writer does.

5. How threads are implemented (that is, the difference between user threads and kernel threads).

Kernel threads: created and destroyed by the operating-system kernel, which maintains the context information for processes and threads and performs the thread switching. A kernel thread blocked on an I/O operation does not affect the running of the other threads. Windows NT and 2000/XP support kernel threads.

User threads: created and managed by the application itself using a thread library, without relying on the operating-system kernel. No user/kernel mode switch is needed, so they are fast. The kernel does not know the threads exist, so one thread blocking causes the whole process (including all its threads) to block. And because processor time slices are allocated per process, each thread gets less execution time.

6. The difference between user mode and kernel mode.

These refer to CPU privilege levels. The highest privilege level is ring 0, regarded as kernel mode; the lowest is ring 3, usually regarded as user mode; rings 1 and 2 are rarely used.

User mode (ring 3): code running in user mode is subject to a number of checks by the processor. It can only access pages mapped into its own address space through the virtual addresses in its page-table entries, and it can only directly access the I/O ports permitted by the I/O permission bitmap in the task state segment (TSS).

Kernel mode (ring 0): under the processor's storage protection, kernel mode (also called privileged mode, as opposed to user mode) is the mode the operating-system kernel runs in. Code running in this mode can access system memory and external devices without restriction.

There are 3 ways to switch from user mode to kernel mode:

1) System call: the user-mode process actively switches to kernel mode by requesting a service provided by the operating system. The system-call mechanism is itself implemented with an interrupt that the operating system opens to user programs, for example Linux's int 80h software interrupt.

2) Exception: while the CPU is executing a user-mode program, some unforeseen exception occurs and execution is transferred to the kernel code that handles it, switching to kernel mode; the page fault is one example.

3) Peripheral interrupt: when a peripheral device completes an operation the user requested, it sends an interrupt signal to the CPU. The CPU suspends the next instruction it was about to execute and runs the handler for that interrupt instead. If the interrupted instruction belonged to a user-mode program, the switch from user mode to kernel mode occurs naturally. For example, when a disk read or write completes, the system switches to the disk interrupt handler to carry out the follow-up work.

These three are the main routes into kernel mode at runtime. The system call can be considered initiated by the user process; exceptions and peripheral interrupts are passive. When a process executes, the values in all the CPU registers, the process's state, and the contents of its stack are called the process's context; a switch from user mode to kernel mode costs more than 100 CPU clock cycles.

7. The difference between the user stack and the kernel stack.

When the kernel creates a process, it creates the process control block and the process's stacks. A process has two stacks, a user stack and a kernel (system) stack; the user stack lives in the user address space and the kernel stack in the kernel address space.

Kernel stack and user stack differences

1. The kernel stack is the stack used while the system runs in kernel mode; the user stack is the stack used while it runs in user mode.

When a process enters kernel mode because of an interrupt, the system saves some user-mode data on the kernel stack; when returning to user mode, that information is taken off the kernel stack to restore the state and resume the original program.
The user stack is the stack the process uses in user space; ordinary function calls, for example, use the user stack.

2. The kernel stack is a fixed area in operating-system space, used to save the interrupt context and the parameters, return values, and so on of calls between operating-system subroutines.

The user stack is part of the user process's space, used to save the parameters, return values, and so on of calls between the user process's subroutines.

3. On 32-bit Windows, each process has a 4 GB address space; the system part occupies the high end of that space and user space the low end. If a user program wants to access the system-stack region directly, it needs special means.

Why set up two different stacks?

Sharing:

The kernel's code and data are shared by all processes; if each process did not have its own kernel stack, different processes could not be executing different code paths inside the kernel at the same time.

Security:

If there were only one stack shared with user mode, a user could modify the stack contents to break through kernel security.


8. Memory pools, process pools, and thread pools. (C++ programmers must master these.)

Rationale:
1. A thread's life cycle has three parts: creation time, execution time, and destruction time. A thread pool avoids creating and destroying threads frequently, eliminating creation and destruction time and increasing efficiency.
2. Pre-creation technique. The thread pool exists to reduce the overhead of the threads themselves. It uses pre-creation: right after the application starts, it immediately creates a number of threads (N1) and places them in an idle queue. These threads are blocked (suspended), consuming no CPU and only a small amount of memory. When a task arrives, the pool selects an idle thread and runs the task in it.

Thread pool implementation:
- Thread pool manager: creates and manages the thread pool.
- Worker threads: the threads in the pool that actually execute tasks.
- Task interface: although thread pools are most often used to support network servers, abstracting the work the threads perform into a task interface keeps the pool independent of any specific task.
- Task queue: concretely, a queue (a data structure such as a linked list) that holds the tasks waiting to be executed.

Limitations: the thread pool is dedicated to reducing the impact of thread overhead on the application, on the premise that the thread's own overhead is non-trivial compared with the task it executes. If the thread's overhead is negligible relative to the cost of executing the task, the benefit of a pool is not obvious. For example, FTP and Telnet servers usually transfer files for long, expensive periods; a thread pool is not the ideal method there, and we can choose a "create instantly, destroy instantly" strategy instead.

In short, a thread pool usually suits the following situations:
(1) tasks arrive frequently and each task's processing time is short;
(2) real-time requirements are high: creating a thread only after accepting a task may fail to meet the deadline, so threads must be pre-created;
(3) frequent bursts of high load, such as a Web server during a football broadcast, which would otherwise force the traditional approach to constantly create and destroy large numbers of threads; a dynamic thread pool prevents this.
Memory Pool:
