How many of these Linux questions can you answer? -- Answers to questions 1~13

Source: Internet
Author: User
Tags: message queue, mutex, semaphore

1. Can memcmp be used to compare structures? What is the difference between strcmp and memcmp?

Reference: http://www.cnblogs.com/cxz2009/archive/2010/11/11/1875125.html

[Email protected]:/study/linuxknowledge# cat memcmptest.c

#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

typedef struct Cmptest {
    char  a;
    short b;
    int   c;
} Cmptest;

int main(int argc, char *argv[])
{
    Cmptest t1, t2;
    printf("%d\n", memcmp(&t1, &t2, sizeof(Cmptest)));      /* uninitialized */

    Cmptest t3, t4;
    memset(&t3, 0, sizeof(Cmptest));
    memset(&t4, 0, sizeof(Cmptest));
    printf("%d\n", memcmp(&t3, &t4, sizeof(Cmptest)));      /* zeroed */

    t1.a = 'a'; t1.b = 3; t1.c = 5;
    t2.a = 'a'; t2.b = 3; t2.c = 5;
    printf("%d\n", memcmp(&t1, &t2, sizeof(Cmptest)));      /* same members, no memset */

    t3.a = 'a'; t3.b = 3; t3.c = 5;
    t4.a = 'a'; t4.b = 3; t4.c = 5;
    printf("%d\n", memcmp(&t3, &t4, sizeof(Cmptest)));      /* same members, after memset */

    t2 = t1;
    printf("%d\n", memcmp(&t1, &t2, sizeof(Cmptest)));      /* struct assignment */

    t4 = t3;
    printf("%d\n", memcmp(&t3, &t4, sizeof(Cmptest)));

    return 0;
}

[Email protected]:/study/linuxknowledge#./memcmptest

1

0

1

0

0

0

[Email protected]:/study/linuxknowledge#

That is, if the structures are first initialized (for example, zeroed with memset), they can be compared with memcmp.

Likewise, if one structure is assigned to another with "=", memcmp will report the two structures as equal.

However, if the structures are not initialized first, memcmp may still report a difference even when every member has been assigned the same value. The reason is that the compiler inserts padding bytes between members (here between char a and short b), and for uninitialized structures those padding bytes hold indeterminate values.

Conversely, a memcmp result of 0 only means that the two blocks of memory are identical byte for byte; it does not by itself guarantee that the two structures are logically the same, since the comparison covers raw bytes, including padding, rather than the individual members.

2. What are the differences between soft interrupts and hard interrupts?

(1) A hard interrupt is an interrupt signal generated by hardware that interrupts the CPU; a soft interrupt is independent of the hardware and is scheduled by the kernel. For example, an external (EXT) interrupt, an MSI interrupt, or a mailbox interrupt triggers the CPU to respond; these are hard interrupts. Soft interrupts are generally used as the bottom half of interrupt handling: the CPU handles the top half in the interrupt handler function, and the kernel then schedules the execution of the bottom half, which can be implemented with softirqs, tasklets, or workqueues.

(2) A soft interrupt typically processes deferred work (such as I/O requests) without interrupting the CPU and is dispatched by the kernel. A hard interrupt is typically a response to a hardware interrupt signal: it interrupts the CPU and triggers the interrupt handler in the kernel. In terms of flow, a soft interrupt goes from the process/kernel context into the driver, while a hard interrupt goes hardware -> CPU -> interrupt handler function.
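As an illustration of the top-half / bottom-half split described above, here is a minimal, hedged driver sketch using the older tasklet API (pre-5.9 kernels); the names my_dev_irq and my_tasklet_fn are hypothetical:

#include <linux/interrupt.h>

/* bottom half: runs later in softirq context, with interrupts enabled */
static void my_tasklet_fn(unsigned long data)
{
    /* do the time-consuming part of the work here */
}

static DECLARE_TASKLET(my_tasklet, my_tasklet_fn, 0);

/* top half: runs in hard-interrupt context; keep it as short as possible */
static irqreturn_t my_dev_irq(int irq, void *dev_id)
{
    /* acknowledge the device, then defer the rest to the bottom half */
    tasklet_schedule(&my_tasklet);
    return IRQ_HANDLED;
}

/* registered in probe(), e.g.: request_irq(irq, my_dev_irq, 0, "mydev", dev); */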

3. What are the ways of interprocess communication? Which is the most efficient?

(1) Pipes (pipe) and named pipes (FIFO) - for example, pipelines and redirection in the shell

(2) Signals (signal) - for example, killing a process with kill -9, or ignoring SIGHUP with nohup; a signal is a software interrupt

(3) Message queues - slower than shared memory and limited in buffer size, but no locking is needed; suitable for small pieces of data such as commands.

(4) Shared memory - the fastest IPC method: the same piece of physical memory is mapped into the address spaces of processes A and B, so each can see the other's data updates. A synchronization mechanism such as a mutex or semaphore is needed. Ideal for transferring large amounts of data.

(5) Semaphores - P/V operations, as in the producer-consumer example

(6) Socket--socket network programming

The following references: http://blog.csdn.net/piaoairy219/article/details/17333691

------Pipes

The advantage of pipes is that no locking is needed. The disadvantages are that the default buffer is small (only 4 KB), they are only suitable for communication between parent and child processes, and a pipe supports only one-way communication, so two pipes must be created for two-way communication. They are also not suitable for multiple child processes, because the messages would get mixed up. Sending and receiving use read/write, which treat the data as a byte stream: the data itself has no message boundaries, so the application must delimit messages itself. Messages usually consist of a fixed-length header and a variable-length body, and after one child process reads the header from the pipe, the body might be read by a different child process.
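A minimal sketch of the basic pattern (one parent, one child, one-way communication over an anonymous pipe); this example is illustrative and not from the original post:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) < 0) {                    /* fd[0] = read end, fd[1] = write end */
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: reads from the pipe */
        char buf[64] = {0};
        close(fd[1]);
        read(fd[0], buf, sizeof(buf) - 1);
        printf("child got: %s\n", buf);
        close(fd[0]);
        exit(0);
    }

    close(fd[0]);                          /* parent: writes into the pipe */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}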

------Message Queues

Message queues also need no locking. The default buffer size and the per-message limit are larger (64 KB on my SUSE 10), and they are not restricted to parent-child communication: as long as processes use the same key, different processes can attach to the same message queue. They can also be used for two-way communication, with a little extra identification: messages can be distinguished by the type field in the message. For example, a task-dispatching process creates several worker child processes; whether the parent sends a task-dispatch message or a child sends a task-result message, the message type is set to the PID of the target process. Because msgrcv can be told to receive only messages of a given type, each child receives only its own tasks, and the parent receives only the task results.
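A minimal sketch of that type-equals-target-PID pattern with System V message queues (for brevity it sends to and receives from the same process, and error handling is omitted); illustrative only:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct task_msg {
    long mtype;          /* set to the target process's PID */
    char mtext[128];     /* task description / result */
};

int main(void)
{
    key_t key = ftok("/tmp", 'Q');                 /* same key => same queue */
    int qid = msgget(key, IPC_CREAT | 0666);

    struct task_msg out = { .mtype = getpid() };   /* demo: address it to ourselves */
    snprintf(out.mtext, sizeof(out.mtext), "do task 1");
    msgsnd(qid, &out, sizeof(out.mtext), 0);

    struct task_msg in;
    /* 4th argument = mtype: receive only messages addressed to this PID */
    msgrcv(qid, &in, sizeof(in.mtext), getpid(), 0);
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);                   /* clean up the queue */
    return 0;
}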

------Shared Memory

Shared memory can almost be considered to have no upper limit on size, and like message queues it is not restricted to parent and child processes. Because the memory itself is shared, there is no one-way restriction either. The biggest issue is that the application must handle mutual exclusion itself; there are several approaches:

(1) Only for sharing between two processes: put a flag in the shared memory, which must be declared volatile, and both sides take turns based on the flag bit. For example, when the flag is 0 the first process may write and the second waits; when it is 1 the first waits and the second may write/read.

(2) Also only suitable for two processes: use signals, with each side waiting for a different signal. The first process writes, sends signal 2, and waits for signal 1; the second process waits for signal 2, reads/writes after receiving it, and then sends signal 1. This is not used with more processes because, although the parent can send a different signal to each child, the children may access the shared memory at the same time after receiving their signals, causing a race condition between them. If multiple blocks of shared memory are used instead, and a child sends a "result ready" signal, the parent that receives the signal does not know which child sent it, i.e. which block of shared memory to read. Moreover, waiting for a signal must block, so if a child terminates unexpectedly, the parent blocks forever with no way to time out.

(3) Use semaphores, or the locking/unlocking facility that comes with msgctl, though the latter is only available on Linux (see the sketch below).
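A minimal, hedged sketch of option (3): a System V semaphore used as a mutex around a System V shared-memory segment. A single process is shown for brevity; in practice the writer and reader would be separate processes attaching with the same key, and error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>

union semun { int val; };              /* on Linux the caller defines this union */

static void sem_op(int semid, int delta)   /* delta -1 = P (lock), +1 = V (unlock) */
{
    struct sembuf sb = { .sem_num = 0, .sem_op = delta, .sem_flg = 0 };
    semop(semid, &sb, 1);
}

int main(void)
{
    key_t key   = ftok("/tmp", 'S');
    int   shmid = shmget(key, 4096, IPC_CREAT | 0666);
    int   semid = semget(key, 1, IPC_CREAT | 0666);

    union semun arg = { .val = 1 };        /* binary semaphore, initially unlocked */
    semctl(semid, 0, SETVAL, arg);

    char *mem = shmat(shmid, NULL, 0);     /* map the segment into this process */

    sem_op(semid, -1);                     /* P: enter the critical section */
    strcpy(mem, "data written under the lock");
    sem_op(semid, +1);                     /* V: leave the critical section */

    printf("%s\n", mem);
    shmdt(mem);
    shmctl(shmid, IPC_RMID, NULL);
    semctl(semid, 0, IPC_RMID);
    return 0;
}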


4. What is the difference between kmalloc and vmalloc?

kmalloc allocates physically contiguous memory from the kernel's linear (direct-mapped) region; vmalloc allocates memory from the non-linear vmalloc region, which can also draw on high memory. Because vmalloc builds a non-contiguous, page-by-page mapping, requests for large blocks of memory are more likely to succeed. kmalloc requires contiguous physical memory, so large requests can easily fail because of memory fragmentation; whether a large allocation should use kmalloc can be judged from how many large free blocks the buddy system still has.

kmalloc and get_free_pages allocate memory from the direct-mapped (linear) low-memory region.

ioremap and vmalloc use the vmalloc area, i.e. the non-linearly mapped region, which can also cover high memory.
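A brief, hedged kernel-side sketch of the difference (the function name demo_alloc is hypothetical):

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>

static int demo_alloc(void)
{
    void *small = kmalloc(4096, GFP_KERNEL);    /* small, physically contiguous */
    void *big   = vmalloc(4 * 1024 * 1024);     /* large, only virtually contiguous */

    if (!small || !big) {
        kfree(small);                           /* kfree/vfree accept NULL */
        vfree(big);
        return -ENOMEM;
    }

    /* ... kmalloc memory is physically contiguous; vmalloc memory is not ... */

    kfree(small);
    vfree(big);
    return 0;
}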


5. What are the differences between user-space mmap and malloc and kernel-space vmalloc in terms of memory access?

In user space, mmap is used to map files, while malloc is used to allocate memory dynamically.

mmap does not allocate actual physical memory by itself; it simply maps the file into the virtual address space of the calling process.

malloc and vmalloc are backed by actual physical memory (vmalloc allocates the physical pages at allocation time; pages behind malloc'd memory are typically faulted in when first touched).
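A small user-space sketch contrasting mmap of a file with malloc; the file name /etc/hostname is just an example, and error handling is trimmed:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);    /* any readable file */
    if (fd < 0)
        return 1;

    struct stat st;
    fstat(fd, &st);

    /* mmap: the file's pages are faulted in from the file on first access */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p != MAP_FAILED)
        fwrite(p, 1, st.st_size, stdout);

    /* malloc: anonymous heap memory, physically backed when first touched */
    char *buf = malloc(st.st_size);
    free(buf);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}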

6. Can user space directly access kernel-space memory?

A fairly common way to share memory between kernel space and a user-space process works as follows. Kernel side: the kernel allocates memory (for example with __get_free_page) and then writes the virtual address of the allocated memory into a registered proc file. User side: the process first reads the kernel virtual address from the proc file, converts that address to a physical address, and then accesses the physical address by mapping /dev/mem. The frame buffer is another example of accessing kernel-space memory from user space.
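A hedged user-space sketch of the /dev/mem step described above. Here phys_addr is a hypothetical, page-aligned physical address that would come from the driver's proc file; this requires root, and kernels built with CONFIG_STRICT_DEVMEM may restrict which ranges can be mapped:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    unsigned long phys_addr = 0x12345000UL;   /* hypothetical, page-aligned */
    size_t        len       = 4096;

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    /* for /dev/mem, the mmap offset is interpreted as the physical address */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, phys_addr);
    if (p == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    printf("first byte: 0x%02x\n", *(unsigned char *)p);

    munmap(p, len);
    close(fd);
    return 0;
}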


7. How is an MSI interrupt triggered in PCIe?

The device's MSI capability structure holds a message address and message data; the device triggers an MSI interrupt by performing a memory write of that MSI data to that MSI address, which is then delivered to the CPU as an interrupt.
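On the driver side, the kernel's PCI core programs the address/data pair into the MSI capability when MSI is enabled. A hedged sketch using the pci_alloc_irq_vectors() API (available in kernels 4.8 and later; the demo_* names are hypothetical):

#include <linux/pci.h>
#include <linux/interrupt.h>

static irqreturn_t demo_msi_handler(int irq, void *dev_id)
{
    /* runs when the device posts the MSI memory write */
    return IRQ_HANDLED;
}

static int demo_setup_msi(struct pci_dev *pdev)
{
    int nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
    if (nvec < 0)
        return nvec;        /* device or platform does not support MSI */

    /* pci_irq_vector() returns the Linux IRQ number for MSI vector 0 */
    return request_irq(pci_irq_vector(pdev, 0), demo_msi_handler, 0,
                       "demo-msi", pdev);
}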


8. How do I know the size of a file?

Multiple ways:

(1) Seek to the end of the file and take the offset (ftell):

#include <stdio.h>

unsigned long get_file_size(const char *path)
{
    unsigned long filesize = -1;
    FILE *fp;

    fp = fopen(path, "r");
    if (fp == NULL)
        return filesize;
    fseek(fp, 0L, SEEK_END);
    filesize = ftell(fp);
    fclose(fp);
    return filesize;
}

(2) Read the file attributes (stat):

#include <sys/stat.h>

unsigned long get_file_size(const char *path)
{
    unsigned long filesize = -1;
    struct stat statbuff;

    if (stat(path, &statbuff) < 0) {
        return filesize;
    } else {
        filesize = statbuff.st_size;
    }
    return filesize;
}


9. What is the difference between spin_lock and mutex_lock? How does spin_lock differ between single-core and multi-core systems?

spin_lock vs. mutex_lock (refer to http://blog.csdn.net/wilsonboliu/article/details/19190861):

A mutex (mutex_lock) is sleep-waiting. When the mutex cannot be acquired, a context switch occurs: the thread adds itself to the wait queue and sleeps until another thread releases the mutex and wakes it up. Meanwhile the CPU is free and can be scheduled to handle other tasks.

A spin lock (spin_lock) is busy-waiting. When the lock is not available, the thread keeps busy-waiting and retrying the lock request until it obtains the lock. The CPU stays busy during this time and cannot do other work. (Question: isn't that effectively a deadlock of wasted cycles?) For example, on a dual-core machine with two threads (thread A and thread B) running on core 0 and core 1, a thread spinning on a spin lock will keep consuming its CPU.

Another notable detail is that a spin lock consumes more user time. With two threads running on two cores, most of the time only one thread can hold the lock, so the other thread is always busy-waiting on its core with 100% CPU occupancy. A mutex is different: when a lock request fails, a context switch occurs, which frees up a core for other computing tasks.

How to choose?

(1) A mutex suits scenarios where lock operations are very frequent, and it adapts better. Although it costs more than a spin lock (mostly from context switches), it fits the complex scenarios of real-world development and offers more flexibility in terms of overall performance.
(2) A spin lock has better lock/unlock performance (fewer CPU instructions), but it only suits critical sections that run for a very short time. In practical software development, unless the programmer understands the locking behavior of their own program well, using a spin lock is not a good idea (a typical multi-threaded program performs thousands of lock operations; if too many of them are contended, a lot of time is wasted busy-waiting).

(3) A safer approach may be to use a mutex first (conservatively), and then, if there is a further need for performance, try tuning with a spin lock. After all, our programs are not as performance-critical as the Linux kernel (where the most common lock primitives are spin locks and RW locks).
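A small, hedged pthread sketch of the trade-off: both locks protect the same trivial counter, the mutex sleeps the loser of the race while the spinlock busy-waits. Compile with -pthread; in a real measurement each section would be timed separately:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t    mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_spinlock_t spin;
static long counter;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_spin_lock(&spin);      /* busy-waits if contended */
        counter++;                     /* very short critical section */
        pthread_spin_unlock(&spin);

        pthread_mutex_lock(&mtx);      /* may sleep if contended */
        counter++;
        pthread_mutex_unlock(&mtx);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);
    return 0;
}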

spin_lock on single-core vs. multi-core (reference: http://blog.chinaunix.net/uid-25871104-id-3052138.html):

spin_lock is a synchronization mechanism of the Linux kernel. Kernel code claims ownership of a resource by acquiring a spin_lock and holds it until it releases the spin_lock; if other kernel code tries to acquire a spin_lock that is already held, that code waits (spins) until it obtains the spin_lock.

Differences between single-core and multi-core:

The kernel's implementation of spin_lock handles single-core (UP) and multi-core (SMP) systems differently. On a single core, as long as the code is not in interrupt context, code holding a spin_lock can lose the CPU only if kernel preemption occurs; therefore, on UP it is enough for spin_lock to disable preemption when acquiring the lock and re-enable preemption when releasing it. On multi-core systems, two pieces of code can genuinely execute at the same time on different cores, so a real lock is required to claim ownership of the resource.
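A brief, hedged kernel-side usage sketch (the demo_* names are hypothetical): spin_lock_irqsave() takes the lock (a real atomic lock on SMP; essentially preemption/IRQ control on UP) and masks local interrupts, so the same list can be touched safely from process context and from an interrupt handler:

#include <linux/spinlock.h>
#include <linux/list.h>

static DEFINE_SPINLOCK(demo_lock);
static LIST_HEAD(demo_list);

struct demo_item {
    struct list_head node;
    int value;
};

static void demo_add(struct demo_item *item)
{
    unsigned long flags;

    spin_lock_irqsave(&demo_lock, flags);    /* lock + disable local IRQs */
    list_add_tail(&item->node, &demo_list);
    spin_unlock_irqrestore(&demo_lock, flags);
}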


10. What does wait() do in a multi-process program?

(Refer to http://blog.csdn.net/wallwind/article/details/6998602)

wait() waits for a child process to terminate: if all of the caller's child processes are still running, the call blocks; if a child process has already terminated and its termination status has not yet been collected, the call returns that child's termination status immediately; and if the caller has no child processes at all, it returns an error immediately.
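A minimal sketch of this behaviour using waitpid() (illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                 /* child */
        sleep(1);
        exit(42);
    }

    int status;
    pid_t done = waitpid(pid, &status, 0);   /* blocks until the child exits */
    if (done == pid && WIFEXITED(status))
        printf("child %d exited with code %d\n", done, WEXITSTATUS(status));
    else if (done < 0)
        perror("waitpid");                   /* e.g. the caller has no children */
    return 0;
}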


11. If a process is stuck, how can I see where it is stuck?

(1) Print the stack information at that moment via the magic SysRq key

(2) Print the stack information by attaching GDB to the process

I do not have a complete approach to this question; pointers from more experienced readers are welcome.


12. How do you troubleshoot a system crash/hang problem? (Reference: http://blog.chinaunix.net/uid-25909722-id-3047986.html)

(1) Check the message log (/var/log/messages); ideally redirect it or configure it to be saved to flash so it can still be analyzed after a restart

(2) Use the magic SysRq key to see whether the system still responds; if it does, trigger the various SysRq dumps and analyze the output

(3) Enable the kernel's lock/lockup detection configuration options

For this question too, my ideas are rather limited; guidance from more experienced readers is welcome.


13. How do variadic arguments work? How are they implemented? (Reference: http://blog.csdn.net/wooin/article/details/697106)

[Email protected]:/study/linuxknowledge# cat vaarg.c

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

void simple(int i, ...)
{
    va_list arg_ptr;
    char *s = NULL;

    va_start(arg_ptr, i);         /* start after the last named parameter */
    s = va_arg(arg_ptr, char *);  /* fetch the next argument as a char * */
    va_end(arg_ptr);
    printf("%d %s\n", i, s);
    return;
}

int main(int argc, char *argv[])
{
    int i = 10;
    simple(i, "Hello ni ma!\n");
    return 0;
}


[Email protected]:/study/linuxknowledge#./vaarg

10 Hello ni ma!

[Email protected]:/study/linuxknowledge#

Simply put, it is implemented with the va_start/va_arg/va_end macros; for the details of how the va_arg macro works, see http://blog.csdn.net/wooin/article/details/697106

I will write up further exploration later.


