[*] Enable loadable module support --->
This option enables support for loadable kernel modules. If you enable it, you must run "make modules_install" after building to install the modules under /lib/modules/ so your kernel can use them. A module is a small piece of code that, once compiled, can be added to the running kernel dynamically, either to add features or to support certain hardware. Commonly used drivers and features can be built as modules to keep the kernel image small, then loaded into the kernel with the modprobe command when needed (and removed again when they are not). Features that are not used constantly, especially drivers not needed at system startup, are good candidates for modules. Anything required at boot, such as the driver for the root file system or support for the system bus, must not be built as a module, or the system will fail to start. Building rarely used drivers as modules is the most effective approach. See the man pages for modprobe, lsmod, modinfo, insmod and rmmod.
If unsure, select Y.
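As a quick reference, the module-management commands mentioned above are used as sketched below. The module name "snd_hda_intel" is only an illustrative example, and loading or unloading requires root:

```shell
# "make modules_install" installs modules for the running kernel under
# /lib/modules/<kernel-version>/. Print that path:
echo "/lib/modules/$(uname -r)/kernel"

# Typical module management (shown as comments; these need root and a
# module that actually exists on your system):
#   modprobe snd_hda_intel    # load a module plus its dependencies
#   lsmod                     # list currently loaded modules
#   modinfo snd_hda_intel     # show a module's metadata
#   rmmod snd_hda_intel       # unload a module no longer needed
```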
- [ ] Forced module loading
Allow modules to be loaded by force even when the kernel considers them incompatible (for example, when version information is missing). This is dangerous. If unsure, select N.
- [*] Module unloading
Allow loaded modules to be unloaded again (for example with rmmod). If unsure, select Y.
- [*] Forced module unloading
This option lets you forcibly remove a module that is in use, even when the kernel considers it unsafe: the kernel removes the module immediately, regardless of whether anything is still using it (rmmod -f). This is mainly for kernel developers and the impatient. If unsure, select N.
- [ ] Module Versioning support
When this option is selected, version information is added to compiled modules, so a kernel can tell modules built for a different kernel version apart from its own and, in some cases, still use them. This can occasionally be useful, but modules from other kernel versions may cause problems. If unsure, select N.
- [ ] Source checksum for all modules
Record a checksum of the source code of every module, so that a change to a module's source can be detected even when someone forgot to update its version number. If you do not build kernel modules yourself, you do not need this. If unsure, select N.
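Taken together, the selections in this section correspond to a .config fragment like the following (the y/n values simply mirror the checkboxes shown above; symbol names are the ones used by 2.6-era kernels):

```
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
```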
-*- Enable the block layer --->
Block device support. This option exists so that the block layer can be removed from kernels that truly do not need it; it is required for hard disks, USB storage and SCSI devices. If it is not selected, the blockdev files are unavailable and some file systems, such as ext3, cannot be used; SCSI character devices and USB storage devices are also disabled if they depend on block devices. Select Y unless you are certain you will never mount a hard disk or any similar device.
- [*] Support for large (2TB+) block devices and files
Needed only if you use block devices or files larger than 2TB.
- [*] Block layer SG support v4
Version 4 of the generic SCSI (SG) block device interface.
- [ ] Block layer data integrity support
Data integrity support for block devices.
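Expressed as a .config fragment, the block-layer selections above would look like this (symbol names as used in 2.6-era kernels; CONFIG_LBDAF is the large-block-device option and CONFIG_BLK_DEV_BSG is SG support v4):

```
CONFIG_BLOCK=y
CONFIG_LBDAF=y
CONFIG_BLK_DEV_BSG=y
# CONFIG_BLK_DEV_INTEGRITY is not set
```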
- IO schedulers --->
The I/O scheduler controls input/output bandwidth, mainly for hard disks, and is an essential part of the kernel. Three I/O schedulers are offered here.
- <*> Anticipatory I/O Scheduler
Assumes that a block device has only one physical seek head (for example, a single SATA disk) and merges many small random write streams into one large sequential write stream, trading write latency for maximum write throughput. It suits most environments, especially write-heavy ones such as file servers. The anticipatory scheduler is the default disk scheduler and is usually a good choice, but it is larger and more complex than the deadline I/O scheduler, and can be slow when streaming data in.
- <*> Deadline I/O Scheduler
The deadline scheduler is simple and compact. It provides the lowest read latency and very good throughput, and is especially suited to read-heavy environments such as databases. Its performance is close to that of the anticipatory scheduler, and it does better when some data is being streamed in. For single-process disk I/O it behaves almost the same as the anticipatory scheduler, so it is a good choice.
- <*> CFQ I/O Scheduler
Uses a QoS policy to give all tasks an equal share of bandwidth, avoiding process starvation and achieving low latency; it can be seen as a compromise between the other two schedulers. The CFQ scheduler tries to give every process the same bandwidth on multi-user systems with many processes, providing a fair working environment, and is well suited to desktop systems.
- Default I/O scheduler (CFQ) --->
I understand the three I/O schedulers this way: the anticipatory scheduler is the traditional one; its principle is to respond as soon as a request arrives, so if the disk is in the middle of a job, that job is paused to serve the user. The deadline scheduler gives every piece of work a deadline by which it must finish; when a user request arrives, it decides whether to serve it based on whether the existing work can still meet its deadlines. CFQ distributes resources evenly: no matter how quickly you need a response or how much work you ask for, everyone gets an equal share.
- ( ) Anticipatory
- ( ) Deadline
- (X) CFQ
- ( ) No-op
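The scheduler choice above corresponds to a .config fragment like the one below (symbol names from 2.6-era kernels). Note that on a running system the active scheduler for a given disk can also be changed at runtime through sysfs, for example via /sys/block/sda/queue/scheduler:

```
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_IOSCHED_NOOP=y
CONFIG_DEFAULT_CFQ=y
CONFIG_DEFAULT_IOSCHED="cfq"
```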