Since I started working with diskless systems, I have picked up a great deal of knowledge, both software and hardware, both theoretical and practical. Many people think diskless systems are very complicated, and I was misled by that idea too, but in fact I found that diskless is actually quite simple; what really matters is combining theory with experience. I used to read plenty of diskless tutorials on the Internet and didn't take them seriously at the time, always thinking I knew better. But after actually doing it myself, I found I had simply retraced what those tutorials described. As the saying goes: "Many people refuse to accept the received wisdom, only to end up proving that their predecessors' experience was correct."
In fact, that saying describes the path from practice to experience to theory: we run headlong into walls in practice and sum up our experience, and after enough of that we get curious, start searching, and finally discover that the theory already describes exactly what we practiced and what we summarized. So today I would like to share some theoretical knowledge with you, most of it collected from the Internet. If there are any errors, I hope you will correct me promptly.
Since we mentioned diskless at the beginning, today's topic is also closely tied to diskless: NIC parameter settings. As we all know, diskless means the client has no hard disk; the client's disk lives on the server and is accessed through a virtualization technique. In this virtual setup, the network card is a key link. It is like the data cable between the "disk" and the client, but this data cable is far more complicated than a SATA cable: it is not only a question of good contact, but also of configuration. Set it well and it is fast and stable; set it badly and the problems can be varied and complicated. OK, let's get started!
Now that we want to talk about NIC-related topics, we have to mention the legendary IEEE. What is IEEE? It is an organization that has created many network communication standards; its full name is the Institute of Electrical and Electronics Engineers. For example, the "NIC aggregation" we have all heard of is actually an IEEE standard called 802.3ad link aggregation, and the VLAN we mentioned before is actually an IEEE standard as well, 802.1Q (Virtual LAN). If you are interested, searching for IEEE or 802.3 will teach you a lot more.
Energy Efficient Ethernet: EEE
The standards above are some of the very solid things IEEE has done. More recently they developed an energy-saving standard, 802.3az, whose function is to automatically reduce a network card's power consumption when it has no traffic, running at full power only when network usage is high. The full name of 802.3az is Energy Efficient Ethernet, EEE for short. Its arrival has caused diskless systems a lot of trouble: as long as the EEE setting is enabled in the NIC parameters, it may cause slow boot speeds. Currently the relatively new Realtek 8111E NIC (rev 06) on the market supports this energy-saving technology, and depending on the production batch, some of these NICs boot slowly unless the EEE option is disabled, with the XP boot scroll bar needing more than six passes, dropping to one or two passes once EEE is disabled; other 8111E NICs are not affected at all. This is the first NIC parameter, EEE. Because the technology is still fairly new, the option only shows up on Realtek NICs with newer drivers, and the news below also comes from the Realtek official website.
[attachment: screenshot of the Realtek website announcement, 12.24 KB]
One of the lines highlighted in red reads "the world's first ......". Yes, the EEE problem seems to be hit only by Realtek NICs; the first one to eat crabs is always the first to taste how delicious they are, and also the first to get hurt ...... It's just that us ordinary users really cannot afford the hurt ...... Below is the configuration page of a Realtek 8111E NIC. If your NIC has the EEE option, be sure to disable it. Of course, if it does not have the option, simply ignore the matter: no option means the NIC does not support EEE.
[attachment: Realtek 8111E NIC advanced settings page, 26.11 KB]
In addition, "environmental protection and energy conservation" and "greenethernet" are similar to EEE, which are both energy-saving functions. Therefore, we recommend that you disable them. In short, functions related to energy conservation must not be enabled on diskless ECS, otherwise, it is neither slow nor unstable, because there is no "no traffic" for the NIC on the diskless disk, and problems may occur when the NIC is enabled.
Flow control, traffic control, flowcontrol
This option exists on basically all network cards, though the label differs: the Realtek driver calls it "traffic control", the Intel driver calls it "flow control" (rendered as "process control" in some translated driver versions), and on some NICs the option simply appears in English as "flowcontrol". Many switches have the same function, also called flow control. In the theoretical explanation below I will just write "flow control" to save a few words.
The flow control supported by network adapters is not the same thing as QoS, even though the goals may be similar. The flow control supported by NICs and switches is also an IEEE standard, 802.3x, full-duplex Ethernet data-link-layer flow control. Because it is an IEEE standard, switches, NICs and other Ethernet devices generally support and comply with 802.3x. The core purpose of the standard is to prevent packet loss caused by network congestion. The general working principle: when the device on one end of a link is too busy, it sends a command (a PAUSE frame) to the device on the other end asking it to stop sending packets for a while, relieving the pressure and avoiding packet loss. Feeding yourself is a kind of "flow control": you slow down before you choke. Having someone else feed you is like having no flow control: they may stuff the food up your nose ......
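To make the mechanism concrete, here is a minimal sketch of what that "pause" command actually looks like on the wire: an 802.3x PAUSE frame built in Python. The source MAC address and the pause time are made-up example values, not anything a real driver requires.

```python
import struct

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Assemble an IEEE 802.3x PAUSE frame (the 4-byte FCS is added by the NIC)."""
    dst = bytes.fromhex("0180c2000001")        # reserved MAC Control multicast address
    ethertype = struct.pack("!H", 0x8808)      # MAC Control EtherType
    opcode = struct.pack("!H", 0x0001)         # opcode 1 = PAUSE
    quanta = struct.pack("!H", pause_quanta)   # pause time, in units of 512 bit times
    frame = dst + src_mac + ethertype + opcode + quanta
    return frame + b"\x00" * (60 - len(frame)) # pad to the 60-byte minimum frame size

# Example: a busy receiver asks its link partner to stop sending for the maximum
# time (0xFFFF quanta, roughly 33 ms on a gigabit link).
frame = build_pause_frame(bytes.fromhex("001122334455"), 0xFFFF)
print(len(frame), "bytes:", frame[:20].hex())
```

The receiving NIC simply stops transmitting for the requested number of quanta; that stall is exactly what causes the trouble described next.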
It sounds like flow control should be an excellent way to prevent packet loss, so why should it be disabled on diskless systems? The reason is simple: almost all diskless software now has its own "resend packet" mechanism. If the client detects packet loss, or the server does, the data is simply requested again; there is no need for the NIC to meddle in the middle. But precisely because the diskless software has this retransmission mechanism, when it meets flow control the following farce plays out:
The client NIC asks the server NIC for data: server, send me the next packet!
The server NIC replies: sorry, I'm too busy, you'll have to wait. So the server pauses sending.
The diskless client software then complains to the client NIC: why hasn't the packet arrived yet? If it doesn't come, I'll just keep re-requesting it!
Meanwhile the diskless server asks the server NIC: I already sent the packet, why hasn't the client responded? No response must mean packet loss, so the diskless server also starts desperately re-sending packets to the client ......
In this way a problem is created purely by flow control: the client hangs while waiting for data, and because the server's packets are stalled and it keeps re-sending them, the server can run into trouble too. That is why flow control hurts diskless systems. So whether on the server, the client, or the switch, if there is a flow control option, it must be disabled!
Jumbo frame, jumbo packet, jumboframe
This NIC parameter is present on basically all NICs, and again the name differs by brand: Realtek calls it "jumbo frame", Intel calls it "jumbo packet", and some older NIC drivers show it in English as "jumboframe". In this article I will just call it the jumbo frame, to cut down on typing.
The two parameters discussed above were both developed on the basis of IEEE international standards. The jumbo frame is not an international standard; it is a de facto convention agreed among communication equipment vendors. A so-called jumbo frame is an extra-long frame format designed specifically for Gigabit Ethernet. A standard Ethernet frame is at most 1518 bytes long, while jumbo frame lengths vary by manufacturer, generally from a minimum of about 2 KB up to around 9 KB. So what is the benefit of jumbo frames?
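To put numbers on that benefit, here is a rough back-of-the-envelope calculation in Python. It assumes plain Ethernet II framing carrying IPv4 + TCP headers with no options or VLAN tags; the MTU values are only illustrative.

```python
# Per-frame cost on the wire: Ethernet header (14) + FCS (4) + preamble/SFD (8)
# + minimum inter-frame gap (12), plus IPv4 (20) and TCP (20) headers inside.
ETH_WIRE_OVERHEAD = 14 + 4 + 8 + 12
IP_TCP_HEADERS = 20 + 20

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS          # application data per frame
    wire_bytes = mtu + ETH_WIRE_OVERHEAD    # what the frame really costs on the wire
    return payload / wire_bytes

for mtu in (1500, 4088, 9000):              # standard MTU vs. two common jumbo sizes
    print(f"MTU {mtu:5d}: {payload_efficiency(mtu):.1%} of the wire carries payload")
```

Bigger frames mean a larger share of every frame is real data, and far fewer frames per second for the NIC and switch to process.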
Let me explain it with a concrete symptom. Anyone who runs diskless systems will be familiar with the hd_speed tool. It is a speed-testing program, and its test options include a parameter called "block size", for example:
[attachment: hd_speed "block size" option, 24.53 KB]
Careful readers will have noticed that, in the same diskless and network environment, choosing a different "block size" for the speed test gives a different measured speed. For example, in a network environment where 64 KB blocks measure 64 MB/s, larger blocks (such as 128 KB) can reach 80 MB/s or even 90 MB/s. The principle behind this phenomenon is the same one behind jumbo frames.
Using jumbo frames effectively reduces the number of packets on the network, which improves transmission efficiency and reduces the extra burden on network devices of processing packet headers. That is why jumbo frames can improve transmission efficiency on the network.
You may have noticed that switches have a specification called the packet forwarding rate, which is expressed in packets per second regardless of packet size; within limits, forwarding a large packet costs little more than forwarding a small one. So if each packet transmitted per unit time carries more data, the total data volume naturally rises, and the speed displayed by the software goes up. Take the hd_speed test above as an example:
If 64 KB blocks measure 64 MB/s, the network channel is actually transferring 64 × 1024 / 64 = 1024 packets of 64 KB per second. If no packet loss occurred and 1024 packets of 128 KB could be transmitted per second instead, the transmission speed could in theory reach 128 MB/s, that is, double!
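Here is that arithmetic spelled out, using the same numbers as the example above (the 128 KB figure is the hypothetical larger block size):

```python
block_kb = 64
measured_mb_s = 64                                  # speed seen with 64 KB blocks
requests_per_sec = measured_mb_s * 1024 // block_kb
print(requests_per_sec, "blocks of 64 KB per second")        # 1024

# If the network could move the same number of blocks per second but each block
# were 128 KB, throughput would double, in the ideal lossless case.
ideal_mb_s = requests_per_sec * 128 / 1024
print("theoretical speed with 128 KB blocks:", ideal_mb_s, "MB/s")   # 128.0
```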
Hearing this, many people will get excited and rush to enable jumbo frames to speed things up, but don't forget this is only the ideal case. In reality even a good network loses packets, and the larger the packets you transmit, the bigger the impact of losing one; the symptom is an unstable transfer speed. So jumbo frames are an excellent technology in an ideal environment and an outright disaster in a less-than-ideal one. On top of that, jumbo frames are not an industry standard and each vendor's implementation differs, so mixing hardware from different manufacturers can cause speed fluctuations simply because the maximum frame lengths do not match.
In practice, which Internet cafe can guarantee that every NIC, cable and switch is up to it? Realistically, none. So for Internet cafes, jumbo frames should stay disabled.
Here is a small tip from experience: with an Intel 82574L NIC on the server set to 4 KB jumbo frames and Realtek NICs on the clients set to 2 KB jumbo frames, some unmanaged switches show a small improvement in speed tests. However, the improvement is generally not enough to change the actual user experience. If you are interested, play with it to deepen your understanding!
Large send offload, interrupt throttling rate, interrupt moderation ......
The NIC parameters above do not all do exactly the same thing, but they serve the same purpose, or at least cooperate with one another. Let's start with large send offload; this option exists on Realtek NICs.
As we all know, in a PC the CPU does the computing, but a NIC actually has processing power of its own: the so-called throughput and responsiveness of a NIC are really an expression of the computing power of the NIC chip. In the early days, however, NIC chips were quite weak, so some NICs even had an option (the exact name escapes me) letting you choose whether the card should be tuned to minimize CPU usage or to maximize I/O performance. Large send offload serves the same kind of goal, so whose burden is being reduced? The CPU's. When the network adapter handles a lot of data, some CPU resources get consumed on the associated processing, and as the transmission speed rises, so does the CPU pressure. With this feature enabled, the card effectively trades some peak transmission performance for lower CPU load; turn it off, and the NIC delivers its maximum performance.
CPUs today are hardly short of computing power, so there is no need to throw away high network transmission speed just to shave CPU usage; the idea is completely outdated. The "interrupt throttling rate" and "interrupt moderation" options in the heading have essentially the same kind of impact as large send offload: they were developed to keep CPU usage from climbing when network transmission speeds get very high. On Intel NICs, for example, this parameter is called "Interrupt Throttling Rate", and its description spells out the benefits and drawbacks in detail; see:
[attachment: Intel NIC "Interrupt Throttling Rate" description, 10.47 KB]
The full description is as follows:
Sets the rate at which the controller moderates or delays the generation of interrupts, in order to balance network throughput against CPU utilization. The default setting (Adaptive) dynamically adjusts the interrupt rate based on traffic type and network usage. Selecting a different setting may improve network and system performance in certain configurations.
Without interrupt moderation, CPU utilization increases at higher data rates because the system must handle a much larger number of interrupts. Interrupt moderation lets the network driver accumulate events and issue a single interrupt rather than a series of them. At high data rates, a higher interrupt moderation setting may improve system performance; at low data rates a low setting should be chosen, because delaying interrupts adds latency.
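As a rough illustration of why this option exists at all, here is the kind of interrupt rate a saturated gigabit link can generate, and how batching several frames per interrupt (which is what moderation does) brings it down. The batch sizes are arbitrary examples, not any driver's actual settings.

```python
# Full-size 1518-byte frames plus preamble/SFD (8) and inter-frame gap (12).
bits_per_frame = (1518 + 8 + 12) * 8
frames_per_sec = 1_000_000_000 // bits_per_frame      # ~81,000 frames/s at line rate

for frames_per_interrupt in (1, 8, 64):
    irq_per_sec = frames_per_sec // frames_per_interrupt
    print(f"{frames_per_interrupt:2d} frame(s) per interrupt -> ~{irq_per_sec:,} interrupts/s")
```

Eighty thousand interrupts per second was painful for an old CPU; for a modern server it is barely noticeable, which is exactly the point made next.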
When your server has at worst a Xeon 3430, or better a Xeon 5506, or better still dual CPUs, do you really think it is necessary to sacrifice NIC performance just to spare the CPU? My answer is: no. Of course, if your server still runs on a weak CPU, leave this at the default; otherwise, at peak hours the server's CPU usage may spike, causing momentary lag across the whole site, or even a server failure ......
In my own admittedly unscientific tests, disabling this parameter increased network transmission speed by at least 5 MB/s.
Hardware checksum, adaptive inter-frame spacing, TCP/UDP checksum offload
After rambling on for so long, we finally come to the last group of NIC parameters. From the parameter names you can basically tell what they do: they are checksum functions, whose job is to catch corrupted packets and lost frames when the network environment is poor, so not much more needs to be said about the role itself. Here, though, we need to introduce two concepts, the legendary TCP and UDP. I believe everyone is familiar with the terms, but not everyone knows their characteristics; at least I couldn't express it clearly myself while writing this article, so here is what Baidu says about TCP and UDP:
TCP: Transmission Control Protocol. It provides a connection-oriented, reliable byte-stream service.
Before a client and server exchange data, a TCP connection must be established between the two parties; only then can data be transmitted. TCP provides timeout retransmission, discarding of duplicate data, data verification, flow control and other functions to ensure that data gets from one end to the other.
UDP: User Datagram Protocol. It is a simple, datagram-oriented transport-layer protocol.
UDP does not provide reliability: it simply hands the application's data down to the IP layer, with no guarantee that it reaches the destination. Because UDP does not need to establish a connection between client and server before transmitting a datagram, and has no timeout or retransmission mechanism, its transmission speed is very fast.
Of course, TCP and UDP are far more involved than the simple descriptions above, but after all we are not doing protocol research; understanding the gist is enough. In my own words:
TCP is a very reliable fellow: when it talks to you, it insists you follow along from start to finish, hear everything clearly, and confirm you got it right;
UDP is a very unreliable fellow: I say my piece, you listen if you like; I just want to get it said, and whether you understood it has nothing to do with me.
So TCP is slower but reliable, while UDP doesn't bother making sure the job was finished, and is therefore fast!
So what do TCP, UDP and checksum offload have to do with each other? The TCP protocol has its own verification built in and does not need the network adapter's help; if anything it is UDP that needs checksumming to guard against data corrupted by bad packets in transit, and TCP does not require the NIC's checksum at all. In a diskless system every packet matters once the machine has booted, so most diskless software uses TCP to guarantee reliability. The NIC's checksum functions are therefore unnecessary, and checksumming every packet costs real resources; to put it bluntly, it is a waste of precious resources on a redundant function. So on a diskless system, you can disable every NIC parameter with "checksum" in its name!
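For reference, the "checksum" these options offload is nothing mysterious: it is the 16-bit ones'-complement sum defined in RFC 1071 that IPv4, TCP and UDP all use. A minimal sketch follows (a real TCP/UDP checksum also covers a pseudo-header, which is omitted here; the input bytes are an arbitrary example):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: sum 16-bit words in ones'-complement arithmetic."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF

# Example: checksum over an arbitrary 10-byte header fragment.
print(hex(internet_checksum(bytes.fromhex("4500003c1c4640004006"))))
```

Whether this little sum runs on the CPU or on the NIC chip is exactly what the "checksum offload" switches decide.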
But then again, we shouldn't take things to extremes, just as in conversation we shouldn't overstate our case. Although disabling this verification looks safe in most situations, it is only safe once you understand the feature. For example, with software that uses UDP on a network in poor condition, not having the checksum function enabled could well cause problems!
Well, after all this rambling, what I really want to say is this: every function of every product was created in a specific environment to meet a specific need, so in theory every function is useful in itself; whether to use it depends on the environment. For example, both knives and guns are used in war, but for an assassination a gun gives you away too easily, while with a knife nobody might ever find out. Well, that example is a bit bloody; think of it as cutting a watermelon instead, where a knife is definitely better than a gun ......
That's all for today. Most of the content above comes from descriptions found online, and the full text was put together by this blogger. There are, of course, also some of the blogger's rather crude metaphors, which may be inappropriate and could cause misunderstanding. If anything is unclear or mistaken, I hope you will criticize and correct it. Thank you!
Below is a set of recommended NIC parameter settings from the network-maintenance knowledge base. These settings apply to both servers and clients:
http://bbs.icafe8.com/forum.php?mod=viewthread&tid=280990&page=1#pid1523072
| NIC brand | Parameter name (driver label / keyword) | Recommended setting |
| --- | --- | --- |
| Realtek | EEE | Disable |
| | Jumbo frame / jumboframe | Disable |
| | Flow control / flowcontrol | Disable |
| | Large send offload / offloadlargesend | Disable |
| | Green Ethernet (power saving) / greenethernet | Disable |
| | Hardware checksum offload / offloadchksum | Disable |
| Intel | TCP/IP offload options: receive IP checksum / checksumrxip | Disable |
| | TCP/IP offload options: receive TCP checksum / checksumrxtcp | Disable |
| | TCP/IP offload options: transmit IP checksum / checksumtxip | Disable |
| | TCP/IP offload options: transmit TCP checksum / checksumtxtcp | Disable |
| | TCP/IP offload options: TCP segmentation / tcpsegmentation | Disable |
| | Flow control / flowcontrol | Disable |
| | Adaptive inter-frame spacing / adaptiveifs | Disable |
| | Interrupt throttling rate / interruptthrottlerate | Disable |
| | Jumbo packet / jumbopacket | Disable |
| Marvell | Jumbo packet / jumbopacket | Disable |
| | TCP/UDP checksum offload (IPv4) / tcpudpchecksumoffloadipv4 | Disable |
| | Large send offload (IPv4) / lsov1ipv4 | Disable |
| | Interrupt moderation / interruptmoderation | Disable |
| | Flow control / flowcontrol | Disable |
| | Energy Star / wakeupspeed | Disable |
| Atheros | Flow control / flow control | Disable |
| | Interrupt moderation / interrupt moderation | Disable |
| | Maximum interrupts per second / max irq per second | 30000 |
| | Receive buffers / number of receive buffers | 512 |
| | Task offload / task offload | Disable |
| Broadcom | Hardware checksum offload / chksumoffload | Disable |
| | Flow control / flowcontrol | Disable |
| | Large send offload / large send offload | Disable |
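For anyone who wants to apply the table above to many clients at once instead of clicking through Device Manager, here is a minimal, hedged sketch for Windows. It assumes the driver exposes Microsoft's standardized INF keywords (*FlowControl, *EEE, *JumboPacket and so on) under the network-adapter class key; not every driver does, the exact keyword names and accepted values vary by vendor, and the script must run as Administrator with the adapter restarted (or the machine rebooted) afterwards. Treat it as a starting point and verify the resulting settings in Device Manager.

```python
import winreg

# Network adapter class key; each numbered subkey (0000, 0001, ...) is one adapter.
NIC_CLASS = r"SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}"

# Assumed keyword-to-value mapping: "0" usually means Disabled for these keywords,
# and 1514 is the standard frame size, i.e. jumbo frames off. Verify per driver.
SETTINGS = {
    "*FlowControl": "0",
    "*EEE": "0",
    "*JumboPacket": "1514",
    "*InterruptModeration": "0",
    "*LsoV2IPv4": "0",
    "*TCPChecksumOffloadIPv4": "0",
    "*UDPChecksumOffloadIPv4": "0",
}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NIC_CLASS) as cls:
    index = 0
    while True:
        try:
            sub = winreg.EnumKey(cls, index)
        except OSError:
            break                                   # no more adapter subkeys
        index += 1
        try:
            with winreg.OpenKey(cls, sub, 0,
                                winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
                desc = winreg.QueryValueEx(key, "DriverDesc")[0]
                for keyword, value in SETTINGS.items():
                    try:
                        winreg.QueryValueEx(key, keyword)   # only touch keywords the driver exposes
                    except FileNotFoundError:
                        continue
                    winreg.SetValueEx(key, keyword, 0, winreg.REG_SZ, value)
                    print(f"{desc}: {keyword} -> {value}")
        except OSError:
            continue                                # e.g. the protected "Properties" subkey
```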