As the number of computers, network applications, and networks keeps growing, it becomes difficult for an enterprise's existing servers to meet new network requirements, so upgrading is inevitable. Moreover, as server prices keep falling, the barrier to upgrading falls with them.
When upgrading a server, you should give full consideration to the server's role, because different network services place different demands on server configuration. For example, Web servers and proxy servers need a large amount of memory but place no great demands on hard disk capacity or CPU processing power; FTP servers and file servers need large hard disks and plenty of memory but little CPU power; and database servers demand large memory, strong CPU processing power, and large hard disk capacity all at once.
When strong CPU processing power is required, consider a dual-CPU architecture; when large hard disk capacity is required, consider configuring a RAID array; when large memory is required, provide 1 to 2 GB of RAM.
If the server supports a multi-CPU architecture and not all of its CPU sockets are populated, you can improve its processing performance simply by adding CPUs.
Symmetric Multi-Processing (SMP) refers to a computer with a group of processors (multiple CPUs) that share the memory subsystem and bus structure. Although several CPUs run at the same time, from a management point of view the machine behaves like a single computer. As network applications grow more demanding, a single processor often cannot keep up; filling the server's CPU sockets to build a symmetric multi-processing system resolves this conflict. The symmetric multi-processing systems most commonly seen on servers use 2, 4, 6, or 8 processors.
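On a running system you can check how many logical processors the operating system exposes to the scheduler, which on an SMP server reflects the installed CPUs. A minimal Python check:

```python
import os

# Number of logical processors the scheduler can use; on an SMP server
# this counts every installed CPU (and, on modern parts, every core).
cpu_count = os.cpu_count()
print(f"Logical processors available: {cpu_count}")
```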
Servers of this generation typically use the new Xeon processor code-named Foster, based on the Pentium 4 core. The low-end part is called Foster DP, where DP stands for Dual Processor, meaning it supports up to two processors. If more processors are needed for parallel processing, the more expensive Foster MP must be used; MP stands for Multi Processor. The MP series also carries up to 4 MB of full-speed level-3 cache. The different Foster versions can be distinguished by the size of their level-2 and level-3 caches: Foster DP comes with a 256 KB level-2 cache, while Foster MP adds a 512 KB or 1 MB level-3 cache on top of its level-2 cache. Such a large cache gives better performance in programs that reuse the same data many times, such as databases.
A server motherboard that supports two CPUs
Therefore, when selecting additional CPUs for a server, pay attention to the server's architecture and the original CPU type. It is best to use CPUs of the same model and clock speed, and ideally from the same batch, within one server to ensure compatibility between them. If several servers are being upgraded at once, let some servers keep the old CPUs while others take the new ones, so that no single server mixes CPU types and system stability is preserved.
How much memory does a server need? That depends on its purpose. At present even the lowest entry-level servers ship with a generous amount of memory as standard, department-level servers come with more, and enterprise-level servers more still. In practice we recommend configuring one level higher: 512 MB for a workgroup-level server, 1 GB for a department-level server, and 2 GB or more for an enterprise-level server.
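The recommendations above can be summarized as a small lookup table. This is just a sketch; the tier names and figures come straight from the text:

```python
# Recommended memory per server tier, in MB, as given in the text.
RECOMMENDED_MEMORY_MB = {
    "workgroup": 512,
    "department": 1024,
    "enterprise": 2048,  # "2 GB or more": treat this as a minimum
}

def meets_recommendation(tier, installed_mb):
    """True if the installed memory meets the recommendation for the tier."""
    return installed_mb >= RECOMMENDED_MEMORY_MB[tier]

print(meets_recommendation("department", 512))   # → False
print(meets_recommendation("enterprise", 4096))  # → True
```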
The memory used in servers is generally Registered ECC memory. "ECC" stands for "Error Checking and Correcting": memory with this function checks data as it is read and, when an error is detected, automatically corrects it where possible.
ECC Server Memory
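The "check and correct" idea can be illustrated with a toy Hamming(7,4) code, which stores 4 data bits alongside 3 parity bits and can locate and flip any single corrupted bit. Real ECC DIMMs use a wider SECDED code over 64-bit words, but the principle is the same; this sketch is purely illustrative, not the actual DIMM circuitry:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into 7 bits, parity at positions 1,2,4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers bit positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers bit positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers bit positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Detect and fix a single flipped bit; return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the flipped bit in place
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[2] ^= 1                          # simulate a single-bit memory error
assert hamming74_correct(code) == word
```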
When upgrading server memory, also pay attention to compatibility with the motherboard and the original memory. Within one server, try to use modules of the same speed and capacity, ideally from the same batch.
Large numbers of users often access a server simultaneously, which demands strong I/O (input/output) performance from the server. SCSI technology, RAID technology, high-speed intelligent NICs, and large memory expansion capacity are all effective ways to improve the I/O capability of an IA-architecture server.
Because disk access speed cannot keep up with CPU processing speed, the disk becomes the bottleneck for server I/O. To resolve the growing gap between fast CPUs and slow disks, Professor Patterson of the University of California, Berkeley proposed the concept of RAID (Redundant Array of Independent Disks) in 1987. The idea is to combine multiple small, inexpensive disks into a disk array in a certain way, and, through hardware support and a series of scheduling algorithms, make the whole array behave like one large disk of high capacity, high reliability, and high speed.
SCSI RAID
RAID has several advantages. First, storage capacity increases: multiple hard disks are organized so that they can be accessed as if they were a single disk. Second, multiple drives work in parallel, raising the data transfer rate; read throughput can be multiplied to satisfy concurrent access requests. Third, reliability improves thanks to redundancy and parity: in a RAID 1 or RAID 5 array, when one disk fails, the data on it can be rebuilt from the remaining disks without interrupting the system, and the failed disk can be replaced while the array is live (hot swap). The array controller then automatically writes the rebuilt data to the new disk, or writes it to a hot-spare disk and takes the new disk as the new hot spare. In addition, disk arrays are usually equipped with redundant components such as power supplies and fans to ensure cooling and overall reliability. The RAID levels in common use are RAID 0, RAID 1, RAID 3, and RAID 5.
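The trade-off between capacity and redundancy at each of these levels can be made concrete with a small calculation. This is a sketch assuming identical disks; the 73 GB figure is just an example size:

```python
def raid_capacity(level, disk_count, disk_size_gb):
    """Usable capacity (GB) of a RAID array built from identical disks.

    RAID 0 stripes with no redundancy; RAID 1 mirrors, so usable space
    is one disk regardless of mirror width; RAID 3 and RAID 5 each give
    up one disk's worth of space to parity.
    """
    if level == 0:
        return disk_count * disk_size_gb
    if level == 1:
        return disk_size_gb
    if level in (3, 5):
        if disk_count < 3:
            raise ValueError("RAID 3/5 needs at least 3 disks")
        return (disk_count - 1) * disk_size_gb
    raise ValueError(f"unsupported RAID level: {level}")

# Example: four 73 GB SCSI disks.
print(raid_capacity(0, 4, 73))  # → 292  (fast, but no fault tolerance)
print(raid_capacity(1, 2, 73))  # → 73   (full mirror)
print(raid_capacity(5, 4, 73))  # → 219  (survives one disk failure)
```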
SCSI hard disk
In general, IDE RAID with IDE hard disks is cheap but offers lower performance, so it is used in low-cost workgroup-level servers; SCSI RAID with SCSI hard disks performs well but is expensive, so it is used in department-level and enterprise-level servers.
Improving the performance of each individual server is of course a good option. But organizing a number of servers that have no further upgrade headroom into a whole, using load balancing and server clustering to meet growing network demands, is also an excellent choice. Using several lower-performance servers instead of one high-performance server is, in a sense, a matter of "not putting all the eggs in one basket."
The working principle of cluster technology is as follows: one node server in the cluster acts as the cluster manager. It receives each request from users, determines which node in the group currently has the least load, and forwards the request there. Every node in the cluster opens a buffer in its local memory, similar in role to the bridge board in a NUMA system; when a node needs data held in another node's memory, that data is first copied over the network into the local buffer.
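The dispatch step described above, picking the least-loaded node for each new request, can be sketched in a few lines. The node names and the load metric here are illustrative, not from any particular cluster product:

```python
def pick_node(loads):
    """Return the name of the node with the smallest current load."""
    return min(loads, key=loads.get)

# Hypothetical load figures reported by three cluster nodes.
loads = {"node-a": 12, "node-b": 3, "node-c": 7}
target = pick_node(loads)
loads[target] += 1          # the chosen node takes on the new request
print(target)               # → node-b
```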
The reasons are simple. First, because network services are distributed across different servers, even if one system goes down, the other network services are unaffected; if only one server is used, a system failure is fatal to the enterprise's services. Second, when many service requests arrive at the same time, several computers each handling their own share clearly outperform a single computer executing all the tasks. Third, the total cost of several modest servers is often lower than that of one powerful server. So, when no single service places extreme demands on processing power, distributing network services across multiple servers is both safer and more economical.
Beyond assigning different network services to different servers, load balancing and clustering should be configured for key services. On the one hand, this spreads out overly concentrated network requests, reducing the pressure on each server and giving clients fast, reliable responses. On the other hand, it provides server redundancy, ensuring that network services continue even when one or more servers fail.