Linux Server Performance Analysis and Optimization

Source: Internet
Author: User
Tags: nginx, server

Ext.: http://jiekeyang.blog.51cto.com/11144634/1774473

I. System Performance Analysis

1. System performance refers to how effectively, stably, and responsively the operating system completes its tasks. Task completion depends not only on the system itself but also on the network topology, routing devices, routing policies, access devices, and physical links. When a Linux server has problems, troubleshooting should cover the application, the operating system, the server hardware, and the network environment.

2. Performance optimization approach: the factors with the greatest impact on system performance are the application and the operating system, because problems in these two areas are well hidden and hard to detect, whereas problems elsewhere can usually be located immediately. System hardware: (1) if the hardware has a physical fault, replace it; (2) if hardware performance does not meet requirements, upgrade it. Network: for insufficient bandwidth or an unstable network, optimize and upgrade the network. Application: modify or optimize the software directly. Operating system configuration: adjust system parameters and system configuration.

3. Resource balance: Linux is an open-source operating system, a practical platform for open-source software, and it supports a wide range of open-source tools. The purpose of performance optimization is to keep resource usage reasonable and balanced; a system runs well precisely when its resources are in a balanced state. Overuse of any one resource disrupts this balance, causing slow response or excessive load. For example, exhausting the CPU leaves many processes waiting and makes applications respond slowly; a surge in processes increases memory consumption; when physical memory is exhausted the system falls back on virtual memory; and heavy use of virtual memory increases disk I/O and, in turn, CPU overhead.

4. How the system administrator analyzes and manages performance: the administrator must understand the operating system's current state of operation, such as system load, memory status, process status, and CPU load. The administrator must also know the hardware: disk I/O, CPU model, memory size, NIC bandwidth, and other relevant parameters. Beyond that, the administrator needs to understand how applications use system resources and how efficiently they run, including whether a program has bugs or memory leaks, and should have basic remedies for such problems. By monitoring system resources, the administrator can determine whether an application is misbehaving; if the problem can be solved locally, solve it, otherwise report it to the developers so the application can be fixed or upgraded.
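The routine checks described above (system load, memory status) can be sketched by reading directly from the procfs interface. This is a minimal illustration, not a full monitoring tool; it assumes a Linux host with the standard /proc layout.

```python
# Minimal sketch: read the load average and available memory from /proc
# (Linux-only; assumes the standard procfs field layout).

def load_average():
    """Return the 1-, 5-, and 15-minute load averages."""
    with open("/proc/loadavg") as f:
        one, five, fifteen = f.read().split()[:3]
    return float(one), float(five), float(fifteen)

def mem_available_kb():
    """Return MemAvailable from /proc/meminfo in kB, or None if absent."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])   # second field is the value in kB
    return None
```

In practice the same numbers come from tools such as uptime, free, and vmstat; reading /proc directly just shows where they originate.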

II. Factors That Affect Linux Server Performance

1. Hardware Resources

(1). CPU

Most CPU cores execute one thread at a time, while a hyper-threaded processor can run multiple hardware threads per core simultaneously, so enabling hyper-threading can raise throughput. Linux must run an SMP kernel to use multiple logical CPUs, but the performance gain per additional CPU diminishes as more are installed. Note also that the Linux kernel presents a multi-core processor as multiple logical CPUs: two quad-core CPUs, for example, appear as 8 CPUs under Linux. For performance analysis, however, the two are not equivalent; relatively speaking, 8 physical single-core CPUs deliver higher performance than 8 cores packed into two sockets.
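The point about logical CPUs can be seen directly from the standard library: the kernel exposes every hardware thread as a logical CPU, so a host with two quad-core packages reports 8 here. A small sketch:

```python
import os

# os.cpu_count() reports logical CPUs: every core and hyper-thread the
# kernel exposes counts as one, regardless of how they map to sockets.
logical_cpus = os.cpu_count()
print(f"kernel sees {logical_cpus} logical CPUs")
```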

Typical CPU-bound applications: mail servers, dynamic web servers.

(2). Memory

If memory is too small, processes block and applications slow down or even stop responding; if it is far larger than needed, resources are wasted. Linux uses both physical memory and virtual memory: virtual memory can ease a shortage of physical memory, but overusing it degrades application performance. To keep applications running well, a server therefore needs sufficient physical memory, though over-provisioning wastes resources. For example, on a 32-bit Linux system the portion of memory beyond 4 GB cannot be addressed; to use more memory, a 64-bit operating system must be installed.

Furthermore, on a 32-bit Linux system a single application process can address only about 2–3 GB of memory (depending on the user/kernel address-space split), so even with more RAM installed, one application cannot use it all.

Typical memory-bound applications: print servers, database servers, static web servers.

(3). Disk I/O performance

In applications with frequent reads and writes, inadequate disk I/O performance can stall the application, so RAID arrays are commonly used to improve disk I/O.

RAID (Redundant Array of Independent Disks), often simply called a disk array, combines multiple independent physical disks in different ways into a single disk group (a logical disk), providing higher I/O performance and data redundancy than a single drive.

A RAID disk group behaves like one large disk: users can partition it, create file systems on it, and so on, just as with a single physical drive. The differences are that the disk group's I/O performance is much higher than a single disk's, and data safety is greatly improved.

Common RAID levels: RAID 0, RAID 1, RAID 2, RAID 3, RAID 4, RAID 5, RAID 6, RAID 7, RAID 0+1, RAID 10, and so on.
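The trade-off between capacity and redundancy at the common levels can be made concrete with a small capacity calculator. This is a simplified sketch that assumes equal-size disks and ignores controller and file-system overhead:

```python
def usable_capacity(level, disks, disk_size_gb):
    """Usable capacity in GB for common RAID levels (simplified:
    equal-size disks, no controller or file-system overhead)."""
    if level == 0:
        return disks * disk_size_gb          # striping: no redundancy
    if level == 1:
        return disk_size_gb                  # mirroring: all disks hold one copy
    if level == 5:
        return (disks - 1) * disk_size_gb    # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_size_gb    # two disks' worth of parity
    if level == 10:
        return disks // 2 * disk_size_gb     # striped mirrors: half the raw space
    raise ValueError(f"level {level} not covered in this sketch")
```

For example, four 1 TB disks yield 4 TB at RAID 0 but only 3 TB at RAID 5 and 2 TB at RAID 10, which is the price paid for redundancy.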

(4). Network bandwidth

Network bandwidth is also an important performance factor: a slow or unstable network blocks access to network applications. The remedies are to provision more bandwidth or to move to a fiber-optic network.

2. Operating system resources

(1). System Installation Optimization

When installing a Linux system, optimizations can already be made at installation time, for example in disk partitioning and swap allocation.

Disk level: allocate disks according to the application's requirements:

A. Frequent reads and writes with no strict data-safety requirement: RAID 0;

B. High data-safety requirements but light read/write load: RAID 1;

C. Read-heavy workloads with infrequent writes that still need data safety: RAID 5;

D. Heavy reads and writes together with high data-safety requirements: RAID 0+1.

By building the disks into different RAID levels for different requirements, the system is optimized at the disk level.

Memory level: when physical memory is small (less than 4 GB), the swap partition is generally set to twice the memory size; with 4–16 GB of memory, set swap equal to or slightly smaller than memory; above 16 GB, swap could in principle be 0, but it is still recommended to keep a modestly sized swap partition as a buffer.
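The swap-sizing rule above can be expressed as a small helper. The 4 GB figure for large-memory hosts is an assumption for illustration: the article only says "a certain size" as a buffer.

```python
def recommended_swap_gb(ram_gb):
    """Swap size suggested by the rule of thumb in the text.
    The 4 GB buffer for large-RAM hosts is an assumed value; the
    article only calls for 'a certain size'."""
    if ram_gb < 4:
        return ram_gb * 2      # small memory: twice the RAM
    if ram_gb <= 16:
        return ram_gb          # equal to (or slightly below) RAM
    return 4                   # large RAM: a small buffer partition
```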

(2). Kernel parameter optimization

Kernel parameters should be tuned in combination with the specific application: optimize them according to the application's particular requirements.
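Kernel parameters live under /proc/sys and can be inspected before tuning. A minimal sketch, assuming a Linux host; the parameter names shown (such as vm.swappiness) are ordinary examples, not recommendations from the article:

```python
from pathlib import Path

def sysctl_path(name):
    """Map a dotted sysctl name to its /proc/sys path,
    e.g. 'vm.swappiness' -> /proc/sys/vm/swappiness."""
    return Path("/proc/sys", *name.split("."))

def read_sysctl(name):
    """Read the current value of a kernel parameter as a string."""
    return sysctl_path(name).read_text().strip()
```

The same values are read and written with the sysctl command, or persisted in /etc/sysctl.conf; reading /proc/sys directly just shows the underlying mechanism.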

(3). File system optimization

The file systems available under Linux include ext2, ext3, ext4, XFS, and ReiserFS; choose among them according to the application's needs.

Historically, Linux's standard file system lineage runs from ext to ext2, with the VFS layer providing the common interface above them. ext2 became the standard Linux file system; ext3 is built on top of ext2, keeping its super-block and inode design while adding journaling.

XFS is a high-performance journaling file system that provides low-latency, high-bandwidth access to file system data through parallel handling of disk requests, efficient data placement, and cache-consistency maintenance. As a result, XFS scales well and offers excellent journaling capability, strong extensibility, and fast write performance.

ReiserFS is a high-performance journaling file system that manages its data, including file contents, file names, and the journal, through a balanced-tree structure. Its strengths are good access performance and strong safety, along with efficient and economical use of disk space, an advanced journal-management mechanism, a distinctive search method, and support for very large volumes.

(4). Optimization of the application program

Application optimization mainly means testing the application's usability and efficiency, and debugging the application to remove bugs.

III. System Performance Analysis and Optimization Criteria

The main factors that affect system performance are CPU, memory, and disk I/O. The following criteria can be used to judge system performance:

Item                      Good             Bad                     Very bad
CPU (user% + sys%)        < 70%            = 85%                   >= 90%
Memory (swap activity)    si = 0, so = 0   ~10 page/s per CPU      more swap in & swap out
Disk (iowait%)            < 20%            = 35%                   >= 50%

Parameter explanation:

user%: the percentage of CPU time spent in user mode.

sys%: the percentage of CPU time spent in system (kernel) mode.

iowait%: the percentage of CPU time spent waiting for I/O to complete.

swap in (si): pages swapped in, i.e. moved from swap space on disk into memory.

swap out (so): pages swapped out, i.e. moved from memory to swap space on disk.
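The CPU and disk thresholds from the table above can be turned into a small classifier. This is a sketch of the article's rule of thumb only; the metric names "cpu" and "iowait" are labels chosen here, not standard identifiers:

```python
def rate(metric, value):
    """Classify a percentage reading against the article's thresholds.
    'cpu' is user% + sys%; 'iowait' is the I/O-wait percentage."""
    good_below, very_bad_from = {"cpu": (70, 90), "iowait": (20, 50)}[metric]
    if value < good_below:
        return "good"
    if value >= very_bad_from:
        return "very bad"
    return "bad"
```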

IV. Resource Usage in Common Applications

1. Web services based on static pages

The main characteristics are many small files and frequent read operations; the web server is generally Apache or Nginx.

Application scenario: Apache and Nginx serve static pages very quickly and efficiently. When web traffic is low, they can serve clients directly without further optimization, but under high concurrency a single web server cannot sustain the volume of client access, and a load-balancing cluster built from multiple web servers is needed to keep the service available.

A cache server can also be placed in front of the web servers to speed up processing: static resource files are cached in the operating system's memory and read directly from there. When a client request arrives, the cache server first looks for the resource in its cache; if the cached resource exists, the request is served immediately; if not, the request is forwarded to a back-end web server, which locates the resource and ultimately returns the result to the client. This greatly improves the web tier's concurrent access performance.

However, this architecture requires servers with a great deal of memory. With ample memory it relieves disk-read pressure; with too little, the system resorts to virtual memory, frequent use of which increases disk I/O and, in turn, CPU consumption, again hurting web server performance. Network bandwidth is another constraint on high-concurrency access: when traffic is heavy and bandwidth is insufficient, the network congests and system performance degrades. Bandwidth bottlenecks can be addressed by provisioning more bandwidth or switching to fiber. For these reasons, the simple single-server structure is suitable only for low-concurrency sites. Commonly used cache servers are Varnish and Squid.
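The cache-server lookup described above is the classic cache-aside pattern. A toy sketch, with an in-process dict standing in for Varnish or Squid and a callable standing in for the back-end web server:

```python
cache = {}

def fetch(path, backend):
    """Cache-aside lookup: serve from the cache if present, otherwise
    ask the back-end server and store the result for next time."""
    if path in cache:
        return cache[path]           # cache hit: no backend work
    body = backend(path)             # cache miss: forward to the backend
    cache[path] = body
    return body
```

A repeated request for the same path hits the cache and never reaches the backend again, which is exactly how the front cache shields the web servers from read load.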

2. Dynamic page-based Web services

The main characteristic is frequent write operations; these are generally sites written in dynamic languages such as Java, PHP, CGI, or Perl.

Scenario: frequent write operations consume CPU heavily, mainly because executing a dynamic program involves compilation or interpretation, database reads, and so on, all of which cost considerable CPU time. A server handling dynamic web applications therefore generally needs one or more fast CPUs. Under high concurrency, a dynamic web application spawns many processes; a large number of processes drives up system load and consumes a great deal of memory. When memory runs low, the system turns to virtual memory, and heavy virtual-memory use causes frequent disk writes, further aggravating the CPU load. A web server handling dynamic content therefore needs not only multiple high-performance CPUs but also ample memory, and a memcached cache layer can be added between the web server and the database to improve data-processing efficiency.

3. Database application

Main characteristics: heavy memory and disk I/O consumption; CPU consumption is comparatively modest.

Scenario: the back-end database constantly performs frequent reads and writes, which are expensive in system memory and disk I/O, so optimization at both the memory level and the disk level is needed to keep data processing efficient and safe.

For disk I/O optimization a RAID array can be used, and the web server should be separated from the database server as far as possible; when client demand on the database is heavy, database load balancing can be considered to improve access performance.

For large tables in the database, splitting them into smaller tables and adding indexes improves query efficiency. Overly complex query statements easily create CPU bottlenecks, slow down data updates, and cause long disk-write queues, producing a write bottleneck; data-access code should therefore be kept as concise as possible.

The database can also adopt read/write splitting: according to the read and write pressure, set up two identical database servers and direct reads and writes to them separately, synchronizing data periodically. This means the data is not synchronized in real time, but a cache server can be added in front of the database: clients that need real-time data fetch it from the cache, without interfering with the write server's periodic synchronization to the read server. Adding the cache server also greatly reduces pressure on both servers and improves overall data performance.
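The read/write split described above can be sketched as a tiny routing layer. This is a toy illustration only: the server names are hypothetical, and routing on the SQL verb like this ignores real-world cases such as transactions and replication lag:

```python
class RoutingPool:
    """Toy read/write splitter: writes go to the primary server,
    reads rotate round-robin over the replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas
        self.i = 0

    def route(self, sql):
        if sql.lstrip().lower().startswith("select"):
            server = self.replicas[self.i]
            self.i = (self.i + 1) % len(self.replicas)  # round-robin reads
            return server
        return self.primary                             # everything else writes
```

In production this role is played by a proxy or driver feature (for example a MySQL proxy layer), not hand-rolled code, but the routing decision is the same.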

4. Software Download Application optimization

Main characteristics: mostly downloads of static resources; bandwidth consumption is severe, and storage performance requirements are high.

Application scenario: when clients download files, insufficient bandwidth slows downloads or even makes them fail, and many simultaneous downloads further burden both the bandwidth and the database. The download load can be spread across multiple servers at multiple locations. On the HTTP side, a server that supports high concurrency, such as Nginx with its asynchronous, non-blocking I/O model, handles download traffic well.

5. Streaming Media Services Application optimization

Main characteristics: used chiefly for video conferencing, video on demand, distance education, online broadcasting, and similar scenarios. The bottlenecks are network bandwidth and storage-system bandwidth (mainly read operations against the data store).

Application scenario: remote video on demand places high demands on playback smoothness, and therefore on network bandwidth and data reads. On the hardware side, bandwidth can be addressed as described earlier, mainly by upgrading bandwidth or moving to fiber. On the software side, optimization can target storage policy, transport policy, scheduling policy, and proxy caching.

For storage, multi-point distributed storage improves both data safety and read throughput; in addition, optimizing the video encoding format saves storage space and improves storage performance.

For transport, intelligent flow control can adjust the data stream according to load, keeping playback as smooth as possible for the viewer.

For scheduling, dynamic data and static data can be combined onto the same proxy server.

For proxy caching, policies such as segmented caching, dynamic caching, and static caching speed up reads of video data and relieve read pressure on the data store.

Within the streaming-media architecture, the drawbacks of memory churn and thread proliferation can be mitigated with memory-pool and thread-pool techniques.
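The thread-pool idea mentioned above, creating a fixed set of worker threads once and reusing them instead of spawning a thread per request, can be sketched with the standard library. The squaring task here is only a stand-in for real per-request work:

```python
from concurrent.futures import ThreadPoolExecutor

# A fixed pool of 4 worker threads is created once and reused for all
# tasks, avoiding the cost of creating a new thread per request.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda n: n * n, range(8)))
```

Memory pools work on the same principle: pre-allocate a block of buffers once and hand them out, instead of allocating and freeing per request.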
