A comprehensive explanation of UNIX Network Performance

UNIX has become increasingly popular over time, but how well do you know UNIX systems? This article describes UNIX network performance analysis in detail and should help anyone learning the subject. Understanding your UNIX® network layout helps you understand your network and how it operates.

But what happens when network performance suddenly drops and the speed at which files are transferred or services accept connections falls away? How do you diagnose network problems and locate faults in the network?

Introduction

Network performance has a major impact on the performance and reliability of the rest of the environment. If applications and services have to wait for data to arrive over the network, or if clients cannot connect to them or receive information, those problems need to be resolved.

Performance issues also affect the reliability of applications and the environment as a whole, and some performance problems are caused by network faults. To understand and diagnose network problems, you must first understand the nature of the problem, which is usually related to either latency or bandwidth.

In general, network performance problems are tied to the underlying hardware; you cannot exceed the physical limits of the network environment. Performance problems also tend to show up in a particular protocol or service, such as NFS or Web access. In either case, you can diagnose and identify the problem within the operating system and determine the correct remedial action.

This article discusses the steps involved in identifying performance issues:

Determine baseline performance level
Locate the problem
Obtain statistics
Identify bottlenecks

Understanding network metrics

To understand and diagnose performance problems, you must first determine the baseline performance level. Let's start with two important concepts used in determining baseline performance: network latency and network bandwidth.

Network latency

Network latency is the time between a packet being sent and it actually being received at its destination. As a network performance indicator, rising latency shows that the network is busy: more packets are waiting to be transmitted than the network can carry, so senders must wait before transmitting, or must retransmit.

Network latency also increases with the complexity of the network and the number of hosts or gateways that packets pass through. The cable length between points affects latency too; over long distances, traditional copper lines are always slower than a fiber connection.

Network latency is different from application latency. Network latency relates only to the packets travelling over the network, whereas application latency is the time between an application receiving a request and sending its response.
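
As a rough illustration of the difference, you can measure the two separately against the same host. The commands below are only a sketch: they assume a host named example running a web service, and they use curl, which is not discussed elsewhere in this article.

  # Network latency: round-trip time of ICMP echo packets to the host
  # (-c is the Linux form; the ping options are discussed later)
  $ ping -c 5 example

  # Application latency: time for a service on the host to answer a request
  $ time curl -s -o /dev/null http://example/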

Network bandwidth

Bandwidth is a measure of the number of packets that can be transmitted over the network in a given period of time, and so of how much data can be moved. It limits the rate at which data can be sent to a single host to the maximum supported by the network connection, and when there are multiple concurrent connections, that capacity is shared among them and determines the total transfer rate.

In theory, network bandwidth should not change unless the network interface or hardware changes. The main factor affecting the bandwidth available to any one host is the number of hosts using the network at a given time.

For example, a gigabit Ethernet interface can talk to one other host at 1 Gbit/s, to 10 hosts simultaneously at roughly 100 Mbit/s each, or to 100 hosts at roughly 10 Mbit/s each. Of course, sustained bandwidth of that kind is rarely needed: over any period there will be many small requests from a large number of hosts, so the bandwidth available at the server can appear much larger than the combined demand from the clients.
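
As a quick back-of-the-envelope check of those figures, you can work out the per-host share of the link with bc; this is just illustrative arithmetic, dividing 1 Gbit/s (1000 Mbit/s) evenly among 100 hosts:

  $ echo "scale=2; 1000 / 100" | bc
  10.00

That is roughly 10 Mbit/s, or about 1.25 MB/s, per host.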

Obtain statistics

Before you can determine whether there is a problem in the network, you must establish the baseline performance against which to compare. To do that, check the latency and throughput figures relevant to your network and application environment by examining the various network parameters, and then monitor and compare how performance changes over time.

Baseline network tests should be performed under controlled conditions. Ideally, tests should be run in two scenarios: isolated (with no other network traffic) and with typical network traffic. This provides two baselines:

For isolated monitoring, check the performance between the server and one or more clients with no other traffic on the network. This means either disabling other services or putting the server and clients into an isolated network environment (completely separate from, but otherwise identical to, the standard network environment).
For standard monitoring, connect the client and server to the standard network with its normal background traffic, but disable all application-related traffic (such as email, file services, and Web services) other than the service being tested. You can use many standard tools and tests to determine the baseline values during the actual testing; a minimal capture script is sketched below.
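
A minimal baseline-capture script might look like the following sketch. The host name, NFS path, and log file are placeholders, and it simply combines the latency and transfer tests described later in this article:

  #!/bin/sh
  # Record a labelled baseline run ("isolated" or "standard") for later comparison.
  LABEL=${1:-standard}
  HOST=example
  LOG=baseline-$LABEL.log

  date >> $LOG

  # Latency baseline: 10 echo packets (use "ping -s $HOST 56 10" on Solaris/AIX)
  ping -c 10 $HOST >> $LOG 2>&1

  # Throughput baseline: time a fixed-size copy from an NFS mount
  ( time cp /nfs/mysql-live/transient/2gbfile /tmp/2gbfile ) >> $LOG 2>&1
  rm -f /tmp/2gbfile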

Measuring latency

Every network administrator is familiar with the ping tool and uses it as a basic check of the availability and latency of a network device. Ping works with most machines, clients and servers alike, as long as they are configured to respond to the ICMP packets that ping sends. In short, ping sends an echo packet to the device and expects the device to send the packet contents back.

In the process, ping monitors the time taken to send the packet and receive the response, which makes it an effective way of measuring the response time of the echo exchange. In its simplest form, you send an echo request to a host and read off the response time (see Listing 1).

Listing 1. Using ping to determine latency

 
 
  $ ping example
  PING example.example.pri (192.168.0.2): 56 data bytes
  64 bytes from 192.168.0.2: icmp_seq=0 ttl=64 time=0.169 ms
  64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.167 ms
  ^C
  --- example.example.pri ping statistics ---
  2 packets transmitted, 2 packets received, 0% packet loss
  round-trip min/avg/max/stddev = 0.167/0.168/0.169/0.001 ms

You must press Control-C to stop the ping process. On Solaris and AIX®, you must use the -s option to send more than one echo packet and obtain the timing information; on Linux®, you use the -c option to set the number of packets for a baseline run. On Solaris/AIX you can also specify the packet size (the default is 56 bytes) and the number of packets to send, so you do not have to terminate ping manually and the timing summary is produced automatically (see Listing 2).

Listing 2. Specifying the packet size and count when using ping on Solaris/AIX

 
 
  $ ping -s example 56 10
  PING example: 56 data bytes
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=0. time=0.143 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=1. time=0.163 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=2. time=0.146 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=3. time=0.134 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=4. time=0.151 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=5. time=0.107 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=6. time=0.142 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=7. time=0.136 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=8. time=0.143 ms
  64 bytes from example.example.pri (192.168.0.2): icmp_seq=9. time=0.103 ms
  ----example PING Statistics----
  10 packets transmitted, 10 packets received, 0% packet loss
  round-trip (ms)  min/avg/max/stddev = 0.103/0.137/0.163/0.019

The example in Listing 2 shows the results on an idle network. If the host or the network itself is under load during the test, the ping times increase significantly. Ping alone is not enough to tell whether there is a problem, but it can sometimes quickly indicate whether something needs further diagnosis.
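
On Linux, the equivalent baseline run uses the -c option to set the number of packets (the payload defaults to 56 bytes and can be changed with -s), for example:

  $ ping -c 10 example

The summary statistics are printed automatically once the requested number of packets has been sent.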

Be aware that ping (ICMP) support may be disabled on some hosts, so confirm that a host responds to ping at all before using ping to judge whether it is available.

Ideally, you should track the ping times between specific hosts continuously over a period of time, so that you can build up an average response time and know where to start looking when problems appear; a simple logging loop is sketched below.
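
One simple way to do this is to log the ping summary line at regular intervals, as in the following sketch; the host name, interval, and log path are placeholders, and the ping options should be adjusted for Solaris/AIX as described above:

  #!/bin/sh
  # Append a timestamped round-trip summary for host "example" every 5 minutes.
  while true
  do
      echo "`date`: `ping -c 5 example | tail -1`" >> /var/log/ping-example.log
      sleep 300
  done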

Use sprayd

The sprayd daemon and its associated spray tool send a large stream of packets to a specified host and report how many of them were answered. Spray is a way of exercising the network rather than a reliable performance measure, because it uses a connectionless transport mechanism: by definition, packets sent over a connectionless transport are not guaranteed to reach their destination, so packets can be lost in transit.

Spray is useful for checking whether there is a lot of traffic on the network: if connectionless (UDP) transmission loses a large proportion of the packets, the network or the host is probably too busy.

Spray is available on Solaris, AIX, and other UNIX platforms. You may need to enable the spray daemon, usually through inetd. Once sprayd is running, run spray and specify the host name (see Listing 3).


Listing 3. Using spray

 
 
  $ spray tiger
  sending 1162 packets of length 86 to tiger ...
  101 packets (8.692%) dropped by tiger
  70 packets/sec, 6078 bytes/sec

As mentioned above, the reported rate should not be used as a reliable performance figure, but the number of dropped packets is meaningful.

Use a simple network transfer test

The best way to determine network bandwidth is to check the actual rate achieved when sending data to and receiving data from the machine. You can use many different tools to test across many applications and protocols, but the simplest method is often the most effective.

For example, to determine the network bandwidth available when transferring files over NFS, you can time a simple file transfer. To do this, use mkfile to create a large file (for example, $ mkfile 2g 2gbfile creates a 2 GB file), then copy it to another machine over the network and measure how long it takes (see Listing 4).

Listing 4. Calculating the time it takes to transfer a file to another machine over the network

 
 
  $ time cp /nfs/mysql-live/transient/2gbfile .
  real    3m45.648s
  user    0m0.010s
  sys     0m9.840s

Run the test several times and average the transfer times to get a better picture of the performance level.

You can automate the copy and the timing with a Perl script such as the one in Listing 5.

Listing 5. A Perl script to automate the copy and timing

 
 
  #!/usr/bin/perl
  # Time repeated copies of a file from a source directory (for example, an
  # NFS mount) into the current directory and report the average elapsed time.
  use Benchmark;
  use File::Copy;

  my $file   = shift or die "Need a file to copy from\n";
  my $srcdir = shift or die "Need a source directory to copy from\n";
  my $count  = shift || 10;    # number of copies to time; default 10

  # Copy $srcdir/$file into the current directory $count times, timing the runs
  my $t = timeit($count, sub { copy(sprintf("%s/%s", $srcdir, $file), $file) });

  # $t->[0] is the total elapsed (wallclock) time in seconds
  printf("Time is %.2fs\n", $t->[0] / $count);
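
Saved as, say, nfscopy.pl (the file name here is arbitrary), the script takes the file name, the source directory, and an optional repeat count, and prints the average elapsed time per copy:

  $ perl nfscopy.pl 2gbfile /nfs/mysql-live/transient 5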

That completes this introduction to UNIX network performance analysis.
