Linux: viewing port usage and how to shut down a port

Source: Internet
Author: User
Tags: ack

Summary

Today, while writing socket code, I needed to check the status of a particular port, so I looked the topic up on the Internet and summarized it here.

Content

As we all know, a port is not independent: it belongs to a process. When a process starts, its port is opened; when the process exits, the port closes with it. The next time the process starts, the port opens again. So rather than thinking of "closing a port" by itself, think of it as disabling a port.

1. View ports

"Command"

netstat -anp

(Note: adding the -n option makes netstat display numeric addresses and port numbers instead of service names, e.g. nfs -> 2049, ftp -> 21. You can open two terminals and run the command with and without -n to match each program to its port number.)

2. View the application for the port

"Command"

lsof -i :xxx

(xxx is the port number in question.) Alternatively, look in the file /etc/services, which lists the service corresponding to each well-known port.
(Note: some ports cannot be found through netstat; a more reliable method is "~$ sudo nmap -sT -O localhost".)

3. Close the port

"iptable"

sudo iptables-a input-p tcp--dport $PORT-j DROP "
sudo iptables-a output-p tcp--dport $PORT-j DROP "
" kill"
kill-9 pid" (PID: Process number)

1) Disable the port via the Iptables tool, such as:
2) or turn off the corresponding application, the port will naturally shut down

4. Kill

Above we closed a process with kill -9. Here is more detail on kill. What kill actually does is send a signal to a process; its common format is

"Kill"

kill -SIG PID

Here SIG can be a signal number or a signal name. For example, kill -9 PID actually sends signal number 9 to the process; the signal name corresponding to 9 is KILL, so kill -9 PID is equivalent to kill -KILL PID. A few commonly used signals are

"Kill"

INT 2: the signal Bash sends to the process when you end a program with Ctrl+C; by default, the process exits when it receives it. You can send it explicitly with kill -INT PID.
QUIT 3: the signal sent when you end a program with Ctrl+\ under Bash; by default it also ends the process.
KILL 9: this signal is the "sure kill", because the process cannot change the action performed on receiving it; it can only exit. (For the previous two signals, although the default action is to exit, the application itself can use the signal mechanism to do something else, such as ignoring them.)
For more information, see man kill; when you have time, it is worth learning about the Linux signal mechanism and the signal-related system calls.
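A minimal C sketch of that last point, using only the standard POSIX signal API: a process may catch or ignore INT and QUIT, but KILL cannot be caught at all.

"Example"

#include <signal.h>
#include <unistd.h>

/* SIGINT (2) can be caught: this handler replaces the default
   exit-on-Ctrl+C behavior. SIGKILL (9) cannot be caught at all. */
static void on_int(int sig)
{
    (void)sig;
    const char msg[] = "caught SIGINT, not exiting\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);  /* async-signal-safe */
}

int main(void)
{
    signal(SIGINT, on_int);  /* trying the same with SIGKILL would fail */
    for (;;)
        pause();             /* sleep until a signal arrives */
}

Run it and press Ctrl+C: the process keeps running. kill -9 from another terminal still terminates it.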

Currently, network transmission applications widely adopt the TCP/IP protocol suite and its standard socket application programming interface (API). The TCP/IP transport layer has two parallel protocols: TCP and UDP. TCP (Transmission Control Protocol) is connection-oriented and provides a highly reliable service. UDP (User Datagram Protocol) is connectionless and provides an efficient service. In practical engineering applications, the choice between reliability and efficiency depends on the application's environment and requirements. In general, ordinary data is transmitted with the more efficient UDP, while important data is transmitted with the more reliable TCP.

During application development, I found that TCP-based network transmission sometimes exhibits the sticky packet phenomenon (several packets sent by the sender arrive at the receiver stuck together as one packet). This happens because TCP is a stream protocol with no message boundaries; UDP does not exhibit sticky packets, because it preserves message boundaries.

I. Introduction to the TCP protocol

TCP is a connection-oriented transport layer protocol. Although TCP does not belong to the ISO protocol suite, its successful application in business and industry has made it a de facto network standard, widely used for communication between network hosts.

As a connection-oriented transport layer protocol, TCP's goal is to provide users with reliable end-to-end connections that ensure the orderly and accurate transmission of information. Beyond basic data transmission, it adopts a series of measures, such as sequence numbering, checksum calculation, and data acknowledgment, to ensure reliability. It numbers each byte of data transmitted and requires the receiver to return an acknowledgment (ACK); if the sender receives no acknowledgment within the specified time, it retransmits the data. Sequence numbering also lets the receiver handle out-of-order and duplicated data. Data corruption is handled by adding a checksum to each transmitted segment: the receiver verifies the checksum on arrival, and if it is incorrect, the corrupted segment is discarded and the sender is asked to retransmit. Flow control is another important reliability measure: without it, the receive buffer could overflow and lose large amounts of data, causing mass retransmission and a vicious circle of network congestion. TCP uses a variable window for flow control, with the receiver controlling how much data the sender may send.

TCP provides users with a highly reliable network transmission service, but the reliability measures also cost transmission efficiency. Therefore, in practical engineering applications, only key data is transmitted over TCP, while ordinary data is generally transmitted over the more efficient UDP.

II. Analysis of the sticky packet problem and countermeasures

A TCP sticky packet occurs when several packets sent by the sender arrive at the receiver stuck together as one packet: viewed from the receive buffer, the head of one packet immediately follows the tail of the previous one.

Sticky packets can arise for many reasons, on either the sending or the receiving side. Sender-side sticky packets are caused by the TCP protocol itself: to improve transmission efficiency, TCP often waits to collect enough data before sending a packet. If the data sent in several consecutive writes is very small, TCP's optimization algorithm will usually combine it into one packet before sending, so the receiver gets stuck-together data. Receiver-side sticky packets occur when the receiving user process does not fetch data in time. The receiver first places incoming data in the system receive buffer, from which the user process fetches it. If the next packet arrives before the previous packet has been taken away, it is appended to the system receive buffer behind the previous one, and the user process, which fetches data according to a preset buffer size, then takes more than one packet's worth of data in a single read (as shown in Figure 1).


Figure 1


Figure 2


Figure 3

Sticky packets come in two kinds: in one, the packets stuck together are all complete (Figures 1 and 2); in the other, the stuck-together data includes an incomplete packet (Figure 3). Here the user receive buffer is assumed to be M bytes long.

Not every sticky packet needs to be handled. If the transmitted data is a continuous stream with no structure (such as a file transfer), there is no need to separate stuck packets (hereafter, "unpacking"). But in practical engineering applications the transmitted data is usually structured, and then unpacking is required.

For fixed-length structured data the unpacking algorithm is simple; for variable-length structured data it is more complicated, especially in case 3, where the content of one packet is split across two successive receives. In practical engineering applications, sticky packets should be avoided as far as possible.

To avoid sticky packets, the following measures can be taken. First, sender-side sticky packets can be avoided by programming: TCP provides an instruction, the push operation (PSH), that forces data to be transmitted immediately; once the TCP software receives it, it sends the data at once rather than waiting for the send buffer to fill. Second, for receiver-side sticky packets, the receiving program can be optimized (reduce the receiving process's workload, raise its priority, and so on) so that it fetches data promptly and sticking never occurs. Third, the receiver can take control: split the received data into packets by the structure's fields, undoing any sticking by hand. (A sketch of the first measure follows.)
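The Berkeley sockets API does not expose the PSH flag directly, but disabling Nagle's algorithm with the TCP_NODELAY option has the analogous effect of sending small writes immediately instead of coalescing them. A minimal sketch, not the article's own code:

"Example"

#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */
#include <sys/socket.h>

/* Ask TCP to send small writes immediately instead of
   coalescing them (disables Nagle's algorithm). */
int disable_coalescing(int sockfd)
{
    int one = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                      &one, sizeof(one));
}

As the next paragraph notes, this trades sending efficiency for boundary preservation.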

All three measures have shortcomings. The first avoids sender-side sticky packets, but it defeats the coalescing optimization, lowers network sending efficiency, and hurts application performance, so it is generally not recommended. The second can only reduce the likelihood of sticky packets, not eliminate them: when the sending rate is high, or when a network burst delivers a period's packets to the receiver faster than usual, the receiver may still fail to keep up, and sticky packets result. The third avoids sticky packets, but the application is less efficient and it is unsuitable for real-time applications.

The most complete countermeasure is for the receiver to create a preprocessing thread that separates stuck packets in the received data before the application consumes it. We experimented with this method and found it efficient and feasible.

III. Programming and implementation

1. Implementation Framework

The experimental network communication program is implemented with the TCP/IP socket API. Sockets follow the client/server model. The TCP implementation framework is shown in Figure 4.

Figure 4

2. Experimental hardware environment:

Server: Pentium 350 microcomputer

Client: Pentium 166 microcomputer

Network platform: a LAN connected by a 10 Mbps shared hub

3. Lab Software Environment:

Operating system: Windows 98

Programming language: Visual C++ 5.0

4. Main thread

The program is written in a multi-threaded style. The server has two threads: a data-sending thread and a send-statistics display thread. The client has three threads: a data-receiving thread, a sticky-packet preprocessing thread, and a receive-statistics display thread. The send and receive threads run at THREAD_PRIORITY_TIME_CRITICAL (highest priority), the preprocessing thread at THREAD_PRIORITY_ABOVE_NORMAL (above normal priority), and the display threads at THREAD_PRIORITY_NORMAL (normal priority).
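A minimal Win32 sketch of that thread setup (the entry points are hypothetical stand-ins for the article's receive and preprocessing threads):

"Example"

#include <windows.h>

DWORD WINAPI RecvThread(LPVOID arg)    { (void)arg; /* recv() loop */ return 0; }
DWORD WINAPI PreprocThread(LPVOID arg) { (void)arg; /* unpack loop */ return 0; }

int main(void)
{
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, RecvThread, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, PreprocThread, NULL, 0, NULL);

    /* Priorities as described above. */
    SetThreadPriority(h[0], THREAD_PRIORITY_TIME_CRITICAL);
    SetThreadPriority(h[1], THREAD_PRIORITY_ABOVE_NORMAL);
    /* the display thread keeps THREAD_PRIORITY_NORMAL, the default */

    WaitForMultipleObjects(2, h, TRUE, INFINITE);
    return 0;
}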

The data structure sent in the experiment is shown in Figure 5:

Figure 5

5. Sub-packet algorithm

For the three kinds of sticky packet phenomena, the unpacking algorithm applies a corresponding treatment to each. The basic idea: convert the received data stream (of length m) to the predetermined structure form, extract the structure's length field (the n in Figure 5), and compute the length of the first packet from n. Then (a sketch in C follows the three cases):

1) If n < m, the stream contains more than one packet's worth of data: move the first n bytes (a complete packet) into the temporary buffer, and process the remaining m - n bytes the same way.

2) If n = m, the stream is exactly one complete structure; move it directly into the temporary buffer.

3) If n > m, the stream does not yet contain a complete structure; keep it and process it together with the next packet of data.
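A minimal sketch of this loop, assuming a hypothetical record format in which a 4-byte length field n (counting itself) precedes the payload, as in Figure 5:

"Example"

#include <stdint.h>
#include <string.h>

/* Hypothetical consumer: hand one complete packet to the application. */
void deliver(const uint8_t *payload, uint32_t len)
{
    (void)payload; (void)len;
}

/* buf holds any leftover bytes plus the newly received data, m bytes
   in all. Returns how many unconsumed bytes remain at the front. */
size_t unpack(uint8_t *buf, size_t m)
{
    size_t off = 0;
    while (m - off >= 4) {               /* enough for the length field? */
        uint32_t n;
        memcpy(&n, buf + off, 4);        /* n = total length of this record */
        if (n < 4)
            break;                       /* malformed record, stop */
        if (n > m - off)
            break;                       /* case 3: incomplete, wait for more */
        deliver(buf + off + 4, n - 4);   /* case 1/2: a complete packet */
        off += n;                        /* case 1: continue on the rest */
    }
    memmove(buf, buf + off, m - off);    /* keep leftovers for the next recv */
    return m - off;
}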

Readers interested in the details of the unpacking algorithm and its software implementation may contact the author.

Congestion control for TCP

1. Introduction

In a computer network, link bandwidth, buffer space and processors in the switching nodes are all network resources. If, at some point in time, demand for a resource exceeds what that resource can provide, network performance degrades. This situation is called congestion.

Congestion control prevents too much data from being injected into the network, so that routers and links are not overloaded. Congestion control is a global process; it differs from flow control, which is point-to-point traffic control.

2. Slow start and congestion avoidance

The sender maintains a state variable called the congestion window, cwnd. Its size depends on how congested the network is, and it changes dynamically. The sender sets its send window equal to the congestion window; taking the receiver's capacity into account, the send window may be smaller than the congestion window.

The idea of the slow start algorithm is not to send a large amount of data at the outset, but first to probe the network for congestion, i.e., to increase the congestion window gradually from small to large.

Below, congestion window sizes are given in units of message segments to illustrate the slow start algorithm; in a real implementation the congestion window is measured in bytes. Starting from cwnd = 1, each acknowledged segment increases cwnd by 1.

Of course, a single acknowledgment may also cover several datagrams at once, in which case the corresponding values are added. Either way, the congestion window doubles after each transmission round. This is multiplicative growth, in contrast to the additive growth of the congestion avoidance algorithm described below.

A slow start threshold, the state variable ssthresh, is also needed to keep the growth of cwnd from causing congestion. ssthresh is used as follows:

When cwnd < ssthresh, the slow start algorithm is used.

When cwnd > ssthresh, the congestion avoidance algorithm is used instead.

When cwnd = ssthresh, either slow start or congestion avoidance may be used.

The congestion avoidance algorithm makes the congestion window grow slowly: each round trip adds 1 to the sender's congestion window instead of doubling it, so the congestion window grows linearly.
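A minimal sketch of the per-round-trip window update implied by these rules (segment-counting units, hypothetical function name):

"Example"

/* One round trip has completed and every segment in the
   window was acknowledged: update cwnd (in segments). */
unsigned on_round_trip(unsigned cwnd, unsigned ssthresh)
{
    if (cwnd < ssthresh)
        return cwnd * 2;   /* slow start: multiplicative growth */
    else
        return cwnd + 1;   /* congestion avoidance: additive growth */
}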

Whether in the slow start phase or the congestion avoidance phase, as soon as the sender judges that the network is congested (the criterion being that no acknowledgment arrives; the acknowledgment might be missing for some other reason, but since the sender cannot tell, it treats it as congestion), it sets the slow start threshold to half the send window size at the moment congestion occurred, sets the congestion window to 1, and runs the slow start algorithm.

Again, measuring the congestion window in datagrams here is only for the sake of discussion; it should actually be bytes.

3. Fast retransmission and fast recovery

Fast retransmission requires the receiver to send a duplicate acknowledgment immediately upon receiving an out-of-order segment (so that the sender learns as early as possible that a segment has not arrived), rather than waiting to piggyback the acknowledgment on its own outgoing data. The fast retransmission algorithm stipulates that as soon as the sender receives three duplicate acknowledgments in a row, it must immediately retransmit the segment that has not yet been received, without waiting for the retransmission timer to expire.

Used together with fast retransmission, the fast recovery algorithm has the following two points:

① When the sender receives three duplicate acknowledgments in a row, it executes the "multiplicative decrease" step and halves the ssthresh threshold. However, it does not run the slow start algorithm next.

② The sender reasons that if the network were congested, it would not be receiving several duplicate acknowledgments at all, so the network is probably not congested. Therefore it does not run slow start; instead it sets cwnd to the value of ssthresh and then runs the congestion avoidance algorithm.

4. Random early detection (RED)

The congestion avoidance algorithm above is not tied to the network layer; in fact, the network-layer policy that most affects it is the router's discard policy. In the simple case, routers process incoming packets first-in-first-out (FIFO). When a router's buffer has no room for an incoming packet, the packet is discarded; this is called the tail-drop policy. Tail drop causes packet loss, and the senders conclude that the network is congested. More seriously, a network carries many TCP connections, and the segments of these connections usually share router paths. When a router tail-drops, many TCP connections are affected at once, with the result that many TCP connections enter slow start at the same time. In the terminology, this is called global synchronization. Global synchronization makes network traffic drop suddenly, and once the network recovers, traffic surges again.

To avoid global synchronization, routers use random early detection (RED). The main points of the algorithm are as follows:

The router's queue maintains two parameters: a minimum threshold (min) and a maximum threshold (max) on the queue length. Whenever a packet arrives, RED computes the average queue length and then handles the arriving packet according to three cases:

① Average queue length less than the minimum threshold: queue the newly arrived packet.

② Average queue length between the minimum and maximum thresholds: discard the packet with some probability p.

③ Average queue length greater than the maximum threshold: discard the newly arrived packet.

Randomly discarding packets with probability p makes congestion control act on individual TCP connections at different times, thereby avoiding global synchronization.

The key to RED lies in choosing the three parameters (minimum threshold, maximum threshold, and drop probability p) and in computing the average queue length. The average queue length is computed with a weighted average, the same strategy used to estimate the round-trip time (RTT).
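A minimal sketch of the RED decision under these rules (the weight and thresholds are hypothetical example values):

"Example"

#include <stdlib.h>

#define W    0.002   /* averaging weight */
#define MIN  5.0     /* minimum threshold, in packets */
#define MAX  15.0    /* maximum threshold, in packets */
#define PMAX 0.1     /* maximum drop probability */

static double avg = 0.0;

/* q is the instantaneous queue length when a packet arrives.
   Returns 1 to enqueue the packet, 0 to drop it. */
int red_enqueue(double q)
{
    avg = (1.0 - W) * avg + W * q;   /* weighted average, as for RTT */
    if (avg < MIN)
        return 1;                    /* case 1: enqueue */
    if (avg > MAX)
        return 0;                    /* case 3: drop */
    /* case 2: drop with probability rising from 0 toward PMAX */
    double p = PMAX * (avg - MIN) / (MAX - MIN);
    return ((double)rand() / RAND_MAX) >= p;
}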

To prevent network congestion, TCP provides a series of congestion control mechanisms. TCP congestion control as originally proposed by V. Jacobson in his 1988 paper consisted of "slow start" and "congestion avoidance". The TCP Reno version later added the "fast retransmit" and "fast recovery" algorithms; TCP NewReno then improved "fast recovery"; and in recent years the selective acknowledgment (SACK) algorithm has appeared, along with improvements large and small in other areas, making this a hot topic in network research.

TCP congestion control relies mainly on a congestion window (cwnd); earlier we also discussed how TCP uses the receive window (rwnd) advertised by the peer for flow control. The window value is the maximum amount of data that may be sent but not yet acknowledged. Obviously, the larger the window, the faster data is sent, but also the more likely the network is to become congested. If the window value were 1, TCP would degenerate into a stop-and-wait protocol, in which each packet must be acknowledged before the next can be sent, with obviously poor transfer efficiency. The TCP congestion control algorithm balances the two, choosing the cwnd value that maximizes network throughput without causing congestion.

Because both congestion control and flow control must be taken into account, TCP's real send window = min(rwnd, cwnd). However, rwnd is determined by the peer, and the network environment has no effect on it, so when discussing congestion we generally ignore rwnd and consider only how to determine cwnd. As for units: in TCP, cwnd is measured in bytes, but if we assume every transmission sends MSS-sized segments, we may equally think of cwnd as counting packets; so saying "cwnd increases by 1" means the byte count grows by one MSS.
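That relation in one line of C (hypothetical variable names, byte units):

"Example"

/* Bytes the sender may have outstanding right now. */
unsigned effective_window(unsigned rwnd, unsigned cwnd)
{
    return rwnd < cwnd ? rwnd : cwnd;   /* send window = min(rwnd, cwnd) */
}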

Slow start: if a newly established TCP connection immediately sent a large number of packets into the network, it could easily exhaust router buffer space and cause congestion. A new connection therefore must not begin by sending a large volume of data; it should increase the amount sent gradually according to network conditions. Specifically, when a connection is created, cwnd is initialized to 1 maximum segment size (MSS); the sender sends according to the congestion window, and each time a segment is acknowledged, cwnd grows by 1 MSS. In this way cwnd grows exponentially with the round-trip time (RTT). In fact, slow start is not slow at all in its growth rate; only its starting point is low. A simple calculation:

start -> cwnd = 1

after 1 RTT -> cwnd = 1*2 = 2

after 2 RTT -> cwnd = 2*2 = 4

after 3 RTT -> cwnd = 4*2 = 8

If the bandwidth allows W segments per round trip, the pipe is filled after about RTT * log2(W) time.
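A toy check of that claim (W is a hypothetical example value):

"Example"

#include <stdio.h>

int main(void)
{
    unsigned W = 64;           /* hypothetical bandwidth, segments per RTT */
    unsigned cwnd = 1, rtts = 0;

    while (cwnd < W) {         /* slow start: double every RTT */
        cwnd *= 2;
        rtts++;
    }
    printf("pipe of %u segments filled after %u RTTs\n", W, rtts);
    return 0;                  /* prints 6 = log2(64) */
}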

Congestion avoidance: slow start shows that cwnd can grow quickly to make maximal use of network bandwidth, but cwnd cannot grow without limit. TCP uses a variable called the slow start threshold (ssthresh); once cwnd exceeds it, the slow start phase ends and the congestion avoidance phase begins. In most TCP implementations the initial value of ssthresh is 65536 (again in bytes). The main idea of congestion avoidance is additive increase: the value of cwnd no longer rises exponentially. Instead, once every segment in the window has been acknowledged, cwnd grows by 1, so its value increases linearly with RTT. This avoids growing into congestion and slowly approaches the network's optimal value.

The two mechanisms discussed above apply while no congestion has been detected. When congestion is discovered, how is cwnd adjusted?

First, how does TCP decide that the network is congested? TCP's main evidence of congestion is having had to retransmit a segment. As mentioned above, TCP keeps a retransmission timer (RTO) for each segment; if the RTO expires and the data is still unacknowledged, TCP retransmits the segment. When a timeout occurs, congestion is very likely: a segment was probably lost somewhere in the network, and nothing has been heard about the subsequent segments either. In this case, TCP's response is "strong":

1. Reduce ssthresh to half the value of cwnd

2. Set cwnd back to 1

3. Re-enter the slow start process.

Generally speaking, TCP congestion control changes the window according to the AIMD principle: additive increase, multiplicative decrease. This principle helps ensure fairness between flows: as soon as a flow loses a packet, it immediately halves its window and backs off, leaving enough room for other new flows and thereby preserving overall fairness.

In fact, TCP retransmits in another situation as well: upon receiving 3 identical ACKs. TCP sends an ACK whenever it receives an out-of-order packet, and it uses 3 identical ACKs to conclude that a packet has been lost. It then performs fast retransmission, which does the following:

1. Set ssthresh to half of cwnd

2. Set cwnd to the value of ssthresh (some implementations use ssthresh + 3)

3. Re-enter the congestion avoidance phase.

Later, the "Fast recovery" algorithm was added after the above "fast retransmission" algorithm, when received 3 duplicate ACK, TCP finally entered is not the congestion avoidance phase, but the rapid recovery phase. Fast retransmission and fast recovery algorithms are commonly used simultaneously. The idea of fast recovery is the "data Baoshou" principle, that is, the number of packets in the network at the same time is constant, and only when the "old" packet leaves the network can a "new" packet be sent to the network, if the sender receives a duplicate ACK, Then the TCP ACK mechanism indicates that a packet has left the network, so CWnd adds 1. If this principle is strictly followed, there will be little congestion in the network, in fact the purpose of congestion control is to correct the violation of this principle.

Specifically, the main steps of fast recovery are:

1. Upon receiving 3 duplicate ACKs, set ssthresh to half of cwnd, set cwnd to the value of ssthresh plus 3, and retransmit the lost segment. The "plus 3" is because the 3 duplicate ACKs indicate that 3 "old" packets have left the network.

2. Each time another duplicate ACK is received, increase the congestion window by 1.

3. When an ACK for new data is received, set cwnd to the value ssthresh had in step 1. The reasoning: this ACK acknowledges new data, showing that all the data from the duplicate-ACK interval has been received; the recovery process is over, and the connection can return to the state before recovery, namely congestion avoidance.
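A minimal sketch tying the three congestion events together (segment-counting units, hypothetical state structure; Reno-style behavior as described above):

"Example"

struct tcp_cc {
    unsigned cwnd;      /* congestion window, in segments */
    unsigned ssthresh;  /* slow start threshold */
};

/* RTO expired: the strong reaction, back to slow start. */
void on_timeout(struct tcp_cc *c)
{
    c->ssthresh = c->cwnd / 2;
    c->cwnd = 1;
}

/* Third duplicate ACK: fast retransmit, then enter fast recovery. */
void on_triple_dup_ack(struct tcp_cc *c)
{
    c->ssthresh = c->cwnd / 2;
    c->cwnd = c->ssthresh + 3;   /* 3 "old" packets left the network */
    /* retransmit the lost segment here */
}

/* Further duplicate ACK while in fast recovery. */
void on_extra_dup_ack(struct tcp_cc *c) { c->cwnd += 1; }

/* ACK of new data: leave fast recovery, resume congestion avoidance. */
void on_new_ack(struct tcp_cc *c) { c->cwnd = c->ssthresh; }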

The fast retransmission algorithm first appeared in the Tahoe release of 4.3BSD, and fast recovery first appeared in its Reno release, which is why this is known as the Reno version of the TCP congestion control algorithm.

As can be seen, Reno's fast retransmission is aimed at retransmitting a single packet. In practice, however, one retransmission timeout may mean that many packets have to be retransmitted, so when multiple packets are lost from one window of data and fast retransmission and fast recovery are triggered, problems arise. Hence NewReno, a slight modification of Reno's fast recovery that can recover from multiple packet losses within one window. Specifically, Reno exits the fast recovery state when the ACK for one piece of new data arrives, while NewReno waits until every packet in the window has been acknowledged before exiting fast recovery, thereby raising throughput a step further.

SACK changes TCP's acknowledgment mechanism. Originally, TCP acknowledged only data that had been received contiguously; SACK reports out-of-order blocks and other information to the peer as well, reducing blind retransmission by the sender. For example, if the data with sequence numbers 1, 2, 3, 5, 7 has been received, an ordinary ACK can only ask for number 4, whereas SACK additionally tells the peer, in the SACK option, that 5 and 7 have already arrived, improving performance. When SACK is in use, the NewReno algorithm is not needed, because SACK itself carries enough information for the sender to know exactly which packets need retransmitting and which do not.
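A toy sketch of what the receiver reports in that example: given the set of received segments, compute the cumulative ACK and the SACK blocks (hypothetical, numbered by segment rather than by byte range for clarity):

"Example"

#include <stdio.h>

int main(void)
{
    /* 1 marks a received segment; index 0 is unused.
       The example above: segments 1, 2, 3, 5, 7 received. */
    int got[9] = {0, 1, 1, 1, 0, 1, 0, 1, 0};
    int n = 8;

    /* Cumulative ACK: the first gap in the sequence. */
    int ack = 1;
    while (ack <= n && got[ack]) ack++;
    printf("cumulative ACK asks for segment %d\n", ack);

    /* SACK blocks: maximal runs of received segments after the gap. */
    for (int i = ack + 1; i <= n; i++) {
        if (got[i] && !got[i - 1]) printf("SACK block [%d", i);
        if (got[i] && (i == n || !got[i + 1])) printf("-%d] ", i);
    }
    printf("\n");   /* prints: SACK block [5-5] SACK block [7-7] */
    return 0;
}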

