Basic IO Graphs:
IO Graphs is a very useful tool. The basic Wireshark IO Graph shows the overall traffic in a capture file, usually in units per second (packets or bytes). The default x-axis tick interval is 1 second, and the y-axis is the number of packets per interval. If you want to see bits or bytes per second instead, click the "Unit" drop-down under "Y Axis" and select the unit you want. This basic view is useful for spotting peaks and troughs in traffic. To dig deeper, click any point in the graph to jump to the details of the corresponding packet.
To see how this works, open the sample capture, or open your own capture in Wireshark, and click Statistics > IO Graphs. This capture is a case in which an HTTP download runs into packet loss.
Note: the filter field is empty, so this graph shows all traffic.
With these defaults the display is not very useful for most troubleshooting. Change the y-axis to Bits/Tick so you can see the throughput per second. From this graph you can see that the peak rate is around 300 kbps. If you see places where the traffic drops to zero, those could be problem points. This is easy to spot on the graph, but it would be far less obvious in the packet list.
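If you prefer the command line, roughly the same per-second totals can be produced with tshark's io,stat statistics (a minimal sketch; the capture filename http_download.pcap is only an assumption):

    # Frames and bytes for every 1-second interval of the whole capture
    tshark -r http_download.pcap -q -z io,stat,1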
Filter:
A filter condition can be applied to each graph. Here we create two graphs, one for HTTP and one for ICMP. Graph 1 uses the filter "http" and Graph 2 uses "icmp". A few gaps in the red ICMP traffic are visible in the figure and are worth further analysis.
Next, create two graphs, one showing ICMP Echo requests (icmp.type==8) and one showing ICMP Echo replies (icmp.type==0). Normally there is one reply for every echo request. Here is what the capture shows:
You can see a gap in the middle of the red pulse line (icmp.type==0, the ICMP Echo replies), while the ICMP requests remain continuous across the whole graph. This means that some replies were never received; they were dropped because of packet loss. The ping output on the CLI looks like this:
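The same request/reply comparison can be done per second with tshark (a sketch; the capture filename ping.pcap is an assumption):

    # Echo requests (type 8) vs. echo replies (type 0) per 1-second interval;
    # intervals where the reply count falls below the request count indicate drops
    tshark -r ping.pcap -q -z io,stat,1,"icmp.type==8","icmp.type==0"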
Common Troubleshooting Filter Conditions:
Some display filters are particularly useful when troubleshooting network latency and application issues:
tcp.analysis.lost_segment: Indicates that a gap in sequence numbers has been seen in the capture. Packet loss leads to duplicate ACKs, which in turn lead to retransmissions.
tcp.analysis.duplicate_ack: Displays segments that have been acknowledged more than once. A large number of duplicate ACKs is a sign of high latency between the TCP endpoints.
tcp.analysis.retransmission: Displays all retransmissions in the capture. A small number of retransmissions is normal, but too many can be a problem; it usually means slow application performance and/or packet loss affecting the user.
tcp.analysis.window_update: Graphs the TCP window size over the course of the transfer. If you see the window size drop to zero, the sender has to stop and wait for the receiver to acknowledge the data already transmitted. This may indicate that the receiving end is overwhelmed.
tcp.analysis.bytes_in_flight: The number of bytes on the network that have not yet been acknowledged at a given point in time. This number cannot exceed the TCP window size (negotiated during the initial three-way TCP handshake), and to maximize throughput you want it to stay as close to the window size as possible. If it stays consistently below the window size, this can indicate packet loss or some other problem on the path that limits throughput.
tcp.analysis.ack_rtt: Measures the time between a captured TCP segment and the corresponding ACK. If this interval is long, it may indicate some type of network delay (packet loss, congestion, and so on).
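Any of these conditions can be typed directly into Wireshark's display filter bar, or used with tshark to list the matching packets (a sketch; the capture filename is an assumption):

    # List every retransmitted segment in the capture
    tshark -r http_download.pcap -Y "tcp.analysis.retransmission"
    # List every duplicate ACK
    tshark -r http_download.pcap -Y "tcp.analysis.duplicate_ack"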
Now apply some of the above filters to the capture:
Note: Graph 1 is the total HTTP traffic, displayed as Packets/Tick with a 1-second interval. Graph 2 shows TCP lost segments. Graph 3 shows TCP duplicate ACKs. Graph 4 shows TCP retransmissions.
As you can see, there are quite a few retransmissions and duplicate ACKs compared to the overall HTTP traffic. The graph shows the points in time at which these events occurred, as well as their proportion of the overall traffic.
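A rough command-line equivalent of these four graphs, counting each event per 1-second interval (the filename is again an assumption):

    # Column 1: HTTP packets; columns 2-4: lost segments, duplicate ACKs, retransmissions
    tshark -r http_download.pcap -q -z io,stat,1,"http","tcp.analysis.lost_segment","tcp.analysis.duplicate_ack","tcp.analysis.retransmission"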
Functions:
IO Graphs has six functions available: SUM, MIN, AVG, MAX, COUNT, and LOAD.
MIN(), AVG(), MAX()
First, look at the minimum, average, and maximum time between frames, which is useful for examining the delay between frames/packets. We can combine these functions with the frame.time_delta field to graph frame-to-frame delay and make round-trip delays more visible. If the capture file contains multiple sessions between different hosts and you only care about one pair, combine frame.time_delta with source and destination host conditions such as "ip.addr==x.x.x.x && ip.addr==y.y.y.y". The result is shown below, with a tshark sketch of the same calculation after the step list:
We did the following:
- Set the Y axis to "Advanced" to make the Calculation field visible; the calculation options do not appear without this step.
- Kept the x-axis tick interval at 1 second, so each bar represents the calculation result for a 1-second interval.
- Filtered down to the two HTTP sessions for a specific pair of addresses using the condition "(ip.addr==192.168.1.4 && ip.addr==128.173.87.169) && http".
- Used three different graphs to calculate MIN(), AVG(), and MAX(), respectively.
- Applied frame.time_delta as the field for each calculation and set the style to "FBar", which displays the results most clearly.
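The same per-interval MIN/AVG/MAX calculation can be reproduced with tshark's advanced io,stat statistics (a sketch under the same assumptions about the filename and addresses):

    # MIN/AVG/MAX of the inter-frame delay per 1-second interval for this host pair
    tshark -r http_download.pcap -q -z io,stat,1,"MIN(frame.time_delta)ip.addr==192.168.1.4 && ip.addr==128.173.87.169","AVG(frame.time_delta)ip.addr==192.168.1.4 && ip.addr==128.173.87.169","MAX(frame.time_delta)ip.addr==192.168.1.4 && ip.addr==128.173.87.169"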
As the graph shows, at around the 106-second mark the maximum frame.time_delta of the flow reaches 0.7 seconds, a severe delay associated with the packet loss. To dig deeper, just click on that point in the graph to jump to the corresponding frame, which is packet number 1003 in this sample capture. If the average delay between frames is relatively low but one frame suddenly shows a very long delay, click on that frame to see what happened at that point in time.
COUNT()
This function counts the number of occurrences of an event within each time interval, which is useful for graphing TCP analysis flags such as retransmissions. An example graph is shown below:
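For reference, a tshark sketch of the same COUNT() calculation (filename assumed):

    # COUNT() of TCP retransmission events in each 1-second interval
    tshark -r http_download.pcap -q -z io,stat,1,"COUNT(tcp.analysis.retransmission)tcp.analysis.retransmission"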
SUM()
This function adds up the value of a field over each interval. Two common use cases are graphing the amount of TCP data in a capture and checking TCP sequence numbers. Let's start with the TCP length example. Create two graphs, one using the client IP 192.168.1.4 as the source address and the other using the client IP as the destination address. For each graph, combine the SUM() function with the tcp.len field. Splitting this into two graphs lets us see the amount of data moving in each direction.
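A hedged tshark equivalent of these two graphs (the filename is an assumption) could look like this:

    # SUM() of TCP payload bytes per second, split by direction
    # (client 192.168.1.4 as source vs. as destination, matching the two graphs)
    tshark -r http_download.pcap -q -z io,stat,1,"SUM(tcp.len)ip.src==192.168.1.4","SUM(tcp.len)ip.dst==192.168.1.4"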
From the chart we can see that the amount of data sent to the client (the ip.dst==192.168.1.4 filter), shown in red, is much higher than the amount of data coming from the client, shown as black bars. This makes sense: the client simply requests the file and acknowledges the data as it arrives, while the server sends the large file. Note that if you swap the order of the graphs, putting the client IP as the destination in Graph 1 and as the source in Graph 2, the FBar style may not display the data correctly: lower-numbered graphs are drawn in the foreground, so they can hide higher-numbered ones.
Now let's look at the TCP sequence numbers of the same capture, the one with packet loss and delay.
You can see a number of peaks and dips in the graph, which indicate problems with the TCP transfer. Compare this with a normal TCP transfer:
This graph shows the TCP sequence number increasing quite steadily, indicating a smooth transfer with no retransmissions or packet drops.
One-Stop Learning Wireshark (III): Analyzing Data Flow with the Wireshark IO Graph Tool