Network Simulators in Linux

Recently we needed a network simulator to control the speed, packet loss, and latency of a network. On FreeBSD you can use dummynet + ipfw for this, but on Linux, what software is used for simulation?

There are two options:
1. NIST Net: a very powerful tool, but sparsely documented.
2. netem: simple and practical; it gives you delay, reordering, loss, etc. in the network environment.

About netem: on my Fedora 6 system this tool is already provided, and it is configured with the tc command. Below is the netem documentation, which is very useful.

http://www.linux-foundation.org/en/Net:Netem
--------------------------

Netem provides network emulation functionality for testing protocols by emulating the properties of wide area networks. The current version emulates variable delay, loss, duplication and re-ordering.

If you run a current 2.6 distribution (Fedora, openSUSE, Gentoo, Debian, Mandriva, Ubuntu), then netem is already enabled in the kernel and a current version of iproute2 is included. The netem kernel component is enabled under:

 
Networking --> Networking options --> QoS and/or fair queuing --> Network emulator

Netem is controlled by the command line tool 'tc', which is part of the iproute2 package of tools. The tc command uses shared libraries and data files in the /usr/lib/tc directory.
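As a quick sanity check (this note is not from the original page), you can verify that the tc binary is installed, that the netem scheduler module (sch_netem on mainline kernels) can be loaded, and that the distribution tables are present:

# tc -V
# modprobe sch_netem
# ls /usr/lib/tc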

Contents
  • 1 Examples

    • 1.1 Emulating Wide Area Network delays
    • 1.2 Delay Distribution
    • 1.3 Packet Loss
      • 1.3.1 Caveats
    • 1.4 Packet duplication
    • 1.5 Packet corruption
    • 1.6 Packet Re-Ordering
      • 1.6.1 Caveats
    • 1.7 Rate Control
    • 1.8 Non-FIFO queuing
    • 1.9 Delaying only some traffic
  • 2 FAQ
    • 2.1 How come first ping takes longer?
    • 2.2 How come TCP is so slow over netem?
    • 2.3 How can I use netem on incoming traffic?
    • 2.4 How to reorder packets based on Jitter?
    • 2.5 How does the value of HZ impact netem?
  • 3 Links
  • 4 Contact info

Examples

Emulating Wide Area Network delays

This is the simplest example; it just adds a fixed amount of delay to all packets going out of the local Ethernet.

 
# tc qdisc add dev eth0 root netem delay 100ms

Now a simple ping test to a host on the local network should show an increase of 100 milliseconds. The delay is limited by the clock resolution of the kernel (HZ). On most 2.4 systems, the system clock runs at 100 Hz, which allows delays in increments of 10 ms. On 2.6, the value is a configuration parameter ranging from 1000 Hz down to 100 Hz.

Later examples just change parameters without reloading the qdisc.
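To inspect the settings currently installed on an interface, or to remove the emulation entirely once you are finished experimenting, the standard tc show and del commands can be used (this note is an addition, not part of the original page):

# tc -s qdisc show dev eth0
# tc qdisc del dev eth0 root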

Real wide area networks show variability so it is possible to add random variation.

 
# tc qdisc change dev eth0 root netem delay 100ms 10ms

This causes the added delay to be 100 ms ± 10 ms. Network delay variation isn't purely random, so to emulate that there is a correlation value as well.

 
# tc qdisc change dev eth0 root netem delay 100ms 10ms 25%

This causes the added delay to be 100 ms ± 10 ms with the next random element depending 25% on the last one. This isn't true statistical correlation, but an approximation.

Delay Distribution

Typically, the delay in a network is not uniform. It is more common to use something like a normal distribution to describe the variation in delay. The netem discipline can take a table to specify a non-uniform distribution.

 
# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution normal

The actual tables (normal, pareto, paretonormal) are generated as part of the iproute2 compilation and placed in /usr/lib/tc; so it is possible, with some effort, to make your own distribution based on experimental data.
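As a rough sketch of that (the maketable helper is built in the netem/ subdirectory during the iproute2 compilation; the file names below are placeholders, not from the original page), you would convert a file of measured delay samples into a table, install it next to the shipped tables, and refer to it by name:

# ./netem/maketable measured_delays.txt > mydist.dist
# cp mydist.dist /usr/lib/tc/
# tc qdisc change dev eth0 root netem delay 100ms 20ms distribution mydist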

Packet Loss

Random packet loss is specified in the 'tc' command in percent. The smallest possible non-zero value is:

\( \frac{1}{2^{32}} = 0.0000000232\% \)

 
# tc qdisc change dev eth0 root netem loss 0.1%

This causes 1/10th of a percent (i.e. 1 out of 1000) of packets to be randomly dropped.

An optional correlation may also be added. This causes the random number generator to be less random and can be used to emulate packet burst losses.

 
# tc qdisc change dev eth0 root netem loss 0.3% 33.33%

This will cause 0.3% of packets to be lost, and each successive probability depends by about a third on the last one.

\( prob_n = prob_{n-1} \cdot \frac{33.33}{100} + rand() \cdot \left(1 - \frac{33.33}{100}\right) \)

Caveats
    • When loss is used locally (not on a bridge or router), the loss is reported to the upper level protocols. This may cause TCP to resend and behave as if there was no loss. When testing protocol response to loss it is best to use netem on a bridge or router.

Packet duplication

Packet duplication is specified the same way as packet loss.

 
# tc qdisc change dev eth0 root netem duplicate 1%

Packet corruption

Random noise can be emulated (in 2.6.16 or later) with the corrupt option. This introduces a single bit error at a random offset in the packet.

 
# tc qdisc change dev eth0 root netem corrupt 0.1%

Packet Re-Ordering

There are two different ways to specify reordering. The first method, gap, uses a fixed sequence and reorders every Nth packet. A simple usage of this is:

 
# tc qdisc change dev eth0 root netem gap 5 delay 10ms

This causes every 5th (10th, 15th, ...) packet to be sent immediately and every other packet to be delayed by 10 ms. This is predictable and useful for base protocol testing like reassembly.

The second form of re-ordering, reorder, is more like real life. It causes a certain percentage of the packets to get mis-ordered.

 
# tc qdisc change dev eth0 root netem delay 10ms reorder 25% 50%

In this example, 25% of packets (with a correlation of 50%) will get sent immediately, others will be delayed by 10 ms.

Newer versions of netem will also re-order packets if the random delay values are out of order. The following will cause some reordering:

 
# tc qdisc change dev eth0 root netem delay 100ms 75ms

If the first packet gets a random delay of 100 ms (100 ms base - 0 ms jitter) and the second packet is sent 1 ms later and gets a delay of 50 ms (100 ms base - 50 ms jitter), the second packet will be sent first. This is because the queue discipline tfifo inside netem keeps packets in order by time to send.

Caveats
    • Mixing forms of reordering may lead to unexpected results
    • For any method of reordering to work, some delay is necessary.
    • If the delay is less than the inter-packet arrival time then no reordering will be seen.

Rate Control

There is no rate control built into the netem discipline; instead, use one of the other disciplines that does do rate control. In this example, we use the token bucket filter (TBF) to limit output.

# tc qdisc add dev eth0 root handle 1:0 netem delay 100ms
# tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
# tc -s qdisc ls dev eth0
 qdisc netem 1: limit 1000 delay 100.0ms
  Sent 0 bytes 0 pkts (dropped 0, overlimits 0)
 qdisc tbf 10: rate 256Kbit burst 1599b lat 26.6ms
  Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

Check the options for buffer and limit, as you might find you need bigger values than these defaults (they are in bytes).
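As an illustrative example only (the values here are not from the original page), a faster emulated link needs proportionally more buffering; reusing the handles from the example above:

# tc qdisc change dev eth0 parent 1:1 handle 10: tbf rate 1mbit buffer 32000 limit 64000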

For more explanation about how to use classful queuing disciplines, see the Linux Advanced Routing & Traffic Control HOWTO (LARTC) section on classes.

Non-FIFO queuing

Just like the previous example, any of the other queuing disciplines (GRED, CBQ, etc.) can be used.
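A minimal sketch (not from the original page) that attaches SFQ under netem in place of the default tfifo, in the same way TBF was attached above:

# tc qdisc add dev eth0 root handle 1:0 netem delay 100ms 10ms
# tc qdisc add dev eth0 parent 1:1 handle 10: sfq perturb 10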

Delaying only some traffic

Here is a simple example that only controls traffic to one IP address.

 
# tc qdisc add dev eth0 root handle 1: prio
# tc qdisc add dev eth0 parent 1:3 handle 30: netem \
      delay 200ms 10ms distribution normal
# tc qdisc add dev eth0 parent 30:1 tbf rate 20kbit buffer 1600 limit 3000
# tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 \
      match ip dst 65.172.181.4/32 flowid 1:3

These commands make a simple priority queueing discipline, then attach a basic netem to the priority 3 hook. Then a TBF is added to do rate control. Finally, a filter classifies all packets going to 65.172.181.4 as priority 3. For more info on traffic classification see the LARTC HOWTO section on filters.

FAQ

How come first ping takes longer?

The first ICMP packet in a ping requires an ARP request/response as well.

How come TCP is so slow over netem?

When you run TCP over large bandwidth delay product links, you need to do some TCP tuning to increase the maximum possible buffer space.
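For example (illustrative values only; the right sizes depend on your bandwidth-delay product), raising the socket buffer limits with sysctl is the usual starting point:

# sysctl -w net.core.rmem_max=8388608
# sysctl -w net.core.wmem_max=8388608
# sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
# sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"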

How can I use netem on incoming traffic?

You need to use the intermediate functional block pseudo-device, IFB. This network device allows attaching queuing disciplines to incoming packets.

 
# modprobe ifb
# ip link set dev ifb0 up
# tc qdisc add dev eth0 ingress
# tc filter add dev eth0 parent ffff: \
      protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
# tc qdisc add dev ifb0 root netem delay 750ms

Another way is to use another machine as an Ethernet bridge, and apply netem to both Ethernet devices.
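A sketch of that bridge setup, assuming bridge-utils is installed and eth0/eth1 are the two bridge ports (the interface names and delay value are placeholders):

# brctl addbr br0
# brctl addif br0 eth0
# brctl addif br0 eth1
# ip link set dev br0 up
# tc qdisc add dev eth0 root netem delay 50ms
# tc qdisc add dev eth1 root netem delay 50ms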

How to reorder packets based on Jitter?

Starting with version 1.1 (in 2.6.15), netem will reorder packets if the delay value has lots of jitter.

If you don't want this behaviour, then replace the internal queue discipline tfifo with a pure packet FIFO, pfifo. The following example has lots of jitter, but the packets will stay in order.

 
# tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
# tc qdisc add dev eth0 parent 1:1 pfifo limit 1000

How does the value of HZ impact netem?

In the 2.6 line of kernels, HZ is a configurable parameter that takes values of 100, 250, or 1000. Because it affects the granularity with which netem is able to delay packets, it is most beneficial to set HZ to 1000, which allows delays in increments of 1 ms. See the mailing list archives for a more detailed discussion of the impact of HZ.
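To check which HZ value your running kernel was built with (assuming a distribution kernel that installs its config under /boot):

# grep 'CONFIG_HZ=' /boot/config-$(uname -r)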

In kernel versions 2.6.22 or later, netem will use high resolution timers if they are enabled. This allows for finer-grained (sub-jiffy) resolution.
