Here at IWL, we usually measure bandwidth using specialized devices from Spirent or Ixia.
However, the other day we had occasion to use the popular open source tool Iperf.
Our goal was to measure the bandwidth of a link that had a lot of buffering (i.e., a bufferbloat situation).
So we set up an Iperf server at one end and an Iperf client at the other.
We ran Iperf in UDP mode because we did not want our results affected by TCP congestion algorithms.
We ran a single-direction test. The client was instructed to send traffic into the link much faster than we knew the link could handle – we expected substantial loss of packets and we also expected a significant and variable time delay for many of those packets that did get through.
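For reference, a setup along these lines can be reproduced with invocations such as the following (iperf 2 syntax; the server address, target rate, payload size, and duration shown here are illustrative placeholders, not the exact values we used):

```shell
# On the receiving end: a UDP server that reports every 2 seconds.
iperf -s -u -i 2

# On the sending end: push UDP traffic well above the link's capacity.
iperf -c <server-address> -u -b 100M -l 1472 -t 30
```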
We observed those expected effects. And we observed more that we did not expect.
In particular, we observed that Iperf was significantly under-reporting the actual bandwidth of the link.
We experimented with the “-l” parameter on the Iperf client and noticed that as the packet size was reduced, the bandwidth reporting error increased.
So we inserted a small switch with port monitoring capability onto the link to see what we could see. What we saw brought to mind one word – RTFM.
Had we very carefully read the Iperf documentation we might have noticed that Iperf measures the transport level bandwidth rather than the full bandwidth.
In other words, Iperf measures only the number of UDP data or TCP data bytes that are carried. The bytes used by protocol wrappers – Ethernet, IP, UDP/TCP – are not included in the Iperf bandwidth reports.
On a typical packet there are:
18 bytes of Ethernet frame. This includes the source and destination MAC addresses, the type/length field, and the CRC/FCS. (There are at least 4 more bytes if you use IEEE 802.1Q VLAN tags. There are also 8 more bytes that carry the packet preamble and the start frame delimiter. For convenience we won't use these in our calculations. However, we should remember that these exist on most media and are yet more bits that Iperf ignores. See Counting Bits for a more detailed discussion of the intricacies of counting the bits in a packet.)
20 bytes (or more if there are options) of IPv4 header.
8 bytes of UDP header.
That adds up to at least 46 bytes per packet that are not taken into account when Iperf calculates link bandwidth.
On large packets, such as those carrying 1472 bytes of UDP data, this amounts to an Iperf under-report of roughly 3%.
On small packets that error grows substantially. For packets that carry 60 bytes of UDP data the under-reporting error is over 40%.
The error can get even larger when the UDP packet carries so little data that, even with the UDP, IP, and Ethernet headers, it does not fill a minimum-sized (64-byte) Ethernet frame; the frame is then padded up to that minimum.
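These percentages can be checked with a short calculation. The sketch below uses the byte counts from this article (18 bytes of Ethernet framing, a 20-byte IPv4 header, an 8-byte UDP header, and a 64-byte minimum frame); the function name is ours:

```python
ETH, IP, UDP = 18, 20, 8   # framing byte counts used in this article
MIN_FRAME = 64             # minimum Ethernet frame size, including the FCS

def underreport_fraction(payload_bytes):
    """Fraction of the on-the-wire bandwidth that Iperf does not report
    for a UDP payload of the given size (Iperf's -l value)."""
    # Frames smaller than the Ethernet minimum are padded on the wire.
    wire = max(payload_bytes + UDP + IP + ETH, MIN_FRAME)
    return (wire - payload_bytes) / wire

print(round(underreport_fraction(1472) * 100, 1))  # large packets: 3.0 (%)
print(round(underreport_fraction(60) * 100, 1))    # small packets: 43.4 (%)
print(round(underreport_fraction(10) * 100, 1))    # below minimum frame: 84.4 (%)
```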
The numbers reported by Iperf are not incorrect. Rather they are potentially misleading to those who do not realize that Iperf is measuring only transport level data and who expect that the result will be that raw bandwidth of the link, such as that advertised by ISPs or many consumer link speed test websites.
One can convert the numbers given by Iperf into actual bandwidth.
The formula is:
A = ((L + E + I + U) / L) * R
A is the actual link bandwidth in bits/second.
L is the value given to Iperf's “-l” parameter. For values above 1472 (on a link with an MTU of 1500) Iperf will generate fragmented UDP packets. For our purposes this should be avoided. So it is best if the user specifies a specific value in the range of 60 to 1472. This will result in UDP frames with that amount of data.
E is the size of the Ethernet framing. For convenience we will say that this is 18 bytes – to hold the source and destination MAC addresses, the type/length field, and the 4 byte CRC/FCS. But it can be more if 802.1Q VLAN tags are being used. And there are also 8 more bytes that are used for the packet preamble and the start frame delimiter. For convenience we won't use these here in our calculations.
I is the IPv4 header size. This is 20 bytes plus IPv4 options. For convenience we will use the size without options.
U is the UDP header size of 8 bytes.
R is the bandwidth reported by Iperf, in bits/second.
(Note: This formula should be computed using floating-point arithmetic; with integer arithmetic the ratio will usually truncate to 1, incorrectly yielding A = R.)
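The formula translates directly into code. A minimal sketch (the function name is ours; the byte counts are the ones used in this article):

```python
ETH, IP, UDP = 18, 20, 8   # E, I, and U from the formula, in bytes

def actual_bandwidth(reported_bps, payload_len):
    """A = ((L + E + I + U) / L) * R, where R is the Iperf-reported
    bandwidth in bits/second and L is the Iperf -l payload size."""
    l = float(payload_len)  # floating point, to avoid integer truncation
    return (l + ETH + IP + UDP) / l * reported_bps

# An Iperf report of 10 Mbit/s with 1472-byte payloads implies an
# actual link bandwidth of about 10.31 Mbit/s.
print(actual_bandwidth(10_000_000, 1472))  # 10312500.0
```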
For more information on packet framing overhead see Counting Bits.
Iperf's weakness is in its reporting. As a packet stream generator Iperf works fine.
One can combine Iperf as a traffic generator with another open source tool, Wireshark, which can count (and graph) the bits.
The best way to do this would be to acquire a small (and relatively inexpensive) consumer grade Ethernet switch that has port monitoring. We tend to use the Netgear GS105E (that final 'E' is important) for this – some configuration by the user is required.
Then we install that switch so that the Ethernet link we are measuring goes through the switch. In other words, we install the switch astride the path between our Iperf client and Iperf server.
Then we run a wire from the monitoring port on the switch to a machine running Wireshark (and that is powerful enough to absorb the traffic load).
We then fire up the Iperfs and capture several seconds – typically 30 seconds – of traffic.
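As a concrete example, the capture step might look like this using tshark, Wireshark's command-line companion (the interface name and capture file name are placeholders):

```shell
# Capture 30 seconds of traffic from the switch's monitor port,
# then summarize the capture file.
tshark -i eth0 -a duration:30 -w iperf-run.pcap
capinfos iperf-run.pcap   # prints byte counts and the data bit rate
```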
Iperf is a useful tool. But one must interpret its results knowing that Iperf measures transport level, i.e. user data, bandwidth rather than the actual bandwidth of a link. Most ISPs advertise the speed of their offerings based on the actual bandwidth rather than user data bandwidth.
The differences in the numbers between what ISPs offer and what Iperf reports can be high – over 40% in some cases. The actual difference will mostly depend on the traffic mix (the blend of large and small packets). The difference will also be affected, but to a lesser degree, by the presence of VLAN tags, MPLS headers, micro-packetization, and other rather technical matters.
There is also a small error caused by Iperf not counting hidden traffic, such as ARP.
IPv6, because it has a longer IP header than IPv4 (40 bytes rather than 20), will increase the under-reporting by Iperf.
We are ignoring the effect of other kinds of packets on the link being tested. For example, the ARP protocol will consume some bandwidth, but typically not a significant amount. The effect of things like ARP can, however, be multiplied if the time needed for the ARP transaction to complete results in the delay of a significant number of our test UDP packets.
In our measurements the packet loss and delay on the link were high enough that, at the end of each run, the Iperf server could not reliably send its summary report back to the client. So we got our numbers by having the server report its perceived bandwidth every few seconds.
© 2014-2018 InterWorking Labs, Inc. dba IWL. ALL RIGHTS RESERVED.