Transcript for this video
Come on in, I've been expecting you.
Thanks for joining me today. I hear you're trying to test TCP congestion avoidance code—I found a tool that will help.
This Maxwell box from InterWorking Labs can exercise your TCP code, so you can evaluate your stack under controlled and reproducible conditions.
That sounds useful.
As you know, the Internet is a shared resource, so TCP code that does not properly react to congestion harms everyone.
TCP is supposed to be a good network citizen.
Networks, like highways, can have traffic jams. This usually happens at junctions—places like intersections on highways, or routers in networks.
And sending more traffic into a congested area just makes the problem worse.
A good TCP implementation backs off when it sees signs of network congestion, and then it slowly resumes its transmission rate.
That sounds easy, but the devil is in the details.
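The back-off-then-slowly-resume behavior described above is the classic additive-increase/multiplicative-decrease (AIMD) policy. Here is a minimal, hypothetical sketch of one AIMD step (the function name and parameters are illustrative, not from any particular stack):

```python
# Hypothetical sketch of additive-increase/multiplicative-decrease (AIMD),
# the classic TCP congestion-avoidance policy: grow the congestion window
# slowly, and halve it when congestion is detected.

def aimd_step(cwnd: float, loss_detected: bool) -> float:
    """Return the next congestion window, in segments."""
    if loss_detected:
        return max(1.0, cwnd / 2)   # multiplicative decrease on congestion
    return cwnd + 1.0 / cwnd        # additive increase: ~ +1 segment per RTT

# One congestion cycle: grow from 10 segments, then hit a loss signal.
cwnd = 10.0
for _ in range(20):
    cwnd = aimd_step(cwnd, loss_detected=False)
cwnd = aimd_step(cwnd, loss_detected=True)
print(round(cwnd, 2))
```

Real stacks layer slow start, fast recovery, and pacing on top of this, which is exactly where the details get devilish.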
What about explicit congestion notification in IP and TCP?
Unfortunately, ECN is not widely deployed, so a TCP stack cannot depend on it. That means a TCP stack has to play detective.
I wanna play Sherlock Holmes. Let's see…congestion causes queues in Internet routers to grow, so increasing packet round-trip time would be one clue.
And routers limit queue growth by discarding packets, so packet loss would be another clue.
A TCP stack can infer packet delay and loss by examining TCP acknowledgments.
…So, duplicate ACKs would be a really good clue!
How did I do?
Very well, but unfortunately there's no perfect algorithm, much less any perfect implementation.
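The duplicate-ACK clue is worth a quick illustration. A receiver re-acknowledges the last in-order byte for every out-of-order segment it gets, so a run of identical ACKs suggests a lost packet; by convention (RFC 5681 fast retransmit), three duplicates count as a loss signal. This is a hypothetical sketch, not code from any real stack:

```python
# Hypothetical sketch of inferring loss from duplicate ACKs: three
# identical ACK numbers in a row are treated as one loss indication,
# as in TCP fast retransmit (RFC 5681).

def count_loss_signals(acks: list[int], threshold: int = 3) -> int:
    """Count runs of duplicate ACKs that reach the threshold."""
    signals = 0
    dupes = 0
    prev = None
    for ack in acks:
        if ack == prev:
            dupes += 1
            if dupes == threshold:
                signals += 1    # this run counts as one inferred loss
        else:
            dupes = 0
        prev = ack
    return signals

# ACK 3000 repeats three extra times -> one inferred loss.
print(count_loss_signals([1000, 2000, 3000, 3000, 3000, 3000, 4000]))  # -> 1
```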
A lot of TCP code works fine in routine situations, but that same code might misbehave in different, even only slightly different, conditions.
The network certainly changes from second to second.
That means that it's very important to test every TCP stack across a broad range of possible network conditions.
But not everyone does enough testing.
Yeah, I've had to reboot too many wedged boxes.
Code that has not been broadly tested is like an airplane that has only been tested in calm weather and loses its wings when it encounters a storm.
Let me show you how to use Maxwell to drive your TCP code through a sequence of congestion cycles.
We will use Maxwell to mimic congestion-induced buildup of queues in Internet routers. Here's a diagram.
We will begin with the baseline of 20ms of delay. We will hold that baseline for 60 seconds.
Then, over the next 60 seconds, we will increase the delay. At 120 seconds into the cycle, we will reach 820ms of delay.
And then, we will suddenly drop the delay back to our original value of 20ms. Then we'll start all over again.
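The sawtooth profile just described can be written down as a simple function of time. This is only a sketch of the profile from the transcript (20ms baseline, 60-second hold, linear ramp to 820ms at 120 seconds, instant drop, repeat); the function name and parameters are illustrative:

```python
# Sketch of the sawtooth delay profile: 20 ms for the first 60 s,
# a linear ramp to 820 ms at 120 s, then an instant drop back to
# baseline. The 120 s cycle repeats indefinitely.

def delay_ms(t: float, base: float = 20.0, peak: float = 820.0,
             hold: float = 60.0, ramp: float = 60.0) -> float:
    """Imposed delay in milliseconds at time t (in seconds)."""
    t = t % (hold + ramp)       # the cycle repeats every 120 s
    if t < hold:
        return base             # flat baseline
    return base + (peak - base) * (t - hold) / ramp   # linear ramp

print(delay_ms(0), delay_ms(90), delay_ms(120))  # -> 20.0 420.0 20.0
```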
That oughta give my congestion avoidance code a workout.
Could you show me how to set this up?
No problem. On this table I have two computers that are talking to one another using the open-source IPERF client and server programs.
In the middle I have Maxwell.
The Maxwell acts as a layer 2 bridge, like a smart bump on the Ethernet wire.
The Maxwell will impose the cycles of packet delay. Let's take a look at the Maxwell screen.
First, we will isolate the TCP connection we want to exercise.
On this screen, I have selected TCP and have entered the IP addresses and port numbers for our target connection.
What about the return packets?
Maxwell can give different treatment to each direction.
This screen controls how much we're going to delay the packets in the chosen TCP stream.
Because I have a changing rate of delay, I will click this button.
I've already entered the parameters that describe the sawtooth diagram that we looked at a moment ago.
Let's go back to our main delay screen to get things going.
Notice the green box that shows a diagram of one cycle.
We'll click the green running man icon to put things into motion.
The cycles are now running.
I've graphed the IPERF TCP data rate so that we can see how it changes as Maxwell ramps up the network latency.
This graph shows us how the TCP stacks respond to the changing delay conditions created by Maxwell.
Notice how the TCP data rate reacts dramatically to the increased delay even though the underlying bandwidth remains the same.
It looks to me as if the TCP stacks did not negotiate a sufficiently large window scaling factor to handle this amount of latency.
And see how the data rate nicely recovers when the delay is removed.
Not all TCP stacks recover so gracefully.
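The window-scaling observation is easy to quantify. TCP throughput is bounded by window size divided by round-trip time, and without the window-scale option (RFC 7323) the advertised window tops out at 64 KiB. A hypothetical back-of-the-envelope calculation, assuming the 820ms peak delay dominates the RTT:

```python
# Sketch of the bandwidth-delay-product limit: without window scaling,
# a 64 KiB window caps TCP throughput at window / RTT, no matter how
# much bandwidth the link has.

def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput for a given window and RTT."""
    return window_bytes * 8 / rtt_s / 1e6

# Unscaled 64 KiB window at an assumed 0.82 s RTT: well under 1 Mbit/s.
print(round(max_throughput_mbps(65535, 0.82), 2))
```

That is why the data rate collapses as the delay ramps up even though the underlying bandwidth never changes.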
It would be interesting to see what happens if we tell Maxwell to discard a few packets when the latency is at its peak.
Or, could we have Maxwell shuffle the TCP acknowledgments?
Yes, but I'll leave that as an exercise for the reader.
Here we used Maxwell to generate delay variations designed to induce TCP congestion avoidance.
You could use Maxwell to check how your TCP stack handles patterns of packet loss, or you could combine packet loss with delay, or with duplication.
Maxwell goes much further than these simple impairments.
For example, Maxwell can also modify the packets in flight.
Have you ever wondered how well your code could handle unusual, but perfectly legal, IP packet fragments?
We've never tested that.
The IP reassembly code in a lot of stacks is quite buggy.
It seems that Maxwell is a tool that no network lab should be without.
We hear that a lot. Maxwell can create real-world network conditions in your lab, giving you an opportunity to really understand how your code responds.
With Maxwell's help, you can make your products more robust and more bug-free.
Wow, this has been helpful. Where can I find out more?
Come visit the website at IWL.com