October 29, 2013

This FAQ is focused on the use of Mini Maxwell at the DARPA Robotics Challenge (DRC) Trials.  For more general technical information see the Mini Maxwell Technical FAQ or the Mini Maxwell web pages.  For deeper background information take a look at the Mini Maxwell User Guide, which is available on the web at http://iwl.com/mini-maxwell/user-guide.

Note: This FAQ neither changes nor extends any DARPA established rules for the DRC Trials.

This FAQ should be considered a living document.  If a team has a question please do not hesitate to ask for clarification; we will update this FAQ accordingly.

About Mini Maxwell And Its Use at The DRC Trials

At the DRC Trials in December 2013 a bandwidth shaping device - a Mini Maxwell - will be placed astride the network link between a team's control station and the team's "field computer" (or the team's robot if no field computer is being used.)

Physically a Mini Maxwell is a small computer - about the size of a paperback novel.  Mini Maxwell has three Ethernet interfaces.  One of those interfaces is for control and management; the other two (typically called "LAN-A" and "LAN-B") are layer-two bridged to one another so that packets arriving on each of those interfaces are forwarded out the other.  In its basic mode a Mini Maxwell acts as a bump on an Ethernet cable and should be almost entirely invisible to devices using that Ethernet.  Mini Maxwell is reasonably robust - it has no moving parts and is passively cooled - but like most electronic gear it is not immune to things like water or static electricity.

Mini Maxwell's job is to create degraded (another word for this is "impaired") network conditions.  The suite of impairments that can be performed includes packet drop, duplication, corruption, delay (variable and fixed), and restricting the rate at which bytes (and thus packets) are allowed to cross through the Mini Maxwell.  Each of these impairments is parameterized and may be controlled through an interface on the management Ethernet port.

Because it is often useful to impair different kinds of network traffic in different ways Mini Maxwell uses filters to classify packets into "bands".  Thus, for example, IPv4 packets could be sent into one band while IPv6 packets are sent into another.  Each of these bands has its own collection of impairment settings.  This system of filters and bands exists for each of the two directions in which packets may be moving.  In total Mini Maxwell has ten bands that may be used, five in each direction.
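
To picture how this fits together, here is a purely conceptual sketch in Python - it is not Mini Maxwell's actual configuration format or API, just an illustration of "two directions, five bands each, one set of impairments per band, filters choosing the band":

    # Conceptual illustration only - not the real Mini Maxwell data model.
    def default_impairments():
        return {"drop_pct": 0.0, "dup_pct": 0.0, "corrupt_pct": 0.0,
                "delay_ms": 0, "rate_limit_bps": None}

    # Five bands in each of the two directions, each with its own settings.
    bands = {
        direction: {band: default_impairments() for band in range(1, 6)}
        for direction in ("LAN-A-to-LAN-B", "LAN-B-to-LAN-A")
    }

    # Filters classify packets into bands, e.g. IPv4 to band 1, IPv6 to band 2.
    filters = {
        "LAN-A-to-LAN-B": [("ethertype 0x0800 (IPv4)", 1),
                           ("ethertype 0x86DD (IPv6)", 2)],
        "LAN-B-to-LAN-A": [],
    }

    # Impairing band 1 in one direction leaves the other direction untouched.
    bands["LAN-A-to-LAN-B"][1]["rate_limit_bps"] = 100000
    bands["LAN-A-to-LAN-B"][1]["delay_ms"] = 500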

The impairment settings and classification filters may be changed without stopping traffic flow, although there can be transitional side effects.  These side effects are described in some of the items below.

These Mini Maxwells can be operated either via a web-page user interface or via a RESTful software interface.  At the DRC Trials the RESTful interface will be used and will be driven by Python programs running on a Linux laptop computer.  This software is available to the teams for pre-event testing.

The Questions and Answers

Q I have a question or concern, who do I contact?

A InterWorking Labs (IWL):
   Website: http://iwl.com/
   Email: maxwell-support@iwl.com
   Telephone: +1 831 460-7010

A person from InterWorking Labs will be on site at the event itself - we will post a mobile phone number at that time.

Q What are the network limits, constraints, and expectations?

A As a general practice teams should tailor their network use so that the traffic load they send onto the network is roughly on par with the actual available bandwidth.  In the general case, where available bandwidth could range from 30,000 bits/second (typical dial-up speed) to 10 gigabits/second, this tailoring can be hard (and is the subject of ongoing research by various network protocol groups.)  However, during this event the limits are known in advance (and are shown below).

Minimum size Ethernet packets (60 bytes) and small Ethernet packets (up to around 128 bytes) tend to place a disproportionate load on network devices.  These small packets are a necessary fact of internet protocol life, but in a typical traffic mix they make up only a small portion of the overall traffic and their occurrence tends to be spread among other, larger packets.  There is concern, however, that sensors might send heavy streams of these small packets.  Teams should try to aggregate this kind of traffic into larger Ethernet packets when reasonably feasible.
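
One way to do that kind of aggregation (a minimal sketch - the destination address, the 1200-byte batch size, and the 50 millisecond holding time are illustrative placeholders, not event requirements) is to buffer small sensor readings and send them as one larger UDP datagram:

    import socket
    import time

    DEST = ("192.0.2.10", 9999)   # placeholder destination for illustration
    MAX_PAYLOAD = 1200            # stay comfortably under the 1500-byte MTU
    MAX_HOLD_SECONDS = 0.05       # never hold a reading longer than this

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pending = []                  # buffered readings (byte strings)
    pending_bytes = 0
    oldest = None                 # arrival time of the oldest buffered reading

    def flush():
        """Send everything buffered so far as one larger datagram."""
        global pending, pending_bytes, oldest
        if pending:
            sock.sendto(b"".join(pending), DEST)
            pending, pending_bytes, oldest = [], 0, None

    def queue_reading(reading):
        """Buffer one small reading; flush when the batch is big or old enough."""
        global pending_bytes, oldest
        if oldest is None:
            oldest = time.time()
        pending.append(reading)
        pending_bytes += len(reading)
        if pending_bytes >= MAX_PAYLOAD or time.time() - oldest >= MAX_HOLD_SECONDS:
            flush()

A real implementation would also flush on a timer so that a quiet sensor does not hold its last reading indefinitely.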

Mini Maxwell supports neither tagged VLANs nor jumbograms (Ethernet frames larger than 1514 bytes.)

At the DRC Trials in December 2013 teams should aspire to the following:

  • Use IPv4 or IPv6.
  • Do not use tagged VLANs or jumbograms.
  • Avoid continuous bursts of traffic far in excess of the maximum rate (bandwidth) limitation that will be applied during this event.  Short term bursts are OK (and probably unavoidable.)  But as measured over a span of a second or more a team should aspire to avoid generating traffic at rates significantly higher than the network capacity (as constrained by the bandwidth limitation.)
  • Avoid traffic composed only of very small Ethernet packets.  (In practice small, even minimum-sized, packets are unavoidable; however, they should represent only a portion of the overall traffic.)

Q What is the pattern of impairments that will be applied at the DRC Trials?

A The exact pattern is still subject to change.  However, as of this writing, the following describes the pattern that will be used.

The pattern will loosely mimic an internet path such as might be obtained using typical cellular data services.  The pattern will alternate between a relatively good, clear connection and one that is suffering from congestion.

Teams should anticipate relatively low levels of available bandwidth - 100,000 bits/second to 1,000,000 bits/second.  This rate limit is imposed on each direction, independently.  In other words, a rate limit of 100,000 bits/second allows for 100,000 bits/second in each direction.

It is important to recognize that data rate limitation will cause packet queues to form if the team tries to send data at a rate that exceeds the limit.  These queues can get quite long - as much as ten seconds worth of traffic may be accumulated before the traffic shaping system begins discarding packets.  Because we will be alternating between the "good" mode and the congested mode there will be transitional effects that occur because of that change - See the question/answer on transitional effects in this FAQ.

In addition, delay will be introduced varying from 50 milliseconds (one way, 100 milliseconds round trip) to 500 milliseconds (one way, 1,000 milliseconds round trip.)  This additional delay represents a minimum - if the team is sending data faster than the limited rate there will be additional packet delay.  That additional delay can grow to be as much as ten seconds.  Although this may seem like a high number it is not inconsistent with what can happen on the internet, particularly in situations involving "bufferbloat" - See the question in this FAQ on that subject.
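
As a rough illustration of the sizes involved (simple arithmetic only, using the rate limits and the ten-second queue bound described above):

    # How much traffic a ten-second queue represents at each event rate limit,
    # and therefore how far behind real time a stream can fall before drops start.
    MAX_QUEUE_SECONDS = 10

    for label, limit_bps in (("good", 1000000), ("congested", 100000)):
        backlog_bits = limit_bps * MAX_QUEUE_SECONDS
        print("%-9s mode: %7d bits/second limit -> up to %8d bits (%7d bytes) queued"
              % (label, limit_bps, backlog_bits, backlog_bits // 8))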

The pattern of changes is built into the code of the Python programs that will be used at the event - See the question regarding availability of those programs in this FAQ.

Each run will be divided into periods, each period being 60 seconds long.  Time will be measured from the start of the run.  During the run the periods will alternate between "good" mode and "congested" mode.

  Mode         Base Delay (each way)    Rate Limit (each way)
  Good         50 milliseconds          1,000,000 bits/second
  Congested    500 milliseconds         100,000 bits/second
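
The alternation itself is easy to reason about.  The sketch below is illustrative only - whether a run begins in good mode or congested mode is an assumption here, and the authoritative schedule is the one built into the Python control programs mentioned above:

    PERIOD_SECONDS = 60

    MODES = {
        "good":      {"base_delay_ms": 50,  "rate_limit_bps": 1000000},
        "congested": {"base_delay_ms": 500, "rate_limit_bps": 100000},
    }

    def mode_at(seconds_since_run_start):
        """Return the mode in effect, assuming the run starts in good mode."""
        period_index = int(seconds_since_run_start // PERIOD_SECONDS)
        return "good" if period_index % 2 == 0 else "congested"

    # 0-59 s -> good, 60-119 s -> congested, 120-179 s -> good, and so on.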

Not every possible packet will be subjected to these impairments.  At the present time the following types of packets will be passed through the impairments shown above.  We do not anticipate that any other forms of traffic will aggregate to more than a few bits per second.

  • ARP
  • IPv4
  • IPv6

It is anticipated that the impairment pattern for all teams, or at least all teams sharing the same garage, will be loosely synchronized (to within a few seconds) with one another.

Q Where can I get a copy of the control software?

A The control software is a set of three programs written in the Python language. These are command line programs; they have a simple command line interface.

These programs use only standard Python modules.  They were written to run on Linux; however, they can probably be ported to other platforms without much difficulty.

The source code for these programs may be obtained via this URL: http://iwl.com/mmx_materials/utilities/

At the DRC Trials each Mini Maxwell will be controlled using one instance of the periodic.py program.

Note that periodic.py imports and uses the services of the other two programs (mm2client.py and setfilters.py.)  So all three of these programs should be placed into the same directory.
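
As a small illustration of why co-location matters (the exact import statements inside periodic.py are an assumption based on the file names above):

    # Keep all three files side by side, for example:
    #
    #   drc-control/
    #       periodic.py       <- the program run at the event
    #       mm2client.py      <- imported by periodic.py
    #       setfilters.py     <- imported by periodic.py
    #
    # Imports of the following form are resolved against the directory that
    # contains the running script, which is why the files must stay together:
    import mm2client
    import setfilters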

Q What are the transitional side effects when I change an impairment parameter or alter my filter map?

A There can be side effects that are caused by a change to an impairment parameter or a change to the packet classification filter map or filter sequence.

Inside a Mini Maxwell or Maxwell G changes to settings are completed within a few milliseconds (or faster.)  During that short period packets that arrive may be processed without impairment or filter classification.

When changing an impairment parameter the side effect may be the early release or discard of some or all of the packets that are being delayed at the time of that change.  Because of the buffering of packets being delayed under the prior impairment setting, it is possible that several packets may be affected.  These side effects are generally limited to the affected band and direction, i.e. a change to Band 5 LAN-A-to-LAN-B generally will have no effect on other bands or on Band 5 LAN-B-to-LAN-A.

When changing the classification filter map or filter sequence there may be the following side effects.  For convenience, the classification or map prior to each change will here be called "old" and the classification or map after each change "new".

  • Older packets that have been accumulated into the old rate limiter or delay queue will trickle out according to the rate limit of that old limiter or delay queue.
  • Newly arriving packets that encounter a new rate limiter will receive the benefit of any unused bandwidth credit, thus there can be a short period when rate limitation will not be apparent as that credit is consumed.
  • Because of the interaction of the two points above it is possible that:
    • Some number of new packets will be sent by the Mini Maxwell before older packets.  The net effect of this is that for a few seconds packets may be received in a different sequence than they were sent.
    • For a short period of time after the transition the burst data rate on the link may reach or even exceed the sum of the old and new rate limiters.

At the December 2013 DRC Trials transitions between time periods will be accomplished by changing the classification filter map.

Q What should be my IPv4 MTU?

A The IPv4 MTU should be the standard value of 1500 bytes.  A 1500-byte IPv4 packet plus the 14 bytes of a standard (non tagged-VLAN) Ethernet header yields a 1514-byte Ethernet frame - the largest frame Mini Maxwell will forward - with the Ethernet CRC carried in addition.

Q What will be the Ethernet network interface bit clocking rate?

A The Mini Maxwell devices have Ethernet network interfaces that can run at either 10 megabits/second or 100 megabits/second (in either full or half duplex mode).

However, it is likely that at the event we will be running those network interfaces at 10 megabits/second.

This is a rate that is 10x higher than the highest anticipated bandwidth limitation that will be applied.

Most modern 10/100/1000 Ethernet equipment will transparently and automatically negotiate to this clock rate.  It is unlikely that this will cause any problems; however, in the unlikely event that a team has difficulty, this can usually be corrected using a readily available, inexpensive consumer grade switch as an intermediate device.

The reason for this limitation of Ethernet clock rate is twofold: A) to reduce the chance that a team might send so much traffic that it could overwhelm the bandwidth shaping device, and B) to induce pending traffic to back up in queues inside a team's own computers, so that the team has more options for dealing with situations where its software is sending traffic in excess of network capacity.

Q What about bufferbloat?

A Rate limitation and delay both cause packets to be held for a period of time.  That time, which is perceived as latency across the network, can cause what is called "bufferbloat."

Bufferbloat is not a flaw in the Mini Maxwell or Maxwell G - rather it is a common internet situation that must be anticipated and handled by protocol software in vendor devices.

Users should expect that when rate limiters or long delays have been configured, bufferbloat effects may be observed in the devices under test.

In particular users should anticipate that the delay due to rate limitation can grow so that several seconds could elapse between the time a packet is sent and the time it arrives.  The effect is the same if a large delay is configured.

Bufferbloat may have an amplified impact on TCP streams because of the various algorithms found in TCP stacks to handle slow start, congestion detection, congestive backoff, and recovery.

See http://en.wikipedia.org/wiki/Bufferbloat for more information on bufferbloat causes and effects.

Q Should I adjust txqueuelen?

A It is easy to forget that on congested or limited networks transmit packet queues can build up inside device drivers on our computers.  In some cases this can amount to many seconds worth of traffic.  Many operating systems allow the adjustment of device driver transmit queue lengths.  On Linux systems this is done via the txqueuelen parameter to the ifconfig command.  You may wish to investigate the literature on this subject and consider setting txqueuelen to a non-default value.
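
For example (a minimal sketch; the interface name eth0 and the value of 100 packets are placeholders for illustration, not recommendations for the event):

    import subprocess

    IFACE = "eth0"   # placeholder interface name - substitute your own

    # Read the current transmit queue length (in packets) from sysfs.
    with open("/sys/class/net/%s/tx_queue_len" % IFACE) as f:
        print("current txqueuelen:", f.read().strip())

    # Shorten the driver transmit queue; equivalent to running
    #   ifconfig eth0 txqueuelen 100
    subprocess.check_call(["ifconfig", IFACE, "txqueuelen", "100"])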

Q What is the interaction between rate limitation and delay?

A Suppose you set a rate limit of a million bits per second and a fixed delay of 250 milliseconds.  And then suppose that you present a load that is higher than that rate limit setting.

At first you may observe that your packets are being delayed by the anticipated 250 milliseconds.

However after a while you may observe that your packets are being delayed substantially longer than 250 milliseconds.

The reason for this is that if packets are arriving faster than the rate limitation setting then those packets enter a delay queue.  That queue drains at the configured rate limitation value.  As a consequence each packet spends some amount of time - delay - in that queue.  The amount of that delay depends on how much data is waiting in the queue ahead of that packet.

The rate limitation queue can grow to be quite long - it can hold as much as ten seconds worth of data.

So it is possible, and in fact very likely, that if the presented traffic rate is greater than the configured rate limit the limitation queue will grow and grow and grow - and the rate limitation packet delay will grow correspondingly.  (The queue will grow up to the point where it reaches its maximum limit and new packets are discarded because there is nowhere to put them.)

The overall effect of this rate limitation delay is that the delay component created by the rate limiter can become much larger than the value configured in the explicit delay setting.

So, returning to our example, that 250 milliseconds of configured delay can be overwhelmed by the delay caused by the rate limiter.  It isn't that the 250 milliseconds has disappeared.  Rather, it can be hard to perceive that 250 milliseconds after it is added to a significantly larger delay caused by the rate limitation queue.
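
To put numbers on that (a back-of-the-envelope sketch; the offered load of 2,000,000 bits/second is an assumed value chosen only because it exceeds the limit):

    RATE_LIMIT_BPS = 1000000     # the configured rate limit from the example
    FIXED_DELAY_S = 0.250        # the configured fixed delay from the example
    OFFERED_BPS = 2000000        # assumed offered load, above the limit
    MAX_QUEUE_SECONDS = 10.0     # the rate limiter queue holds at most ~10 s

    for elapsed in (0.5, 1, 2, 5, 10, 15):
        backlog_bits = (OFFERED_BPS - RATE_LIMIT_BPS) * elapsed
        queue_delay = min(backlog_bits / RATE_LIMIT_BPS, MAX_QUEUE_SECONDS)
        total = FIXED_DELAY_S + queue_delay
        print("after %4.1f s of over-rate sending: queueing delay %5.2f s,"
              " total one-way delay %5.2f s" % (elapsed, queue_delay, total))

    # Within a second or two the queueing delay dwarfs the configured
    # 250 milliseconds, which is why the fixed delay becomes hard to perceive.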