Network Congestion And Net Neutrality: The Debate Over Bandwidth Utilization

The groundbreaking FCC ruling on Net Neutrality has continued to stir the debate over whether corporations or the government should be in charge of regulating use of the internet. Last week The Wall Street Journal published an article by Holman W. Jenkins, “The Gigabit Distraction,” which makes some misleading and “distracting” claims about network neutrality.

Wall Street Journal subscribers can read it here: The Gigabit Distraction. The article was also published at Silicon Investor.

IWL believes that the incumbent carriers, who have enjoyed regulated monopoly status for years, simply do not want to make the investment to upgrade their network infrastructure. Instead, they prefer that consumers continue to use the present infrastructure and continue to pay a high monthly fee. With traffic engineering, the incumbent carriers could deliver higher capacity (bandwidth) to a select group of customers and charge them more. However, the network neutrality ruling precludes them from doing so.

Jenkins argues:

When the BTIG Research firm last October began covering the Internet pipe operator Cogent Communications, its report contained an amusing insight. Cogent’s last-mile business customers buy a service that offers 100 megabits per second. The average use by these customers, though, is only about 12 mbps, and barely “one or two dozen of their customers have ever reached 50% utilization of the 100 MB pipe,” says BTIG.

Jenkins is attempting to make the point that the existing infrastructure meets the requirements of the overwhelming majority of customers, and only a small minority require more. The implication is that we don’t need network neutrality, because users are not even using what is already there! IWL believes these conclusions are misleading for several reasons:

First, there’s a difference between sustained bandwidth utilization and spikes in demand.

For sustained bandwidth utilization, while network operators may differ, in general a user should not exceed 50% to 70% utilization of a 100 Mbps pipe (the provisioned bandwidth provided by the ISP). Periodic spikes in demand will push the user to 80% or 90% utilization for short bursts of time. When those demand spikes occur, bandwidth is available[1]. However, if sustained utilization were consistently 90% of the 100 Mbps pipe, then random spikes in demand would exceed the available bandwidth, quickly creating an underprovisioned network; the user would urgently need an upgrade.
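To make the headroom argument concrete, here is a minimal sketch in Python. The link capacity, sustained loads, and burst sizes are assumptions chosen purely for illustration, not measurements from any particular network.

```python
# Hypothetical illustration of why sustained utilization must leave headroom
# for bursts. All numbers below are assumptions, not measurements.

LINK_CAPACITY_MBPS = 100  # provisioned bandwidth from the ISP

def spike_fits(sustained_mbps: float, spike_mbps: float) -> bool:
    """Return True if a short burst on top of the sustained load still
    fits within the provisioned link capacity."""
    return sustained_mbps + spike_mbps <= LINK_CAPACITY_MBPS

for sustained in (60, 90):          # 60% vs. 90% sustained utilization
    for spike in (15, 25, 35):      # short bursts of extra demand, in Mbps
        ok = spike_fits(sustained, spike)
        print(f"sustained {sustained} Mbps + burst {spike} Mbps: "
              f"{'fits' if ok else 'exceeds capacity'}")
```

At 60% sustained utilization every burst in the example fits; at 90% even the smallest burst exceeds the pipe, which is exactly the point at which the network becomes effectively underprovisioned.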

The article misleads the reader into thinking that bandwidth is a static, monolithic phenomenon, when in fact it is dynamic and ever-changing.

The article talks about serving an eight-ounce Coke in a 128-ounce cup, but that’s a false analogy: a user’s demand for bandwidth is not static like a glass of Coke.

Second, the term “bandwidth” has never been well defined in the industry and remains largely ambiguous. There’s no agreement on which bits are counted as part of bandwidth[2]. For example, do Ethernet header bits or CRC bits count? Certainly the carriers continue to obfuscate the terms by giving their service offerings names like “100 Ultra” that some users interpret as a bidirectional 100 Mbps connection. Keeping the user confused seems to be the goal.
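To show how much the choice of which bits to count matters, here is a rough per-frame accounting sketch. The framing sizes are the standard ones for untagged Ethernet II carrying IPv4/TCP with no options; they are assumptions for illustration, not a statement of how any particular carrier or tool counts.

```python
# Rough per-frame accounting for one full-size TCP segment over Ethernet.
# All sizes are standard textbook values, used here as assumptions.

preamble_sfd   = 8     # bytes on the wire before the frame proper
eth_header     = 14    # destination, source, EtherType
ip_header      = 20    # IPv4 header, no options
tcp_header     = 20    # TCP header, no options
payload        = 1460  # application data ("goodput")
fcs_crc        = 4     # Ethernet frame check sequence
interframe_gap = 12    # mandatory idle time, counted as byte times

wire_bytes = (preamble_sfd + eth_header + ip_header + tcp_header
              + payload + fcs_crc + interframe_gap)

print(f"application bytes counted by a payload-only tool: {payload}")
print(f"bytes actually occupying the wire:                {wire_bytes}")
print(f"payload share of wire capacity:                   {payload / wire_bytes:.1%}")
```

Even in this best case, roughly 5% of the link is framing overhead that a payload-only measurement never sees, and the gap grows much larger for small packets. Which side of that line a carrier’s marketing number sits on is rarely stated.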

At least with network neutrality we can open the door to demanding a clear definition of how bandwidth is tested and measured, and how bandwidth utilization is tested and measured.

Third, the carrier networks are the equivalent of one-lane dirt roads with potholes. Is it any wonder that no one makes high-quality, high-performance luxury automobiles to travel on a one-lane dirt road with potholes? This is a classic chicken-and-egg problem. Great products and services requiring a high-speed superhighway are possible, but if the product creators see only a low-capacity, inadequate Internet, why bother creating those products? Thus, the products are never created. Would network neutrality help get the superhighway built? Maybe.

We’ve seen the difference between HD videos from Netflix and YouTube delivered over satellite (beautiful quality) and the same videos delivered over the cable network (poor quality). Maybe that’s acceptable to a large class of users, but since they have not seen any alternatives, how would they know?

Fourth, Jenkins argues (somewhat opaquely) that Google has proven that consumers want choice in their Internet Service Provider:

To the extent Google is succeeding, it’s also because Google has been excused from numerous costs borne by the cable incumbent, including franchise fees, right-of-way fees and buildout mandates (requiring it to bear the cost of delivering service to neighborhoods where few or no customers are to be found).

Yes, we have heard this before. The poor, pitiful monopoly carriers had to bear some costs and fees in exchange for having monopoly status in their markets. They have since been compensated for that investment several times over through monthly fees, but now, with network neutrality, they won’t get to carve out a fast lane for clients who need more and will pay more, and so they are angry. The extra revenue with no investment will not materialize.

True, Google did not have to bear the costs that Jenkins describes; however, Google was not guaranteed monopoly status for any period of time. Google has to figure out a way to compete and recover its investment.

Finally, Holman W. Jenkins provides the following forecast:

Yes, Google is probably right that if you give Americans gigabit pipes, eventually they will fill them up. That someday-somehow-scenario, though, is not an economic basis for a massive fiber rollout today, which could only be accomplished without massive government subsidization. Ten Netflix videos running simultaneously wouldn’t even consume 4% of the capacity that Google Fiber provides its customers.
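For what it’s worth, the arithmetic behind that claim is easy to check under common assumptions — an HD Netflix stream of roughly 4 Mbps and Google Fiber provisioned at 1 Gbps. These rates are our assumptions, not figures taken from Jenkins’ article.

```python
# Back-of-the-envelope check of the "ten Netflix videos" claim, using
# assumed stream and link rates (not figures from the article).
hd_stream_mbps = 4      # rough bit rate of one HD Netflix stream
streams        = 10
link_mbps      = 1000   # Google Fiber's advertised gigabit service

utilization = streams * hd_stream_mbps / link_mbps
print(f"{streams} streams use {streams * hd_stream_mbps} Mbps "
      f"= {utilization:.0%} of a gigabit link")
```

The arithmetic holds, but it is the same static-snapshot reasoning we criticized above: it counts today’s streams against tomorrow’s pipe and ignores bursts, protocol overhead, and whatever new applications a gigabit link would eventually invite.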

Is Google more farsighted than the incumbent carriers and simply planning for adequate future capacity?

To be fair, Google does not have the curse of an installed base. However, we note that some carriers, like Verizon, have used natural disasters as an excuse to walk away from their responsibility to the installed base. After Hurricane Sandy, Verizon refused to re-install destroyed “legacy” equipment and forced customers to switch to its newer (and more profitable) wireless services instead.[3]

Consider the following predictions of demand for technology that turned out to be completely wrong:

  • In 1977, Ken Olsen, President of Digital Equipment Corporation (inventor of the minicomputer) said “There’s no reason for any individual to have a computer in his home.”[4]

  • In 1981, Bill Gates, President of Microsoft, said “640K ought to be enough for anybody” at a microcomputer trade show in Seattle.[5]

Of course the publicists for those individuals deny that they said such things, but the anecdotal evidence is strong. We think Holman Jenkins will be joining their ranks soon.

Footnotes:

[1] Spikes are not necessarily random. Many are, but many are a fact of the media. Many video technologies snap off a video frame every 33 or 40 milliseconds, and each of those frames may have to be carried as a train of packets – a fresh train launches down the pipe every 33 or 40 milliseconds. By the time those trains reach the user, they may be stretched out or overlapping (causing apparent re-ordering), or packets may have been lost.
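As a concrete sketch of how periodic frames become packet trains, consider a 30 frame-per-second stream at an assumed average bit rate of 6 Mbps; the frame rate, bit rate, and per-packet payload below are illustrative assumptions.

```python
# Illustrative numbers for how periodic video frames become packet trains.
# Frame rate, bit rate, and MTU payload are assumptions for the example.
fps         = 30        # frames per second -> one frame every ~33 ms
stream_mbps = 6         # assumed average video bit rate
mtu_payload = 1460      # bytes of video data carried per packet

frame_interval_ms = 1000 / fps
bytes_per_frame   = stream_mbps * 1_000_000 / 8 / fps
packets_per_frame = -(-bytes_per_frame // mtu_payload)   # ceiling division

print(f"one frame every {frame_interval_ms:.1f} ms, "
      f"about {bytes_per_frame:.0f} bytes, "
      f"carried as a train of ~{packets_per_frame:.0f} packets")
```

Under these assumptions each frame becomes a burst of roughly 18 packets launched every 33 ms – exactly the kind of periodic spike the footnote describes.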

[2] IWL has seen from iperf and other tools that there are different ways of counting bits. For example, the common tool does not count Ethernet, IP, and TCP/UDP headers; it counts only the effective data. This can mean that the number of bits reported substantially undercounts the actual number of bits sent down the wire. In addition, protocols such as TCP take care to avoid congestion. This means that TCP flows start off slowly and ramp up their utilization; they accelerate only until they perceive congestion, then they back off. TCP aims for efficient utilization of a link without inducing congestion, rather than stuffing every possible bit down a path and causing congestion. Moreover, networking is not a steady-state activity – data comes in bursts, and networks must have headroom to accommodate those bursts.
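A toy simulation can illustrate why a TCP flow never simply pins a link at 100%: the congestion window grows until the sender perceives congestion, then backs off. This is a deliberately simplified sketch (slow-start doubling plus multiplicative decrease only), and the path capacity is an assumed constant, not a model of any real stack.

```python
# Toy illustration of TCP ramp-up and back-off. All constants are assumptions,
# and real TCP behavior (ssthresh, congestion avoidance, timers) is omitted.
link_capacity_segments = 64   # assumed segments per round trip the path can carry
cwnd = 1                      # congestion window, in segments

for rtt in range(1, 11):
    sent = min(cwnd, link_capacity_segments)
    print(f"RTT {rtt:2d}: window {cwnd:3d} segments, sent {sent}")
    if cwnd > link_capacity_segments:      # perceived congestion (loss)
        cwnd = max(cwnd // 2, 1)           # multiplicative decrease
    else:
        cwnd *= 2                          # slow-start doubling
```

The point of the sketch is simply that throughput starts low, climbs, overshoots, and backs off – so a single instantaneous utilization reading says little about what the link, or the user, actually needs.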

[3] http://money.cnn.com/2013/07/22/technology/verizon-wireless-sandy/

[4] http://www.snopes.com/quotes/kenolsen.asp

[5] http://en.wikiquote.org/wiki/Talk:Bill_Gates
