MEME 2.01



In this issue:


o Roto-Rooter for the Net: 1-800-DRAIN-ME
o Corrections (Simson Garfinkel corrects James Gosling)
o Announcements (I.D. Magazine 1996 Design Review, call for submissions.)

This issue of MEME is dedicated to Bandwidth, Blockages, Brownouts -- a close look at whether the Internet is teetering on the verge of collapse, overstuffed with too much data.

"In the last three months, traffic to our site has doubled," Mark Kosters tells me, "We have a root name-server at InterNIC, and the number of queries is now at 230 to 250 queries per second." The site Kosters is talking about lies at a major traffic center for the Internet -- Network Solutions, in Virginia -- they're the people who decide who gets what domain name (burgerking.com for instance, or bennahum.com), the closest thing the Net has to a center.

Kosters is worried; from his vantage point as principal investigator for the InterNIC, he must keep abreast of technical problems fouling the world's largest computer network. To Kosters, the Net appears stretched to the breaking point -- or perhaps "shredding point" is a better description: as you read this, there's a better than 50-50 chance that somewhere a major switching point, or node, on the Net is teetering on the verge of yet another brownout. "MAE-East," Kosters explains, "has a problem with congestion. It goes down quite often, once every couple of hours. Packets wind up getting dropped."

Translation: MAE-East is one of the major points where Internet traffic meets, gets re-routed, and is sent off one step closer to its destination. Located near Washington D.C., MAE-East is a kind of barometer for the state-of-the-Net. Every time MAE-East collapses under a tidal wave of data, packets of information simply disappear, swallowed into a black hole of inadequate bandwidth. For the user, the symptoms are subtle, often nothing more than an alert-box in your Web browser telling you that the host is "unreachable." We often assume that means a lot of other people are trying to access the same Web page -- that's one explanation -- another is that you've just experienced a temporary Internet brownout. Your packets just went down the drain.
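The drain metaphor is easy to simulate. A minimal sketch, with invented drop rates, of a congested node randomly discarding the packets that pass through it:

```python
import random

def route_packets(n_packets, drop_rate, seed=42):
    """Push n_packets through a congested node that discards each packet
    with probability drop_rate; return how many make it through."""
    rng = random.Random(seed)  # seeded so the run is repeatable
    return sum(1 for _ in range(n_packets) if rng.random() >= drop_rate)

print(route_packets(1000, 0.0))   # healthy node: all 1000 packets arrive
print(route_packets(1000, 0.3))   # brownout: roughly 30% go down the drain
```

In real life it's worse than the sketch suggests, because dropped packets get retransmitted -- adding still more traffic to the node that was already choking.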

When I first discovered this problem, I didn't really believe it. For instance, this fall, you may have come across this story, or one similar to it:

A major brownout on the Internet in mid-September was just a foretaste of the online downtimes to come, say local experts. "I anticipate it happening again," says Dan Benjamin, a local technology consultant. "It's just a matter of time." The brownout left many Internet users unable to use the World Wide Web or other Internet services such as E-mail. "It took down entire parts of the Internet -- it was severe," Benjamin says. "Nobody was prepared for it to happen. The hardware in place simply became too overworked. There was too much traffic."

That little blurb ran November 24th, 1995, in the Orlando Business Journal, a local Florida paper. Similar stories reared their heads in Inc., Internet Week, and Computergram International -- Internet brownout didn't exactly make for front-page news, and I was skeptical anyway. The theme of an overburdened Net returns from time to time to woo us with uncertainty, and like the proverbial "cry wolf" story, after a while you just want to say, "shut up." Conventional wisdom has it that the Net does not follow the laws of physics: there is no limit to how big, fast and dense the Net can become. Like Intel, with its magical semiconductors constantly doubling in speed and halving in price, we assume the stuff that makes the Net -- you know, the cables and routers and switches, the stuff 99% of us barely understand -- also grows like silicon wafers. Well. Maybe it doesn't. In fact, it definitely doesn't. And that spells T-R-O-U-B-L-E.

Culprits -- Sticky Webs.

When evaluating anyone's opinion, their background, or "provenance" (as the French say), matters most. Only a handful of people in the world can look at the Net from Kosters's vantage point. So I called him expecting him to refute these persistent rumors of Internet outages and brownouts. When he confirmed the rumors were true, I asked for culprits. Kosters went on to explain why the Net suffers from a bandwidth problem, beyond the obvious "it's growing" explanation. Culprit number one: the World Wide Web.

"The Web has taken everything by storm," Kosters says, "but it is inefficient." Here's the logic. When Tim Berners-Lee created the rules governing how HTTP servers would serve up Web-pages, back in 1989, the world of the Net was a nice cozy place -- a universe measured in the thousands of people, sort of like a big bulletin-board service. Back then, the Web that Berners-Lee envisioned didn't even have pictures, it was a text-only universe. The GUI (Graphical User Interface) way of computing was something for Macintosh users and computer scientists into graphics. Berners-Lee wanted to create a way for fellow physicists to share research papers, and yes, he did want a way for charts, tables and pictures to be included in the paper, through the use of "hyperlinks." In the context of this stable world, Berners-Lee had a choice, he could let a client-computer read a Web page bit-by-bit, uploading chunks of information and making it available to the human as it came in. Or he could have the client-computer download the entire Web-document in one big fat gulp (sort of like FTP), and *then* let the human see it. The latter is more efficient because the client "calls" the server less often, reducing Internet traffic. The former is better for the human -- you get to read the page as it is coming in, bit by bit. In an era where computer science was (rightly) learning to emphasize the human over the computer, Berners-Lee made the right choice, favoring people.

Can't fault Berners-Lee for that. He never imagined Time Warner, for instance, would create something called "Pathfinder," or that someone would create "Real Audio" -- letting people listen to radio broadcasts and sounds through Web pages -- need I mention the fish-tank, coffee-maker and soda-machine images beamed through the Web in real-time -- and that, oh, millions of people would be doing this in 1996. As they say, "oops." What's done is done. But that "inefficient" Web protocol did two things: it attracted lots of people because, as Berners-Lee hoped, it sure was "user-friendly;" it also clogged the drain big-time.

Culprit number two -- thin pipes.

So bandwidth is up. One solution, the obvious solution, is to make bigger data-pipes. By and large that's been happening. Companies valiantly (or rather profitably) come out with improvements all the time. Unfortunately, traffic on the Internet is growing an order of magnitude faster than the pipes. For now, the situation is getting worse, not better. Since people's eyes tend to glaze over when the words "router," "node" and "switch" enter the conversation, I'll keep it short.

"Routing hasn't scaled very well," Kosters says. He goes on to name a certain specific telephone company as being "on the bleeding edge," meaning that its Internet data-handling business teeters on the brink of perpetual breakdown, victim of poor engineering choices (can't say who, sorry, but no, it is not MCI). "The biggest problem for everyone," according to Kosters, "is the switches." The switches, those hardware devices that control the flow of data, just can't do it fast enough. The switches, which are part of the routers, mean the routers don't fare so well. "Everybody uses Cisco as their core routers," says Kosters, "there were software bugs that crashed the routers. That was a major cause of outages. MCI had problems with their routers in Denver, and that led to an outage between the East and West coast. Traffic increases means Cisco can't make these bigger switches soon enough. We are near the point of falling down, waiting for new technology to catch up."

Solutions -- Living in a sticky tub.

The Internet Engineering Task Force (IETF), another obscure Internet governing body, is working on these problems. They are responsible for setting the standards for the Net, like how many numbers make up an Internet Protocol (IP) address (mine is 198.7.7.184). They're examining what to do, which includes studying the Web to see if there is some way to alter how HTTP servers send and receive data to make it more efficient; that alone would make a big difference. Another proposed solution would meter Internet usage, charging heavy users more and light users less, in the hope that the magic hand of the market will put some logic into the system. Dream on. Unless every Internet Service Provider (ISP) everywhere in the world, by unprecedented government fiat, simultaneously agrees to do this, it will not work. All it takes is one mutineer offering "unlimited Internet service" for a flat fee to destroy this idea -- and odds are a whole lot more than one will do this (and even if they all agreed, the mathematical modeling of this solution is so complex that predicting the weather seems easier; we don't actually know for sure that traffic would go down this way). So what's a plumber to do? Get used to a sticky tub?
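The mutineer logic is just arithmetic. A quick sketch, with entirely made-up prices, of why the heaviest users -- the very ones metering is supposed to restrain -- have the biggest incentive to defect:

```python
def metered_bill(hours, rate_per_hour=2.50):
    """A hypothetical metered ISP: pay for what you use (invented rate)."""
    return hours * rate_per_hour

def flat_bill(hours, flat_fee=19.95):
    """The mutineer ISP: unlimited use for one price (invented fee)."""
    return flat_fee

light_user, heavy_user = 5, 100   # hours online per month
print(metered_bill(light_user), flat_bill(light_user))   # 12.5 vs 19.95: metering wins
print(metered_bill(heavy_user), flat_bill(heavy_user))   # 250.0 vs 19.95: the heavy user bolts
```

The metered ISPs keep the light users and lose the bandwidth hogs to the flat-fee mutineer, which is exactly backwards from what the scheme was supposed to accomplish.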

Seriously, what's to be done?

There is no clear solution to the Internet bandwidth problem. There is one comforting thought, however. Computer science has a long tradition of defying the odds, of coming out with solutions that confound all expectations. The pot of gold at the end of this problem is massive. Expect a solution to arrive from somewhere, sometime this year, that will once again redefine the limits of the Net and set the stage for 1997. (Any readers out there with ideas? I'm willing to devote an issue of MEME to them.) In the meantime, learn to "multi-task" -- get things done while waiting for your Web page to load. You can practice this by watching television, reading and talking on the telephone at the same time. It works.


CORRECTIONS

MEME 1.09, the previous issue, had an interview with James Gosling, lead architect behind Sun Microsystems' Java programming language. He said the following: "I created the EMACS editor. I did the original one for Unix. It is a text editor that has become really really popular on the Internet."

That, it seems, is open to debate. Simson Garfinkel, an MIT alumnus, author of several books on Internet security and encryption, and WIRED contributor, responded:

Date: Sat, 30 Dec 1995 10:02:25 -0500
To: davidsol@panix.com
From: simsong@vineyard.net (Simson L. Garfinkel)
Subject: info on emacs

James Gosling should know better than to say that he "created the Emacs editor". The first Emacs was written in 1975 by Richard Stallman. It was an extensible editor, and users were asked to contribute improvements; the whole package was shared as free software.

Emacs inspired over 30 imitations, including one written by Gosling in the 1980s. Gosling acknowledged this relationship at the time; his first manual asked users to help improve the program to make it worthy of the name "Emacs". Along with the name and much of the design, this idea of cooperation also came from the original Emacs.

The users responded strongly to Gosling's appeal, contributing many improvements. But Gosling did not follow this spirit himself; instead, he turned the program into a commercial product. In response, Stallman created GNU Emacs, a free program which has largely superseded Gosling's version. GNU Emacs stays true to the spirit of mutual cooperation of the original Emacs.



ANNOUNCEMENT -- PLEASE REPOST

I (davidsol@panix.com) will be moderating I.D. (International Design) Magazine 1996 Annual Design Review in Interactive Media, meaning I get to sit around with three judges and write-up their evaluations of the best examples of Interactive Media produced in 1995. Interactive Media means anything that requires a computer to experience it (Web pages, CD-ROM, software interface design, video games, etc., are all eligible.) What follows is the official blurb from I.D. magazine asking for submissions. Please forward it, repost it, to software designers, web designers, who might be interested in submitting work. Thanks.

I.D. (International Design) Magazine 1996 Annual Design Review Deadline: Feb. 1. Annual competition for design in the following categories: Interactive Media, Consumer Products, Graphics, Furniture, Environments, Packaging, Equipment, Concepts and Student Work. Projects designed or introduced in North America and Europe in the 1995 calendar year will be accepted.

Winning projects will be featured on a CD ROM and in a special double issue of I.D. Magazine in July. For information and entry forms: Design Review Editor, I.D. Magazine, 440 Park Avenue South, 14th Floor, New York, NY 10016, 212/447-1400 phone, 212/447-5231 fax, IDMag@aol.com e-mail.

The entry form can also be downloaded from the Macromedia website: http://www.macromedia.com/Brain/Id.magazine/index.html

Thank you.

Meme 2.01 and its contents copyright 1996 by David S. Bennahum. First spawned by Into The Matrix at http://www.reach.com/matrix/welcome.html. Pass me along all you want, just include this signature file at the end.



Direct comments, bugs and so on to me at davidsol@panix.com.