MEME 3.01

Doug Engelbart: The Interview

We often forget that people conceived and designed the tools we use, and that a window or a mouse was not predestined to be the computer metaphor of our age.

In 1968, in front of 2,000 people, Douglas Engelbart introduced concepts that, for millions of people, now define the experience of using a computer. At the Fall Joint Computer Conference in San Francisco, he gave the first public demonstration of the "mouse," the "window," and the "point-and-click" metaphor. This "mother of all demos" reverberated around the world and permanently altered how we experience computers.

The most remarkable aspects of his vision were its clarity -- that computers could help everyday people work and think -- and the way he went about turning this idea into a reality.

Doug's projects were funded by the same agency that funded the creation of the Internet. You could say that he worked on the "front end" -- the software that would allow the network to be experienced by non-experts. To a large degree, he was successful: hypertext, the web, and the icons that you click on were all inspired by Doug's work.

Howard Rheingold's Tools For Thought gives some good background on Doug Engelbart, especially in Chapter Nine, "The Loneliness of the Long Distance Thinker."

Doug now leads The Bootstrap Institute. I spoke with him recently for an hour or so. We discussed his early days in computing, how he created his milestone metaphors, and whether today's world of web, Windows, and mice matches his expectations.

Here's the first part of a three-part interview with Doug Engelbart, in his own words.

David Bennahum: What was it about computing that drew you in? What had you read about computers that inspired you to think about them as a tool to help people think better?

Doug Engelbart: Well, I'd read some things beginning in '48, just semi-popular things about how digital computers were emerging and some of the things they could do. And then I'd read a book called Giant Brains. It was a popularized picture of what computers could do. You got the picture that a computer could read punched tape or punched cards, and likewise punch cards or paper tape, or print on paper. I had enough technical background to know that if a computer could print on paper or punch cards, it could make anything you want appear on a cathode-ray tube. And if it could read those things, it could read contacts and other instruments too, so it could enhance what you were doing. So it could interact with you: what you did on the keyboard (or any other things you could wiggle or push), it could compute on and show on the screen.

I saw that this could let people start exploring brand-new kinds of symbology and knowledge portrayal. Then I realized: If people are equipped with similar workstations (whatever I was thinking of calling them, terminals; I'm not sure), then they could be tied to the same computer complex and could collaborate in brand-new ways.

It didn't mean anything to me at the time whether people were ready to talk about that, or whether computers were ready to do it; that just became my commitment. So I quit my job and went to Berkeley, where they had a project to build a digital computer, and went into graduate school in the spring of '51.

DB: And you studied computer science. Or whatever they called it then.

DE: Yeah, it was just a freaky option in electrical engineering. I got a contract from ONR, The Office of Naval Research, to actually build a digital computer. So it was vacuum tubes and a magnetic drum.

DB: Did you start building things at that point, or was it still totally theoretical?

DE: It was just conceptual. We never got that computer going. The conceptual part -- I had to satisfy myself on doing some things with gaseous plasma discharge that were somewhat related to the way transistors work for my Ph.D. thesis. Then a faculty person from another department told me how advancement in a university environment goes. It's very much affected by your peer ratings, and also how you publish things. "And if you keep talking about something this wild and crazy, you will be an Acting Assistant Professor forever."

DB: So did you stop doing that?

DE: No. I left the university and went down to Stanford Research Institute (SRI), which was a separate not-for-profit, not really tied in with Stanford. I figured that if I could convince any management structure in the world about trying to do this, we could get the research going there.

DB: And your mandate at SRI when they first hired you was?

DE: Well: I had patents and such that came from my thesis, and having proven that I had done something, I thought I should sidle up to some research program that's going on there, and see if I could start working with them. So I did, and I got some more patents in the next few years.

DB: Was this in computing?

DE: It was a new kind of magnetic digital componentry. Working computers were not yet available. I went to SRI in '57. By '59 I had been thinking and talking about this to people quite a bit, and what I had persistently run into was that people would map it into their own ... well, later I learned the term "conceptual framework." Today they'd call it "paradigm."

The psychologists would map it into the world of psychology (cognitive psychology was just beginning to emerge): "Well, son, there's a lot to the brain and all that, and you're just an engineer!" And other people listened to me and said, "Well, we're information retrieval specialists, and that's all you're talking about, using the computer for information retrieval." Much as I tried, I couldn't change it.

So I went and got some research money from a specialty branch of the Air Force research agencies. I said, "Here's the thinking I've got, but somebody has to put it together." I had read a really interesting report from the Rand Corporation which said that if you're going to do a multiple-disciplinary thing, the people you bring in from the different disciplines will each bring an individual framework; what you have to do is build a common framework for them all, so that they can work together. So I started working on what turned out to be that "Augmenting Human Intellect" thing that came out several years later.

DB: Right. The title was "A Conceptual Framework For the Augmentation of Man's Intellect."

DE: Yeah. Or "Augmenting Human Intellect: A Conceptual Framework."

DB: October 1962. And this was the proposal that eventually led to the IPTO money and the creation of your lab, the Augmentation Research Center. Can you tell me a little bit about that story? My understanding was that you had come up with this paper, and Bob Taylor was partly responsible for pointing the people with the pursestrings toward you. Is that true?

DE: Well, he was interested, and he was at NASA at the time. But Licklider helped me get started initially. He was the first director of the IPTO office at ARPA. He's the one who started giving me money. Taylor gave me some from NASA the next year, which helped.

DB: Do you know how Licklider came across the paper?

DE: Well, I was there knocking on his door.

DB: What did you do? Did you go to Washington to meet with him?

DE: Essentially sent stuff in to say: "Here's what I'd like to do, and here's the report, and here's a proposal."

DB: From what I've heard about Licklider, it seems he was pretty open-minded.

DE: Yeah. He had written a paper called "Man-Computer Symbiosis." My proposals had struck out a number of times among the government agencies. One of them said they'd send a site-visiting committee, and they later sent back: "We think it's a very interesting proposal you have, but since it would involve quite a lot of advanced computer programmers and since you're way out there in Palo Alto where there aren't any advanced computer programmers, we can't justify giving you the money."

DB: But when Licklider saw the paper, he felt like you had a proposal that matched his vision in some sense?

DE: Since he was advertising what he wanted to do, he really had to give me a try. But since it was "way out there in Palo Alto," as the saying goes, I'm pretty sure, from things I've heard from some of his compatriots at the time, that he thought it had a low probability of maturing because it was "way out there, but what the hell; give the kid a chance."

DB: How much money did they wind up giving you, do you think, during the course of the project?

DE: I think there was somewhere between $13 and $15 million over the next 12 or 13 years.

DB: That seems like a fair sum.

DE: Well, it started out slow and bumpy, but by 1970 it got in gear. And then suddenly in 1976 it was judged that we were going on the wrong path. It was suddenly removed.

DB: If you were to recreate the project now, how would you describe what you proposed and why they funded it? What was your idea?

DE: The proposal was to start building an interactive system that we'd use for our own work, and then to evolve it. And the hypertext concept -- that was something Ted Nelson named years later. But I think what he was talking about is just linkage. From that Framework paper the concept emerged that you don't augment people with technology alone, but with technology and language and methods and skills and all sorts of things. You have to look at the whole system, if the technology part is going to erupt as much as it is. So this led me to thinking, well, there are a lot of surprises on the human-systems side. Where would we aim to explore and dig out first? It just seemed to me to be language -- the ways of externalizing your thought processes in symbolic form. And what are the new things you can do with that with the help of computers? You can structure it; you can do all kinds of things, including extensive citation interlinkage.

David Bennahum: How did you first come up, say, with the idea of a mouse? What was the logic there?

Doug Engelbart: Well, it was knowing that you're going to be sitting there looking at displays -- which wasn't taken for granted in those days, because they were so very expensive. But my assumption was that if a use was found for them on the computer, the price would come down rapidly. So: you're sitting there, and you want to tell the computer which objects on the screen you want to do something with; you need something to select them with.

There were big wars. For some people the light pen was the accepted thing, and for others the tracking ball. Somehow neither one of them seemed to fit. We tested the different devices. Someplace in my notes there was this XY sort of thing. So we built it and put it in the tests.

DB: What about the Window? How did that idea come about?

DE: Well, that's pretty straightforward. You're sitting there working and just thinking: How can we do things differently? We have capability here that's different from the old documents we had, so you can say, "show me only the first two levels on the first line of every paragraph" or "let me see multiple windows so I can look at documents in different ways in different windows, and edit and cut between them" and the like.

DB: Were icons also invented at the lab? Or was it Xerox PARC, I guess, that came up with that idea?

DE: Well, I heard people talking about it even before PARC got going. But I went through the thinking about menus and it seemed to me that yes, if you could make menus, it would make things easier. But since we assumed we would have limited resources to go after how you really augment humans, I said, "Look, what I want to do is find out how you get power to humans." So things like that coding key-set, you'd look at it and say, "In the time it takes me to go up and pull it and then go down and click on something, I could have entered two to six characters with my left hand." And no menu selection is worth that much information.

On the other hand, during the time my cursor is moving up to select an object, my left hand can be telling me during that transit time what I want it to do. So by the time you get there, you're ready to click and execute. So we sort of disdained the menus to go with finding out how you could get real power.

DB: And it was sometime in 1968 that you had a system that was sufficiently robust that you were able to show it to people?

DE: Yeah. That was actually our fourth machine.

DB: And what you had was a series of computers that were networked locally in the room, and you could go into this one conference room and work together?

DE: No. We had one timesharing computer, and then we custom-made ourselves a display system so that it could drive and support up to 12 CRT displays. These were video-driven from the computer room -- we could run co-ax out to our lab and drive the workstations. That meant that when it came time to give this presentation, we could lease two video lines up to the city and send up video of our screens or video of the people at the workstations; then, up in the city where the presentation was made, we could also mix it with cameras from the stage.

DB: People say that the 1968 Fall Joint Computer Conference in San Francisco was a watershed. After seeing your demonstration, people left that room never thinking about computers the same way again. Would you say that's an accurate encapsulation?

DE: It was really giving them a new image. What I was really hoping was that the world would turn and say, "Wow! Really interactive display-oriented stuff with high power and all of that. It's really going. Let's all get going on it." It wasn't, though.

DB: It sounds like an amazing moment: not only did it succeed in redefining people's image of the computer, but it also showed a lot of people for the first time that computers could be intimate associates in everyday work, and not necessarily cold, calculating mathematical machines. I heard that Stewart Brand helped do the lights.

DE: He came and volunteered at the lab. He was a friend of ours in those days. We borrowed some tripods; each of our video display generators actually used an industrial video camera mounted in front of a small CRT in the computer room. We took some of those cameras off, and we needed cameramen, so we conscripted Stewart as one of the volunteer "cameramen." He was down in Menlo Park running cameras during that time.

DB: It seems like there was a cultural connection in which people like Stewart who were associated with the counterculture suddenly got involved in computers. I'm wondering if that struck you as strange.

DE: No. It just didn't register with many people in the "ordinary culture." The thing I've learned since the mid-fifties -- and it's terribly important for society to realize -- is how much the prevailing paradigm affects and limits the way one perceives what the future can do. Year after year after year we ran into this. The system that matured in the seventies was solid enough that when we got our research shut down, we could take it out into the commercial world. Between then and '89, all over the world, we had networks supporting something like 20 mainframe servers -- all kinds of capabilities still not duplicable in the web.

DB: This is the Tymshare system?

DE: Tymshare's Augment system. And McDonnell-Douglas bought Tymshare, so we got some very large application domains, and we had a chance to really get aerospace people to perceive this, and look at the scalability issues and the interoperability issues and all that. We could really show how you could redo the whole wave and do something, but somebody up the line would have to sign off on it, and some expert would tell them, "If IBM, DEC, Hewlett-Packard aren't doing anything crazy like this, I don't think you ought to gamble on it." We ran into that year after year. It's the World Wide Web explosion that has changed the view now. That's made a lot of difference.

DB: I've had this image in my mind: what we have now, with the web and the Internet and the personal computer with the graphical user interface and the windows, is an implementation, thirty years later, of what you demonstrated in 1968. It's taken this long to get here, and it may not be as sophisticated as what you proposed. The web exploded in 1994: that would be 26 years after your demo. It's amazing that it took us this long.

DE: Prevailing paradigms take a long time to shift. What really accomplished the shift was free access to Mosaic and the simple form that HTML took -- so that it could be replicated all around, and get enhanced, and people could start getting experience. That changed the profile. But in terms of the sorts of profound changes in the way people think and work together that are yet to evolve, this still is a minor paradigm shift, because they still get this thing over and over again where we can sort of automate what we used to do instead of looking at really doing things differently.

DB: So, in a sense, when you look at the web and at network computers online, there's still a big gap in your mind from really augmentation of our intellects and what we have now.

DE: It's a great start. The gap is in people's perception of where it's going to go. I feel that this technology will actually cause a larger-scale change in our society than anything since maybe the transition to agriculture.

DB: Why do you think it's that significant?

DE: I make an analogy: Look, you've got these funny organisms. They are social organisms called human organizations, and they've been puttering along with very weak interconnections all through the years. You can talk; you can wave your hands; pretty soon you can write; then you can print; then you can duplicate with Xerox machines, and so on. Suddenly, the digital computer and the network come about; they provide an improvement in what you could call the organizational nervous system. That's a huge step. It's like a mutation that's just fantastic.

All right. What is likely to be the evolutionary path of these social organisms? Their frame, their structure, everything is going to change. Massive changes throughout society. When geographical boundaries mean so little to your ability to cooperate, what are political/geographic boundaries going to become in the future? Suddenly you appreciate the stresses on the old geopolitical structures, not to speak of the economic structures, that will arise.

DB: Is this change inevitable? Is there something we have to do to make it happen?

DE: The change rate is inevitable.

What I call the Collective IQ can rise a great deal. If we pursue its expansion, it may help us weather the other changes that can cause dislocations and trauma of unprecedented magnitude -- trauma and stress to our society that may be more than our political-economic structures can handle.

DB: I'm wondering what role you see computer scientists or designers playing in all this. Do we need to reconceptualize some of the tools that we're using now to help this along?

DE: We have structural builders. And people who can build elevators. But what we really need is architects.

You have to architect in parallel the human systems of roles and skills and knowledge and language and organizational structure. We have the opportunity to redesign all of that. We can say: look, it's those things concurrently; it's the coevolution of those two sides we have to pay attention to, not just let the technology developers and vendors steer us along. It's a terribly important social outcome.

DB: Almost all of the breakthroughs in computer science have come not from the technology vendors but from people working way out there on their own research. Then, eventually the work is taken over.

DE: I've come to realize that the marketplace, which everyone takes for granted as the rock-solid way to let things evolve, is maybe the right way if you're trying to get cheaper and better products out, but it's not necessarily there to serve what society needs. If society really needs to improve its Collective IQ, we ought not to look for leadership from the vendors.

DB: The history of your research, the research under ARPA, and the research that took place in the universities, seems to show that all this innovation came in part through government-sponsored funding -- not from a competitive market.

DE: Right. Well, look, the personal computer erupted on the world in '83-'84 and made a huge impact on social awareness and everything. But networks had been going for a long time by then. ARPAnet got moving in 1970. The personal-computer industry ignored it until the World Wide Web hit; then, it had to respond.

Doug Engelbart: Let's assume that the vendor world has to be driven from profits and market share: when you find the right hill you're going to climb, it can help you climb better and better towards cheaper and better product. But, hell, it's not going to be what helps you find the right hill. What you have to have is real experimental working groups. This is going to take a new kind of institution. So we ended up formulating -- for the last thirty years -- this bootstrapping consortium.

David Bennahum: The Bootstrap Institute.

DE: The Institute is there to say: "Hey, people, what you really need is a bootstrap alliance and the right kind of consortium." There's beginning to be movement about that now, which is very exciting. We're actually getting a not-for-profit corporate setup for a Bootstrap Alliance; a formulation for it and how it's going to work; and a conduit through which member organizations can participate.

DB: So Bootstrap is a kind of private initiative to recreate the kind of research and development that's no longer funded by the government?

DE: It's the kind of thing that's pretty hard to fund directly by the government. It's sort of like saying: if you want to find out how you can colonize the floor of the ocean, the government will subsidize quite a lot of the endeavor. Pretty soon, if you're serious, you've got to go into combination with the government -- because it gives the subsidies that support it. But someplace, you've got to find communities of intrepid people who'll say: "We're ready to go down there and start living now."

DB: Is that what happened in the late sixties at the Augmentation Research Center? -- did you guys totally live this system?

DE: That made a big difference. Then in the mid-seventies we started saying "Hey. Now we're ready to reach out to other organizations and start giving support to those who want to change" -- that's the way the Alliance is planned to be: it's there to find how you can give the best support to member organizations trying to improve their capability. Our web site has an increasing number of things that elucidate the stance and the goals, and some of the dialog that'll be on Electric Minds will be helping to develop those thoughts.

DB: What role do you see for everyday people in the attempt to design these tools for the future? Is that something we can all participate in? Or is it mostly up to the scientists?

DE: It depends on what kind of scientists you're talking about. But to my mind, it's a lot more pragmatic than that. If you really start looking at the way people can shift the way they think and work and collaborate, you might say: "If we had skills and new methodologies and new conventions between us like this and this and this, we could employ tools that would do this and that" -- for which there's no market yet today, because there's nobody doing this and that.

DB: What's one of the biggest things that's missing right now from the web and from our computers? If there was something you could fix tomorrow, what would you do?

DE: There are several categories here. One category is the things that we evolved in the Augment system. We found they were very, very valuable, and yet they apparently haven't shown up.

Look: it's stupid to have a separate editor and a separate browser. From the beginning, we envisioned integrated editing and browsing. Then you say: the moment I create any fragment of a document, I want to be able to let people point to it. So I want every object in a document, at every stage of its evolution, to be addressable by a link-server, so you could cite it and talk about it from anyplace.

DB: So in a way it would be like having a domain name-server that would name every document separately.

DE: That's just a start. Every object in a document intrinsically has a name and an address. Several categories of addressing came about. One: each object was given an independent identifier number as it was created; no matter where it was moved in the document, it could still be addressed by that number. Another came from learning that there was a lot of payoff if you structured the document explicitly -- basic hierarchy was very valuable.

In any event, a document would also have a location indicator. So you could use either one. If the document was still evolving, the location was a little bit shaky. So then we said, there ought to be all sorts of optional ways in which to view a document. This is something where WYSIWYG was totally in the wrong direction. You don't want it to look like what it does on paper. Maybe that's one of the views you want, but you want optional views. The very simple ones, like folding up at the first line of every paragraph: boy, that was really neat! And you said: well, also I could control according to how many levels I want; then I can also filter by content. You could say these things have properties that I may want shown or not shown. These are other optional views.
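The two addressing schemes and the folded "first line of every paragraph" view Engelbart describes can be sketched as a small toy model. The names and structure here are illustrative assumptions, not Augment's actual implementation:

```python
import itertools

class Statement:
    """One statement (paragraph) in a hierarchical document.

    A toy model of the two Augment addressing schemes: each statement
    gets a permanent identifier at creation, and a separate location
    address falls out of its position in the hierarchy.
    """
    _counter = itertools.count(1)

    def __init__(self, text, children=()):
        self.uid = next(Statement._counter)  # permanent: survives moves
        self.text = text
        self.children = list(children)

def locations(stmt, path="1"):
    """Yield (location-address, statement) pairs such as '1', '1.1', '1.2.1'.
    Location addresses shift if the document is restructured; uids do not."""
    yield path, stmt
    for i, child in enumerate(stmt.children, 1):
        yield from locations(child, f"{path}.{i}")

def folded_view(stmt, max_levels, level=0):
    """An optional view: only the first line of every statement,
    down to a chosen number of hierarchy levels."""
    if level >= max_levels:
        return []
    lines = ["  " * level + stmt.text.splitlines()[0]]
    for child in stmt.children:
        lines += folded_view(child, max_levels, level + 1)
    return lines

doc = Statement("Plan", [
    Statement("Goals\nRaise the collective capability."),
    Statement("Methods", [Statement("Tools\nIntegrated editing, links.")]),
])

# Either address reaches the same object; only the uid is stable over edits.
by_location = dict(locations(doc))
assert by_location["1.2.1"].text.startswith("Tools")
```

A viewer can then combine both: fold to two levels for an overview, and follow a uid link to the same statement even after the outline has been rearranged.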

The journal system that we built was very powerful. It circumvented the issue of somebody saying: "Yes, I can cite it, and yes, it would be very nice to cite each passage of something where I want to talk in detail about it and discuss that with you. But what if you modify that document or it goes away? Then my document that points to passages in yours will look stupid, because people will need the reference, and I didn't supply a great deal of context because you could go look." We figured that it was important to be able to consign documents to a frozen, published state, and we had this publication environment called the Journal, where you could set up any number of journals, each of which was like a library that guaranteed it would give the document a unique identifier forevermore; and forevermore, a link to that library would get you that document as it was published, with the date, hour, and minute of publication on it!
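The Journal's guarantee -- publish once, get a permanent identifier, and have a link forever resolve to the frozen snapshot with its timestamp -- can be sketched roughly like this (a hypothetical toy model, not Augment's code):

```python
import datetime
import itertools

class Journal:
    """Toy model of Engelbart's Journal: publishing freezes a document
    and assigns a permanent identifier; later edits to the original
    never change what the identifier resolves to."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._archive = {}

    def publish(self, document: str) -> int:
        """Freeze a snapshot and return its forever-valid identifier."""
        uid = next(self._ids)
        self._archive[uid] = {
            "text": document,  # frozen copy, never mutated
            "published": datetime.datetime.now(datetime.timezone.utc),
        }
        return uid

    def resolve(self, uid: int) -> dict:
        """A link into the Journal always gets the document as published."""
        return self._archive[uid]

journal = Journal()
draft = "Augmenting Human Intellect: draft text."
uid = journal.publish(draft)
draft = "Augmenting Human Intellect: heavily revised."  # the original moves on
assert journal.resolve(uid)["text"].endswith("draft text.")
```

The point of the design is that citations bind to the snapshot, not to the living document, so a link never "looks stupid" when the source is revised or deleted.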

DB: That's crucially important now, because on the web stuff disappears and evolves. You can't really have an established research base or dialogue because the context keeps evaporating beneath your feet.

DE: We installed that system in 1970 -- the first year we had our email with hyperlinks in it. It took a year or so for people to feel comfortable using it, but boy, it just got so important.

So there are still things in our web site, from 1970 on, that have journal numbers we put there. When we put them out, every paragraph can automatically have its location number target tag on it. Those are just a few of the quite different things we implemented.

A paper, "Authorship Provisions in Augment," gives a lot of those details, and there's another paper called "Collaboration Support Provisions in Augment" that also describes the Journal.

DB: What is your impression, right now, of where the web and the Internet might be heading in the next few years?

DE: I think it's headed for a spiral of increasing utility and utilization. It's going to be that new social nervous system, for sure.

DB: What would Licklider make of all this?

DE: Oh, he'd be delighted -- he used to talk about the Intergalactic Network.

DB: After he visited your lab, Licklider wrote this paper with Bob Taylor, "The Computer As A Communications Device." It predicts that by 2000, we'll have this online community -- this world of people who are online -- and the big issues will be privacy and security: who will and won't have access.

DE: We had one big difference in all of this that surprised me terribly, in about 1976. He had come back to ARPA and was reviewing what we were doing. I was telling him: we had this system now, on a server that was commercially run for us, supporting customers out there, and we had actually recruited and trained a set of young women with liberal-arts educations who would be the facilitator-trainers out there in the field. That upset him badly.

He said, "You've just admitted that your system is no damn good." He believed that if the system were designed appropriately, it would teach humans all they needed to know to harness it. And we couldn't get him to say, "Well, at what time in the future do you think that will be the case?" In every installation we'd put in so far, you had to adapt and learn and adjust both how the system worked and how the people worked in order to comfortably get them started. He was adamant. He felt we had failed.

DB: Because it required too much training to use the system?

DE: Because it required any humans out there to help train. The computer would be so smart. One of the paradigm things that delayed current usage a lot was a belief that started in the artificial-intelligence world that you'd be able to understand human speech and all kinds of stuff -- how humans are problem-solving -- and make a model. The computer would watch the human interact with it for a while, and make a model that could adapt to the human.

DB: People still cleave to the "intelligent agent" metaphor. It seems a similar idea: that somehow these agents sense what we want and the computer adapts to our needs.

DE: Smart agents are going to play a good part in the future. But it's like automatic pilots in airplanes. They play a role, but they don't yet take off and fly the plane.

DB: There's a core debate in computer science that's been around for a while. Some people (I guess Licklider was one of them) had this idea that eventually computers would be very intelligent beings, able to anticipate our needs. The other side thinks that will never happen. The issue is the interface between the person and the machine, more than the hope that the machine will ever become able to think for itself or anticipate your needs.

DE: Well, there is the sort of in-between place where I fit. I say: there's no way of saying that computers won't get as smart as we are, or smarter. But if we as humans want a life and a society of our own, we can't turn it all over to them. We have to augment ourselves as much as we can, so that we can shape our own destiny. That has been my goal all this time.

I tell people: look, you can spend all you want on building smart agents and smart tools, and I'd bet that if you then give those to twenty people with no special training, and if you let me take twenty people and really condition and train them especially to learn how to harness the tools, the people with the training will always outdo the people for whom the computers were supposed to do the work. To learn what high-performance human teams can do is, I feel, one of the really salient challenges to which we should give a lot of attention and focus.

So much has come about. A lot of it is residue from the artificial-intelligence image, but a lot of it is in the marketing world. The idea is that simple and easy-to-learn-and-use are important when you're selling to someone new. But I tell everybody, "Hey, look: if you really believe that, I'd like to see the tricycle that you ride around on. Because you'd never have learned to ride a bicycle." The value of learning special skills in order to harness some artifact -- a bicycle, skis, a skateboard, a sailboard -- those are important examples of what you can do if you coevolve your skills with what the technology can provide.

DB: I guess the tradeoff is that, down the line, a tool may not be that useful when you've mastered it.

DE: Look, we can change the system so it requires more training. You don't want to make it harder to learn than is necessary, but you don't want to limit it. I use a little example: "That's like saying, 'I'm going to design my whole automotive transport system -- cars, the way they're controlled, the way our highways are, the rules of the road, everything else -- in accord with the views that people had about such a system in 1905.'" Think of merging onto the freeway; you're going 60 miles an hour, and you've got to check over your shoulder, and keep checking in the rearview mirror and side mirror as you're merging, keeping track of stuff. If somebody in 1905 had said, "Well, yeah, drivers will be doing that every day," they wouldn't have been believed. In the first place, no one would have believed that people could handle stuff more than 30 miles an hour, and nobody used mirrors for anything like that. And if you said, "Women will do it too," you really would have been laughed off. So I'm trying to get people to say: "Look, let's start putting some teams together and really compete to get high-performance teams together." Then we'll find out the kind of skills that people are willing to learn.

DB: I think our conversation will give people a much better sense of how the tools they use came into being, and also of what some of the challenges are facing us right now.

DE: I appreciate the opportunity.

Anyway, we built all that.

Go to part two.

You can join others in discussing Engelbart's work, and this interview in the InterMinds section of Electric Minds. The topic area is called Engelbart, The Long-Distance Thinker. I am the host of the InterMinds Conference Area. In order to join the discussion, you will need to register with Electric Minds (free), and this can be done by clicking on the "Join" button on the bottom left-hand corner of the frame in the InterMinds discussion area.

MEME and its contents copyright by David S. Bennahum. Duplication for non-commercial use is permitted. Contact me if you have questions. Direct comments, bugs and so on to me at