START VOL. 1 NO. 2 / FALL 1986

Silicon Soothsaying

A Pro's Perspective

BY TIM OREN

Prophecy, prognostication, and logical prefiguration. Tim Oren gazes into his digital crystal ball, scanning past computers to predict the future of the Atari ST and its line of software.

In 1963, the TX-2 was the hottest computer at MIT. One of the first machines devoted to a single user, it boasted a light pen and a vector display capable of showing up to 1,000 points at a time. It had less than 64 kilobytes of memory.

In 1968, an SDS-940 mainframe had 195 kilobytes of core memory, and shared its 96 megabytes of hard disk among 12 terminals, which were the first to be equipped with mice.

In 1973, the Xerox Alto had 128 kilobytes of memory, and one of the first bit-mapped displays. It stored 2.5 megabytes of data on a removable 15" cartridge disk, and was dedicated to one user.

In 1975, the Altair, the first hobbyist computer, was built by MITS. A company named Atari released a game called Pong. In the same year, Gordon Bell, a designer with Digital Equipment, predicted the birth of the Atari ST.

He didn't know its name, of course, but he graphed the memory size and cost of machines like these, and noticed that the price of hardware was dropping by a factor of four every three years. By his schedule, the Apple II appeared right on time in 1977. Since then, the rate of change has accelerated, resulting now in a one-megabyte machine for under $1000.

One result of this pattern, of course, is that you can buy more and more power for the same money. A 1040 ST with a hard disk has more raw power than any of the machines listed above.

Another interesting result is that we can predict the future of a series of computers by looking at what has happened with higher-priced machines. Similar predictions seem to hold for peripherals and programming as well. I am going to use this observation to do some uninhibited speculation and exhortation on what the future could and should hold for ST hardware and software.

Hardware trends are fairly easy to predict, though how they will eventually fit into Atari's product lines is anyone's guess! Let's start with multiple processors. The next logical step beyond one computer per user is many computers per user. Already, the many plug-in cards for PCs include MPUs such as Z80s or 6800s. Peripheral processors to handle I/O for mainframes have existed for decades, and in 1986 we are seeing the advent of massive parallel processing in supercomputers, with thousands of processors devoted to a single task.

The effect of this trend for the ST and other personal computers will depend on the availability of sophisticated CAD/CAM programs and fabrication facilities for custom VLSI (Very Large Scale Integration) chips. This means that the "extra" chips will be tailored for functions such as graphics or I/O bus control; examples include the Intel 82786, the Amiga's graphics chips, and the rapid appearance of EGA (Enhanced Graphics Adapter) clone chip sets. Atari's pre-announcement of a "blitter" chip for the next ST is an acknowledgement of the trend. Atari owns such VLSI design equipment, and has some of the best designers in the business.

The companion of multiprocessing is multitasking. (Multiprocessing devotes more than one chip to a program; multitasking lets more than one program run at a time.) Multitasking is already useful to the single user for print spooling and background compiles. In the future, it will become more important as the computer assumes the task of sifting through online systems and large databases for interesting material while the user continues to work.

Multitasking is supported on mid-range workstations such as Sun and Apollo, and is now appearing on single-user PC systems. While multitasking is largely a software problem, it requires some hardware support, in particular a memory management unit that keeps one errant task from destroying the others. The conflicting demands of multiple tasks also require hardware or software locking capability for peripherals and data files.

We can take it for granted that new machines will continue to have more memory. Four megabytes for a next-generation ST seems easily in sight. Developers will, of course, quickly consume it and ask for more. The way out of the cycle is to introduce something that has existed for years on mainframes: virtual memory. If I am using a VAX and need 200,000 pointers to articles in a CD-ROM database, I can simply say:

long int art_locs[200000];

because it automatically handles the paging of my address space from RAM to disk. If I am using an ST, I must write an entire memory management scheme from scratch to accomplish the same task. The use of virtual memory implies a whole new layer of page-fault detection hardware, optimal caching logic, and so on. It also mandates the inclusion of a fast hard disk as part of the system.
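To make the contrast concrete, here is a minimal sketch in C of the kind of hand-rolled paging an ST programmer must write today. The file name, page size, and function names are my own invention, and a real scheme would also need write-back, a smarter replacement policy, and error handling:

/* Hypothetical manual paging of a large pointer table kept on disk.
   Assumes the table is stored in "artlocs.dat" as raw long integers. */
#include <stdio.h>

#define PAGE_LONGS 1024L                /* longs held in RAM per page */

static long page[PAGE_LONGS];           /* one-page cache */
static long cur_page = -1L;             /* index of the page now cached */
static FILE *fp;                        /* open data file */

/* Fetch element i of the table, faulting its page in from disk. */
long art_loc(long i)
{
    long want = i / PAGE_LONGS;
    if (want != cur_page) {             /* our "page fault" */
        fseek(fp, want * PAGE_LONGS * (long)sizeof(long), SEEK_SET);
        fread(page, sizeof(long), (size_t)PAGE_LONGS, fp);
        cur_page = want;
    }
    return page[i % PAGE_LONGS];
}

int main(void)
{
    fp = fopen("artlocs.dat", "rb");
    if (fp == NULL)
        return 1;
    printf("entry 123456 = %ld\n", art_loc(123456L));
    fclose(fp);
    return 0;
}

Every application that needs more data than RAM must reinvent something like this; hardware-supported virtual memory would make the whole exercise disappear.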

PERIPHERALS

As the bandwidth of the basic microcomputer rises, the ability to transfer information to and from the user becomes limiting, and critically important. While using specialized chips to drive graphics and sound will help, other components are just as critical. (Bandwidth is the rate at which a processor or data channel can move information. Put simply, how fast can you push the bits?)

Any programmer who has complained about running out of screen space recognizes one of these limits. Space to display information is at a premium, but help is on the way. Upright, page-sized monochrome displays with 100 dot-per-inch resolution will be the graphics device of choice for the next generation.

Contrasting with the need for a tableau of information is the desire for a visually intense, natural presentation for color graphics. Experience with displays like the AT&T Targa indicates that the number of displayable colors is more important than resolution for such uses. Given this dichotomy, the ST's choices of monochrome or color modes can be seen as forward looking, not a retrogression. The either/or plug will have to go, though!

The most ruinous limitation in peripherals is the low bandwidth of user input devices. Conventional keyboards, joysticks, and mice have hit their limits, even given good user interface design. As evidence, I offer the continuing debate between the devotees of one-button and two-button mice. The subject of this religious warfare may be recognized as an example of "overloading," in which one symbol or action (point-click) is made to carry more than one meaning. Overloading is usually a sign of a severe design constraint, in this case, the limitation of the mouse to pointing in a two-coordinate plane.

To break out of this bind, input devices must have more degrees of freedom. There seem to be two ways to get this. One is to take the keyboard approach, relying on the flexibility and trainability of the human mind to indicate its desires. Along this path lie MIDI keyboards for music input and pressure-sensitive touchpads with gesture recognition programs and software-definable control areas.

The second approach is to instrument or observe the human body, and let it behave naturally, moving in a virtual space defined by the computer's display (which might be viewed on a video projector). Researchers at MIT and the University of Connecticut have taken the tack of observing the movements with TV cameras, and extracting the relevant features. Limits in pattern-recognition methods, and the costs of such processing and the associated video gear, probably mean that it will be at least two generations before the computer will be watching us watching back.

For the short term, a significant breakthrough is the announcement of Jaron Lanier's "Glove," a device that is able to take independent input from five fingers as well as three-dimensional positioning. (See Infoworld, June 9, 1986.) Not only does this more than double the possible input bandwidth, it has profound implications for the design of virtual 3D workspaces. The continuing development of miniaturized strain gauges, accelerometers, and nonintrusive biomedical pickups for military uses says that the Glove is just a starting point.

The third trend in peripherals is the continuing merger of the computer into other forms of media, as correctly foreseen by Alan Kay ten years ago. Sound and animation editors have progressed from toys on 8-bit machines to near professional quality in the current generation. The next generation will complete this trend, and begin adding hardware for integrating digital television into a total multimedia authoring and playback environment.

Already, computer power is being merged into a media standard as part of the Sony/Philips CD-I system. The ST's MIDI ports are an important toehold in this new world. Image scanners, frame grabbers, and genlock components will be needed next. Sorry, the oddball scan frequencies have to go, too.

Finally, the ability to access massive quantities of information will explode. CD-ROM for the ST, though much delayed, will become a reality, giving a desktop machine 200 times the data capacity of the Xerox Alto. It may also include the ability to retrieve sound tracks, computer graphics and video images.

The second part of this revolution, online communication and data retrieval, is well under way. Anyone who reads ANTIC ONLINE on CompuServe, uses a computer bulletin board, or subscribes to the Dow Jones news service is already participating. A shakeout in this market is inevitable. Victory may go to the services that are best able to integrate their information with other sources such as CD, and with the microcomputer user interface.

An important question is how ST owners will get all of these goodies. Atari isn't large enough to develop them all, so much will be determined by the ST's fate in the marketplace.

The market for computer hardware and software add-ons is beginning to resemble that of "bolt-on" accessories for autos and aircraft. Given a wide variety of basic machines, a manufacturer will produce for those with the greatest distribution, with suitable mountings for accessories, and standardization across models.

The first "bolt-on" machine in microcomputers was the Apple II. It was followed by the IBM PC, and now the AT. Apple moved up to the Macintosh, but forgot to put in the bolt holes. They are now correcting this oversight. The true competition between the ST, the Mac, the Amiga and other computers is for acceptance as a "standard airframe," worthy of software and hardware investment.

SOFTWARE

My choice of example hardware at the beginning of this article was not accidental. The systems described were used to produce:

  • Sketchpad, the first graphics editor, written by Ivan Sutherland in 1963.

  • NLS/Augment, the first "idea processor," developed by Doug Engelbart at SRI.

  • Smalltalk, the windows-style interface, digital typefonts, Markup, and a host of other seminal applications by the Xerox PARC team.

Although our hardware today is superior to each of those systems, our software is markedly inferior to all of them, including one over two decades old! What has gone wrong here?

Part of the answer is quite natural. Minds that can produce such works do not stay long in the same place. They move on to new ideas that can be implemented on the ever-evolving hardware base. The transition of the ideas to the next lower class of hardware is left to other programmers, other companies. If their vision is less clear, if their resources are limited, the results may be a pale imitation of the original.

The exigencies of business promote this, too. As the price of the base hardware drops, the price obtainable for its software drops as well. Since the effort to implement the software stays largely constant, a company must spread its costs over a large number of units sold, or it will be very dead, very quickly. This environment does not encourage risks on radical departures in software design, even if the concept has been proven on larger machines.

The same observation applies to hardware companies, since they have assumed the role of supplying operating systems and graphics software with their machines. The high prices of Apple computers are a direct reflection of that company's choice to invest in transfer and development of software technology. High development costs require high margins.

Atari is a different story, of course. When aiming for low price, you do not invent; you rent. The innards of an ST are a roadmap of the company philosophy. Virtually every part is off the shelf, except for a few custom chips carefully optimized to replace the maximum number of components.

The software is the same. GEM and the rest were off the shelf from Digital Research. Atari wrote hardware drivers, made sure that some key applications were in place, and shot it out the door. To expect software leadership from Atari is unrealistic. Those who complain about lack of such support are wasting their time. It is simply inherent in the company's structure and position in a low-margin market.

This would all sound rather gloomy, but there is a way out. Outside developers, professional and amateur, can fill the need for good software, and be rewarded for doing so. To avoid the problems I've talked about, we have to keep a clear eye on two principles: leverage and idea transfer.

LEVERAGE

Leverage is making sure that your efforts pay off more than once. When dealing with software, leverage is spelled T-O-O-L-S. If you build the proper tools, you only pay that development cost once when you begin working a vein of good ideas. The rest comes free. Here are a few things I think we're going to need.

Tools come in many flavors: languages, libraries, utilities, and design and prototyping aids, to name a few. The basic languages for the ST are in place: C, Pascal, and assembler, as well as lesser-known languages like Forth and Modula. What we need now are incremental compiling versions of these tools, with test runs conducted within the language environment. This is the way to break the interminable edit-compile-link-run-crash-debug cycle. Not only will this relieve frustration, but better applications will result from greater freedom to experiment.

Another big need is object-oriented extensions of the basic languages. ST implementations similar to the Macintosh's Object Pascal, the PC's Objective C, or Bell Labs' C++ would make it easier to implement good ideas borrowed from the Smalltalk world. The current ST models don't have quite enough power to run the most recent versions of Smalltalk itself, but this would be a good time to look at that language with an eye toward creating useful subsets, or transferring more features to other languages or applications.
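Until such extensions arrive, the flavor of the idea can be faked in plain C with structures and function pointers. This is only a rough sketch of the concept, not any particular vendor's object system, and every name in it is hypothetical:

/* Hypothetical sketch: faking a Smalltalk-style object in plain C.
   Each object carries pointers to the functions that implement its
   behavior, so the same "message" works on different kinds of objects. */
#include <stdio.h>

typedef struct shape {
    void (*draw)(struct shape *self);                  /* "method" slots */
    void (*move)(struct shape *self, int dx, int dy);
    int x, y;                                          /* instance data */
} SHAPE;

static void box_draw(struct shape *self)
{
    printf("box at (%d,%d)\n", self->x, self->y);
}

static void box_move(struct shape *self, int dx, int dy)
{
    self->x += dx;
    self->y += dy;
}

int main(void)
{
    SHAPE box = { box_draw, box_move, 10, 20 };

    box.move(&box, 5, -3);          /* "send a message" to the object */
    box.draw(&box);
    return 0;
}

A real object-oriented extension would add inheritance and hide this bookkeeping, which is exactly why language support matters.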

Reusable source programs for GEM started with the DRI DOODLE program, which was more of a stock of plagiarizable code than a true library. Since then, a good deal of modular code has been made available through the CompuServe data libraries, due to contributions from Atari, DRI, and other developers, and as part of my own columns on ANTIC ONLINE. A logical next step would be to collect, codify and extend this material and make it more generally available.

We also need a better skeleton application than the DOODLE program, which omits many important techniques. The best step would be in the direction of Apple's MacApp, which hides the basic event loops as part of system-level methods. Lacking the Object Pascal base on which to build such a system, a next best approach would be a standard top-level skeleton, with a source-code library of plug-in modules for such functions as drag recognition, window scrolling, and so on.
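As a rough illustration of the plug-in idea, the top level could shrink to a dispatch table that routes events to whatever handler modules the application registers. This is not Atari's or Apple's code; the event source is stubbed out and all the names are hypothetical, but the shape of the skeleton is the point:

/* Hypothetical top-level skeleton: the loop knows nothing about the
   application except a table of plug-in handlers.  A real GEM version
   would fetch its events from the AES; here the event source is a stub
   so the dispatch structure stands on its own. */
#include <stdio.h>

enum ev_type { EV_REDRAW, EV_DRAG, EV_SCROLL, EV_QUIT, EV_COUNT };

typedef void (*EV_HANDLER)(void);

static EV_HANDLER handlers[EV_COUNT];    /* the plug-in table */

/* Modules register themselves here at startup. */
void install_handler(enum ev_type t, EV_HANDLER fn)
{
    handlers[t] = fn;
}

/* Stub event source standing in for the real AES event calls. */
static enum ev_type next_event(void)
{
    static enum ev_type script[] = { EV_REDRAW, EV_SCROLL, EV_QUIT };
    static int i = 0;
    return script[i++];
}

static void do_redraw(void) { printf("redraw window\n"); }
static void do_scroll(void) { printf("scroll window\n"); }

int main(void)
{
    enum ev_type ev;

    install_handler(EV_REDRAW, do_redraw);
    install_handler(EV_SCROLL, do_scroll);

    while ((ev = next_event()) != EV_QUIT)   /* the hidden event loop */
        if (handlers[ev] != NULL)
            handlers[ev]();
    return 0;
}

With a structure like this, drag recognition or window scrolling becomes just another module dropped into the table, rather than code woven through every application.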

Linkers and debuggers are examples of tools I call "glue." A source-code-level debugger should be a part of any reasonably integrated development environment. Such a tool must also allow data to be examined in the context of its structural definition, not as unrelated bytes.


Leaving aside such long-range hopes, the single most useful bit of glue at this point would be a good incremental linker/librarian capable of handling both major ST object-module formats: Alcyon/Megamax and Lattice/GST. DRI's LINK68 is notoriously slow, and only handles one of these formats. A good implementation would produce a partially linked output, and then add any changed modules just before production of a loadable binary. That way you only pay overhead for the code you have actually changed.

The Resource Construction Set is the first interface design and prototyping tool for the ST. There are a number of directions in which this notion could be improved and expanded. One would be adding real-time demonstration features, so that an entire interface could be shown in action, as well as created. Dan Bricklin's Demo Program is an example of this concept, executed in alpha mode on the IBM PC.

The current RCS objects and editing rules mirror the capability of the AES library code. Graphics-code libraries effectively extend the AES with new capabilities. The creators of such libraries could also build an enhanced RCS which incorporated new objects and rules to go along with the new functions. If the program was made extensible, using such methods as template objects and rule tables, end users could continue to augment the AES themselves.
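One hypothetical way to express that extensibility is to drive the editor from data tables of object templates and editing rules, so that a library vendor or end user can add entries without touching the editor itself. The structures below are my own sketch, not part of the actual RCS or AES:

/* Hypothetical sketch of a resource editor driven by data tables rather
   than hard-coded object types.  New templates and rules could be added
   to the arrays without touching the editor itself. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;          /* object class, e.g. "SLIDER" */
    int min_w, min_h;          /* template geometry limits */
    int may_have_children;     /* a simple editing rule */
} OBJ_TEMPLATE;

static OBJ_TEMPLATE templates[] = {
    { "BOX",    8,  8, 1 },
    { "BUTTON", 16, 8, 0 },
    { "SLIDER", 8, 32, 0 },    /* an object a graphics library might add */
};

/* Rule check the editor would run before nesting objects. */
int allows_children(const char *parent)
{
    size_t i;
    for (i = 0; i < sizeof templates / sizeof templates[0]; i++)
        if (strcmp(templates[i].name, parent) == 0)
            return templates[i].may_have_children;
    return 0;
}

int main(void)
{
    printf("BOX may hold children: %d\n", allows_children("BOX"));
    printf("BUTTON may hold children: %d\n", allows_children("BUTTON"));
    return 0;
}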

IDEAS

In last issue's Perspectives, Dave Small quoted the programming axiom: "Steal from the best." I couldn't agree more. As you might guess, I find the pickings best when I look for ideas that have fallen between the cracks, or that haven't had time to appear on the latest machines. Another source often overlooked by "serious" designers is microcomputer games. There's not enough room here to do them justice, but the following are some highlights that I think deserve attention. And remember what The Great Lobachevsky said: "Plagiarize!"

Graphics Editing and Visual Programming - There are at least three good ideas in Sutherland's Sketchpad that haven't shown up on micros yet. One is the notion of active handles on the graphic objects, allowing them to be "glued" together at specified points. For instance, a resistor symbol would have two external points of attachment. There was also the ability to combine multiple objects into a group, effectively defining by example a new class of objects with associated handles. Finally, Sketchpad allowed the entry of constraints such as parallel, coincident, and congruent to be automatically applied to the graphical elements. A more recent example of a constraint-oriented graphics editor, with an improved user interface, is the Juno system developed at Xerox PARC.
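To show what carrying these ideas onto a micro might involve, here is a hypothetical set of C data structures for handles, groups, and constraints. It is not Sketchpad's actual design, and the one-shot "glue" operation stands in for a real constraint solver:

/* Hypothetical data structures for the Sketchpad ideas above: named
   attachment handles on graphic objects, grouping, and declarative
   constraints.  Not Sketchpad's design; the "glue" here is a naive
   one-shot substitute for a real constraint solver. */
#include <stdio.h>

#define MAX_HANDLES 4

typedef struct gobj {
    const char *kind;                       /* e.g. "RESISTOR" */
    int hx[MAX_HANDLES], hy[MAX_HANDLES];   /* attachment handle coords */
    int n_handles;
    struct gobj *group;                     /* set when combined into a group */
} GOBJ;

typedef enum { C_COINCIDENT, C_PARALLEL, C_CONGRUENT } CKIND;

typedef struct {
    CKIND kind;
    GOBJ *a, *b;        /* the two elements the constraint binds */
    int ha, hb;         /* which handle on each */
} CONSTRAINT;

/* "Glue" two objects by making one handle coincident with another. */
CONSTRAINT glue(GOBJ *a, int ha, GOBJ *b, int hb)
{
    CONSTRAINT c = { C_COINCIDENT, a, b, ha, hb };
    b->hx[hb] = a->hx[ha];      /* satisfy the constraint once, naively */
    b->hy[hb] = a->hy[ha];
    return c;
}

int main(void)
{
    GOBJ r1 = { "RESISTOR", { 0, 40 },    { 0, 0 }, 2, NULL };
    GOBJ r2 = { "RESISTOR", { 100, 140 }, { 0, 0 }, 2, NULL };
    CONSTRAINT c = glue(&r1, 1, &r2, 0);

    printf("r2 handle 0 moved to (%d,%d); constraint kind %d\n",
           r2.hx[0], r2.hy[0], (int)c.kind);
    return 0;
}

A genuine editor would keep the constraint list and re-satisfy it whenever an object moves; that solver is the hard part Sketchpad already had in 1963.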

Constraint-based editing is really an example of visual programming, in much the same way as PROLOG is a non-procedural way of implementing rule-based systems. Alan Borning's Thinglab program makes this aspect more explicit. There have been many attempts to implement procedural visual programming, with widely varying success. One of the most interesting results is a game for the Macintosh called Chipwits, which uses a visual language to program a simulated robot. For more pointers to visual programming ideas, see Myers' article on this topic, listed below.

New Ways to Think - Engelbart's NLS system was the earliest and maybe the best implementation of a hierarchically structured "idea processor." It incorporated several features we still lack, such as the ability to work cooperatively and mix video images of the participants with the computer output. For those who like a free-form tool, Ted Nelson's Hypertext ideas of linked dynamic documents are attractive. At KnowledgeSet, we have been building a Hypertext system based on static documents stored on CD-ROM. The work going on at Brown University is even more exciting because it combines several information sources in a dynamic hypermedia framework. One of the biggest upcoming challenges is finding meaningful ways to navigate in such massive databases.

Microworlds - Smalltalk is still the best platform for making microcosmic simulations because it provides explicit mechanisms for creating types of objects and defining the ways they interact. Since Smalltalk has not been practical on a micro, the most interesting microworlds have been created by game designers. Bill Budge's Pinball Construction Set was the inspiration for the Resource Construction Set, and is superior in that it animates the results of the construction. The construction set metaphor has proven very popular on the Macintosh, being put to such diverse uses as animation, music, and calculators. A recent extension is the association of "field rules" with areas on the display, causing objects placed into an area to conform to predefined constraints.


Going Media - We are just now getting decent ST programs for editing text, static graphics and MIDI sound. The Audiolite presentations, available by using The Music Studio and Paint Works from Activision, give some idea of the results possible when these capabilities are mixed in a limited way. One of the hottest possibilities with the ST is to create an editing and playback system which integrates all of these data types and adds animated graphics, or eventually images from a video source.

ENVOI

The ST is a machine with an incredible amount of potential. If you have been feeling cramped by your old 8-bit machines, kick back and enjoy your new freedom. There's no need for individual developers to rebuild the same old programs; there are plenty of commercial organizations who will take the safe road and do just that. It's time to raise our sights, and realize how far we can go with some new ideas and a little careful engineering.

REFERENCES

  • Sketchpad: A Man-Machine Graphical Communication System, Ivan Sutherland, Proceedings Spring Joint Computer Conference, 1963, pp. 329-346.

  • A Research Center for Augmenting Human Intellect, Douglas C. Engelbart and William K. English, Proceedings Fall Joint Computer Conference, 1968, pp. 395-410.

  • Personal Distributed Computing: The Alto and Ethernet Hardware, Charles P. Thacker, Proceedings ACM Conference on the History of Personal Workstations, Palo Alto, California, Jan. 9-10, 1986, pp. 87-100.

  • Personal Distributed Computing: The Alto and Ethernet Software, Butler W. Lampson, Proceedings ACM Conference on the History of Personal Workstations, Palo Alto, California, Jan. 9-10, 1986, pp. 101-131.

  • Toward a History of Personal Workstations, C. Gordon Bell, Proceedings ACM Conference on the History of Personal Workstations, Palo Alto, California, Jan. 9-10, 1986, pp. 1-17.

  • A Study in Two-Handed Input, William Buxton and Brad A. Myers, Proceedings CHI '86, Boston, Massachusetts, April 13-17, 1986, pp. 321-326.

  • Put That There: Voice and Gesture at the Graphic Interface, Richard A. Bolt, Proceedings SIGGRAPH 1980, pp. 262-270.

  • Artificial Reality, Myron W. Krueger, Addison-Wesley Publishing Co., Menlo Park, CA, 1983.

  • Personal Dynamic Media, Alan Kay and Adele Goldberg, IEEE Computer, March, 1977.

  • Smalltalk-80: The Language and its Implementation, Adele Goldberg and David Robson, Addison-Wesley Publishing Co., Menlo Park, CA, 1983.

  • The Programming Language Aspects of Thinglab, a Constraint-Oriented Simulation Laboratory, Alan Borning, ACM Transactions on Programming Languages and Systems, October, 1981, pp. 353-387.

  • Visual Programming, Programming by Example, and Program Visualization: A Taxonomy, Brad A. Myers, Proceedings CHI '86, Boston, Massachusetts, April 13-17, 1986, pp. 59-66.

  • Dream Machines, Ted Nelson, Hugo's Book Source, Chicago, IL, 1974.

  • A New Home for the Mind, Ted Nelson, Datamation, March, 1982, pp. 169-180.

  • Reading and Writing the Electronic Book, Nicole Yankelovich, Norman Meyrowitz, and Andries van Dam, IEEE Computer, October, 1985, pp. 15-30.

  • An Overview of Information Retrieval Subjects, Martin Bartschi, IEEE Computer, May 1985, pp. 67-84.

  • Generalized Fisheye Views, George W. Furnas, Proceedings CHI '86, Boston, Massachusetts, April 13-17, 1986, pp. 16-23.

The Music Studio
Paint Works
Activision
P.O. Box 7287
Mountain View, CA 94039


My thanks to the organizers of the ACM History of Personal Workstations conference, which was the direct inspiration for this article. - Tim Oren