Hello, Mr. Chips; the word bit, or straying off the data path. (microprocessors) John J. Anderson.
In the beginning there was the vacuum tube, and with that innovation electricity got its first real chance to become electronics. Circuit complexity translated into bulk, however, and if you wanted that new-fangled toy called a computer, you needed a building to devote to it, and the riches of Croesus to acquire it and keep it healthy.
The vacuum tube begat the transistor, and we saw it was good. Circuits of greater complexity could be designed more reliably, cooler, and in much less space. Central processing units (CPUs), the brainstems of computer circuitry, shrank to the size of mere refrigerators. And prices came down.
The transistor begat the integrated circuit, and we saw it was very good. A single chip of silicon could contain multiple transistors. There but for the grace of the integrated circuit went the aerospace advances of the sixties--things like walking on the moon. And prices came down.
But up until the end of that turbulent decade, digital IC technology was limited to arithmetic, logic, I/O controller, and memory chips. The CPU on a chip, and its ancillary development, the microcomputer, were children of the 70s.
Ironically, the first integrated circuit to closely resemble a CPU was developed in the U.S. by Intel, while under contract to a Japanese calculator company. ETI, a Japanese manufacturer of expensive desktop calculators, specified a new type of IC to spearhead a new line of machines. Marcian "Ted" Hoff, of Intel, envisioned extending ETI's specifications to include programmable characteristics. The result was the Intel 4004, which incorporated on a single chip the equivalent of more than 4000 transistors.
This was the genesis of the microprocessor. Under one cover, in a minuscule package, the business of computing now takes place. Nowadays even mini- and mainframe computers use IC-based central processing units, called microprocessors (MPUs), in place of multicomponent CPUs. One result of the MPU was the microcomputer; another was Creative Computing.
There are four basic criteria typically considered in judging the power of a microprocessor. They are:
Speed: The cycling rate at which instructions can be executed within the MPU.
Addressable memory: The maximum RAM size the MPU can access from a single state.
Instruction set: Includes both the number and complexity of instructions that can be invoked.
Word width: The "swatch" of bits (binary digits) upon which the MPU can act at one time.
It is impossible to put these criteria into an indisputable hierarchy, but without a doubt, word width is a very significant entity. The speed, addressable memory, and instruction set of a microprocessor are architecturally tied to its word width.
Unfortunately, the concept of word width has been popularized in a fashion that obscures rather than clarifies its importance. It is easy to state that a 16-bit MPU is twice as powerful as an 8-bit, and a 32-bit MPU twice again as powerful--easy to state, and perhaps a powerful sales tool, but somewhat incorrect. At the least, such reasoning leads to serious oversimplification.
First off, let us consider speed. A 16-bit processor running at 2 MHz is certainly not twice as powerful as an 8-bit processor that runs at 5 MHz. How much more powerful one is than the other runs us immediately into some nasty shoals. Our quantification approach becomes marooned in value judgments more likely to reflect the biases of the arguers than the merits of the arguments.
Then we may consider the natures of instruction sets. These vary among chips and especially among families of chips. You can write the same assembly language program for different kinds of microprocessors, but the code itself, and more importantly the ease of writing such code, varies greatly. Programmers tend toward vehement chauvinism when it comes to MPUs, assuredly as a direct result of the effort they have put into learning a system that works in a certain way. They may naturally resist the stress of change, even when a new slant makes things easier overall. Although we can say that the instruction set of one MPU is larger and more powerful than that of another, we cannot quantify the appeal of any one instruction set. A chip that makes one type of task easier might make another more arduous.
Passing the Word
Quantification of chip power is made even more difficult by the fact that word width can vary, even within a single chip. Typically the term word width is used to indicate the width of the registers within a processor. Any and all CPUs pull information out of memory, act upon it, then return it to RAM in a process known as fetch, alter, and store. Upon the execution of a fetch, the CPU loads the word fetched into a storage register. Then a specific instruction can be invoked to act upon data stored in that register.
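The fetch, alter, store cycle is easy to sketch in a few lines of Python. The names `memory`, `fetch`, `alter`, and `store` here are ours, not any real chip's mnemonics--a toy model of the cycle, not microcode.

```python
# Toy model of the fetch-alter-store cycle. Names and sizes are
# illustrative only; no real chip works in Python.

memory = [0] * 16          # a tiny RAM of 8-bit words
register = 0               # the CPU's working register

def fetch(address):
    """Copy a word out of RAM into the register."""
    global register
    register = memory[address]

def alter(instruction):
    """Apply an instruction to the value held in the register."""
    global register
    register = instruction(register) & 0xFF   # keep the result 8 bits wide

def store(address):
    """Write the register back out to RAM."""
    memory[address] = register

memory[3] = 40
fetch(3)                    # register now holds 40
alter(lambda v: v + 2)      # register now holds 42
store(3)                    # memory[3] now holds 42
```

The width of that register--here an arbitrary 8 bits--is precisely what "word width" usually names.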
Most mainframe computers make use of 32-bit registers, and as we shall see, micros are also moving quickly into the 32-bit realm. In college I occasionally had the dubious honor of programming a mainframe known as the CDC Cyber, which made use of whopping 60-bit words. I could never quite fathom any good reason for making an address register that wide. It made calculating offsets, the number of bits away from a given reference point, an easy way to lose track of reality.
Hoff's Intel 4004, the granddaddy of microprocessors, was a 4-bit machine. It could operate upon or transfer only four bits at a time. As a result, it could handle numbers but not alphabetic characters in a single "gulp." Although wider registers could be simulated through piggybacked instructions, this made programming the 4004 rather convoluted. It quickly became obvious that for alphanumeric processing, a more powerful chip was necessary. The natural step was to 8-bit words, which could handle alphabetic coding as well as a capable instruction set relatively straightforwardly.
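The 4-bit limitation is plain arithmetic: four bits span the values 0 through 15, room enough for a decimal digit but not for a character code. A quick sketch (the values are illustrative; ASCII is assumed for the letter):

```python
# Why a 4-bit word handles digits but not letters -- plain
# arithmetic, not 4004 code. ASCII coding is assumed here.

nibble_max = 2 ** 4 - 1    # largest value in one 4-bit "gulp": 15
byte_max = 2 ** 8 - 1      # largest value in one 8-bit word: 255

digit = 9                  # any decimal digit 0-9 fits in a nibble
letter = ord("A")          # ASCII "A" is 65

assert digit <= nibble_max
assert letter > nibble_max     # too wide for a single 4-bit transfer
assert letter <= byte_max      # fits comfortably in eight bits
```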
The Intel 8008, originally developed for the Computer Terminal Corp. (now Datapoint), was introduced in 1973. It was simply an 8-bit version of the 4004. This processor was still rather Byzantine in its architecture--actually better suited as a machine controller than a general purpose MPU. Intel then followed up with an improved chip, the 8080. Although most of the design philosophy of the 8080 came directly from the 8008, its instruction set allowed it to act as a bona fide CPU. The 8080 soon became the mind of Ed Roberts' Altair 8800, generally acknowledged to be the first commercial microcomputer.
The 8080 has six general purpose 8-bit registers, with stack pointer and program counter both 16 bits wide. Like many others, the 8080 is a stack-oriented MPU; its stack is used for temporary storage without the need for address pointers. The data path, which is the number of bits that a microprocessor can fetch from or store to memory in a single swatch, is eight bits wide on the Intel 8080.
Here we encounter one of the real rubs of the width myth. The width of the internal registers of an MPU often exceeds the width of its data path. By way of analogy, we might imagine loading six-packs into a carton to make a case of beer. The case holds 24 cans of beer--therefore the address register for our Molson computer is 24 bits wide. However, we will put in and lift out the cans by the six-pack, so the data path of our brew is six bits wide. If Molson made computers, it might claim it had a 24-bit beer. Moosehead would be quick to point out, however, that Molson was not a "true" 24-bit beer, as it has a six-bit data path. Doubtless as well, their marketing department would quickly point out that Moosehead is available in eight-packs.
As the 8080 has 8-bit registers and an 8-bit data path, it may be termed a "true" 8-bit processor. One might suspect that implies the existence of "false" processors, but it is better not to linger over that point.
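In byte terms, the six-pack analogy works out like this: a hypothetical chip with 16-bit registers but an 8-bit data path must spend two bus cycles to fill one register. This sketch assumes little-endian order (low byte stored first) purely for illustration:

```python
# Filling a 16-bit register over an 8-bit data path takes two bus
# cycles instead of one. Little-endian byte order (low byte first)
# is assumed here purely for illustration.

memory = {0x100: 0x34, 0x101: 0x12}   # the 16-bit value 0x1234, split across RAM

def load16(address):
    low = memory[address]          # first bus cycle
    high = memory[address + 1]     # second bus cycle
    return (high << 8) | low       # reassemble inside the register

assert load16(0x100) == 0x1234
```

A chip with a 16-bit data path would move the same value in a single cycle--which is exactly the sort of overhead that separates a "true" 16-bit processor from one in 8-bit clothing.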
Between 1973 and 1981, quite a bit of begetting went on in the 8-bit realm until the 8-bit processor began to yield to a new crop of 16-bit chips. We can trace the genealogy (see page 49) of two prominent families: those that trace their roots from the Intel 4004 and those that trace back to the Motorola 6800, first introduced in 1974. It should be noted, however, that in both genealogies there appear important contributions from people "outside the family."
In fact, the Z80, a second generation 8080, and the 6502, a very close cousin to the 6800, were developed outside of Intel and Motorola, respectively. These chips went on to become the most important 8-bit processors--the ones that sowed the seeds of the microcomputer revolution. The Z80 made its way into successive generations of TRS-80 machines, while the 6502 was to form the heart of the Apple II, Atari, and Commodore computers.
The Better Bitters
Although the heyday of 8-bit processors is now behind us, they will remain important for years to come. They are still very capable chips, each with an established core of loyal programmers, and most important, they are now dirt cheap.
They do, however, pose certain limitations. An instruction set with an 8-bit width is limited to 256 total instructions. Although on some chips prefix bytes and other devices are introduced to get around this stricture, they again make the task of programming more burdensome. Certainly, the next logical step was a 16-bit microprocessor on a chip. An instruction set with a 16-bit width is capable of 65,536 discrete instructions. As that is far more than generous--even a 9-bit instruction width is quite reasonable to imagine--the remaining word width is freed to carry data.
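The arithmetic behind those figures is simply powers of two:

```python
# How many distinct opcodes an n-bit instruction width allows: 2**n.

assert 2 ** 8 == 256        # an 8-bit opcode byte: 256 instructions
assert 2 ** 16 == 65_536    # a full 16-bit instruction word

# A 16-bit chip need not spend the whole word on the opcode. With,
# say, 9 bits of opcode (512 instructions -- still generous) the
# remaining 7 bits are free to name registers or carry small operands.
assert 2 ** 9 == 512
```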
As explained above, more instructions mean a more powerful MPU. Instead of treating multiplication as recursive addition, or division as recursive subtraction, for example, a multiply or divide instruction can be added to the instruction set. (The processor may still treat the instruction recursively, but the programmer need not.) And through the addition of memory management logic, 16-bit processors can cross the address boundary of a single 64K chunk of RAM.
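Treating multiplication as recursive addition looks like this--illustrative code, not any chip's microcode. On a chip without a multiply instruction, something much like this loop is what the programmer must write by hand:

```python
# Multiplication as repeated addition: the software workaround a
# chip without a multiply instruction forces on the programmer.
# Illustrative only -- not any real chip's microcode.

def multiply(a, b):
    """Compute a * b using only addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

assert multiply(7, 6) == 42
assert multiply(5, 0) == 0
```

A hardware multiply instruction collapses that entire loop into one opcode--which is the sense in which a richer instruction set makes a more powerful MPU.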
When, in 1981, IBM announced it would use the Intel 8088 in its first microcomputer, Intel was able to reassert itself as a major player in the microprocessor game. The 8088 is a special case in itself; it is a 16-bit processor in 8-bit clothing. Its registers are a uniform 16 bits wide, while its data path is 8 bits. The 8088 is a version of Intel's true 16-bit chip, the 8086, with special bus hardware added. This ensured that the 8088 would remain compatible with the 8-bit memory and peripheral chips that proliferated at the time it was introduced. The downside of this customization is that the 8088 is slowed down substantially by overhead transfer time. The Creative Computing benchmark, in fact, logged the IBM PC as significantly slower than a number of 8-bit machines with quite decent cycle rates.
The 8086 and 8088 can address up to 1Mb of RAM (in segments of 64K), and include multiply and divide instructions. They were among the first to use multiplexed address and data lines, wherein more than one signal shares a common circuit to reduce chip size and cost.
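The 64K-segment scheme works like this: a 16-bit segment register is shifted left four bits and added to a 16-bit offset, yielding a 20-bit physical address--hence 2^20, or one megabyte, of addressable RAM. A sketch of the calculation:

```python
# Sketch of 8086/8088 segmented addressing: a 16-bit segment value
# shifted left 4 bits, plus a 16-bit offset, gives a 20-bit physical
# address. The masking to 20 bits mirrors the chip's address wrap.

def physical_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF   # keep 20 bits

assert physical_address(0x0000, 0x0000) == 0x00000
assert physical_address(0xFFFF, 0x000F) == 0xFFFFF   # top of the megabyte
assert 2 ** 20 == 1_048_576                          # one megabyte
```

Note that segments overlap every 16 bytes, so many segment:offset pairs name the same physical location--one of the quirks that makes debugging on the 8086 more painful than it needs to be.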
Zilog's answer to the 16-bit challenge was the Z8000. This chip might have been a much more serious contender if its introduction had not been plagued by delays, and in its early days by a lack of support. The chip is clearly superior to the 8086/8088, but in an industry in which timing is crucial, the effort misfired. Motorola's 6809 is also a powerful chip which upgraded in size and versatility the instruction set of the 6800 series in an MPU with 16-bit registers and an 8-bit data bus. It found its way into the highly underrated TRS-80 Color Computer, but not much else.
By far the most interesting Motorola entry is the MC68000, which in 1982 first levered a foot into the door of the 32-bit world. In one fell swoop, the 68000 launched Motorola right back into the fray. The data path and status register of the 68000 are 16 bits wide, but the other 15 available registers are all 32 bits. The 24-bit address bus allows fully 16 megabytes of RAM to be addressed linearly. The instruction set of the 68000 contains over 90 instructions, and the memory addressing configuration makes debugging assembly code on the 68000 much less painful than on the 8086.
Certainly the first 68000-based microcomputer to come to mind is the Apple Macintosh, and the Mac does serve as a good example of the power of the 68000--juggling programs, data, and a highly refined user interface simultaneously. But the Mac was not the first 68000-based micro. That honor belongs to the Fortune Systems 32:16, which, due to ill fortune, is no longer with us.
The 68000 does not multiplex signals, and so appears in a 64-pin package, as opposed to the 8086/8088, which comes in a 40-pin DIP configuration. The trade-off is a bigger chip, but one that requires less external logic. Intel introduced a hybrid 8086 in 1982, called the 80186, which incorporates a substantial amount of support logic onboard--a move toward truly manufacturing a computer on a single silicon chip. The 80186 represented a significant step, offering better performance for substantially less cost than an 8086 with its requisite bevy of support chips. Lowered chip count results not only in decreased manufacturing costs, but increased hardware reliability. The 80286, introduced by Intel last year, took things a step further, and now finds itself ensconced in the muscular IBM PC AT.
There are other 16-bit processors, like the National Semiconductor 16032 and Texas Instruments TMS 9900, some with quite admirable specifications, but none of these has had much real impact on the microcomputer market. One might observe a striking parallel between the two major 8-bit contenders of the past, the Z80 and the 6502, and their combatant 16-bit progeny--Intel's 8086/8088 propped up by hordes of IBM PCs and PC clones, and the Motorola 68000 residing within the Apple Lisa and Macintosh.
The Bit Goes On
Already, however, the field is being cleared for the next big battle. This one will be fought by the true 32-bit titans, and never has the competition been so fierce. Semiconductor manufacturers are scratching, biting, and scrambling for position in a race that again will probably result in two big winners and a slew of battered also-rans. As the ante in developing a new chip is typically over $50 million, that represents a gamble indeed.
Why a 32-bit processor, you ask? The answers are as follows: speed, multitasking, multiuser capability, mini- and mainframe compatibility, the ability to tackle enormous tasks like expert systems programming, artificial intelligence, and voice recognition--and perhaps most important, Unix compatibility.
Like it or not, the microcomputer industry is rushing headlong toward the Unix operating system, and it will take a 32-bit processor to implement it in all its glory. Programmers are talking more and more about this marvelous and mysterious language cryptically called C, and even the behemoth IBM has acknowledged that its 8086/8088 software base will be effectively neutralized in a few short years. And so the race is on.
No fewer than 14 American companies have announced that they are or will be entrants in the 32-bit fracas. Intel and Motorola, the Hatfields and McCoys of the microprocessor industry, are siring the next generation of combatants. Intel is readying its 80386, which will combine Unix compatibility with PC compatibility on a single chip. Motorola has already introduced its MC68020, which is entirely compatible with the instruction set of the 68000, with a true 32-bit data path.
But it is important to note that 32-bit processors are architecturally much more independent than their ancestors. The software bases built around the 16-bit processors of Intel and Motorola are, therefore, not as likely to give those companies pole position in the 32-bit race. A dark horse has a better chance now than ever before to usurp pre-eminence in the microprocessor market.
Zilog is back and in the running with the aptly named Z80000. It of course is downwardly compatible with the Z8000. NCR is in the running with the 32-000, which packs the equivalent of 40,000 transistors on a single chip. In conjunction with an Address Translation Chip, the NCR microprocessor can address up to 300 Mb.
Also announced are entries from Inmos, Fairchild Camera, National Semiconductor, Texas Instruments, and Western Electric (no longer to be confused with AT&T). And chip manufacturers are no longer alone in their pursuits. Mini- and mainframe makers like DEC, IBM, and Data General are reverse engineering MPUs compatible with their existing machines. Even Hewlett-Packard and AT&T are in the on-deck circle.
Let us not ignore Japan in the coming equation. NEC has announced it is working on a 32-bit MPU with no fewer than 700,000 transistors on board. Hitachi has announced the completion of a proprietary 32-bit chip. The time-frame announced for shipment of the Hitachi chip is 1986; for completion of NEC's superchip, 1987.
The Racing Form
It is difficult to predict how the 32-bit race will take shape and impossible to predict the winners. But a few predictions can be made with somewhat more safety.
As in any marathon, many of the entrants will not finish. The microprocessor industry makes for strange bedfellows, and second-sourcing has resulted in some unholy alliances indeed. To sell a chip in quantity, a manufacturer typically accepts more orders than it can fill. It authorizes another company to manufacture its chips and gives "masks" of the chip to that company so it can do so. If a chip is very popular, it may be second-sourced to multiple companies. The 8086 was, in its golden years, manufactured by no fewer than seven companies.
Second-sourcing agreements in the 32-bit arena could be made into a soap opera for TV. Often second-sourcing is a major means of gleaning technology, and we find second-sources suddenly announcing their own chips. Marketing divisions live on Maalox, and fickleness is rampant. Fairchild had committed to second-sourcing for National, but now is pursuing design of its own proprietary CMOS 32-bit chip. Texas Instruments initially announced its own proprietary chip, but now has committed as a second-source for National. Fujitsu and Toshiba are second-sourcing for Intel. Don't be surprised when these companies break off and announce proprietary chips of their own. The bottom line is to take any and all announcements of 32-bit plans with a grain of Alka-Seltzer.
My predictions are as follows. You might be able to narrow the field down to four: the Motorola 68020 will find a niche, owing to a growing degree of loyalty to the power and elegance of the 68000 family; National Semiconductor will find itself back in the big leagues with the 32000 because of its suitability for Unix and C and its proximity to the ultra-powerful VAX minicomputer; Intel's new chip is bound to find its way into IBM's 32-bit micro, as IBM now holds 20% ownership of Intel; and the field might be big enough, at least early on, to allow for one dark horse candidate, conceivably Japanese. In the end there will be one or two survivors. And I'm not about to guess who they might be.
The race is sure to continue even from there, but at a greatly slowed pace. Although there will undoubtedly appear 64-bit microprocessors toward the end of this decade, my experience with the Cyber leaves me with the hunch that we will hit a point of diminishing returns in that realm. My guess is that development will continue much more strongly along the lines of incorporating support chips onto the MPU, and even at some point including RAM. Sooner or later we will rid ourselves of circuit boards entirely. Fairchild is moving in the right direction with CMOS technology--that will find a niche in the future of microprocessors and RAM technology. A day will come when we look back at today's micros as dinosaurs of power consumption. And RISC chips (for "reduced instruction set chip"), as pursued by Inmos, DEC, and HP, pose an interesting angle. Their philosophy is that conventional microprocessors are burdened by many instructions that are rarely or never used. Chips can be made faster, cheaper, and better by tailoring them more carefully.
Finally, I expect parallel processing to come into its own by the end of the decade. In our entire discussion here we have conceived of computing in a traditionally serial manner; though it may happen at incredible speed, only one instruction is executed at a time. The next major breakthrough in computing will be the advent of machines with multiple 32-bit processors, each operating in its own bailiwick, while in full communication through some hierarchical structure with the other processors onboard. What might a machine of this kind be capable of doing? Well, among other things, it just might be able to grasp the English language. Perhaps then I'll ask it what exactly makes one MPU superior to another. I'll program it to laugh.
Photo: MPU Family Tree