(D) 8086
In 1978, Intel’s W. Davidow, vice president of the microcomputer group, rushed to staff a 16-bit
microcomputer development project. It was to have around 30,000 transistors, about 12 times more
than the 4004.
Figure 1.11: 8086 Chip
This new processor had multiplication and division instructions and a host of other new features.
However, it was constrained to be upwardly compatible with the 8080 (and 8008). Accordingly,
the designers decided to keep the 16-bit basic addresses and to use segment registers to obtain
extended 20-bit addresses. Two versions were created: the 8088, with an 8-bit data bus for
compatibility with 8-bit memory systems, and the 16-bit 8086. With 1 megabyte of memory
addressing, this processor was a serious contender in the computer marketplace. The chip density
required to match the 16-bit minicomputers was “arriving”, as had been predicted.
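To make the segment-register scheme concrete, the following is a minimal sketch in Python (the function name and example values are illustrative, not from the text). It assumes the standard 8086 rule: the 16-bit segment value is shifted left by 4 bits (multiplied by 16) and added to the 16-bit offset, giving a 20-bit physical address and hence a 2^20 = 1 megabyte address space.

    def physical_address(segment: int, offset: int) -> int:
        """Combine a 16-bit segment and a 16-bit offset into a 20-bit
        physical address, 8086-style: segment * 16 + offset."""
        assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
        # Mask to 20 bits, modelling the 8086's 20 address lines (wraps at 1 MB).
        return ((segment << 4) + offset) & 0xFFFFF

    # Illustrative values: segment 0x1234, offset 0x0010
    print(hex(physical_address(0x1234, 0x0010)))  # 0x12350

Note that many different segment:offset pairs map to the same physical address, since segments overlap every 16 bytes.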
The decision by IBM to use the 8088 in a word processor and personal computer created enormous
market momentum for Intel. The 186, 286, 386, and 486 followed over the next 15 years, with some
shadow of the 8008’s features still apparent. These components would be “truly pervasive”.
Chip density should increase in multiples of 2 across successive microprocessor versions.
1.3 Historical Perspective
The promise of high-density solid-state circuitry was becoming apparent in the 1950s. In 1959,
Holland contemplated large-scale computers built with densities of 10⁸ components per cubic
foot. The integrated circuit was developed in parallel at both TI and Fairchild, and the density of
ICs was doubling every year. “Entire subsystems on a chip” were predicted if a high-volume
standard chip could be defined. In a 1966 forecast of chip complexity, it was estimated that about
10,000 to 20,000 gates would fit on a chip and that a good portion of a CPU would therefore fit on
one chip.
1.3.1 Moore’s Law
Gordon Moore’s empirical relationship is cited in a number of forms, but its essential thesis is
that the number of transistors that can be manufactured on a single die will double every 18
months. The starting point for this exponential growth curve is usually set at 1959 or 1962, the
period during which the first silicon planar transistors were designed and tested.
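As a rough illustration (not from the original text), this thesis can be written as N(t) = N₀ × 2^(t/1.5), where N₀ is the transistor count in the starting year and t is the elapsed time in years. The short Python sketch below uses this form with assumed values, starting from the 8086’s roughly 29,000 transistors in 1978, purely to show how fast the predicted count grows.

    def moores_law(n0: float, years: float, doubling_period: float = 1.5) -> float:
        """Predicted transistor count after 'years', doubling every 1.5 years."""
        return n0 * 2 ** (years / doubling_period)

    # Assumed starting point for illustration: ~29,000 transistors (8086, 1978).
    for year in (1978, 1985, 1993, 2000):
        print(year, round(moores_law(29_000, year - 1978)))

Under these assumptions, the predicted count passes roughly 30 million by the early 1990s and several hundred million by 2000, which is the kind of growth the empirical plots in this section are compared against.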
We now have four decades of empirical data to validate Moore’s argument. Figure 1.12 depicts a
sampling of microprocessor transistor counts plotted against the basic form of Moore’s Law.
Clearly the empirical data supports the argument well, even allowing for considerable noise in
the dataset. Similar plots for high-density Dynamic Random Access Memory (DRAM) devices
yield a very similar correlation between Moore’s Law and actual device storage capacity.
Two important questions can be raised. The first is that of “how do we relate the achievable
computing performance of systems to Moore’s Law?”. The second is the critical question of “how
long will Moore’s Law hold out?”. Both deserve careful examination.