
Intel 8086

From Wikipedia, the free encyclopedia



Intel 8086

Produced: from 1978 to the 1990s
Common manufacturer(s): Intel, AMD, NEC, Fujitsu, Harris (Intersil), OKI, Siemens AG, Texas Instruments, Mitsubishi
Max. CPU clock rate: 5 MHz to 10 MHz
Instruction set: x86-16
Package(s): 40-pin DIP

The 8086[1] is a 16-bit microprocessor chip designed by Intel and introduced to the
market in 1978, which gave rise to the x86 architecture. The Intel 8088, released in 1979,
was a slightly modified chip with an external 8-bit data bus (allowing the use of cheaper
and fewer supporting logic chips[2]), and is notable as the processor used in the original
IBM PC.
Contents

• 1 History
  • 1.1 Background
  • 1.2 The first x86 design
• 2 Details
  • 2.1 Buses and operation
  • 2.2 Registers and instructions
  • 2.3 Flags
  • 2.4 Segmentation
    • 2.4.1 Subsequent expansion
    • 2.4.2 Porting older software
  • 2.5 Performance
  • 2.6 Floating point
• 3 Chip versions
  • 3.1 Derivatives and clones
• 4 Microcomputers using the 8086
• 5 Notes and references
• 6 See also
• 7 External links

History
Background
In 1972, Intel launched the 8008, the first 8-bit microprocessor.[3] It implemented an
instruction set designed by Datapoint Corporation with programmable CRT terminals in
mind, which also proved to be fairly general-purpose. The device needed several additional
ICs to produce a functional computer, in part due to its small 18-pin "memory package",
which ruled out the use of a separate address bus (Intel was primarily a DRAM
manufacturer at the time).
Two years later, in 1974, Intel launched the 8080,[4]
employing the new 40-pin DIL packages originally developed for calculator ICs to enable
a separate address bus. It had an extended instruction set that was source- (not binary-)
compatible with the 8008 and also included some 16-bit instructions to make
programming easier. The 8080 device, often described as the first truly useful
microprocessor, was nonetheless soon replaced by the 8085, which could cope with a
single 5 V power supply instead of the three different operating voltages of earlier chips.
[5] Other well-known 8-bit microprocessors that emerged during these years were the
Motorola 6800 (1974), Microchip PIC16X (1975), MOS Technology 6502 (1975), Zilog
Z80 (1976), and Motorola 6809 (1977).
The first x86 design
The 8086 was originally intended as a temporary substitute for the ambitious iAPX 432
project, in an attempt to draw attention from the less-delayed 16- and 32-bit processors of
other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the
same time to counter the successful Z80 (designed by former Intel employees). Both the
architecture and the physical chip were therefore developed quickly (in a little more than
two years[6]), using the same basic microarchitecture elements and physical
implementation techniques as the older 8085, of which it also functioned as a
continuation. Marketed as source-compatible, the 8086 was designed so that assembly
language for the 8085, 8080, or 8008 could be automatically converted into equivalent
(sub-optimal) 8086 source code, with little or no hand-editing. This was possible because
the programming model and instruction set were (loosely) based on the 8080. However,
the 8086 design was expanded to support full 16-bit processing, instead of the fairly
basic 16-bit capabilities of the 8080/8085. New kinds of instructions were added as well:
self-repeating operations, and instructions to better support nested ALGOL-family
languages such as Pascal.
The 8086 was sequenced[7] using a mix of random logic and microcode and was
implemented using depletion load nMOS circuitry with approximately 20,000 active
transistors (29,000 counting all ROM and PLA sites). It was soon moved to a new refined
nMOS manufacturing process called HMOS (for High performance MOS) that Intel
originally developed for manufacturing of fast static RAM products[8]. This was
followed by HMOS-II, HMOS-III versions, and, eventually, a fully static version
designed in CMOS and manufactured in CHMOS.[9] The original chip measured 33 mm²
and minimum feature size was 3.2 μm.
The architecture was defined by Stephen P. Morse and Bruce Ravenel. Jim McKevitt and
John Bayliss were the lead engineers of the development team and William Pohlman the
manager. While less known than the 8088 chip, the legacy of the 8086 is enduring;
references to it can still be found on most modern computers in the form of the Vendor
ID entry for all Intel devices, which is 8086H (hexadecimal). It also lent its last two digits
to Intel's later extended versions of the design, such as the 286 and the 386, all of which
eventually became known as the x86 family.
Details

The 8086 pin-assignments in min and max mode.


Main registers
  AX = AH:AL (primary accumulator)
  BX = BH:BL (base, accumulator)
  CX = CH:CL (counter, accumulator)
  DX = DH:DL (accumulator, other functions)

Index registers
  SI  Source Index
  DI  Destination Index
  BP  Base Pointer
  SP  Stack Pointer

Status register
  15 14 13 12 11 10  9  8  7  6  5  4  3  2  1  0   (bit position)
   -  -  -  -  O  D  I  T  S  Z  -  A  -  P  -  C   Flags

Segment registers
  CS  Code Segment
  DS  Data Segment
  ES  Extra Segment
  SS  Stack Segment

Instruction pointer
  IP  Instruction Pointer

The 8086 registers

Buses and operation


All internal registers, as well as the internal and external data buses, were 16 bits wide,
firmly establishing the "16-bit microprocessor" identity of the 8086. A 20-bit external
address bus gave a 1 MB (segmented) physical address space (2^20 = 1,048,576). The data
bus was multiplexed with the address bus in order to fit a standard 40-pin dual in-line
package. 16-bit I/O addresses meant 64 KB of separate I/O space (2^16 = 65,536). The
maximum linear address space was limited to 64 KB, simply because internal registers
were only 16 bits wide. Programming over 64 KB boundaries involved adjusting segment
registers (see below) and was therefore fairly awkward (and remained so until the 80386).
Some of the control pins, which carry essential signals for all external operations, had
more than one function depending upon whether the device was operated in "min" or
"max" mode. The former was intended for small single processor systems whilst the latter
was for medium or large systems, using more than one processor.

Registers and instructions


The 8086 had eight (more or less general) 16-bit registers, including the stack pointer but
excluding the instruction pointer, the flags register, and the segment registers. Four of
them (AX, BX, CX, DX) could also be accessed as twice as many 8-bit registers
(AH, AL, BH, BL, etc.); the other four (BP, SI, DI, SP) were 16-bit only.
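The high/low overlap can be sketched in a short Python model (for illustration only; the class name and layout here are my own, not 8086 terminology): writing either 8-bit half updates the full 16-bit value, and vice versa.

```python
class Reg16:
    """A 16-bit register whose high and low bytes are separately addressable,
    like AX with its AH and AL halves on the 8086."""

    def __init__(self, value=0):
        self.value = value & 0xFFFF

    @property
    def lo(self):                       # the AL-style half
        return self.value & 0xFF

    @lo.setter
    def lo(self, b):
        self.value = (self.value & 0xFF00) | (b & 0xFF)

    @property
    def hi(self):                       # the AH-style half
        return (self.value >> 8) & 0xFF

    @hi.setter
    def hi(self, b):
        self.value = ((b & 0xFF) << 8) | (self.value & 0x00FF)


ax = Reg16()
ax.hi = 0x12                            # like MOV AH, 12h
ax.lo = 0x34                            # like MOV AL, 34h
print(hex(ax.value))                    # 0x1234 -- AH:AL together form AX
```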
Due to a compact encoding inspired by the 8085 and other 8-bit processors, most
instructions were one-address or two-address operations, which means that the result was
stored in one of the operands. At most one of the operands could be in memory, but this
memory operand could also be the destination, while the other operand, the source, could
be either a register or an immediate. A single memory location could also often be used as
both source and destination, which, among other factors, further contributed to a code
density comparable to (and often better than) that of most 8-bit machines.
Although the degree of generality of most registers was much greater than in the 8080 or
8085, it was still fairly low compared to the typical contemporary minicomputer, and
registers were also sometimes used implicitly by instructions. While perfectly sensible for
the assembly programmer, this complicated register allocation for compilers compared to
more regular 16- and 32-bit processors such as the PDP-11, VAX, 68000, etc.; on the
other hand, compared to the 8085, or other simple (but popular) contemporary 8-bit
microprocessors (like the 6502 or 6809), it was significantly easier to generate code for
the 8086 design.
As mentioned above, the 8086 also featured 64 KB of 8-bit (or alternatively 32 K-word of
16-bit) I/O space. A 64 KB (one segment) stack growing towards lower addresses is
supported in hardware; 2-byte words are pushed to the stack, and the stack top (bottom) is
pointed to by SS:SP. There are 256 interrupts, which can be invoked by both hardware
and software. The interrupts can cascade, using the stack to store the return addresses.
The processor had some new instructions (not present in the 8085) to better support stack-
based high-level programming languages such as Pascal and PL/M; some of the more
useful ones were push mem-op and ret size, supporting the "Pascal calling convention"
directly. (Several others, such as push immed and enter, would be added in the
subsequent 80186, 80286, and 80386 designs.)

Flags
The 8086 has a 16-bit flags register. Nine of its bits are active and indicate the current
state of the processor: the Carry flag, Parity flag, Auxiliary flag, Zero flag, Sign flag,
Trap (or Trace) flag, Interrupt-enable flag, Direction flag, and Overflow flag.
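The nine active bits sit at fixed positions in the 16-bit word (matching the bit layout shown in the register diagram above). A small Python sketch, provided here as an illustration, decodes which flags are set in a given FLAGS value:

```python
# Bit positions of the nine active flags in the 8086 FLAGS register.
FLAG_BITS = {
    "CF": 0,   # Carry
    "PF": 2,   # Parity
    "AF": 4,   # Auxiliary carry
    "ZF": 6,   # Zero
    "SF": 7,   # Sign
    "TF": 8,   # Trap (single-step)
    "IF": 9,   # Interrupt enable
    "DF": 10,  # Direction
    "OF": 11,  # Overflow
}

def decode_flags(flags):
    """Return the set of flag names set in a 16-bit FLAGS value."""
    return {name for name, bit in FLAG_BITS.items() if flags & (1 << bit)}

print(sorted(decode_flags(0x0246)))   # ['IF', 'PF', 'ZF']
```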

Segmentation
There were also four sixteen-bit segment registers (CS, DS, SS, ES) that allowed the
CPU to access one megabyte of memory in an unusual way. Rather than concatenating
the segment register with the address register, as in most processors whose address space
exceeded their register size, the 8086 shifted the 16-bit segment only 4 bits left before
adding it to the 16-bit offset (16·segment + offset), therefore producing a 20-bit effective
(or physical or external) address from the 32-bit segment:offset pair. As a result, each
physical address could be referred to by 2^12 = 4,096 different segment:offset pairs.
Although considered complicated and cumbersome by many programmers, this scheme
also had advantages; a small program (less than 64 kilobytes) could be loaded starting at
a fixed offset (such as 0) in its own segment, avoiding the need for relocation, with at
most 15 bytes of alignment waste. The 16-byte separation between segment bases was
called a paragraph.
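The address calculation described above can be written out directly (a minimal sketch; the function name is mine). The aliasing of segment:offset pairs also falls out of it:

```python
def physical_address(segment, offset):
    """8086 real-mode addressing: shift the 16-bit segment 4 bits left and
    add the 16-bit offset, yielding a 20-bit physical (external) address."""
    return ((segment << 4) + offset) & 0xFFFFF

# Many different segment:offset pairs alias the same physical address:
assert physical_address(0x1234, 0x0005) == physical_address(0x1000, 0x2345)
print(hex(physical_address(0x1234, 0x0005)))   # 0x12345
```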
Compilers for the 8086-family commonly supported two types of pointer, near and far.
Near pointers were 16-bit addresses implicitly associated with the program's code and/or
data segment and so made sense only within parts of a program small enough to fit in one
segment. Far pointers were 32-bit segment:offset pairs (resolving to 20-bit real
addresses). Some compilers also supported huge pointers, which were like far pointers
except that pointer arithmetic on a huge pointer treated it as a linear 20-bit pointer, while
pointer arithmetic on a far pointer wrapped around within its initial 64-kilobyte segment.
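The difference between far and huge pointer arithmetic can be sketched as follows (an illustration under the assumptions above; the function names and the canonical offset-in-0..15 form for huge pointers are representative, and real compilers varied in how they normalized):

```python
def far_add(seg, off, n):
    """Far-pointer arithmetic: only the 16-bit offset changes, wrapping
    around within the pointer's original 64 KB segment."""
    return seg, (off + n) & 0xFFFF

def huge_add(seg, off, n):
    """Huge-pointer arithmetic: treat seg:off as a linear 20-bit address,
    then renormalize so the offset lies in 0..15."""
    linear = ((seg << 4) + off + n) & 0xFFFFF
    return linear >> 4, linear & 0xF

seg, off = far_add(0x2000, 0xFFFF, 1)
print(f"{seg:04X}:{off:04X}")    # 2000:0000 -- wrapped within the segment
seg, off = huge_add(0x2000, 0xFFFF, 1)
print(f"{seg:04X}:{off:04X}")    # 3000:0000 -- crossed into the next segment
```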
To avoid the need to specify near and far on every pointer and every function which took
or returned a pointer, compilers also supported "memory models" which specified default
pointer sizes. The "small", "compact", "medium", and "large" models covered every
combination of near and far pointers for code and data. The "tiny" model was like "small"
except that code and data shared one segment. The "huge" model was like "large" except
that all pointers were huge instead of far by default. Precompiled libraries often came in
several versions compiled for different memory models.
In principle the address space of the x86 series could have been extended in later
processors by increasing the shift value, as long as applications obtained their segments
from the operating system and did not make assumptions about the equivalence of
different segment:offset pairs. In practice the use of "huge" pointers and similar
mechanisms was widespread, and though some 80186 clones did change the shift value,
these were never commonly used in desktop computers.
According to Morse et al., the designers of the 8086 considered using a shift of eight bits
instead of four, which would have given the processor a 16-megabyte address space.[10]
Subsequent expansion
The 80286's protected mode extended the processor's address space to 2^24 bytes (16
megabytes), but not by increasing the shift value. Instead, the 16-bit segment registers
supply an index into a table of 24-bit base addresses, to which the offset is added. To
support old software the 80286 also had a "real mode" in which address calculation
mimicked the 8086. There was, however, one small difference: on the 8086 the address
was truncated to 20 bits, while on the 80286 it was not. Thus real-mode pointers could
refer to addresses between 100000 and 10FFEF (hexadecimal). This roughly 64-kilobyte
region of memory was known as the High Memory Area, and later versions of MS-DOS
could use it to increase available low memory.
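The truncation difference can be made concrete (a sketch of the address calculation only; on a real 80286 the reachable range also depended on the A20 gate):

```python
def addr_8086(seg, off):
    """8086: the 21-bit sum of segment base and offset is truncated
    to the 20-bit address bus, wrapping around."""
    return ((seg << 4) + off) & 0xFFFFF

def addr_80286_real(seg, off):
    """80286 real mode: the carry into bit 20 is kept."""
    return (seg << 4) + off

print(hex(addr_8086(0xFFFF, 0x0010)))         # 0x0 -- wraps to the bottom of memory
print(hex(addr_80286_real(0xFFFF, 0x0010)))   # 0x100000 -- start of the HMA
print(hex(addr_80286_real(0xFFFF, 0xFFFF)))   # 0x10ffef -- top of the HMA
```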
The 80386 increased both the base address and the offset to 32 bits and introduced two
more general-purpose segment registers, FS and GS. The 80386 also introduced paging.
The segment system can be used to enforce separation of unprivileged processes in a 32-
bit operating system, but most operating systems use paging for this purpose instead and
set all segment registers to point to a segment with a base of 0 and a length of 2^32,
giving the application full access to its virtual address space through any segment
register.
The x86-64 architecture drops most support for segmentation. The segment registers still
exist, but the base addresses for CS, SS, DS, and ES are forced to 0, and the limit to 2^64.
In x86 versions of Microsoft Windows, the FS segment does not cover the entire address
space. Instead it points to a small data structure, different for each thread, which contains
information about exception handling, thread-local variables, and other per-thread state.
The x86-64 architecture supports this technique by allowing a nonzero base address for
FS and GS.
Porting older software
Small programs could ignore the segmentation and just use plain 16-bit addressing. This
allowed 8-bit software to be quite easily ported to the 8086. The authors of MS-DOS took
advantage of this by providing an Application Programming Interface very similar to
CP/M, as well as the simple .com executable file format, identical to CP/M's.
This was important when the 8086 and MS-DOS were new, because it allowed many
existing CP/M (and other) applications to be quickly made available, greatly easing
acceptance of the new platform.
Performance

Block diagram of the Intel 8088 (a variant of the 8086).


Although partly shadowed by other design choices in this particular chip, the multiplexed
bus limited performance slightly; transfers of 16-bit or 8-bit quantities were done in a
four-clock memory access cycle.[11] As instructions varied from 1 to 6 bytes, fetch and
execution were made concurrent (as it remains in today's x86 processors): The bus
interface unit fed the instruction stream to the execution unit through a 6 byte prefetch
queue (a form of loosely coupled pipelining), speeding up operations on registers and
immediates, while memory operations unfortunately became slower (4 years later, this
performance problem was fixed with the 80186 and 80286). However, the full (instead of
partial) 16-bit architecture with a full width ALU meant that 16-bit arithmetic instructions
could now be performed with a single ALU cycle (instead of two, via carry), speeding up
such instructions considerably. Combined with orthogonalizations of operations versus
operand types and addressing modes, as well as other enhancements, this made the
performance gain over the 8080 or 8085 fairly significant, despite cases where the older
chips could be faster (see below).
Execution times for typical instructions, in clock cycles (best case, depending on prefetch
status, instruction alignment, and other factors):

MOV   reg,reg: 2;  reg,imm: 4;  reg,mem: 8+EA;  mem,reg: 9+EA;   mem,imm: 10+EA
ALU   reg,reg: 3;  reg,imm: 4;  reg,mem: 9+EA;  mem,reg: 16+EA;  mem,imm: 17+EA
JMP   reg: 11;  JMP label: 15;  Jcc label: 16   (cc = condition code)
MUL   reg: 70–118
IDIV  reg: 101–165

EA: time to compute the effective address, ranging from 5 to 12 cycles.
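As a rough illustration, the best-case figures can be combined in a small cost model. The specific EA values below are representative picks from the 5–12 cycle range quoted above, not an exhaustive table, and real timing also depended on the prefetch queue and bus state:

```python
# Best-case base cycle counts for MOV, per operand form.
MOV_CYCLES = {"reg,reg": 2, "reg,imm": 4, "reg,mem": 8, "mem,reg": 9, "mem,imm": 10}

# Representative effective-address costs (the full range is 5..12 cycles).
EA_CYCLES = {"[bx]": 5, "[bx+disp]": 9, "[bx+si+disp]": 12}

def mov_cost(form, ea=None):
    """Best-case cycle count for a MOV of the given operand form;
    memory forms add the effective-address computation time."""
    return MOV_CYCLES[form] + (EA_CYCLES[ea] if ea else 0)

print(mov_cost("reg,reg"))                   # 2
print(mov_cost("mem,reg", "[bx+si+disp]"))   # 9 + 12 = 21
```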
As can be seen from these tables, operations on registers and immediates were fast
(between 2 and 4 cycles), while memory-operand instructions and jumps were quite slow;
jumps took more cycles than on the simple 8080 and 8085, and the 8088 (used in the
IBM PC) was additionally hampered by its narrower bus. The reasons why most memory-
related instructions were slow were threefold:

• Loosely coupled fetch and execution units are efficient for instruction prefetch,
but not for jumps and random data access (without special measures).

• No dedicated address calculation adder was afforded; the microcode routines had
to use the main ALU for this (although there was a dedicated segment + offset
adder).

• The address and data buses were multiplexed, forcing a slightly longer (33–50%)
bus cycle than in typical contemporary 8-bit processors.
Memory access performance was, however, drastically enhanced with Intel's next-
generation chips: the 80186 and 80286 both had dedicated address-calculation hardware,
saving many cycles, and the 80286 also had separate (non-multiplexed) address and data
buses.

Floating point


The 8086/8088 could be connected to a mathematical coprocessor to add floating point
capability. The Intel 8087 was the standard math coprocessor, operating on 80-bit
numbers, but manufacturers like Weitek soon offered higher performance alternatives.

Chip versions


The clock frequency was originally limited to 5 MHz (the IBM PC used 4.77 MHz, 4/3 of
the standard NTSC color burst frequency), but the last versions in HMOS were specified for
10 MHz. HMOS-III and CMOS versions were manufactured for a long time (at least a
while into the 1990s) for embedded systems, although its successor, the 80186/80188
(which includes some on-chip peripherals), has been more popular for embedded use.

Derivatives and clones

Soviet clone KP1810BM86.


OKI M80C86A QFP-56
Compatible and, in many cases, enhanced versions were manufactured by Fujitsu,
Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, Mitsubishi, and AMD. For
example, the NEC V20 and NEC V30 pair were hardware compatible with the 8088 and
8086, respectively, but incorporated the instruction set of the 80186 along with some (but
not all) of the 80186 speed enhancements, providing a drop-in capability to upgrade both
instruction set and processing speed without manufacturers having to modify their
designs. Such relatively simple and low-power 8086-compatible processors in CMOS are
still used in embedded systems.
The electronics industry of the Soviet Union was able to replicate the 8086 through both
industrial espionage and reverse engineering. The resulting chip, the K1810BM86, was pin-
compatible with the original Intel 8086 (К1810ВМ86 was a copy of the Intel 8086, not
the Intel 8088) and had the same instruction set. However, this IC used metric pin spacing
and was not mechanically compatible with the Intel products. The Intel microprocessors
i8086 and i8088 were the core of the Soviet-bloc PC-compatible ES1840 and ES1841
desktops (the ES1840 was Intel 8088 based, the ES1841 Intel 8086 based). However,
these computers had significant hardware differences from their Western prototypes
(the PC/XT and PC, respectively), and the data/address bus circuitry was designed
independently of the original Intel products. The ES1841 was the first PC-compatible
computer with dynamic bus sizing (US Pat. No. 4,831,514). Later, some of the ES1841
principles were adopted in the PS/2 (US Pat. No. 5,548,786) and some other machines
(UK Patent Application, Publication No. GB-A-2211325, published June 28, 1989).

Microcomputers using the 8086


• One of the most influential microcomputers of all, the IBM PC, used the Intel
8088, a version of the 8086 with an eight-bit data bus (as mentioned above).

• The first commercial microcomputer built on the basis of the 8086 was the
Mycron 2000.

• The IBM Displaywriter word processing machine and the Wang Professional
Computer, manufactured by Wang Laboratories, also used the 8086, as did the
AT&T 6300 PC (built by Olivetti).

• The first Compaq Deskpro used an 8086 running at 7.14 MHz, but was capable of
running add-in cards designed for the 4.77 MHz IBM PC XT.
• The FLT86 is a well-established training system for the 8086 CPU, still being
manufactured by Flite Electronics International Limited in Southampton,
England.

• The IBM PS/2 Models 25 and 30 were built with an 8 MHz 8086.

• The Tandy 1000 SL-series machines used 8086 CPUs.

• The Amstrad PC1512, PC1640, and PC2086 all used 8086 CPUs at 8 MHz.

• As of 2002, NASA was still using original 8086 CPUs on equipment for ground-
based maintenance of the Space Shuttle Discovery, to prevent software regression
that might result from upgrading or from switching to imperfect clones.[12]
