CHAPTER 2: SYSTEM DESIGN USING MICROPROCESSOR
2.1 System Design
Fig 2.1 System Development Cycle
2.1.1 Feasibility Study
The feasibility study provides the background for actual system development. It helps in the
selection of a particular level of technology depending on the availability of expertise to operate
and maintain the end product. It also analyses various approaches like purchasing a system and
modifying for a particular requirement or designing the system from PCB or component level.
It also provides a cost–time analysis. However, the fundamental question we must ask ourselves
after the application has been understood, and before system design begins, is: do we need
a microprocessor at all? This is the most crucial decision the system designer has to take. We should
not forget that, despite the power of the microprocessor, in many cases a pure hardware solution
may be better and cheaper. We should avoid selecting a microprocessor merely because it is fashionable.
Figure 2.2 explains the alternatives available to the system designer to develop the system.
Fig 2.2 Different alternatives available to the system designer.
Clearly, if the system is not complex and the number of units to be produced is small,
random logic will be the most suitable. An Application Specific Integrated Circuit (ASIC) or
custom LSI will be suitable when a large number of units of comparatively low complexity is to be produced. The
microprocessor-based design is a compromise between random logic and custom LSI
alternatives. The microprocessor-based design may appear to be more expensive than the
random logic alternatives because of the high initial cost of software development involved.
However, the initial cost is offset by cheaper components and simple hardware circuitry, which
make the microprocessor solution eventually less costly over a certain range of production
volumes.
Random Logic vs. Microprocessor
Random logic offers the design advantage over the microprocessor-based system when one or
more of the following are applicable.
➢ The functions to be performed are minimal.
➢ The input and the output consist of a single channel.
➢ The system operates on only one function at any time (though there may be multiple
inputs) or the system has a single word transmission structure.
➢ A small system has to be custom-designed.
➢ High-speed operation is required.
Microprocessors offer advantages over random logic when one or more of the following are
applicable.
➢ Software can be traded off for additional hardware, so that the system capabilities can
be expanded readily without system redesign.
➢ Multiple inputs are needed.
➢ A large number of functions must be performed.
➢ Multidecision paths are required.
➢ Large memories are involved.
The following “rule of thumb” has often been used to determine the cost trade-off between
random logic and a microprocessor.
A 50 IC random logic system costs about the same as a microprocessor plus 20 interface ICs.
Included in this rule are the costs of components, handling, testing, and interconnections.
A flowchart depicting the selection between random logic and microprocessor is shown in
Figure 2.3.
Figure 2.3 Flowchart depicting the selection between random logic and microprocessor.
If, after considering all these points, the microprocessor-based approach is chosen, a suitable
microprocessor has to be selected. Technical characteristics such as speed, word length,
addressing modes, number of scratch-pad registers, single-bit manipulation, instruction set,
arithmetic capabilities, etc. should be weighed against the system requirements when
selecting a suitable microprocessor. The designer also has to decide among a
microprocessor, a microcomputer and a microcontroller, based on the application requirements.
Microcontrollers are preferred for most embedded applications since they have bit manipulation
instructions, built-in memory and I/O ports, as well as powerful instruction sets.
2.1.2 System Specification
This is the stage in which we define our problem and the ultimate system that will solve our
problem. A system can be defined in terms of a set of entities and the relation between entities.
The system under consideration may itself be working as a subsystem of some larger system.
In that case, the system will be accepting inputs and providing the outputs to the larger system.
These inputs and outputs are called the environment of the system. A crude way of defining
the system is by defining the inputs and outputs from the system, but a systematic designer will
break the system into different functionally independent subsystems and define the
environment of each of these subsystems. This methodology of paying more attention during
the system definition phase is always helpful during the later period when some changes are
required.
Murphy’s law of random system design states that the following things are likely to
be true if complex problems are tackled in a random way.
➢ Things are more complex than they seem to be.
➢ Things take longer than the expected time.
➢ Things cost more than the expected cost.
➢ If something can go wrong, it will.
➢ The output of such an exercise may be FRUSTRATION.
A sound definition of the system will lead to an optimum design. Moreover, since more than
one system engineer is usually involved in the development, the interface between these
engineers will be facilitated by adopting an organized approach. It is here that the
documentation should be initiated. The initial document will further get refined and the details
will be added at every stage of development.
2.1.3 Initial Design
The initial design defines the functions that will be carried out by both hardware and software.
An analysis of the problem should be done to define performance requirements, basic hardware
configurations, basic software routines, etc. If a simulation facility is available, it should be
made use of, to make the initial design as perfect as possible. The initial design will help in
estimating the memory requirement, and other timing considerations. The selection of a
particular hardware configuration will strongly influence the software design. Though the
adoption of software in place of hardware will definitely reduce the system cost, a clear,
consistent structure of the hardware should not be compromised. The implementation of simple
hardware functions can often help considerably in reducing the software. There are certain
thresholds beyond which software development becomes very complex and expensive. For
example, if the software increases slightly over 1 kilobyte (abbreviated KB) or if more I/O lines
than those provided in an I/O port are required, a full additional memory chip or I/O port has
to be added, thus increasing the hardware cost.
2.1.4 Hardware Design
The matching of electrical characteristics and timings of ICs may create problems during
hardware design. The simplest solution (though not the cheapest) will be to use a complete
family of microprocessor and peripheral chips for which these conditions are satisfied. When
using a different manufacturer or series, some hardware adaptation may become necessary.
Similarly, if components of different logic families are interfaced, additional hardware (i.e.
driving circuits, pull-up resistors, etc.) may become necessary.
2.1.5 Software Design
The software design basically involves the following steps.
Environment and problem specification
Though the overall system specification is given at the beginning of the project design, the
parameters relevant to the software design should be documented. This specification should
contain a brief statement of the problem, operating characteristics needed for programming, for
example, data format, data transfer modes between peripherals, etc. and the software
environment of the system.
State transition diagram
The state transition diagram is a finite state model of the system, which shows all possible
states that the system can achieve. It also shows the external inputs that can be received by the
system to switch from one state to the other. Below are some of the symbols and definitions
which will help us in drawing and understanding the state transition diagrams.
Primary excitation: External inputs, e.g. a switch being closed.
Secondary excitation: Internal inputs as a result of the reactions within the system due to
external inputs.
State: The system stays in state S until new excitation occurs.
Transition: (i) From state S to S1 due to primary excitation ‘Switch closed’.
(ii) From state S to S1 due to secondary excitation ‘Time delay’.
Decision symbol: Depending upon the internal condition, the transition can occur to one out of
several states upon primary excitation ‘Switch closed’.
Program schedule
In a real time application, the time constraints and the program sequence play an important
role, therefore, in such cases the program schedule can be very useful. It should show the
programming sequence, timing consideration of the system and the places where the time
criticality may occur.
Hierarchy model
A hierarchy model represents the various levels of modules in the program. Though it gets
modified from time to time during the total design cycle, it is very necessary to work out an
initial hierarchy for the software module. It is useful for the design of complex systems like
monitor, file management, etc. in which case a module will pass control (by subroutine
branching) to other modules to execute various functions.
Figure 2.4 Hierarchy model of software.
Flowchart
Here the top-down software design approach is preferred, in which the designer has
to define all the program requirements before attempting the implementation. In a bottom-up
approach, on the contrary, one starts at a very low level of detail, thereby often losing sight of
the overall objectives. A flowchart is a graphical representation of the problem involving
activities, operations and decisions in a logical and sequential manner. Thus conforming with
the top-down approach of system design, a program structure composed of main program
blocks should be designed. Each block, then, should further be refined. Thus several flowcharts
can be drawn in increasing level of refinement.
Memory, I/O and register organization
A general memory map should be made which will show the various memory blocks, their
capabilities, starting and ending addresses. Then, on that basis, separate maps for RAM and
ROM area allocation should be prepared. These maps should show the detailed organization
with the definitions of bytes and bits. Similarly, the I/O organization should be planned for the
mapping between the hardware I/O pins and the software bits. The I/O organization should
contain the I/O addresses, the port numbers, the data formats, etc. In the same way, the various
variables assigned to the registers must be defined. This is to find out the optimum utilization
of registers. The designer can also make use of a stack to store the values of registers,
temporarily.
Editing and assembling
The process of entering and modifying a program is called editing. It is done manually if the microprocessor
kit or the microcomputer does not have an editor. With a microprocessor development system,
editing is facilitated by the resident editor software. The program is then assembled, i.e. it
is translated into object codes. This job is either done by the assembler if it is available with
the machine or otherwise performed manually. A number of cross assemblers are available for
almost all microprocessors on the Internet. These can be used for editing and assembling the
program on personal computers.
2.1.6 Test and Debug
As opposed to the software development, the bottom-up approach should be used for testing,
i.e. first the smallest units, and then the larger blocks, and finally the complete system is tested.
The testing should be done preferably by simulating the environment in which the module is
to work. Because of the bottom-up approach, the errors at the lowest-level module can be
detected and rectified. Thus the lower modules cannot act as error sources to higher-level
modules. The microprocessor simulator software available on the Internet can be used to test
the assembled software on personal computers. However, a microprocessor development
system is best suited for this purpose. The errors in a microprocessor-based system can occur
not only due to hardware and software, but also due to the interaction between the two. The
system test starts with hardware. It is assumed that the peripheral hardware test has already
been carried out independently. The system hardware can best be tested by using small and
simple programs for testing each individual function. Once these tests are successfully
completed, the actual software is tested block by block. When testing individual software
blocks, it should be made sure that the transfer parameters satisfy the conditions of the
integrated software.
2.1.7 Integration
Once the testing of hardware and software subsystems is completed, the
integration of hardware and software is carried out and then the system is
tested under real/simulated conditions. It needs to be emphasized here
that due to various problems the integration of the two subsystems may
not be smooth, particularly if the software and the hardware have not
been developed in a coordinated and organized manner.
2.1.8 Documentation
Documentation of the system is as important as system development. However, it is often
overlooked and even de-emphasized by designers. A heavy price is often paid for this neglect
when a fault occurs or when one of the designers leaves the organization. All the activities of
both hardware and software should be documented during the development period itself.
Documentation helps in the coordinated approach to system development. The system
specifications, the environment, the hierarchical chart, the flowchart, the state transition
diagrams, the test procedures, etc. should all be included in the documentation, to facilitate
system development and its subsequent use.
2.2 DEVELOPMENT TOOLS
A number of approaches can be used to simplify the development of microprocessor-based
systems. Each of these depends upon the availability of suitable development tools. The four
main approaches used are:
• Microcomputer kit
• Dedicated microprocessor development system
• Universal microprocessor development system
• Software package/simulator
2.2.1 Microcomputer Kit
The main purpose of a microcomputer kit is to impart training on microprocessors. The use
of a microcomputer kit to develop application software is perhaps the crudest and most error-
prone of all the methods. A typical microcomputer kit essentially consists of the central
processing unit of the target machine, on which the application program in binary code is
executed, a small amount of random access memory, serial input/output ports for
interfacing to peripherals, an interrupt facility, and a quartz-crystal-controlled clock, etc. The
system program consists of a simple monitor, in some cases an assembler, and a debugger. Such
a kit can only be used for the development of simple programs written in machine code or
assembly language. The debugging facilities provided are very limited. So it is very difficult
to develop an application program by using the microcomputer kit.
2.2.2 Dedicated Microprocessor Development System
A dedicated microprocessor development system (MDS) is a microcomputer system specially
designed to aid in the development of a prototype system incorporating a specific
microprocessor. It incorporates various user aids and test programs that enable a prototype
system to be fully analysed. As implied by the term dedicated, such systems can only be used
to develop software for one or a limited range of microprocessors manufactured by the same
manufacturer. The main drawback of an MDS is that the user will have to base all his
future systems on one particular microprocessor. A typical microprocessor development
system will have the following configuration.
• A CPU
• RAM upwards from 16 KB
• A visual display monitor
• An alphanumeric keyboard
• A printer
• A secondary mass storage device, i.e. either a floppy disk or a hard disk
• Ports for interfacing other peripherals
• Hardware debugging aids
• PROM/ EPROM Programmer, UV eraser
• An in-circuit emulator
• Slots for additional boards.
The heart of the system which actually facilitates system integration and debugging is the in-
circuit emulator (ICE). The hardware debugging aids include Logic State Analyzer, Signature
Analyzer, etc.
Logic state analyser
When errors occur during data transfers or interrupt servicing, it is often necessary to examine
a number of the streams of data which have been handled. If we think of an 8-bit
microprocessor, there are 16 address lines and 8 data lines in addition to the control lines to
investigate. So a single- or twin-beam oscilloscope is not a suitable tool for this purpose. An
instrument developed particularly for this situation is the Logic State Analyzer. It has 16–48
data probes which can be attached to the bus lines to capture data on the rising edge of each
clock pulse. Generally, a block of 250 successive bits is captured by each probe and stored in
a semiconductor memory. The sequence is started by a trigger pulse which can be either an
electrical signal such as an interrupt or a timing pulse or a derived form of a particular bit
pattern which appears on the line under test. Once captured, the data can be displayed either as
a sequence of words coded in hexadecimal, octal or binary format or as a set of waveforms.
Where the program appears to jump out of the intended sequence, the probes can be put on the
address lines with the sampling enabled by the Valid Memory Address (VMA) signal. The data
captured will then be a list of addresses from which the microprocessor has fetched successive
instructions and data. If any interface package appears to be working incorrectly, its address
can be used as the trigger signal and the data lines can be examined for the correct signals.
Although this procedure gives a useful record of the data flow and addresses used, it is
essentially a sampled system. Any short-duration transients which occur between the clock
pulses could cause false data to be received without this being visible on the display. In order to
check for this, it is usually possible to operate the analyzer asynchronously. In this mode, the
data sampling clock is generated within the analyzer and is not locked to the microprocessor clock.
By running the sampling clock 5–10 times faster than the microprocessor clock, transients
which are long enough to cause a logical error will normally be sampled and stored and
also seen on the display.
Signature analyser
The signature analyser is an important tool for hardware debugging. This was originally
developed as a quick method of checking the complex digital instruments but is equally
applicable to the microprocessors as well. This allows a stream of data to be checked easily for
any single-bit error. The process is similar to the one that is used to generate the check
character, normally sent after each block in many kinds of digital transmission, and uses a shift
register with feedback connections. By itself the signature conveys no meaning; it is used in a
comparison process by first measuring the signature of a suitable set of nodes for a set of
standard input signals in a system known to be working correctly. These form a standard signature
for the system. By measuring the signature at successive nodes of a faulty system, it is possible
to locate the fault within some particular section of the equipment. It should be noted that
however long the train of data signal may be, if only one bit is in error, the signature obtained
will differ markedly from the correct one. The test thus has a high discrimination capability.
PROM/EPROM programmer and UV eraser
Once the application software has been developed and checked in the prototype hardware
module, the next task is to store the software. It can be stored either on a hard disk, a floppy disk, or in
semiconductor memory. In an application where the software does not require frequent
alteration or modification, it is better to load the program in the semiconductor memory
(PROM/EPROM). PROM is a programmable ROM and once it is programmed, i.e. data is
stored in it, the same cannot be erased. But in an EPROM, though the data is stored in the same
way, it is erasable and we can store the new data after erasing. The ultraviolet rays are allowed
to fall on the small window of the EPROM chip to erase the previously stored data. The UV
erasers and EPROM programmers are available commercially.
In-circuit emulator (ICE)
The ICE is used to combine software testing with hardware testing. The MDS with the ICE
option contains at least two microprocessors. The MDS processor supervises the system
resources, executes the system monitor commands, and drives the system peripherals. The
second processor, i.e. the ICE processor, interfaces directly to the designer’s prototype or
production system via an external cable. In-circuit emulation is carried out by pulling out the
socket-mounted microprocessor chip from the prototype microcomputer system for which
the software is being developed and replacing it by the in-circuit emulator. Thus the hardware
system to be tested has access to all the resources of the MDS and can be tested in real time.
As the system operates in real-time, the response of the input/output system to particular events
can be tested by interrupting program execution and by checking the contents of memory
locations and address registers. Sometimes it may be required to emulate the system more
precisely. In real-time emulation we may not get detailed trace data; a single-step facility
is usually available for this purpose. This single stepping differs from what is
available in a microprocessor kit: here, the number of steps to be emulated can be programmed.
We can also specify the particular conditions for halting. At the end of emulation, the debugged
code can be stored as well. This facility of combined hardware and software testing of the target
system is extremely useful and can considerably shorten the system development time.
The MDS also includes considerable software facility. Typical software modules included in
MDS constitute a single or multitask disk operating system which contains
• Assemblers
• High-level language compilers/interpreters
• Loaders
• Linkers
• Text editors
• Debug monitors
• File handling/management routines
An operating system provides an interface between the hardware and the user. It consists of the
set of program modules, as mentioned above, for allocating and controlling the resources
available on the system automatically.
2.2.3 Universal Microprocessor Development System
A dedicated MDS can only be used for developing the microcomputer hardware/software for
a single microprocessor or a few of the same series manufactured by the same company.
Though it is true that the majority of applications can be implemented using any of the available
microprocessors, efficiency-wise some microprocessors may be superior to others. So the
MDS manufacturers have come forward with a universal MDS, which can be used to develop
and test the hardware/software for a number of different systems. The term “universal” is rather
misleading because no single development system can be used to develop and test the
hardware/software for all the available microprocessors. However, manufacturers do provide
an attractive method of developing the hardware/software for a wide range of microprocessors
provided by a number of different suppliers. In addition to all the normal facilities available in
a dedicated system, the universal system has the facility to plug in different modules which
result in the development system emulating the behaviour of a particular microprocessor.
2.2.4 Simulators
A simulator is an alternative to the MDS for the development and testing of microcomputer
software. As the term implies, simulators are used to simulate the features, such as memory
and registers, of a target microprocessor on a host computer (personal computer). The
application program object code for the target microprocessor, generated by a cross compiler
or a cross assembler is executed on the host computer system. As the execution proceeds, the
contents of the simulated registers and memory locations are altered as they would be in the
real system. These values can then be printed out and analysed. Here the execution takes
place in simulated time and not in real time. The distinct advantages of simulators are given
below.
a) As there are no real-time execution constraints, a more thorough check can be carried out
on the program output thereby helping with the process of debugging the program.
b) It is also possible to evaluate the likely performance of a microprocessor for a particular
application.
The main drawback lies in the difficulty of simulating the effect of input and output values.
Real-time testing and debugging are not possible on a simulator and must be carried out on the actual hardware.