Embedded System Fundamentals Overview
It has hardware.
It has a Real Time Operating System (RTOS) that supervises the application software and
provides a mechanism to let the processor run processes as per the schedule, following a
plan to control latencies. The RTOS defines the way the system works; it sets the rules
during the execution of the application program. A small scale embedded system may not
have an RTOS.
Tightly constrained − All computing systems have constraints on design metrics, but
those on an embedded system can be especially tight. A design metric is a measure of an
implementation's features such as its cost, size, power, and performance. The system must be of
a size to fit on a single chip, must perform fast enough to process data in real time, and must
consume minimum power to extend battery life.
Reactive and Real time − Many embedded systems must continually react to changes in
the system's environment and must compute certain results in real time without
delay. Consider the example of a car cruise controller: it continually monitors and reacts
to speed and brake sensors. It must compute acceleration or deceleration repeatedly
within a limited time; a delayed computation can result in failure to control the car.
Memory − It must have memory, as its software is usually embedded in ROM. It does not
need secondary memory as in a general-purpose computer.
Connected − It must have peripherals connected to input and output devices.
HW-SW systems − Software is used for more features and flexibility. Hardware is used
for performance and security.
Advantages
Easily Customizable
Low cost
Enhanced performance
Disadvantages
Basic Structure of an Embedded System
Sensor − It measures a physical quantity and converts it to an electrical signal which
can be read by an observer or by an electronic instrument such as an A-D converter. A
sensor stores the measured quantity in memory.
A-D Converter − An analog-to-digital converter converts the analog signal sent by the
sensor into a digital signal.
Processor & ASICs − Processors process the data to measure the output and store it in
memory.
D-A Converter − A digital-to-analog converter converts the digital data fed by the
processor to analog data.
Actuator − An actuator compares the output given by the D-A converter with the
expected output stored in it and produces the approved output.
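Taken together, these blocks form a simple sample-compute-actuate loop. The sketch below is illustrative only: read_adc(), write_dac(), SETPOINT and GAIN are hypothetical names standing in for whatever the target hardware and application actually provide.

#include <stdint.h>

/* Hypothetical hardware access routines - placeholders, not part of any real device library. */
extern uint16_t read_adc(void);          /* digitised sensor value */
extern void     write_dac(uint16_t v);   /* drives the actuator    */

#define SETPOINT 512u   /* desired value in ADC counts (assumed) */
#define GAIN     2      /* simple proportional gain (assumed)    */

/* One pass of a minimal control loop: sample, compute, actuate. */
void control_step(void)
{
    uint16_t measured = read_adc();                /* sensor via A-D   */
    int32_t  error    = (int32_t)SETPOINT - measured;
    int32_t  output   = (int32_t)SETPOINT + GAIN * error;

    if (output < 0)    output = 0;                 /* clamp to the     */
    if (output > 1023) output = 1023;              /* 10 bit D-A range */
    write_dac((uint16_t)output);                   /* actuator via D-A */
}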
The processor is the heart of an embedded system. It is the basic unit that takes inputs and produces
an output after processing the data. For an embedded system designer, it is necessary to have
knowledge of both microprocessors and microcontrollers.
Processors in a System
A processor has two essential units − a Control Unit (CU) and an Execution Unit (EU).
The CU includes a fetch unit for fetching instructions from memory. The EU has circuits
that implement the instructions pertaining to data transfer operations and data conversion from
one form to another.
The EU includes the Arithmetic and Logical Unit (ALU) as well as the circuits that execute
instructions for program control tasks, such as an interrupt or a jump to another set of instructions.
A processor runs fetch and execute cycles, executing the instructions in the same sequence in which they
are fetched from memory.
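This fetch and execute cycle can be pictured as a small loop. The sketch below is purely illustrative: the two "instructions" and the memory layout are invented for the example and do not correspond to any real processor.

#include <stdint.h>

#define MEM_SIZE 256

/* Toy opcodes invented for this sketch only. */
enum { OP_HALT = 0x00, OP_LOAD = 0x01 };

static uint8_t memory[MEM_SIZE];   /* program and data share one array */
static uint8_t acc;                /* accumulator register             */

void run(void)
{
    uint8_t pc = 0;                      /* program counter             */
    for (;;) {
        uint8_t opcode = memory[pc++];   /* fetch the next instruction  */
        switch (opcode) {                /* execute it                  */
        case OP_LOAD:
            acc = memory[pc++];          /* operand follows the opcode  */
            break;
        case OP_HALT:
        default:
            return;                      /* stop on HALT or unknown op  */
        }
    }
}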
Types of Processors
o Microprocessor
o Microcontroller
o Embedded Processor
o Media Processor
Peripherals
An embedded system has to communicate with the outside world and this is done by
peripherals.
Input peripherals are usually associated with sensors that measure the external
environment and thus effectively control the output operations that the embedded system
performs.
Binary outputs
These are simple external pins whose logic state can be controlled by the processor to
either be a logic zero (off) or a logic one (on).
Serial outputs
These are interfaces that send or receive data using one or two pins in a serial mode.
They are less complex to connect but are more complicated to program.
A parallel port looks very similar to a memory location and is easier to visualize and thus
use. A serial port has to have data loaded into a register and then a start command issued.
The data may also be augmented with additional information as required by the protocol.
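The difference can be seen in a short sketch. The register addresses and bit layout below are invented purely for illustration; a real part's data sheet defines the actual registers. Writing to the parallel port is just a memory write, while the serial port needs a data load followed by a start command.

#include <stdint.h>

#define PORTA_DATA (*(volatile uint8_t *)0x40001000u)  /* parallel port data     */
#define UART_DATA  (*(volatile uint8_t *)0x40002000u)  /* serial data register   */
#define UART_CTRL  (*(volatile uint8_t *)0x40002001u)  /* serial control         */
#define UART_START 0x01u                               /* hypothetical start bit */

void parallel_write(uint8_t value)
{
    PORTA_DATA = value;        /* looks just like a memory write */
}

void serial_write(uint8_t value)
{
    UART_DATA = value;         /* load the data register         */
    UART_CTRL = UART_START;    /* then issue the start command   */
}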
Analogue values
While processors operate in the digital domain, the natural world does not and tends
towards analogue values. As a result, signals passing between the system and the external
environment need to be converted from analogue to digital and vice versa.
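A common task on the digital side of such an interface is scaling a raw converter reading into engineering units. The sketch below assumes a 10-bit converter and a 3.3 V reference; both figures are assumptions for the example, not properties of any particular part.

#include <stdint.h>

#define ADC_FULL_SCALE 1023u   /* 10 bit converter (assumed) */
#define VREF_MV        3300u   /* 3.3 V reference (assumed)  */

/* Convert a raw ADC reading into millivolts. */
uint32_t adc_to_millivolts(uint16_t raw)
{
    return ((uint32_t)raw * VREF_MV) / ADC_FULL_SCALE;
}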
Displays
Displays are becoming important and can vary from simple LEDs and seven segment
displays to small alpha-numeric LCD panels.
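For a seven segment display, the driving software typically holds a small lookup table mapping each digit to a segment pattern. The bit assignment below (bit 0 = segment a through bit 6 = segment g, common cathode) is an assumption for the example; the board's actual wiring dictates the real values.

#include <stdint.h>

/* Segment patterns for the digits 0-9. */
static const uint8_t seven_seg[10] = {
    0x3F, /* 0 */  0x06, /* 1 */  0x5B, /* 2 */  0x4F, /* 3 */
    0x66, /* 4 */  0x6D, /* 5 */  0x7D, /* 6 */  0x07, /* 7 */
    0x7F, /* 8 */  0x6F  /* 9 */
};

/* Return the segment pattern for a decimal digit, all segments off otherwise. */
uint8_t digit_to_segments(uint8_t digit)
{
    return (digit < 10u) ? seven_seg[digit] : 0x00u;
}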
Time derived outputs
• Timers and counters are probably the most commonly used functions within an embedded
system.
Software
The software components within an embedded system often encompass the technology that
adds value to the system and defines what it does and how well it does it. The software can
consist of several different components:
• Initialisation and configuration
• Operating system or run-time environment
• The applications software itself
• Error handling
• Debug and maintenance support.
Algorithms
• Algorithms are the key constituents of the software that makes an embedded system behave in
the way that it does.
• They can range from mathematical processing through to models of the external environment
which are used to interpret information from external sensors and thus generate control signals.
• With the digital technology in use today such as MP3 and DVD players, the algorithms that
digitally encode the analogue data are defined by standards bodies.
• While this standardization might suggest that the importance of selecting an algorithm is far less
than might be thought, the reality is far different: the focus simply shifts to getting the right
implementation of the algorithm.
Microcontroller
• Microcontrollers can be considered as self-contained systems with a processor, memory and
peripherals so that in many cases all that is needed to use them within an embedded system
is to add software.
• The processors are usually based on 8 bit accumulator-based architectures such as the MC6800
family. There are 4 bit versions available, such as the National COP series, which offer less
processing power but reduce cost even further.
• These are limited in their functionality but their low cost has meant that they are used in
many obscure applications. Microcontrollers are usually available in several forms:
Devices for prototyping or low volume production runs
Devices for low to medium volume production runs
Most microcontroller families have parts that support external expansion and have an
external memory and/or I/O bus which can allow the designer to put almost any
configuration together.
This is often done by using a parallel port as the interface instead of general-purpose I/O.
Many of the higher performance microcontrollers are adopting this approach.
Consider a microcontroller with an expanded mode that allows the parallel ports A and B
to be used as byte wide interfaces to external RAM and ROM. In this type of
configuration, some microcontrollers disable access to the internal memory while others
still allow it.
Microprocessor based
The use of processors in the PC market continued to provide a series of faster and faster
processors such as the MC68020, MC68030 and MC68040 devices from Motorola and
the 80286, 80386, 80486 and Pentium devices from Intel.
These CISC architectures have been complemented with RISC processors such as the
PowerPC, MIPS and others. These systems offer more performance than is usually
available from a traditional microcontroller.
However, this is beginning to change. There has been the development of integrated
microprocessors where the processor is combined with peripherals such as parallel and
serial ports, DMA controllers and interface logic to create devices that are more suitable
for embedded systems by reducing the hardware design task and costs.
As a result, there has been almost a parallel development of these integrated processors
along with the desktop processors.
Typically, the integrated processor will use a processor generation that is one behind the
current generation. The reason is dependent on silicon technology and cost.
By using the previous generation which is smaller, it frees up silicon area on the die to
add the peripherals and so on.
Board based
So far, the types of embedded systems that we have considered have assumed that the
hardware needs to be designed, built and debugged.
An alternative is to use hardware that has already been built and tested such as board-
based systems as provided by PCs and through international board standards such as
VME bus.
The main advantage is the reduced work load and the availability of ported software that
can simply be utilized with very little effort.
The disadvantages are higher cost and in some cases restrictions in the functionality that
is available.
The compilation process
When using a high level language compiler with an IBM PC or UNIX system, it is all
too easy to forget all the stages that are encountered when source code is compiled into
an executable file. Not only is a suitable compiler needed, but the appropriate run-time
libraries and linking loader to combine all the modules are also required.
The problem is that these may be well integrated for the native system, PC or
workstation, but this may not be the case for a VMEbus system.
Compiling code
Like many high level languages, such as Pascal or C, the language only provides a
subset of its facilities and commands as built-in routines and relies on libraries to
provide the full range of functions.
The first stage involves pre-processing the source, where include files are added to it.
These files define constants, standard functions and so on.
The output of the pre-processor is fed into the compiler, where it produces an assembler
file using the native instruction codes for the processor. This file may have references to
other software files, called libraries. The assembler file is next assembled and converted
into an object file.
This contains the hexadecimal coding for the instructions, except that memory addresses
and file references are not completed; these are resolved by the loader (sometimes known
as a linker) that finally creates an executable file.
The loader calculates all the memory addresses and takes software routines from library
files to supply the standard functions called by the program.
The pre-processor
The pre-processor, as its name suggests, processes the source code before it goes through
the compiler. It allows the programmer to define constants, variable types and other
information. It also includes other files (include files) and combines them into the
program source. These tasks can be conditionally performed, depending on the value of
constants, and so on. The pre-processor is programmed using one of five basic
commands which are inserted into the C source.
#define
#define identifier string
This statement replaces all occurrences of identifier with string. The normal convention is to put
the identifier in capital letters so it can easily be recognized as a pre-processor statement. In this
example it has been used to define the values of TRUE and FALSE. The main advantage of this
is usually the ability to make C code more readable by defining names to be certain values.
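A minimal sketch of this usage follows; poll_device() and the constant values are hypothetical and exist only to show how the defined names make the code more readable.

#define TRUE  1
#define FALSE 0
#define MAX_RETRIES 3          /* illustrative value */

static int poll_device(void)   /* stub standing in for real hardware access */
{
    return 1;
}

int wait_for_device(void)
{
    int tries;
    for (tries = 0; tries < MAX_RETRIES; tries++) {
        if (poll_device())
            return TRUE;       /* reads better than a bare 1 */
    }
    return FALSE;
}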
#ifdef ... #endif
This pair of statements conditionally includes code, depending on whether the identifier has been
previously defined using a #define statement. This is extremely useful for conditionally altering
the program, depending on definitions. It is often used to insert machine dependent software into
programs.
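A short sketch of this usage follows; the TARGET_M68K symbol and the addresses are invented for illustration, and the symbol could equally be supplied from the compiler command line with the -D option instead of a #define.

#define TARGET_M68K            /* invented symbol; could come from -DTARGET_M68K */

#ifdef TARGET_M68K
    #define ROM_BASE 0x00000000u   /* machine dependent value         */
#else
    #define ROM_BASE 0xFFFF0000u   /* value for some other memory map */
#endif

unsigned long rom_base_address(void)
{
    return ROM_BASE;
}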
Compilation
This is where the processed source code is turned into assembler modules ready for the
linker to combine them with the run-time libraries.
There are several ways this can be done. The first is to generate object files directly
without going through a separate assembler stage. The more usual approach is to create an
assembler source listing which is then run through an assembler to create an object file.
The standard C compiler for UNIX systems is called cc and from its command line, C
programs can be pre-processed, compiled, assembled and linked to create an executable
file. Its basic options shown below have been used by most compiler writers and
therefore are common to most compilers, irrespective of the platform.
This procedure can be stopped at any point and options given to each stage, as needed.
The options for the compiler are:
-c Compiles as far as the linking stage and leaves the object file (suffix .o). This is used to
compile programs to form part of a library.
-p Instructs the compiler to produce code which counts the number of times each routine is
called.
-f Links the object program with the floating point software rather than using a hardware
processor.
-g Generates symbolic debug information for debuggers like sdb. Without this information,
the debugger can only work at assembler level and not print variable values and so on.
-O Switch on the code optimiser to optimise the program and improve its performance.
-Wc,args Passes the arguments args to the compiler process indicated by c, where c is one of
the characters p, 0, 1, 2, a or l, standing for the pre-processor, compiler first pass, compiler
second pass, optimiser, assembler and linker, respectively.
-S Compiles the named C programs and generates an assembler language output file only.
-E Only runs the pre-processor on the named C programs and sends the result to the standard
output.
-P Only runs the pre-processor on the named C programs and puts the result in the
corresponding files suffixed .i.
-Dsymbol Defines a symbol to the pre-processor. This mechanism is useful in defining a
constant which is then evaluated by the pre-processor, without having to edit the original source.
-Usymbol Undefines a symbol to the pre-processor. This is useful in disabling pre-processor
statements.
Object files cannot be executed directly: the object file generated by the assembler contains the
basic program code but is not complete.
The linker, or loader as it is also called, takes the object file and searches library files to
find the routines it calls. It then calculates all the address references and incorporates any
symbolic information. Its final task is to create a file which can be executed. This stage is
often referred to as linking or loading.
The linker gives the final control to the programmer concerning where sections are
located in memory, which routines are used (and from which libraries) and how
unresolved references are reconciled.
Symbols, references and relocation
When the compiler encounters a printf() or similar statement in a program, it creates an
external reference which the linker interprets as a request for a routine from a library.
When the linker links the program to the library file, it looks for all the external
references and satisfies them by searching either default or user defined libraries. If any
of these references cannot be found, an error message appears and the process aborts.
This also happens with symbols where data types and variables have been used but not
specified.
The linker supplies the missing pieces, fits them and makes sure that the jigsaw is
complete.
The linker does not stop there. It also calculates all the addresses which the program
needs to jump or branch to.
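A two-file sketch makes the idea concrete. The function name board_init is invented for the example; the point is simply that the call in one file is recorded as an external reference which is only resolved when the defining object file (or library) is presented to the linker.

/* file1.c - calls a routine it does not define; the compiler records
   an external reference to board_init in the object file.            */
extern void board_init(void);   /* declared here, defined elsewhere */

int main(void)
{
    board_init();                /* unresolved until link time */
    return 0;
}

/* file2.c - supplies the definition.  When the two object files are
   linked, the reference in file1.o is satisfied; leave file2.o out
   and the linker reports an undefined symbol and aborts.             */
void board_init(void)
{
    /* hardware set-up would go here */
}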
The process of converting the source code into an executable image involves three distinct
steps: compiling, linking and locating.
Object files are produced in a standard format such as the Executable and Linkable Format
(ELF); an object file generally contains a header followed by sections such as text, data and bss.
Linking
The linker combines the individual object files by merging their corresponding sections, such as
text, data and bss. The output of the linker is a single file: all of the machine language code
from all of the input object files ends up in the text section of this new file, and all of the
initialized and uninitialized variables reside in its new data section and bss
section respectively.
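The sketch below shows where code and variables typically end up; the exact placement rules depend on the compiler and linker in use.

const char banner[] = "v1.0";   /* constant data     -> text (or a read only data) section */
int  retry_limit    = 5;        /* initialized data  -> data section */
int  error_count;               /* uninitialized     -> bss section  */

int next_error(void)            /* code              -> text section */
{
    return ++error_count;
}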
Locating
The locator uses information about the target's physical memory layout to assign physical
memory addresses to each of the code and data sections within the relocatable program.
Finally it produces an output file that contains a binary memory image that can be loaded
into the target processor's ROM.
Debugging techniques
A common technique is to test as much of the software as possible on the host system before it
is run on the target. The advantage of this is that it allows a parallel development of the hardware
and software and gives added confidence, when the two parts are integrated, that the system will work.
Using this technique, it is possible to simulate I/O using the keyboard as input or another
task passing input data to the rest of the modules. Another technique is to use a data
table which contains data sequences that are used to test the software.
This simulation allows logical testing of the software but rarely offers quantitative
information unless the simulation environment is very close to that of the target, in terms
of hardware and software environments.
There are simulation tools available for these routines as well. CPU simulators can
simulate a processor, memory system and, in some cases, some peripherals and allow
low level assembler code and small HLL programs to be tested without the need for the
actual hardware. These tools tend to fall into two categories:
The first category simulates the programming model and memory system and offers simple
debugging tools similar to those found with an onboard debugger.
These are inevitably slow, when compared to the real thing, and do not provide timing
information or permit different memory configurations to be tested. However, they are
very cheap and easy to use and can provide a low cost test bed for individuals within a
large software team.
There are even shareware simulators for the most common processors such as the one
from the University of North Carolina which simulates an MC68000 processor.
The second category extends the simulation to provide timing information based on the
number of clock cycles. Some simulators can even provide information on cache
performance, memory usage and so on, which is useful data for making hardware
decisions.
Memory systems with different levels of performance can be exercised using the simulator to provide
performance data. This type of information is virtually impossible to obtain without
using such tools. These more powerful simulators often require very powerful hosts with
large amounts of memory. SDS provide a suite of such tools that can simulate a
processor and memory and with some of the integrated
processors that are available, even emulate onboard peripherals such as LCD controllers and
parallel ports.
Onboard debugger
The onboard debugger provides a very low level method of debugging software. Usually
supplied as a set of EPROMs which are plugged into the board or as a set of software
routines that are combined with the applications code, they use a serial connection to
communicate with a PC or workstation.
They provide several functions: the first is to provide initialization code for the processor
and/or the board which will normally initialize the hardware and allow it to come up into
a known state. The second is to supply basic debugging facilities and, in some cases,
allow simple access to the board's peripherals. Often included in these facilities is the
ability to download code using a serial port or from a floppy disk.
One method, which relies on processor support, allows the vector table to be
moved elsewhere in the memory map. With the later M68000 processors, this can be
done by changing the vector base register, which is part of the supervisor programming
model.
The debugger usually operates at a very low level and allows basic memory and processor
register display and change, setting RAM-based breakpoints and so on. This is normally
performed using hexadecimal notation, although some debuggers can provide a simple
disassembler function.
The onboard debugger provides a simple but sometimes essential way of debugging VMEbus
software. For small amounts of code, it is quite capable of providing an effective method of
debugging, albeit not as efficient as a full blown symbolic level debugger, nor as complex or
expensive. It is often the only way of finding out about a system which has hung or crashed.
A low level debugger can set a breakpoint at the start of a routine, but it cannot set one for
particular task functions and operations. It is possible to set a breakpoint at the start of the
routine that sends a message, but if only a particular message is required, the low level approach
will need manual inspection of all messages to isolate the one that is needed, an often daunting
and impractical approach!
To solve this problem, most operating systems provide a task level debugger which works at the
operating system level. Breakpoints can be set on system circumstances, such as events,
messages, interrupt routines and so on, as well as the more normal memory address. In addition,
the ability to filter messages and events is often included. Data on the current executing tasks is
provided, such as memory usage, current status and a snapshot of the registers.
Symbolic debug
The ability to use high level language instructions, functions and variables instead of the more
normal addresses and their contents is known as symbolic debugging. Instead of using an
assembler listing to determine the address of the first instruction of a C function and using this to
set a breakpoint, the symbolic debugger allows the breakpoint to be set by quoting a line
reference or the function name. This interaction is far more efficient than working at the
assembler level, although it does not necessarily mean losing the ability to go down to this level
if needed.
The reason for this is often due to the way that symbolic debuggers work. In simple terms, they
are intelligent front ends for assembler level debuggers, where software performs the automatic
look-up and conversion between high level language structures and their respective assembler
level addresses and contents.
Emulators
A remote debugger is helpful only for downloading, monitoring and controlling the state of
embedded software.
An emulator, in contrast, allows you to examine the state of the processor on which the program
is actually running. It is itself an embedded system, with its own copy of the target processor,
RAM, ROM, and its own embedded software.
An emulator takes the place of (or emulates) the processor on the target board.
An emulator uses a remote debugger for its human interface.
An emulator supports powerful debugging features such as hardware breakpoints and real-time
tracing. Hardware breakpoints allow you to stop execution in response to a wide variety of
events, including instruction fetches, memory and I/O reads and writes, and interrupts.
Real time tracing allows you to see the exact order in which events occurred, so it can help you
answer questions related to specific errors.
ROM Emulator
It is a device that emulates a read only memory device; like an ICE (in-circuit emulator),
it connects to the target embedded system and communicates with the host.
Through the ROM socket connection the embedded system sees it as any other read only
memory, but to the remote debugger it looks like a debug monitor.
Advantages:
There is no need to port the debug monitor code to particular target hardware.
The ROM emulator supplies its own serial or network connection to the host.
The ROM emulator is a true replacement for the original ROM, so none of the target's memory
is used up by the debug monitor code.
Simulators
A simulator is a completely host-based program that simulates the functionality and instruction
set of the target processor.
Advantage: A Simulator can be quite valuable in the earlier stage of a project when there has not
yet been any actual hardware implementation for the programmers to experiment with.
Disadvantage: One of the disadvantages of a simulator is that it simulates only the processor.