Module 5 - ARM Microcontroller and Embedded System Design
17EC62
MODULE 5: RTOS & IDE FOR ESD
Operating System
• An operating system (OS) is software, consisting of programs and data, that runs on
computers, manages the computer hardware, and provides common services for
efficient execution of various application software.
✓ OS manages the system resources and makes them available to the user applications/tasks on
a need basis
[Figure: Layered OS architecture — User Applications call into the Application
Programming Interface (API); the kernel provides services for Memory Management,
Process Management, Time Management, File System Management and I/O System
Management; the Device Driver Interface connects the kernel to the Underlying
Hardware.]
General Purpose Operating System (GPOS):
✓ May inject random delays into application software and thus cause slow responsiveness of an
application at unexpected times
✓ Personal Computer/Desktop system is a typical example of a system where a GPOS is
deployed.
Real-Time Operating System (RTOS):
✓ Deployed in embedded systems demanding real-time response
✓ Deterministic in execution behavior; consumes only a known amount of time for kernel
operations
✓ Implements scheduling policies so that the highest-priority task/application always executes
✓ Windows CE, QNX, VxWorks, MicroC/OS-II etc. are examples of Real-Time Operating
Systems (RTOS)
Task:
✓ Task is a piece of code or program that is separate from another task and can be executed
independently of the other tasks.
✓ In embedded systems, the operating system has to deal with a limited number of tasks
depending on the functionality to be implemented in the embedded system.
✓ Multiple tasks are not executed at the same time; instead they are executed in pseudo-parallel,
i.e. the tasks execute in turns as they use the processor.
✓ From a multitasking point of view, executing multiple tasks is like a single book being read by
multiple people, at a time only one person can read it and then take turns to read it.
✓ Different bookmarks may be used to help a reader identify where to resume reading next time.
✓ An Operating System decides which task to execute in case there are multiple tasks to be
executed. The operating system maintains information about every task and information about
the state of each task.
✓ The information about a task is recorded in a data structure called the task context. When a
task is executing, it uses the processor and the registers available for all sorts of processing.
When a task leaves the processor for another task to execute before it has finished its own, it
should resume at a later time from where it stopped and not from the first instruction. This
requires the information about the task with respect to the registers of the processor to be
stored somewhere. This information is recorded in the task context.
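As a sketch, the saving and restoring of a task context can be pictured with a toy processor model; the register set and names here are illustrative, not those of any real CPU:

```python
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    """Hypothetical task context: the register state saved on a switch."""
    pc: int = 0          # program counter: where to resume
    sp: int = 0          # stack pointer
    regs: list = field(default_factory=lambda: [0] * 4)  # general-purpose regs

class CPU:
    """Toy processor with four registers, a PC and an SP."""
    def __init__(self):
        self.pc, self.sp, self.regs = 0, 0, [0] * 4

    def save_context(self, ctx):
        # Copy the live register state into the task's context record.
        ctx.pc, ctx.sp, ctx.regs = self.pc, self.sp, list(self.regs)

    def restore_context(self, ctx):
        # Reload the saved state so the task resumes where it stopped.
        self.pc, self.sp, self.regs = ctx.pc, ctx.sp, list(ctx.regs)

cpu = CPU()
task_a, task_b = TaskContext(), TaskContext(pc=100)

cpu.pc, cpu.regs[0] = 42, 7    # task A runs for a while...
cpu.save_context(task_a)       # A is preempted: its state goes to its context
cpu.restore_context(task_b)    # B is dispatched from its own context
cpu.restore_context(task_a)    # later, A resumes exactly where it stopped
assert (cpu.pc, cpu.regs[0]) == (42, 7)
```

The point of the sketch is that the context, not the CPU, is the durable record: the task can be resumed on the same register values it left behind.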
Task States
• In an operating system there are always multiple tasks. At a time only one task can be
executed. This means that there are other tasks which are waiting for their turn to be
executed.
• Depending on whether it is executing or not, a task may be classified into one of the
following three states:
• Running state - Only one task can actually be using the processor at a given time that
task is said to be the “running” task and its state is “running state”. No other task can
be in that same state at the same time
• Ready state - Tasks that are not currently using the processor but are ready to run are
in the “ready” state. There may be a queue of tasks in the ready state.
• Waiting state - Tasks that are neither in the running nor the ready state but are waiting for
some event external to themselves to occur before they can go for execution are in
the “waiting” state.
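The three states and the legal moves between them can be captured in a small sketch; the transition set below is an assumption of this simple three-state model:

```python
from enum import Enum

class TaskState(Enum):
    RUNNING = "running"   # using the processor (at most one task at a time)
    READY   = "ready"     # runnable, waiting for its turn on the processor
    WAITING = "waiting"   # blocked on some external event

# Legal transitions in the three-state model (an illustrative assumption):
TRANSITIONS = {
    (TaskState.RUNNING, TaskState.READY),    # preempted by the scheduler
    (TaskState.RUNNING, TaskState.WAITING),  # blocks waiting for an event
    (TaskState.READY,   TaskState.RUNNING),  # dispatched by the scheduler
    (TaskState.WAITING, TaskState.READY),    # the awaited event occurred
}

def can_move(src, dst):
    return (src, dst) in TRANSITIONS

assert can_move(TaskState.WAITING, TaskState.READY)
# A waiting task cannot run directly; it must pass through the ready queue:
assert not can_move(TaskState.WAITING, TaskState.RUNNING)
```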
Process Concept:
Process: A process or task is an instance of a program in execution. The execution of a process must
progress in a sequential manner; at any time at most one instruction is executed. The process includes
the current activity as represented by the value of the program counter and the content of the
processor's registers. It also includes the process stack, which contains temporary data (such as method
parameters, return addresses and local variables), & a data section which contains global variables.
Process state: As a process executes, it changes state. The state of a process is defined by the current
activity of that process. Each process may be in one of the following states:
✓ New: the process is being created
✓ Ready: the process is waiting to be assigned to a processor
✓ Running: instructions are being executed
✓ Waiting: the process is waiting for some event to occur
✓ Terminated: the process has finished execution
Many processes may be in the ready and waiting states at the same time.
Process scheduling:
Scheduling is a fundamental function of an OS. When a computer is multiprogrammed, it has multiple
processes competing for the CPU at the same time. If only one CPU is available, then a choice has to
be made regarding which process to execute next. This decision-making process is known as scheduling,
and the part of the OS that makes this choice is called the scheduler. The algorithm it uses in making this
choice is called the scheduling algorithm.
Scheduling queues: As processes enter the system, they are put into a job queue. This queue consists
of all processes in the system. The processes that are residing in main memory and are ready & waiting
to execute are kept on a list called the ready queue.
A process control block contains many pieces of information associated with a specific process. It
includes the following information:
✓ Process state: The state may be new, ready, running, waiting or terminated state.
✓ Program counter: it indicates the address of the next instruction to be executed for this process.
✓ CPU registers: The registers vary in number & type depending on the computer architecture.
They include accumulators, index registers, stack pointers & general-purpose registers, plus any
condition-code information. This state must be saved when an interrupt occurs to allow the
process to be continued correctly afterward.
✓ CPU scheduling information: This information includes process priority pointers to scheduling
queues & any other scheduling parameters.
✓ Memory management information: This information may include such information as the value
of the base & limit registers, the page tables or the segment tables, depending upon the memory
system used by the operating system.
✓ Accounting information: This information includes the amount of CPU and real time used,
time limits, account number, job or process numbers and so on.
✓ I/O Status Information: This information includes the list of I/O devices allocated to this
process, a list of open files and so on. The PCB simply serves as the repository for any
information that may vary from process to process
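A PCB holding the fields listed above might be sketched as follows; the field names and types are illustrative choices, not a real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a Process Control Block with the fields listed above."""
    pid: int
    state: str = "new"                 # new/ready/running/waiting/terminated
    program_counter: int = 0           # next instruction for this process
    cpu_registers: dict = field(default_factory=dict)
    priority: int = 0                  # CPU-scheduling information
    base: int = 0                      # memory management: base register
    limit: int = 0                     # memory management: limit register
    cpu_time_used: float = 0.0         # accounting information
    open_files: list = field(default_factory=list)  # I/O status information

# The OS keeps one PCB per process and updates it as the process runs:
pcb = PCB(pid=1, priority=5, base=0x4000, limit=0x1000)
pcb.state = "ready"
assert pcb.state == "ready" and pcb.open_files == []
```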
Threads
Applications use concurrent processes to speed up their operation. However, switching between
processes within an application incurs high process switching overhead because the size of the process
state information is large, so operating system designers developed an alternative model of execution
of a program, called a thread, that could provide concurrency within an application with less overhead.
To understand the notion of threads, let us analyze process switching overhead and see where a saving
can be made. Process switching overhead has two components:
• Execution related overhead: The CPU state of the running process has to be saved and the CPU state
of the new process has to be loaded in the CPU. This overhead is unavoidable.
• Resource-use related overhead: The process context also has to be switched. It involves switching of
the information about resources allocated to the process, such as memory and files, and interaction of
the process with other processes. The large size of this information adds to the process switching
overhead.
Consider child processes Pi and Pj of the primary process of an application. These processes inherit
the context of their parent process. If none of these processes have allocated any resources of their
own, their context is identical; their state information differs only in their CPU states and contents of
their stacks. Consequently, while switching between Pi and Pj, much of the saving and loading of
process state information is redundant. Threads exploit this feature to reduce the switching overhead.
A process creates a thread through a system call. The thread does not have resources of its own, so it
does not have a context; it operates by using the context of the process, and accesses the resources of
the process through it. We use the phrases “thread(s) of a process” and “parent process of a thread”
to describe the relationship between a thread and the process whose context it uses.
POSIX Threads:
POSIX Threads, usually referred to as pthreads, is an execution model that exists independently from
a language, as well as a parallel execution model. It allows a program to control multiple different flows
of work that overlap in time. Each flow of work is referred to as a thread, and creation and control
over these flows is achieved by making calls to the POSIX Threads API. POSIX Threads is an API
defined by the standard POSIX.1c, Threads extensions (IEEE Std 1003.1c-1995).
Implementations of the API are available on many Unix-like POSIX-conformant operating systems
such as FreeBSD, NetBSD, OpenBSD, Linux, Mac OS X, Android and Solaris, typically bundled as
a library libpthread. DR-DOS and Microsoft Windows implementations also exist: within the
SFU/SUA subsystem, which provides a native implementation of a number of POSIX APIs, and also
within third-party packages such as pthreads-w32, which implements pthreads on top of the existing
Windows API.
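The create/join pattern of the pthreads API can be illustrated with Python's threading module, whose Thread.start()/join() calls play the roles of pthread_create()/pthread_join(); the worker function and thread names here are hypothetical:

```python
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    # Each flow of work runs this function concurrently with the others.
    total = sum(range(n))
    with lock:                      # serialize access to the shared list
        results.append((name, total))

# Analogous to pthread_create(): start two threads on the same function.
threads = [threading.Thread(target=worker, args=(f"t{i}", 10)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()                        # analogous to pthread_join()

assert sorted(results) == [("t0", 45), ("t1", 45)]
```

Both threads share the parent's address space (the `results` list), which is exactly the saving over full processes described above.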
Win32 Threads:
Win32 threads are the threads supported by various flavors of the Windows operating system.
The Win32 Application Programming Interface (Win32 API) libraries provide the standard set of
Win32 thread creation and management functions.
Pre-emptive Scheduling
✓ Employed in systems, which implements preemptive multitasking
✓ When and how often each process gets a chance to execute (gets the CPU time) is dependent
on the type of preemptive scheduling algorithm used for scheduling the processes
✓ The scheduler can preempt (stop temporarily) the currently executing task/process and select
another task from the ‘Ready’ queue for execution
✓ When to pre-empt a task and which task is to be picked up from the ‘Ready’ queue for execution
after preempting the current task is purely dependent on the scheduling algorithm
✓ The act of moving a ‘Running’ process into the ‘Ready’ queue by the scheduler, without the
process requesting for it is known as ‘Preemption’
✓ Time-based preemption and priority-based preemption are the two important approaches
adopted in preemptive scheduling
Example (Shortest Remaining Time, SRT): Three processes with process IDs P1, P2, P3 and estimated
completion times of 10, 5, 7 milliseconds respectively enter the ready queue together. A new process P4
with an estimated completion time of 2 ms enters the queue after 2 ms.
At the beginning, only the three processes P1, P2 and P3 are available in the ready queue, and the
SRT scheduler picks the process with the shortest remaining time to completion (in this
example P2, with remaining time 5 ms) for scheduling.
Now process P4, with an estimated execution completion time of 2 ms, enters the Ready queue 2 ms
after the start of execution of P2. The processes are rescheduled for execution in the following order:
WT for P2 = 0 ms + 2 ms = 2ms (P2 starts executing first and is interrupted by P4 and has to wait till
the completion of P4 to get the next CPU slot)
WT for P4 = 0 ms (P4 starts executing by pre-empting P2 since the execution time for completion of
P4 (2ms) is less than that of the Remaining time for execution completion of P2 (3ms here))
TAT for P2 = 7 ms (P2 enters the queue at 0 ms and completes at 7 ms)
TAT for P4 = (4 - 2) ms
= 2 ms (P4 enters at 2 ms, waits 0 ms, executes for 2 ms and completes at 4 ms)
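The SRT schedule above can be checked with a small tick-by-tick simulation; this sketch assumes the ready queue is never empty (true for this example):

```python
def srt(processes):
    """processes: {pid: (arrival, burst)} in ms; simulate in 1 ms ticks,
    always running the ready process with the shortest remaining time.
    Returns {pid: (waiting_time, turnaround_time)}."""
    remaining = {p: b for p, (a, b) in processes.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if processes[p][0] <= t]
        p = min(ready, key=lambda q: remaining[q])   # shortest remaining time
        remaining[p] -= 1                            # run p for one tick
        t += 1
        if remaining[p] == 0:
            finish[p] = t
            del remaining[p]
    return {p: (finish[p] - a - b, finish[p] - a)    # WT = TAT - burst
            for p, (a, b) in processes.items()}

times = srt({"P1": (0, 10), "P2": (0, 5), "P3": (0, 7), "P4": (2, 2)})
assert times["P2"] == (2, 7)    # WT 2 ms, TAT 7 ms, as computed above
assert times["P4"] == (0, 2)    # preempts P2 at t = 2 ms, runs to completion
```

The simulation also yields the remaining figures: P3 runs after P2 (WT 7 ms) and P1 last (WT 14 ms).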
Round Robin (RR) scheduling: This type of algorithm is designed for time-sharing systems. It is
similar to FCFS scheduling but with a preemption condition to switch between processes. A small
unit of time, called the quantum or time slice, is used to switch between the processes. The average
waiting time under the round robin policy is often quite long.
Example: Three processes P1, P2, P3 with estimated completion times of 6, 4, 2 ms respectively enter
the ready queue together in the order P1, P2, P3. Calculate the WT, AWT, TAT and ATAT under the
RR algorithm with time slice = 2 ms.
The scheduler sorts the Ready queue based on the FCFS policy, picks up P1 from the queue and
executes it for the time slice of 2 ms.
When the time slice expires, P1 is preempted and P2 is scheduled for execution. The time slice
expires after 2 ms of execution of P2. Now P2 is preempted and P3 is picked up for execution. P3
completes its execution within the time slice and the scheduler picks P1 again for execution for the
next time slice. This procedure is repeated till all the processes are serviced.
WT for P1 = 0 + (6-2) + (10-8) = 6ms (P1 starts executing first and waits for two time slices to get
execution back and again 1 time slice for getting CPU time)
WT for P2 = (2-0) + (8-4) = 2+4 = 6ms (P2 starts executing after P1 executes for 1 time slice and
waits for two time slices to get the CPU time)
WT for P3 = (4-0) = 4ms (P3 starts executing after completing the first time slices for P1 & P2 and
completes its execution in a single time slice.)
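The same round robin schedule can be reproduced with a short simulation of the time-sliced queue described above:

```python
from collections import deque

def round_robin(bursts, quantum=2):
    """bursts: ordered {pid: burst in ms}; all processes arrive at t = 0.
    Returns {pid: waiting_time}."""
    remaining = dict(bursts)
    queue = deque(bursts)            # FCFS order: P1, P2, P3
    t, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        t += run                     # p runs for one time slice (or less)
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t
        else:
            queue.append(p)          # preempted: back to the tail of the queue
    # Arrival is 0, so TAT = finish time and WT = TAT - burst time.
    return {p: finish[p] - bursts[p] for p in bursts}

wt = round_robin({"P1": 6, "P2": 4, "P3": 2}, quantum=2)
assert wt == {"P1": 6, "P2": 6, "P3": 4}
avg = sum(wt.values()) / len(wt)     # AWT = (6 + 6 + 4) / 3 ≈ 5.33 ms
```

This confirms the hand-computed waiting times and gives AWT ≈ 5.33 ms; the corresponding TATs are 12, 10 and 6 ms, so ATAT ≈ 9.33 ms.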
Task Communication:
Shared memory: A shared memory is an extra piece of memory that is attached to some address spaces
for their owners to use. As a result, all of these processes share the same memory segment and have
access to it. Consequently, race conditions may occur if memory accesses are not handled properly.
Conceptually, a shared memory segment is attached to the address spaces of two processes, and both
process 1 and process 2 can access this shared memory as if it were part of their own address space.
In some sense, the original address spaces are "extended" by attaching this shared memory.
Pipe: A pipe is a method used to pass information from one program process to another. Unlike other
types of inter-process communication, a pipe only offers one-way communication by passing a
parameter or output from one process to another. The information that is passed through the pipe is
held by the system until it can be read by the receiving process.
Named pipe: In computing, a named pipe (also known as a FIFO, for its first-in first-out behavior) is
one of the methods for inter-process communication. It is an extension of the traditional pipe concept
on Unix. A traditional pipe is “unnamed” and lasts only as long as the process. A named pipe, however,
can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for inter-process
communication. A FIFO file is a special kind of file on the local storage which allows two or more
processes to communicate with each other by reading/writing to/from this file. A FIFO special file is
entered into the filesystem by calling mkfifo() in C. Once we have created a FIFO special file in this
way, any process can open it for reading or writing, in the same way as an ordinary file. However, it has
to be open at both ends simultaneously before you can proceed to do any input or output operations on it.
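The one-way behavior of an unnamed pipe can be sketched with the POSIX pipe() call as exposed by Python's os module; a named pipe would be created with os.mkfifo() instead, but is read and written the same way:

```python
import os

# An anonymous, one-way pipe: data written to `w` is held by the
# kernel until the reading end `r` consumes it.
r, w = os.pipe()

os.write(w, b"sensor:42\n")   # producer side
os.close(w)                   # closing the write end signals end-of-stream

data = os.read(r, 1024)       # consumer side: read what the system held
os.close(r)
assert data == b"sensor:42\n"
```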
Message passing: Message passing can be synchronous or asynchronous. Synchronous message passing
systems require the sender and receiver to wait for each other while transferring the message. In
asynchronous communication the sender and receiver do not wait for each other and can carry on
their own computations while the transfer of messages is being done. The advantage of synchronous
message passing is that it is conceptually less complex. Synchronous message passing is analogous to a
function call in which the message sender is the function caller and the message receiver is the called
function. Function calling is easy and familiar. Just as the function caller stops until the called function
completes, the sending process stops until the receiving process completes. This alone makes
synchronous message passing unworkable for some applications. For example, if synchronous message
passing were used exclusively, large, distributed systems generally would not perform well enough to
be usable. Such large, distributed systems may need to continue to operate while some of their
subsystems are down; subsystems may need to go offline for some kind of maintenance, or have times
when they are not open to receiving input from other systems.
Message queue: Message queues provide an asynchronous communications protocol, meaning that the
sender and receiver of the message do not need to interact with the message queue at the same time.
Messages placed onto the queue are stored until the recipient retrieves them. Message queues have
implicit or explicit limits on the size of data that may be transmitted in a single message and the number
of messages that may remain outstanding on the queue. Many implementations of message queues
function internally: within an operating system or within an application. Such queues exist for the
purposes of that system only. Other implementations allow the passing of messages between
different computer systems, potentially connecting multiple applications and multiple operating
systems. These message queueing systems typically provide enhanced resilience functionality to
ensure that messages do not get "lost" in the event of a system failure. Examples of commercial
implementations of this kind of message queueing software (also known as message-oriented
middleware) include IBM WebSphere MQ (formerly MQ Series) and Oracle Advanced Queuing (AQ).
There is a Java standard called Java Message Service, which has several proprietary and free software
implementations.
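The asynchronous producer/consumer behavior of a message queue can be sketched with an in-process queue that has an explicit size limit, as described above; the message contents are arbitrary examples:

```python
import queue
import threading

mq = queue.Queue(maxsize=8)     # explicit limit on outstanding messages

def producer():
    for i in range(3):
        mq.put(f"msg-{i}")      # enqueue; blocks only if the queue is full
    mq.put(None)                # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = mq.get()          # messages are stored until retrieved
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
assert received == ["msg-0", "msg-1", "msg-2"]
```

The producer never waits for the consumer to be ready, which is exactly the asynchronous property the text describes.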
Mail box:
Mailboxes provide a means of passing messages between tasks for data exchange or task
synchronization. For example, assume that a data gathering task that produces data needs to convey
the data to a calculation task that consumes the data. This data gathering task can convey the data by
placing it in a mailbox and using the SEND command; the calculation task uses RECEIVE to retrieve
the data. If the calculation task consumes data faster than the gatherer produces it, the tasks need to be
synchronized so that only new data is operated on by the calculation task. Using mailboxes achieves
synchronization by forcing the calculation task to wait for new data before it operates. The data
producer puts the data in a mailbox and SENDs it. The data consumer task calls RECEIVE to check
whether there is new data in the mailbox; if not, RECEIVE calls Pause() to allow other tasks to execute
while the consuming task is waiting for the new data.
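The SEND/RECEIVE mailbox synchronization described above can be sketched as follows; the Mailbox class and its method names are illustrative, not a real RTOS API:

```python
import queue
import threading

class Mailbox:
    """Sketch of SEND/RECEIVE mailbox semantics using a one-slot queue."""
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)

    def send(self, data):
        self._slot.put(data)          # SEND: deposit data for the consumer

    def receive(self):
        return self._slot.get()       # RECEIVE: wait (pause) until new data

box = Mailbox()
results = []

def calculation_task():
    # The consumer waits for fresh data, so it never operates on stale values.
    for _ in range(2):
        results.append(box.receive() * 2)

t = threading.Thread(target=calculation_task)
t.start()
box.send(10)                          # data-gathering task produces values
box.send(20)
t.join()
assert results == [20, 40]
```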
Signaling: Signals are commonly used in POSIX systems. Signals are sent to the current process telling
it what it needs to do, such as shut down, or that it has committed an exception. A process has several
signal-handlers which execute code when a relevant signal is encountered. The ANSI header for these
tasks is <signal.h>, which includes routines to allow signals to be raised and read. Signals are essentially
software interrupts. It is possible for a process to ignore most signals, but some cannot be blocked.
Some of the common signals are Segmentation Violation (reading or writing memory that does not
belong to this process), Illegal Instruction (trying to execute something that is not a proper instruction
to the CPU), Halt (stop processing for the moment), Continue (used after a Halt), Terminate (clean up
and quit), and Kill (quit now without cleaning up).
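A minimal sketch of installing a signal handler and raising a signal, using the POSIX signal interface exposed by Python; it assumes a POSIX system where SIGUSR1 is available:

```python
import os
import signal

caught = []

def handler(signum, frame):
    # The handler runs when the signal is delivered to this process.
    caught.append(signum)

# Install a handler for SIGUSR1, then send that signal to ourselves.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

assert caught == [signal.SIGUSR1]

# Kill, by contrast, cannot be caught or ignored:
try:
    signal.signal(signal.SIGKILL, handler)
except (OSError, ValueError):
    pass  # the OS refuses to install a handler for SIGKILL
```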
RPC: Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-
server based applications. It is based on extending the conventional local procedure calling, so that the
called procedure need not exist in the same address space as the calling procedure. The two processes
may be on the same system, or they may be on different systems with a network connecting them.
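A local-machine sketch of RPC using Python's standard XML-RPC modules: the client's call looks like an ordinary procedure call but executes in the server process. The add() procedure is a hypothetical example, and binding to port 0 lets the OS pick a free port:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: expose an ordinary function for remote callers.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call site reads like a local procedure call,
# but the procedure runs in the (possibly remote) server's address space.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)
server.shutdown()
assert result == 5
```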
Process Synchronization
A co-operation process is one that can affect or be affected by other processes executing in the system.
Co-operating process may either directly share a logical address space or be allotted to the shared data
only through files. This concurrent access is known as Process synchronization.
Consider a system consisting of n processes (P0, P1, ……, Pn-1); each process has a segment of code
known as its critical section, in which the process may be changing common variables, updating a
table, writing a file and so on. The important feature of the system is that when one process is executing
in its critical section, no other process is to be allowed to execute in its critical section.
The execution of critical sections by the processes is thus mutually exclusive. The critical section
problem is to design a protocol that the processes can use to cooperate: each process must request
permission to enter its critical section. The section of code implementing this request is the entry
section. The critical section is followed by an exit section. The remaining code is the remainder section.
Example:
While (1)
{
Entry Section;
Critical Section;
Exit Section;
Remainder Section;
}
A solution to the critical section problem must satisfy the following three conditions.
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other process can be
executing in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter their
critical sections, then only those processes that are not executing in their remainder section can
participate in deciding which will enter its critical section next.
3. Bounded waiting: There exists a bound on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before
that request is granted.
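The entry/critical/exit/remainder structure can be sketched with a lock, which directly provides mutual exclusion; the loop counts are arbitrary:

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()        # entry section: request permission to enter
        counter += 1          # critical section: update the shared variable
        lock.release()        # exit section
        # remainder section: non-shared work would go here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Mutual exclusion kept every update consistent: no increment was lost.
assert counter == 40_000
```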
Deadlock:
In a multiprogramming environment several processes may compete for a finite number of resources.
A process requests resources; if the resources are not available at that time, the process enters the wait
state. A waiting process may never again change its state if the resources it has requested are held by
other waiting processes. This situation is known as deadlock.
Deadlock Characteristics: In a deadlock, processes never finish executing and system resources are tied
up. A deadlock situation can arise if the following four conditions hold simultaneously in a system:
✓ Mutual Exclusion: At a time only one process can use a resource. If another process requests
that resource, the requesting process must wait until the resource has been released.
✓ Hold and wait: A process must be holding at least one resource and waiting to acquire additional
resources that are currently held by other processes.
✓ No Preemption: Resources allocated to a process can’t be forcibly taken from it; the process
releases a resource only voluntarily after completing its task.
✓ Circular Wait: A set {P0, P1, ……, Pn} of waiting processes must exist such that P0 is
waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …..,
P(n-1) is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is
held by P0.
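The circular wait condition can be recognized as a cycle in a wait-for graph. A sketch of such a check follows; the graph encoding is a simplification in which each process waits on at most one other:

```python
def has_circular_wait(waits_for):
    """Detect a cycle in a wait-for graph {process: process-it-waits-on}."""
    for start in waits_for:
        seen, p = set(), start
        while p in waits_for:         # follow the chain of waits
            if p in seen:
                return True           # came back around: circular wait
            seen.add(p)
            p = waits_for[p]
    return False

# P0 waits on P1, P1 on P2, P2 on P0: the circular wait condition holds.
assert has_circular_wait({"P0": "P1", "P1": "P2", "P2": "P0"})
# A simple chain with no cycle cannot produce a circular wait.
assert not has_circular_wait({"P0": "P1", "P1": "P2"})
```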
Integrated Development Environment (IDE):
One aim of the IDE is to reduce the configuration necessary to piece together multiple development
utilities, instead providing the same set of capabilities as a cohesive unit. Reducing that setup time can
increase developer productivity, in cases where learning to use the IDE is faster than manually
integrating all of the individual tools. Tighter integration of all development tasks has the potential to
improve overall productivity beyond just helping with setup tasks. For example, code can be
continuously parsed while it is being edited, providing instant feedback when syntax errors are
introduced. That can speed learning a new programming language and its associated libraries.
Some IDEs are dedicated to a specific programming language, allowing a feature set that most closely
matches the programming paradigms of the language. However, there are many multiple-language
IDEs, such as Eclipse, ActiveState Komodo, IntelliJ IDEA, Oracle JDeveloper, NetBeans, Codenvy
and Microsoft Visual Studio. Xcode, Xojo and Delphi are dedicated to a closed language or set of
programming languages.
While most modern IDEs are graphical, text-based IDEs such as Turbo Pascal were in popular use
before the widespread availability of windowing systems like Microsoft Windows and the X Window
System (X11). They commonly use function keys or hotkeys to execute frequently used commands or
macros.
A cross compiler is a compiler capable of creating executable code for a platform other than the one
on which the compiler is running. For example, in order to compile for Linux/ARM you first need to
obtain its libraries to compile against.
A cross compiler is necessary to compile for multiple platforms from one machine. A platform could
be infeasible for a compiler to run on, such as the microcontroller of an embedded system, because
such systems contain no operating system. In paravirtualization, one machine runs many operating
systems, and a cross compiler could generate an executable for each of them from one main source.
Cross compilers are not to be confused with source-to-source compilers. A cross compiler is for
cross-platform software development of binary code, while a source-to-source "compiler" just
translates from one programming language to another in text code. Both are programming tools.
The fundamental use of a cross compiler is to separate the build environment from the target environment.
This is useful in a number of situations:
Embedded computers where a device has extremely limited resources. For example, a microwave oven
will have an extremely small computer to read its touchpad and door sensor, provide output to a digital
display and speaker, and to control the machinery for cooking food. This computer will not be powerful
enough to run a compiler, a file system, or a development environment. Since debugging and testing
may also require more resources than are available on an embedded system, cross- compilation can be
less involved and less prone to errors than native compilation.
Compiling for multiple machines. For example, a company may wish to support several different
versions of an operating system or to support several different operating systems. By using a cross
compiler, a single build environment can be set up to compile for each of these targets.
Compiling on a server farm. Similar to compiling for multiple machines, a complicated build that
involves many compile operations can be executed across any machine that is free, regardless of its
underlying hardware or the operating system version that it is running.
Bootstrapping to a new platform. When developing software for a new platform, or the emulator of a
future platform, one uses a cross compiler to compile necessary tools such as the operating system and
a native compiler.
What is a Disassembler?
In essence, a disassembler is the exact opposite of an assembler. Where an assembler converts code
written in an assembly language into binary machine code, a disassembler reverses the process and
attempts to recreate the assembly code from the binary machine code.
Since most assembly languages have a one-to-one correspondence with underlying machine
instructions, the process of disassembly is relatively straightforward, and a basic disassembler can often
be implemented simply by reading in bytes and performing a table lookup. Of course, disassembly has
its own problems and pitfalls. Many disassemblers have the option to output assembly language
instructions in Intel, AT&T, or (occasionally) HLA syntax.
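A toy table-lookup disassembler along the lines just described; the opcode table is a tiny, partly Z80-flavored example invented for illustration, not a real instruction set:

```python
# A hypothetical 8-bit ISA: an opcode byte, optionally followed by one
# operand byte. Each table entry gives (mnemonic, number of operand bytes).
OPCODES = {
    0x00: ("NOP", 0),
    0x3E: ("LD A,", 1),    # load immediate into A (Z80-style, illustrative)
    0x76: ("HALT", 0),
    0xC3: ("JP", 1),       # jump (operand truncated to one byte here)
}

def disassemble(code):
    """Read bytes, look each opcode up in the table, and emit mnemonics."""
    out, i = [], 0
    while i < len(code):
        mnemonic, n_operands = OPCODES.get(code[i], ("DB 0x%02X" % code[i], 0))
        operands = "".join(" 0x%02X" % b for b in code[i + 1 : i + 1 + n_operands])
        out.append(mnemonic + operands)
        i += 1 + n_operands
    return out

listing = disassemble(bytes([0x3E, 0x2A, 0x00, 0x76]))
assert listing == ["LD A, 0x2A", "NOP", "HALT"]
```

Unknown bytes fall back to a raw data directive, which hints at one of the real pitfalls: a disassembler cannot always tell code from data.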
Decompilers
Decompilers take the process a step further and actually try to reproduce the code in a high level
language. Frequently, this high level language is C, because C is simple and primitive enough to facilitate
the decompilation process. Decompilation does have its drawbacks, because lots of data and readability
constructs are lost during the original compilation process, and they cannot be reproduced. The
science of decompilation is still young, and its results are "good" but not "great".
Tools
As with other software, embedded system designers use compilers, assemblers, and debuggers to
develop embedded system software. However, they may also use some more
specific tools:
For systems using digital signal processing, developers may use a math workbench such as Scilab /
Scicos, MATLAB / Simulink, EICASLAB, MathCad, Mathematica,or FlowStone DSP to simulate the
mathematics. They might also use libraries for both the host and target which eliminates developing
DSP routines as done in DSPnano RTOS.
A model-based development tool like VisSim lets you create and simulate graphical data flow and UML
state chart diagrams of components like digital filters, motor controllers, communication protocol
decoders and multi-rate tasks. Interrupt handlers can also be created graphically. After simulation, you
can automatically generate C code for the VisSim RTOS, which handles the main control task and
preemption of background tasks, as well as automatic setup and programming of on-chip peripherals.
Debugging
Embedded debugging may be performed at different levels, depending on the facilities available. From
simplest to most sophisticated they can be roughly grouped into the following areas:
• Interactive resident debugging, using the simple shell provided by the embedded operating system
(e.g. Forth and Basic).
• External debugging, using logging or serial port output to trace operation, with either a monitor
in flash or a debug server like the Remedy Debugger, which even works for heterogeneous
multicore systems.
• An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG
or Nexus interface. This allows the operation of the microprocessor to be controlled externally,
but is typically restricted to specific debugging capabilities in the processor.
• An in-circuit emulator (ICE), which replaces the microprocessor with a simulated equivalent,
providing full control over all aspects of the microprocessor.
• A complete emulator, which provides a simulation of all aspects of the hardware, allowing all of it
to be controlled and modified, and allowing debugging on a normal PC. The downsides are
expense and slow operation, in some cases up to 100X slower than the final system.
For SoC designs, the typical approach is to verify and debug the design on an FPGA prototype board.
Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for
observation. This is used to debug hardware, firmware and software interactions across multiple
FPGAs, with capabilities similar to a logic analyzer.
Unless restricted to external debugging, the programmer can typically load and run software through
the tools, view the code running in the processor, and start or stop its operation. The view of the code
may be as HLL source-code, assembly code or mixture of both.
Simulation
Simulation is the imitation of the operation of a real-world process or system over time. The act of
simulating something first requires that a model be developed; this model represents the key
characteristics or behaviors/functions of the selected physical or abstract system or process. The model
represents the system itself, whereas the simulation represents the operation of the system over time.
Simulation is used in many contexts, such as simulation of technology for performance optimization,
safety engineering, testing, training, education, and video games. Often, computer experiments are used
to study simulation models.
Key issues in simulation include acquisition of valid source information about the relevant selection of
key characteristics and behaviours, the use of simplifying approximations and assumptions within the
simulation, and fidelity and validity of the simulation outcomes.
Emulator
In computing, an emulator is hardware or software or both that duplicates (or emulates) the functions
of one computer system (the guest) in another computer system (the host), different from the first one,
so that the emulated behavior closely resembles the behavior of the real system (the guest).
The above described focus on exact reproduction of behavior is in contrast to some other forms of
computer simulation, in which an abstract model of a system is being simulated.
OUT-OF-CIRCUIT: The code to be run on the target embedded system is always developed on the
host computer. This code is called the binary executable image or simply hex code.
The process of putting this code in the memory chip of the target embedded system is called
Downloading.
****************************************************************************************