21. Frame structure of I2C bus.
22. Advantages of RTOS.
A Real-Time Operating System (RTOS) can have many advantages, including:
1. Reduced downtime: An RTOS keeps all devices active while using system resources efficiently, resulting in minimal downtime.
2. Task administration: An RTOS can switch from one task to another in a few microseconds (often quoted as 3 microseconds or less).
3. Availability: RTOS-based systems can run 24/7, making them ideal for applications that must operate continuously.
4. Dependability: A hard RTOS is designed to be highly dependable; a missed deadline is treated as a system failure.
5. Predictability: Tasks are executed within a defined deadline, regardless of the system load or external interruptions.
6. Resource optimization: Tasks are intelligently scheduled based on priority, urgency, and duration.
7. Easy to design, develop, and execute: An RTOS allows for easy layout, development, and execution of real-time applications.
8. Maximum utilization of devices and systems: An RTOS has a compact kernel that requires little memory space.
9. Focus on running applications: An RTOS gives priority to the application that is currently running and less importance to applications waiting in the queue.
23. Task and task scheduler in embedded systems.
Task scheduling is the process of assigning tasks to be executed at specific times using scheduling algorithms. The scheduler is the software that determines which task should be run next. Task scheduling ensures that all tasks running on the system are executed efficiently and promptly, while allocating resources and processing power optimally.
Here are some types of schedulers (a minimal run-to-completion sketch follows this list):
● Run to Completion (RTC) scheduler: This scheduler is simple and uses minimal resources. It calls the top-level function of each task in turn.
● Round Robin (RR) scheduler: Similar to RTC, but more flexible; tasks can relinquish the CPU at any time.
● Preemptive scheduler: When a higher-priority task arrives, it suspends any lower-priority task that may be executing and takes up the higher-priority task for execution. This is important for ensuring that critical tasks meet their deadlines, even if lower-priority tasks are already running.
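As an illustration only (not tied to any particular RTOS), a run-to-completion scheduler can be a plain superloop over a table of task function pointers; the task names here are hypothetical:

```c
#include <stddef.h>

/* Each task is a plain function that runs to completion and returns. */
typedef void (*task_fn)(void);

/* Hypothetical application tasks. */
static void read_sensor(void)    { /* sample an input   */ }
static void update_display(void) { /* refresh an output */ }
static void log_data(void)       { /* store a record    */ }

/* Task table: the scheduler simply calls each entry in turn. */
static task_fn task_table[] = { read_sensor, update_display, log_data };

int main(void)
{
    for (;;) {                                                      /* superloop */
        for (size_t i = 0; i < sizeof task_table / sizeof task_table[0]; i++) {
            task_table[i]();                                        /* run to completion */
        }
    }
}
```

A round-robin or preemptive scheduler adds, respectively, voluntary yielding and priority-based preemption on top of this basic loop.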
24. Different task states.
The different task states in an embedded system are:
● Idle: The task has been created, but the kernel has not scheduled it.
● Ready: The task is ready to execute, but is not running because a higher
priority task is executing.
● Running: The task is executing and has access to all the shared resources
it needs.
● Blocked: The task is waiting for an external or temporal event.
● Dormant: The task has been created, but the kernel has not scheduled it.
● Interrupted (ISR): The CPU is servicing an interrupt.
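As a sketch (state names vary between kernels), the states listed above can be represented as a simple enum:

```c
/* Possible task states, following the list above; exact names vary by RTOS. */
typedef enum {
    TASK_IDLE,         /* created, not yet scheduled by the kernel      */
    TASK_READY,        /* ready to run, waiting for the CPU             */
    TASK_RUNNING,      /* currently executing                           */
    TASK_BLOCKED,      /* waiting for an external or temporal event     */
    TASK_DORMANT,      /* created but never scheduled (kernel-specific) */
    TASK_INTERRUPTED   /* preempted while the CPU services an ISR       */
} task_state_t;

/* A task control block would store this state alongside other context. */
typedef struct {
    const char  *name;
    task_state_t state;
} task_t;
```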
25. Significance of different task states.
In an embedded system, a task's state indicates the stage it has reached in its lifecycle. The
number of possible task states depends on the RTOS.
Some task states include:
● Dormant: The task has been created and allocated memory, but the kernel
has not scheduled it
● Ready: The task is ready and scheduled by the kernel, but is not running
because a higher priority task is executing
● Running: The task has access to all the shared resources required for its
execution
● Waiting: The task is blocked and waiting for an external or temporal event
● Interrupted (ISR): The task is in the ISR state when an interrupt has
occurred and the CPU is servicing it
● Blocked: The task's execution is suspended after saving the needed parameters into its context
● Deleted: The memory allocated to the task's structure is de-allocated, and the task needs to be re-created
26. IEEE 802.11 protocol.
● The IEEE 802.11 protocol is a set of specifications for wireless local area networks (WLANs). It's the first wireless networking standard to be widely adopted and is the basis for Wi-Fi wireless networks. The protocol specifies the interface that enables over-the-air signaling between two or more wireless clients. It's the main bearer of communication for different electronic devices, including laptops, tablets, televisions, and actuators.
● The IEEE 802.11 protocol architecture consists of three layers: logical link control (LLC), media access control (MAC), and physical. The LLC layer provides an interface to higher layers and performs basic link layer functions.
● The IEEE 802.11 MAC layer defines three types of frames: management frames, control frames, and data frames. Management frames are used for network management, control frames are used for coordination between Wi-Fi devices, and data frames are used for the transmission of actual data (a simplified header sketch follows).
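As a hedged sketch (QoS, HT and the optional fourth address field are omitted), the generic 802.11 MAC header that carries these frame types can be modelled in C like this:

```c
#include <stdint.h>

/* Simplified sketch of the generic IEEE 802.11 MAC header.  The 2-bit Type
 * field inside Frame Control selects management (00), control (01) or
 * data (10) frames. */
#pragma pack(push, 1)
typedef struct {
    uint16_t frame_control;   /* protocol version, type, subtype, flags */
    uint16_t duration_id;     /* NAV duration or association ID         */
    uint8_t  addr1[6];        /* receiver address                       */
    uint8_t  addr2[6];        /* transmitter address                    */
    uint8_t  addr3[6];        /* BSSID / filtering address              */
    uint16_t seq_ctrl;        /* fragment and sequence numbers          */
    /* frame body follows, terminated by a 4-byte FCS (CRC-32)          */
} ieee80211_hdr_t;
#pragma pack(pop)

/* Extract the frame type (bits 2-3 of Frame Control). */
static inline unsigned frame_type(uint16_t fc) { return (fc >> 2) & 0x3; }
```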
27. ALU with diagram.
An Arithmetic Logic Unit (ALU) diagram shows the inputs, processes, outputs, and storage registers of the ALU. The ALU is the computational core of a processor, performing arithmetic operations on numbers such as addition and subtraction.
A basic ALU has three parallel data buses, consisting of two input operands (A and B) and a result output (Y). Each data bus is a group of signals that conveys one binary integer number. The ALU performs the operation Y = A op B, where the input data are placed on A and B and the result appears on Y.
An ALU diagram may include the following components (a behavioural sketch follows the list):
● NOT Gate: A single-input logic gate built from a transistor that produces the opposite of its input
● OR Gate: Has multiple transistors and two inputs, with an output of 1 if the first or the second input is a 1
● AND Gate: Has multiple transistors and two inputs, with an output of 1 only if both inputs are 1
● Decoder: Serves as a selector that activates only one of the enable lines at a time, so that only one operation result is forwarded to the OR gate that leads to the output.
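As a behavioural illustration (the opcode encoding here is arbitrary, not taken from any particular diagram), the selection performed by the decoder corresponds to a switch on the operation code:

```c
#include <stdint.h>

/* Arbitrary opcode encoding for this sketch. */
typedef enum { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR, ALU_NOT } alu_op_t;

/* Behavioural model of a simple ALU: Y = A op B (B is ignored for NOT). */
uint32_t alu(alu_op_t op, uint32_t a, uint32_t b)
{
    switch (op) {               /* plays the role of the decoder */
    case ALU_ADD: return a + b; /* arithmetic operations         */
    case ALU_SUB: return a - b;
    case ALU_AND: return a & b; /* logic operations              */
    case ALU_OR:  return a | b;
    case ALU_NOT: return ~a;
    default:      return 0;
    }
}
```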
28. NAND gate using CMOS.
29. Deadlock situation.
A deadlock is a situation where multiple processes in a system are preventing each other from completing tasks. Deadlocks can occur when:
● There aren't enough resources for each process
● Not enough resources have been allocated to one process or the other
● All of the following conditions are fulfilled simultaneously (the sketch after this list shows a case where all four hold):
1. Mutual exclusion
2. Circular wait
3. Resource holding (hold and wait)
4. No preemption
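A minimal POSIX-threads sketch (not from the notes) that satisfies all four conditions by acquiring two mutexes in opposite order:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Two shared resources, each guarded by its own mutex. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 takes A then B; thread 2 takes B then A.  Once each thread holds
 * its first lock, both wait forever: mutual exclusion, hold-and-wait,
 * no preemption and circular wait all hold at the same time. */
static void *worker1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_a);
    sleep(1);                       /* widen the race window              */
    pthread_mutex_lock(&lock_b);    /* blocks forever once thread 2 has B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock_b);
    sleep(1);
    pthread_mutex_lock(&lock_a);    /* blocks forever once thread 1 has A */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);         /* never returns: the threads deadlock */
    pthread_join(t2, NULL);
    puts("not reached");
    return 0;
}
```

Making both threads acquire the locks in the same order removes the circular wait and so prevents the deadlock.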
30. Use of semaphore.
In computer science, semaphores are used to control access to shared
resources and coordinate the execution of multiple processes or threads.
They are a type of synchronization primitive that can be used to:
● Control access to a shared device: For example, a printer, where you don't want two tasks sending to the printer at once (see the sketch after this list)
● Task synchronization: By having tasks take and give the same semaphore, you can force them to perform operations in a desired order
● Asynchronous event notification: A semaphore can be given (for example, from an interrupt handler) to notify a waiting task without that task having to hold a mutex lock.
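A small POSIX sketch (assuming Linux-style unnamed semaphores) of the shared-printer case from the list above:

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

/* Binary semaphore guarding a shared device (a "printer" here). */
static sem_t printer_sem;

static void *print_job(void *arg)
{
    const char *msg = arg;
    sem_wait(&printer_sem);          /* take the semaphore (P operation) */
    printf("printing: %s\n", msg);   /* only one task uses the printer   */
    sem_post(&printer_sem);          /* give it back (V operation)       */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&printer_sem, 0, 1);    /* initial count 1 = binary semaphore */
    pthread_create(&t1, NULL, print_job, (void *)"task 1 report");
    pthread_create(&t2, NULL, print_job, (void *)"task 2 report");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&printer_sem);
    return 0;
}
```

Initialising the count to a value greater than 1 turns this into a counting semaphore that can admit several users of a pooled resource at once.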
31. Difference between FSM and FSMD.
32. DSP-based system.
A DSP-based system uses a specialized processor to process digital signals. DSP stands for Digital Signal Processor (or Digital Signal Processing). DSP-based systems are more efficient and perform better at processing complex signals, making them suitable for applications like audio and video processing, telecommunications, and radar systems.
DSPs have the following characteristics (a typical DSP kernel is sketched after the list):
● They can provide the best performance for signal-processing workloads
● The memory used to store the program is separate from the memory used to store data (Harvard architecture)
● DSPs do not provide hardware support for multitasking
● They have special instructions for modulo and bit-reversed addressing
● They can be used as a DMA (direct memory access) device
● They include specially designed architecture to fetch multiple data items at once
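As an illustration (plain C, not DSP assembly), the multiply-accumulate (MAC) loop below is the kind of kernel these features are designed to accelerate; a real DSP would map it onto its MAC instruction and use circular (modulo) addressing for the delay line:

```c
/* N-tap FIR filter computing one output sample: a classic DSP workload. */
#define N_TAPS 8

float fir_sample(const float coeff[N_TAPS], const float delay[N_TAPS])
{
    float acc = 0.0f;
    for (int i = 0; i < N_TAPS; i++) {
        acc += coeff[i] * delay[i];   /* one multiply-accumulate per tap */
    }
    return acc;
}
```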
33. Datapath operation for general purpose processor.
➢ Datapath:
The datapath is a crucial component of a processor (CPU). It consists of various
functional units responsible for data manipulation and storage.
❖ Key elements within the datapath include:
1. Registers: Fast memory used for efficient execution of programs and
operations.
2. ALU (Arithmetic Logic Unit): Performs arithmetic and logic operations.
3. Buses: Transfer data between different components.
The datapath is responsible for executing instructions by performing the necessary operations on data.
➢ Functional Units in the Datapath:
● Program Counter (PC): Holds the address of the next instruction to be
executed from memory.
● Register File: Contains general-purpose registers for temporary data
storage.
● Instruction Memory: Stores program instructions.
● ALU: Performs arithmetic (addition, subtraction, multiplication, etc.) and
logic (AND, OR, NOT) operations.
● Data Buses: Transfer data between components.
● Control Unit: Directs the datapath on when and how to route and operate on data.
➢ Execution of Instructions:
❖ The datapath is involved in executing instructions through the following
steps:
1. Fetch: Retrieve the instruction from memory (controlled by the PC).
2. Decode: Interpret the instruction.
3. Fetch Operands: Retrieve data (operands) required for the operation.
4. Execute: Perform the specified operation (e.g., add, compare, shift).
5. Write Result: Store the result back (if needed).
These steps ensure that instructions are processed correctly and efficiently.
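A toy sketch of the five steps above (a hypothetical 3-instruction, 16-bit ISA invented for illustration, not any real processor):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical ISA: each 16-bit word is [4-bit opcode | rd | rs | rt]. */
enum { OP_HALT = 0x0, OP_ADD = 0x1, OP_SUB = 0x2 };

int main(void)
{
    uint16_t imem[] = {                                /* instruction memory */
        (OP_ADD << 12) | (2 << 8) | (0 << 4) | 1,      /* r2 = r0 + r1       */
        (OP_SUB << 12) | (3 << 8) | (2 << 4) | 0,      /* r3 = r2 - r0       */
        (OP_HALT << 12)
    };
    uint16_t regs[16] = { 5, 7 };                      /* register file: r0=5, r1=7 */
    uint16_t pc = 0;                                   /* program counter    */

    for (;;) {
        uint16_t ir = imem[pc++];                      /* 1. fetch           */
        unsigned op = ir >> 12, rd = (ir >> 8) & 0xF,  /* 2. decode          */
                 rs = (ir >> 4) & 0xF, rt = ir & 0xF;
        if (op == OP_HALT) break;
        uint16_t a = regs[rs], b = regs[rt];           /* 3. fetch operands  */
        uint16_t y = (op == OP_ADD) ? a + b : a - b;   /* 4. execute (ALU)   */
        regs[rd] = y;                                  /* 5. write result    */
    }
    printf("r2=%u r3=%u\n", regs[2], regs[3]);         /* prints r2=12 r3=7  */
    return 0;
}
```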
34. Design methodology in embedded system.
The design process for an embedded system involves several steps, depending on the design methodology:
● Ideation and purpose: Determine the product's purpose and possible
needs
● Requirements: Gather informal descriptions of what the customer wants
● Specification: Refine the requirements into a specification
● Architecture design: Decompose the functionality into major components
● Hardware and software components: Determine what components are
needed based on the architectural description
● System integration: Integrate the built components
● Verification and testing: Ensure the design meets the specification
35. OS command function for a device.
An operating system (OS) is software that manages computer hardware and software resources and provides common services for computer programs. The
operating system is an essential component of the system software in a
computer system. Application programs usually require an operating system to
function.
Embedded operating systems are a type of operating system designed to be used on a specific device. Embedded operating systems are typically much smaller and simpler than general-purpose operating systems, and they are often dedicated to running a single task.
Some examples of embedded operating systems include:
● Real-time operating systems (RTOS): RTOSs are designed to be used in
devices that need to respond to events in real time, such as industrial
control systems and medical devices.
● Microcontroller operating systems (MOS): MOS are designed to be used in
small devices with limited resources, such as microcontrollers and smart
sensors.
● Mobile operating systems (MOS): MOS are designed to be used in mobile
devices, such as smartphones and tablets.
36. Memory organization of a general purpose processor.
Memory organization in a general-purpose processor involves using different types of memory, such as RAM, ROM, cache memory, virtual memory, flash memory, and magnetic disks, for specific purposes. The main memory of a general-purpose computer is made up of RAM integrated-circuit chips, but a portion of the memory may be constructed with ROM chips.
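As an illustration (the addresses and sizes below are hypothetical, not tied to any real part), such an organization is often summarised as a memory map of regions:

```c
#include <stdint.h>

/* Hypothetical memory map of a small embedded system (illustrative only). */
typedef struct {
    const char *name;
    uint32_t    base;     /* start address      */
    uint32_t    size;     /* region size, bytes */
} mem_region_t;

static const mem_region_t memory_map[] = {
    { "ROM / flash (program code)", 0x00000000, 256u * 1024u },
    { "SRAM (data and stack)",      0x20000000,  64u * 1024u },
    { "Peripheral registers",       0x40000000,   1u * 1024u * 1024u },
};
```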
37. USB protocol and different data transfer in USB.
USB stands for Universal Serial Bus, and it's a standard method for transferring data between a host device, like a computer, and a peripheral device, like a mouse. USB cables have two sets of wires: power, which carries the current, and data, which transfers data signals.
USB supports three types of speed: low speed (1.5 Mbps), full speed (12 Mbps), and high speed (480 Mbps). USB 2.0 has a maximum data transfer speed of 480 Mbps, and USB 3.2 Gen 1 raises this to 5 Gbps (later 3.2 generations go higher).
USB uses a serial communication protocol, which means that data is transmitted one bit at a time, rather than in parallel. There are four types of USB data transfers: control, isochronous, interrupt, and bulk.
● Control: Non-periodic transfers, typically used for device configuration, commands, and status operations
● Isochronous: Periodic, unidirectional transfers with guaranteed bandwidth but no retransmission, used for streaming data such as audio and video
● Interrupt: Small, periodic transfers polled by the host at a guaranteed interval; failed transfers are retried, which ensures full data delivery (used for devices such as keyboards and mice)
● Bulk: For large amounts of data that don't have to be transmitted quickly; delivery is error-checked but bandwidth is not guaranteed
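As a concrete illustration of the control transfer type, every control transfer begins with the 8-byte SETUP packet defined by the USB specification; a C sketch, with field values filled in for a standard GET_DESCRIPTOR request, looks like this:

```c
#include <stdint.h>

/* The 8-byte SETUP packet that starts every USB control transfer. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  bmRequestType;  /* direction, type (standard/class/vendor), recipient */
    uint8_t  bRequest;       /* request code, e.g. GET_DESCRIPTOR (6)              */
    uint16_t wValue;         /* request-specific value (descriptor type and index) */
    uint16_t wIndex;         /* request-specific index (e.g. interface, language)  */
    uint16_t wLength;        /* number of bytes expected in the data stage         */
} usb_setup_packet_t;
#pragma pack(pop)

/* Example: standard GET_DESCRIPTOR request for the 18-byte device descriptor. */
static const usb_setup_packet_t get_device_descriptor = {
    .bmRequestType = 0x80,    /* device-to-host, standard request, to device */
    .bRequest      = 0x06,    /* GET_DESCRIPTOR                              */
    .wValue        = 0x0100,  /* descriptor type 1 (device), index 0         */
    .wIndex        = 0x0000,
    .wLength       = 18
};
```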
38. 3 level bus architecture model.
A three-bus organization in computer architecture uses two buses as source buses and one as a destination bus. The source buses move data out of registers (out-bus), and the destination bus may move data into a register (in-bus). Each of the two out-buses is connected to an ALU input point.
39. Function of cache memory.
Cache memory is a faster and smaller segment of memory that acts as a buffer between the CPU (central processing unit) and the main memory (RAM). Let’s explore its functions and benefits:
Faster Access:
● Cache memory provides faster access compared to main memory.
● It resides closer to the CPU, often on the same chip or in close proximity.
● By storing frequently accessed data and instructions, cache memory
reduces the time needed to retrieve them.
Reducing Memory Latency:
● Memory access latency refers to the time taken for processes to retrieve
data from memory.
● Caches are designed to exploit the principle of locality, which means that
programs tend to access a relatively small portion of their address space
repeatedly.
● Cache memory helps reduce the time gap between accessing data and
acting on it, improving overall system performance.
Lowering Bus Traffic:
● Accessing data from main memory involves transferring it over the system
bus.
● The bus is a shared resource, and excessive traffic can lead to congestion
and slower data transfers.
● By utilizing cache memory, the processor can reduce the frequency of
accessing main memory, resulting in less bus traffic and improved system
efficiency.
Increasing Effective CPU Utilization:
● Cache memory allows the CPU to operate at a higher effective speed.
● The CPU spends more time executing instructions rather than waiting for
memory access.
● This leads to better utilization of the CPU’s processing capabilities and
higher overall system performance.
Enhancing System Scalability:
● Cache memory helps improve system scalability by reducing the impact of
memory latency on overall performance.
● As systems scale up, efficient memory access becomes critical, and cache
memory plays a vital role in maintaining performance.
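A small sketch (not from the notes) of the locality principle mentioned above: summing a matrix row by row walks memory sequentially and reuses each cache line, while traversing the same data column by column causes far more cache misses and much more bus traffic.

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int m[ROWS][COLS];   /* zero-initialised; the values don't matter here */

int main(void)
{
    long sum_rowwise = 0, sum_colwise = 0;

    /* Row-major traversal: consecutive accesses stay within cache lines. */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum_rowwise += m[i][j];

    /* Column-major traversal: each access jumps a whole row ahead,
     * so most accesses miss the cache on a matrix this large. */
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            sum_colwise += m[i][j];

    printf("%ld %ld\n", sum_rowwise, sum_colwise);  /* same result, very different speed */
    return 0;
}
```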
40. Use of real time clock.
A real-time clock (RTC) is a digital clock that keeps accurate time and date information for a system, even when the device is shut down, loses power, or has
an energy-saving “sleep mode” function. RTCs are often integrated circuits and
can have the following benefits:
● Reliable timekeeping: RTCs can maintain time during system states like
hangs, reboots, and full shutdowns.
● Low power consumption: RTCs can be important when running from
alternate power.
● Time-critical tasks: RTCs can free up the main system for time-critical
tasks.
● Accuracy: RTCs can sometimes be more accurate than other methods.
RTCs often have an alternate power source, such as a lithium battery.
Some applications that depend on having a reliable time reference include:
● A thermostat needs to be able to change the temperature set point on a
precise schedule.
● Applications that log information may need to record time and date for a
log entry.
● The most familiar application of an RTC is how time is kept on a wearable,
such as a fitness tracker.
41. Superscalar and VLIW operations.
Superscalar architecture is a method of parallel computing that uses a central processing unit (CPU) to manage multiple instruction pipelines, allowing several
instructions to be executed simultaneously during a clock cycle. Superscalar
processors achieve this by implementing a form of parallelism called
instruction-level parallelism within a single processor.
Superscalar architectures allow multiple operations to be launched by a single instruction issue, which is achieved through the incorporation of multiple
arithmetic logic units (ALUs), including both floating-point and integer/logical
functional units. A superscalar processor analyzes an instruction stream and
executes multiple instructions in parallel as long as they do not depend on each
other.
Very Long Instruction Word (VLIW) machines also use instruction-level parallelism, i.e. the program itself controls the parallel execution of the instructions.
The VLIW architecture deals with this by depending on the compiler: the compiler decides the parallel flow of the instructions and resolves conflicts. This increases compiler complexity but greatly decreases hardware complexity.
❖ Features:
● The processors in this architecture have multiple functional units and fetch very long instruction words from the instruction cache.
● Multiple independent operations are grouped together in a single VLIW
Instruction. They are initialized in the same clock cycle.
● Each operation is assigned an independent functional unit.
● All the functional units share a common register file.
● Instruction words are typically of the length 64-1024 bits depending on the
number of execution units and the code length required to control each
unit.
● Instruction scheduling and parallel dispatch of the word is done statically
by the compiler.
● The compiler checks for dependencies before scheduling parallel
execution of the instructions.
42. Define FSM and optimize it.
A Finite State Machine (FSM) is a mathematical model of computation that can be in a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. FSMs are widely used to model and control systems with discrete and sequential behavior, such as software protocols, digital circuits, robotic systems, and user interfaces, among others.
FSMs can be optimized in a number of ways (a minimal switch-based sketch follows):
One way is to reduce the number of states. This can be done by merging states that have the same output and the same transitions.
Another way is to exploit the unassigned state codes that this frees up, using a heuristic to improve the state assignment and the resulting logic optimization.
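A minimal switch-based FSM sketch in C (a hypothetical two-state on/off controller invented for illustration):

```c
#include <stdio.h>

/* States and inputs of a hypothetical on/off controller. */
typedef enum { ST_OFF, ST_ON } state_t;
typedef enum { EV_BTN_ON, EV_BTN_OFF } event_t;

/* Transition function: the next state depends only on the current state
 * and the input event. */
static state_t fsm_step(state_t s, event_t e)
{
    switch (s) {
    case ST_OFF: return (e == EV_BTN_ON)  ? ST_ON  : ST_OFF;
    case ST_ON:  return (e == EV_BTN_OFF) ? ST_OFF : ST_ON;
    }
    return s;
}

int main(void)
{
    event_t inputs[] = { EV_BTN_ON, EV_BTN_ON, EV_BTN_OFF };
    state_t s = ST_OFF;
    for (unsigned i = 0; i < sizeof inputs / sizeof inputs[0]; i++) {
        s = fsm_step(s, inputs[i]);
        printf("state = %s\n", s == ST_ON ? "ON" : "OFF");
    }
    return 0;
}
```

If two states had identical outputs and identical transitions, the state-merging step described above would collapse them into a single state without changing the machine's observable behavior.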