
Real-Time Operating Systems Overview

The document provides an overview of Real-Time Operating Systems (RTOS), detailing their structure, characteristics, and the types of real-time systems: hard and soft. It discusses task management, synchronization mechanisms like semaphores, and inter-process communication methods such as message passing and pipes. Additionally, it covers the architecture of RTOS, including various kernel types and their functionalities.

Module III

Real Time Operating Systems: Structure and characteristics of Real Time Systems, Task: Task states, Task synchronization - Semaphores - types, Inter task communication mechanisms: message queues, pipes, event registers, signals, Exceptions and interrupt handling.
Real-time System
• A real-time system is a system whose response must be obtained within a specified timing constraint, i.e., the system must meet its specified deadline.
• Real-time systems are of two types: hard and soft.
Real Time System
• A system is said to be real time if it is required to complete its work and deliver its services on time.
• Example: a flight control system, where all tasks must execute on time.
• Non-example: a general-purpose PC system.
Hard and Soft Real Time Systems
• Hard Real Time System
• Failure to meet deadlines is fatal.
• Example: flight control system
• Soft Real Time System
• Late completion of jobs is undesirable but not fatal.
• System performance degrades as more and more jobs miss deadlines.
• Example: online databases
• This is a qualitative definition.
Hard and Soft Real Time Systems (Operational Definition)
• Hard Real Time System
• Validation, by provably correct procedures or extensive simulation, that the system always meets the timing constraints.
• Soft Real Time System
• Demonstration that jobs meet some statistical constraints suffices.
• Example: a multimedia system delivering 25 frames per second on average.
Role of an OS in Real Time Systems
• Standalone applications
• Often no OS is involved (e.g., microcontroller-based embedded systems).
• Some real-time applications are huge and complex:
• Multiple threads
• Complicated synchronization requirements
• Filesystem / network support
• OS primitives reduce the software design time.
CHARACTERISTICS OF RTOS
• Time Constraints: a time constraint in a real-time system is the time interval allotted for the response of a program.
• The deadline means that the task must be completed within this time interval.
• A real-time system is responsible for completing all tasks within their time intervals.
CHARACTERISTICS OF RTOS
• Correctness: correctness is one of the prominent requirements of real-time systems.
• A real-time system must produce the correct result within the given time interval.
• A result obtained outside the given time interval is not considered correct, even if it is logically right.
• In real-time systems, correctness means obtaining the right result within the time constraint.
CHARACTERISTICS OF RTOS
• Embedded: most real-time systems are embedded nowadays.
• An embedded system is a combination of hardware and software designed for a specific purpose.
• Real-time systems collect data from the environment and pass it to other components of the system for processing.
CHARACTERISTICS OF RTOS
• Safety: safety is necessary for any system, but in real-time systems safety is often critical.
• Real-time systems can also run for long periods without failure.
• They recover quickly when a failure occurs, without harm to data and information.
CHARACTERISTICS OF RTOS
• Concurrency: real-time systems are concurrent, meaning they can respond to several processes at a time.
• There are several different tasks going on within the system, and it responds to every task within short intervals.
• This makes real-time systems concurrent systems.
CHARACTERISTICS OF RTOS
• Distributed: in various real-time systems, the components of the system are connected in a distributed way.
• The system may be arranged so that different components are at different geographical locations.
• Thus the operations of such real-time systems are carried out in a distributed manner.
CHARACTERISTICS OF RTOS
• Stability: even when the load is very heavy, real-time systems respond within the time constraint, i.e., they do not delay the results of tasks even when several tasks are running at the same time.
• This brings stability to real-time systems.
CHARACTERISTICS OF RTOS
• Resource management: Real-time
systems must manage their resources
efficiently, including processing power,
memory, and input/output devices.
• The system must ensure that resources
are used optimally to meet the time
constraints and produce correct results.
CHARACTERISTICS OF RTOS
• Security: Real-time systems may
handle sensitive data or operate in
critical environments, which makes
security a crucial aspect.
• The system must ensure that data is
protected and access is restricted to
authorized users only.
Features of RTOS's

• i. Multithreading and preemptability – The scheduler should be able to preempt any task in the system and allocate the resource to the thread that needs it most, even at peak load.
• ii. Thread Priority – All tasks are assigned priority levels to facilitate preemptive scheduling; the highest-priority task that is ready to run will be the task that gets the CPU.
• iii. Inter Task Communication & Synchronization – Multiple tasks pass information among each other in a timely fashion while ensuring data integrity.
• iv. Priority Inheritance – An RTOS should have a large number of priority levels and should prevent priority inversion using priority inheritance.
• v. Short Latencies – The latencies are short and predefined:
• Task switching latency: the time needed to save the context of the currently executing task and switch to another task.
• Interrupt latency: the time elapsed between execution of the last instruction of the interrupted task and the first instruction of the interrupt handler.
• Interrupt dispatch latency: the time from the last instruction of the interrupt handler to the next task scheduled to run.
RTOS Architecture
• For simpler applications, an RTOS is usually just a kernel, but as complexity increases, various modules like networking protocol stacks, debugging facilities, and device I/O are included in addition to the kernel.
KERNEL
• The RTOS kernel acts as an abstraction layer between the hardware and the applications.
• Kernels fall into a few broad categories, described below.
Monolithic kernel
• Monolithic kernels are part of Unix-like operating systems like
Linux.
• A monolithic kernel is one single program that contains all of the
code necessary to perform every kernel related task.
• It runs all basic system services (i.e. process and memory
management, interrupt handling and I/O communication, file
system, etc) and provides powerful abstractions of the
underlying hardware.
• The number of context switches and the amount of messaging involved are greatly reduced, which makes it run faster than a microkernel.
Microkernel
• It runs only basic process communication
(messaging) and I/O control.
• It normally provides only the minimal services
such as managing memory protection, Inter
process communication and the process
management.
• The other functions such as running the
hardware processes are not handled directly
by microkernels.
CONTD
• Thus, microkernels provide a smaller set of simple hardware abstractions.
• A microkernel is more stable than a monolithic kernel, as the kernel is unaffected even if a server fails (client/server system).
• Microkernels are part of operating systems like AIX, BeOS, Mach, Mac OS X, etc.
Hybrid Kernel
• Hybrid kernels are extensions of
microkernels with some properties of
monolithic kernels.
• Hybrid kernels are similar to
microkernels, except that they include
additional code in kernel space so that
such code can run more swiftly than it
would were it in user space.
• These are part of the operating systems
such as Microsoft Windows NT, 2000
Exokernel
• Exokernels provide efficient control over hardware.
• It runs only services protecting the
resources (i.e. tracking the ownership,
guarding the usage, revoking access to
resources, etc) by providing low-level
interface for library operating systems
and leaving the management to the
application.
Services Offered by an RTOS
Task Management & Scheduling
• A task is considered a scheduling unit which implements a function.
• OS services collectively move tasks from one state to another.
• The OS provides context-switching services.
• Task parameters like deadline, period, and priority need to be specified.
Task Management
• Task states in a typical RTOS:
• Dormant
• Waiting
• Ready
• Running
• Interrupted
Ready State:
• The task is ready to run and is waiting for
the CPU to be assigned to it.
• It has all necessary resources available,
and the task's priority is sufficient to be
scheduled for execution.
• Tasks in this state are placed in the ready
queue by the RTOS scheduler.
Running State:
• The task is currently being executed by
the CPU.
• There can only be one task running at a
time in a single-core system, but in
multi-core systems, multiple tasks can
be in the running state simultaneously
(one per core).
Blocked (or Waiting) State:
• The task cannot proceed because it is
waiting for a resource or an event to
occur (e.g., waiting for I/O completion, a
semaphore, or a message from another
task).
• Once the condition for which the task is
waiting is met, it transitions back to the
ready state
Interrupted:
• The task is temporarily paused and not
eligible for execution.
• This is a state where the task is not
considered for scheduling.
• The task could be suspended by the OS, a
higher-priority task, or due to a system
call. This state can be either temporary
(waiting for some manual intervention or
system change) or permanent
(terminated).
Terminated (or Dormant) State:
• The task has completed its execution or
has been explicitly terminated by the
system or another task.
• After a task enters the terminated state, it
is removed from the system, and its
resources are reclaimed.
Task synchronization
• Task synchronization in RTOS
(Real-Time Operating System) is the
process of coordinating tasks and
threads to communicate, share data,
and avoid deadlocks.
Task synchronization
• Task synchronization is a critical aspect of
real-time and multi-tasking systems, ensuring
that tasks (or threads) cooperate and access
shared resources in a way that prevents
conflicts, data corruption, and other errors
that can occur due to concurrent access.
• In a real-time operating system (RTOS), it is
important to manage synchronization
carefully to meet the timing and consistency
requirements of tasks.
Critical Section:
• A critical section is a part of the code
where shared resources (e.g., data,
variables, hardware devices) are
accessed.
• Proper synchronization is required to
ensure that only one task can access
the critical section at a time to prevent
race conditions.
Race Condition:
• A race condition occurs when two or
more tasks attempt to access or modify
shared data simultaneously, leading to
inconsistent or erroneous results.
• Synchronization mechanisms are used
to prevent race conditions.
How is task synchronization done in RTOS?

●Semaphores:
●A signal between tasks that doesn't carry data.
● A task can "take" a semaphore and block until another
task signals it.
●Mutexes:
●A binary semaphore that protects critical sections.
● A task "takes" a mutex before entering a critical section
and "gives" it when it's done.
●Queues:
●A mechanism for tasks to communicate and share data.
●Events:
●A mechanism for synchronizing internal activities.
●Message passing:
●A mechanism for tasks to exchange data by sending and receiving messages.
Semaphore
• A semaphore is essentially a variable
or abstract data type that is used to
control access to a shared resource in a
concurrent system, such as a
multitasking environment.
• Semaphores are used to signal
between tasks and synchronize their
execution.
There are two primary types
of semaphores:
• Binary Semaphore (or Mutex):
• A binary semaphore can only have two values:
0 or 1.
• It is primarily used for mutual exclusion, ensuring
that only one task can access a critical section or
a shared resource at a time.
• It works like a lock:
• when a task acquires the semaphore, its value
becomes 0, and other tasks must wait until the
semaphore is released (its value becomes 1
again).
Counting semaphore
• A counting semaphore can have a value that is an
integer, and its value can be greater than 1.
• It is used when there are multiple instances of a
shared resource (e.g., a pool of identical resources,
like a set of printers or buffers).
• The semaphore value represents how many
instances of the resource are available.
• Tasks can "take" (decrement) the semaphore when
they access the resource and "give" (increment) it
when they release the resource.
Operations
• The process of using Semaphores provides
two operations:
• wait (P): The wait operation decrements the
value of the semaphore
• signal (V): The signal operation increments
the value of the semaphore.
• When the value of the semaphore is zero,
any task that performs a wait operation will
be blocked until another task performs a
signal operation.
Wait (P operation):
• Wait (P operation): This operation
checks the semaphore’s value.
• If the value is greater than 0, the task
is allowed to continue, and the
semaphore’s value is decremented by 1.
• If the value is 0, the task is blocked
(waits) until the semaphore value
becomes greater than 0.
Signal (V operation):
• Signal (V operation): After a task is
done using the shared resource, it
performs the signal operation.
• This increments the semaphore’s value
by 1, potentially unblocking other
waiting tasks and allowing them to
access the resource.
Inter process communication
• Inter-process communication (IPC) allows different programs or processes running on a computer to share information with each other.
• IPC allows processes to communicate by
using different techniques like sharing
memory, sending messages, or using files.
• It ensures that processes can work together
without interfering with each other.
The two fundamental models of Inter
Process Communication are:

• Shared Memory
• Message Passing
Shared Memory
• IPC through Shared Memory is a
method where multiple processes are
given access to the same region of
memory.
• This shared memory allows the
processes to communicate with each
other by reading and writing data
directly to that memory area.
Message Passing
• IPC through Message Passing is a method
where processes communicate by sending
and receiving messages to exchange data.
• In this method, one process sends a
message, and the other process receives it,
allowing them to share information.
• Message Passing can be achieved through
different methods like Sockets, Message
Queues or Pipes.
Methods
• Pipes
• Message Queues
• Shared Memory
• Remote Procedure Calls (RPC)
• Semaphores
• Sockets.
Pipes
• In an operating system (OS), a pipe is a
temporary connection that allows
programs to communicate with each
other.
●A pipe is a virtual channel that allows data to be
transferred between processes.
●A pipe can be one-way or two-way.
●In a one-way pipe, one process writes data to
the pipe, and the other process reads from it.
●In a two-way pipe, both processes can read and
write data.
Types of Pipes
• Anonymous Pipes: used for communication between related processes (e.g., parent and child); created using system calls like pipe() (in Unix-like systems).
• Named Pipes (FIFOs): can be used by unrelated processes to communicate.
• Created using the mkfifo() system call or command-line tools.
• They appear as files in the filesystem and can be opened by processes like regular files.
How pipe works
• A pipe provides a communication channel
between two processes.
• The process sending data to the pipe writes
to the pipe, while the process receiving data
from the pipe reads from it.
• Producer Process (Writer): This process
writes data into the pipe.
• Consumer Process (Reader): This process
reads data from the pipe.
• The pipe has two ends:
• Write End: The process sends (writes)
data here.
• Read End: The process receives
(reads) data here.
a) Creating a Pipe
• A pipe is created in the operating system by using a system call or function like pipe() in Unix-like systems.
• This call returns two file descriptors: one for the read end and one for the write end.

    int pipefd[2]; // Array to store file descriptors for the pipe
    pipe(pipefd);  // Creates the pipe; pipefd[0] is read, pipefd[1] is write
b) Forking a Process
• In most cases, the pipe is used between a parent and child process.
• So after creating the pipe, the parent process typically calls fork() to create a child process.

    pid_t pid = fork();
    if (pid == 0) {
        // Child process: read from the pipe
    } else {
        // Parent process: write to the pipe
    }
c) Writing to the Pipe
• The parent process (or any writer process) writes data into the pipe using write().
• The data is buffered in the kernel until the reader process is ready to read it.

    write(pipefd[1], "Hello, World!", 13); // Write to the pipe (write end)
d) Reading from the Pipe
• The child process (or any reader process) reads the data from the pipe using read().
• The data is passed from the pipe's buffer to the reading process.

    char buffer[128];
    read(pipefd[0], buffer, sizeof(buffer)); // Read from the pipe (read end)
How Data is Transferred:
• The pipe is a buffer that temporarily holds data that
one process writes and another process reads.
• Pipes are unidirectional, meaning data can only
flow in one direction at a time: from the writer to the
reader.
• The operating system manages the data in the pipe,
ensuring that the writer can continue writing until the
buffer is full, and the reader can continue reading
until the buffer is empty.
• If the writer tries to write when the pipe is full, it may block until the reader consumes some data, creating a natural flow-control mechanism.
Simple C program example showing IPC via pipes:

    #include <stdio.h>
    #include <unistd.h>
    #include <string.h>

    int main() {
        int pipefd[2];
        pid_t pid;
        char buffer[128];

        pipe(pipefd); // Create a pipe

        pid = fork(); // Fork a child process

        if (pid == 0) { // Child process
            close(pipefd[1]); // Close write end
            read(pipefd[0], buffer, sizeof(buffer)); // Read from pipe
            printf("Child received: %s\n", buffer);
            close(pipefd[0]);
        } else { // Parent process
            close(pipefd[0]); // Close read end
            const char *message = "Hello from Parent!";
            write(pipefd[1], message, strlen(message) + 1); // Write to pipe
            close(pipefd[1]);
        }

        return 0;
    }
PIPE STATES
Message Queues
• Queues are the primary form of intertask
communications.
• They can be used to send messages between
tasks, and between interrupts and tasks.
• In most cases they are used as FIFO (First
In First Out) buffers with new data being sent
to the back of the queue, although data can
also be sent to the front.

Primary functions of the message queues:

• Message Storage:
• A message queue stores messages sent by one process until they are retrieved by another process.
• Each message is placed in the queue and remains there until it is read by the receiving process.
• Ordered Communication:
• Message queues ensure that messages are delivered in the order they were sent.
• This ordered communication is vital in scenarios where the sequence of operations matters.
• Asynchronous Communication:
• With message queues, processes do not need to be synchronized or directly connected.
• A sending process can place a message in the queue and a receiving process can retrieve it later, allowing for asynchronous communication.
• Decoupling of Processes:
• Message queues allow processes to be decoupled, meaning the sending and receiving processes do not need to be aware of each other's existence or state.
• They interact indirectly through the queue, which can improve modularity and scalability.
• Prioritization:
• Some message queue implementations allow messages to be prioritized, so that certain messages can be processed before others based on their priority levels.
Atomic operations
• A queue is a simple FIFO system with
atomic reads and writes.
• “Atomic operations” are those that
cannot be interrupted by other tasks
during their execution.
• This ensures that another task cannot
overwrite our data before it is read by
the intended target.
Example
• Task A writes some data to a queue.
• No other thread can interrupt Task A during that writing process.
• Afterwards, Task B can write some other piece of data to the queue.
• Task B's data will appear behind Task A's data, as the queue is a FIFO system.
Types of Message Queues
• The two types of Message Queues:
• Point-to-Point
• Publish/Subscribe.
Point-to-Point
• Point-to-Point or P2P messaging model
delivers one message to only one
consumer application.
• There may be multiple receiver
applications attached to the Message
Queue, but each message in the queue
is only consumed by one receiver
application to which it is addressed.
P2P
• In P2P, the sender and receiver applications
have no timing dependencies.
• The receiver can fetch the message regardless
of whether it was running when the sender
application sent it.
• Once it’s successfully received, the receiver
acknowledges the successful processing of a
message to the Message Queue so that it can
be removed from the queue.

Pub/Sub
• A Publish/Subscribe messaging model, also known as Pub/Sub, delivers messages to all subscribers that are subscribed to that Topic.
• In Pub/Sub, message producers are known as Publishers, and message consumers are called Subscribers.
• Topics are essentially entities that contain the message and additional details like publisher and subscriber information.
Pub/Sub
• Compared to P2P messaging, Pub/Sub can deliver a
single message to multiple subscribers.
• All subscribers subscribed to that topic will receive
messages and consume them.
• Another difference comes from the fact that P2P
messaging requires the sender application to know
the address of the receiver application.
• In Pub/Sub, the producer doesn’t need to know
about the subscribers.
• This characteristic provides high decoupling for
applications.
IPC using message queue
• Message Queue is a linked list of
messages stored within the kernel and
identified with a unique key
called Message Queue Identifier.
Steps
• Step 1: Create a new Message Queue or connect to
one that already exists using msgget().
• The msgget() function returns the Message Queue
Identifier associated with the argument key.
• Step 2: Insert a message into the queue
using msgsnd().
• Every message that is added to a queue, has a
positive long integer type field, a non-negative length,
and the actual data bytes (corresponding to the
length), which are all supplied to msgsnd().

Steps
• Step 3: Read a message from the
queue using msgrcv(). Each message
has an identifier so that the user
process can detect and select the
desired message.
• Step 4: Execute Message Queue
Control Operations using msgctl().
Example
• Companies use email services for a
variety of purposes –
• gain customer signups
• run marketing campaigns
• deliver their weekly/monthly product
updates to their customers
• customer account reset, and so on.
• Using Message Queues, for example, a company can run:
• Multiple producer services (marketing campaigns, account reset, product updates, newsletters) that place formattable email elements into the queue.
• A single consumer microservice dedicated to email delivery, totally independent of the source of the email. This service can read messages from the queue one at a time and send emails accordingly.
Event register
• An event register is a special register maintained as part of each task's control block.
• It consists of a group of binary event flags used to track the occurrence of specific events.
• An external source, such as another task or an ISR, can set bits in the event register to inform the task that a particular event has occurred.
Class Assignment
• Describe how interrupt handling and
exception handling are managed in a
Real-Time Operating System. Design a
strategy for minimizing interrupt latency
in a system with strict real-time
constraints. Evaluate the potential
impact of interrupt handling on the
overall system's performance and
real-time guarantees.
