
Unit 4

1. Control Data Flow Graph for Real-Time Program Analysis


A **Control Data Flow Graph (CDFG)** is a specialized representation used to analyze real-
time programs by combining control-flow and data-flow information in a single graph.

Components of CDFG:
- **Control Flow:** It captures the execution order of instructions, usually represented as
nodes for individual instructions and edges that denote control transfers (like branches and
jumps).
- **Data Flow:** This shows how data values are produced and consumed by different
operations. It indicates dependencies between different variables and functions within the
program.

Importance in Real-Time Systems:


In real-time systems, tasks must complete within strict timing constraints. CDFGs are crucial
for several reasons:
- **Deadline Analysis:** By analyzing the critical path (the longest path through the graph,
which determines the completion time), developers can ascertain whether tasks can meet
their deadlines.
- **Resource Allocation:** Understanding the flow of data and control helps in optimizing
resource usage, ensuring that processors are utilized efficiently without overloading any
single component.
- **Optimization:** CDFGs enable compiler optimizations by revealing opportunities for
instruction reordering, parallel execution, and reducing the overhead caused by unnecessary
data movements.
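The deadline analysis described above can be sketched as a longest-path computation over a small CDFG. The graph, node names, and worst-case execution times (WCETs) below are hypothetical, chosen only to illustrate the idea:

```python
# Sketch: critical-path (longest-path) analysis over a tiny CDFG, assuming
# each node carries a worst-case execution time (WCET). All names and
# timings here are hypothetical.

from collections import defaultdict

def critical_path_length(edges, wcet):
    """Length of the longest path through a DAG, weighted by node WCET."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    # Kahn's topological sort
    order = []
    stack = [n for n in wcet if indeg[n] == 0]
    while stack:
        n = stack.pop()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                stack.append(m)
    # Earliest finish time of each node = max predecessor finish + own WCET
    finish = {}
    for n in order:
        start = max((finish[p] for p, c in edges if c == n), default=0)
        finish[n] = start + wcet[n]
    return max(finish.values())

edges = [("read", "filter"), ("read", "log"), ("filter", "actuate")]
wcet = {"read": 2, "filter": 5, "log": 1, "actuate": 3}
print(critical_path_length(edges, wcet))  # read -> filter -> actuate = 10
```

If the task's deadline is shorter than this critical-path length, no scheduling trick on a single processor can make it meet the deadline.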

2. Data Flow Diagram with an Example


A **Data Flow Diagram (DFD)** visually represents how data moves through a system,
emphasizing the processes, data stores, and external entities that interact with the system. It
typically consists of four main components:

- **Processes:** Activities that transform input data into output data.


- **Data Flows:** Arrows that represent the flow of data between processes, data stores,
and external entities.
- **Data Stores:** Repositories for data that can be accessed by processes.
- **External Entities:** Outside systems or users that interact with the system.

Example: Online Shopping System
1. **Processes:**
- **Browse Products:** Users can view products available for sale.
- **Add to Cart:** Users can select products to place in their shopping cart.
- **Checkout:** Users can finalize their purchases.

2. **Data Stores:**
- **Product Database:** Stores details of all available products.
- **Shopping Cart:** Temporary storage for items selected by the user before purchase.
- **User Database:** Contains user information, order history, and payment details.

3. **External Entities:**
- **User:** The person interacting with the shopping system.
- **Payment Gateway:** An external service that processes payments.

3. Explanation of Key Concepts
a. Process
A **Process** is a program in execution and represents a dynamic entity. It encompasses:
- **Program Code:** The actual instructions to be executed.
- **Current Activity:** Identified by the program counter, which points to the next
instruction to execute.
- **Resources:** Memory allocation, file descriptors, and other resources that the process
needs to run.

Processes operate in separate memory spaces, which provides isolation and protects them
from each other's data. Context switching, the act of switching the CPU from one process to
another, incurs overhead due to saving and restoring process state.
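The memory isolation described above can be demonstrated with a short sketch: a child process modifies a variable, and the parent never sees the change because each process has its own address space.

```python
# Sketch: processes run in separate address spaces, so a change made in a
# child process is not visible to the parent. Hypothetical example.

import multiprocessing as mp

counter = 0

def child():
    global counter
    counter += 1  # modifies only the child's private copy of `counter`

if __name__ == "__main__":
    p = mp.Process(target=child)
    p.start()
    p.join()
    print(counter)  # still 0 in the parent: memory is not shared
```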

b. Thread
A **Thread** is a lightweight process, the smallest sequence of programmed instructions
that can be managed independently by a scheduler. Threads within the same process share
the same address space and resources but have separate stacks for maintaining their state.

**Advantages of Threads:**
- **Lower Overhead:** Creating and managing threads is less resource-intensive compared
to processes.
- **Concurrency:** Threads enable concurrent execution within the same application,
improving responsiveness (for example, a web browser can load pages while allowing user
input).
- **Resource Sharing:** Threads within the same process can easily communicate and share
data, leading to efficient resource utilization.
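The shared address space mentioned above can be sketched in a few lines: worker threads write into a list that the main thread reads directly, with no copying or message passing (names here are hypothetical).

```python
# Sketch: threads in one process share memory, so updates made by worker
# threads are directly visible to the main thread.

import threading

results = []

def worker(n):
    results.append(n * n)  # writes into memory shared with the main thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9]
```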

c. Tasks
A **Task** is a specific unit of work or an action that needs to be executed. In the context of
real-time systems, tasks are often defined with:
- **Priority:** The importance of the task, influencing scheduling.
- **Timing Constraints:** Deadlines by which the task must be completed.
Tasks can be implemented using processes or threads, depending on the isolation required.
In real-time applications, task scheduling and management are crucial for meeting deadlines
and ensuring system reliability.
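Priority-driven task selection can be sketched with a heap, where the scheduler always picks the ready task with the highest priority. The task names and priority values below are hypothetical:

```python
# Sketch: priority-based task selection using a min-heap, where a lower
# number means higher priority. Task names and priorities are hypothetical.

import heapq

ready = []
heapq.heappush(ready, (1, "read_sensor"))
heapq.heappush(ready, (3, "log_data"))
heapq.heappush(ready, (2, "update_display"))

# The scheduler dispatches tasks in priority order.
order = [heapq.heappop(ready)[1] for _ in range(3)]
print(order)  # ['read_sensor', 'update_display', 'log_data']
```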

4. Problems Associated with Sharing Data by Multiple Tasks


Sharing data among multiple tasks can lead to several significant issues:

a. Race Conditions
A **Race Condition** occurs when two or more tasks read and write shared data
simultaneously. The outcome depends on the timing of task execution, which can lead to
unpredictable and erroneous results. For example, if two tasks increment the same counter
variable at the same time, they may overwrite each other's updates, leading to an incorrect
count.
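The shared-counter scenario above can be sketched with two threads and a lock. Without the lock, the read-modify-write of `counter += 1` can interleave between threads and lose updates; holding a `Lock` around it makes the increment atomic.

```python
# Sketch: two threads incrementing a shared counter. The Lock serialises
# the read-modify-write so no updates are lost.

import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:       # without this, increments can interleave and be lost
            counter += 1

t1 = threading.Thread(target=safe_increment, args=(100_000,))
t2 = threading.Thread(target=safe_increment, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(counter)  # 200000 -- no lost updates
```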

b. Deadlocks
A **Deadlock** occurs when two or more tasks are waiting for each other to release
resources, causing all involved tasks to be indefinitely blocked. For instance, if Task A holds a
lock on Resource 1 and waits for Resource 2 while Task B holds a lock on Resource 2 and
waits for Resource 1, both tasks will wait indefinitely, leading to a deadlock situation.
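The standard fix for the Task A / Task B scenario above is to impose a global lock ordering: every task acquires Resource 1 before Resource 2, so a circular wait can never form. A minimal sketch (task names are hypothetical):

```python
# Sketch: deadlock avoidance by lock ordering. Both tasks acquire
# resource_1 first, then resource_2, so no circular wait can arise.

import threading

resource_1 = threading.Lock()
resource_2 = threading.Lock()

def task(name, done):
    with resource_1:        # always first
        with resource_2:    # always second
            done.append(name)

done = []
a = threading.Thread(target=task, args=("A", done))
b = threading.Thread(target=task, args=("B", done))
a.start(); b.start()
a.join(); b.join()
print(sorted(done))  # ['A', 'B'] -- both tasks complete, no deadlock
```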

c. Data Corruption
When tasks access shared data without proper synchronization, it can lead to **data
corruption**. This corruption can occur when one task modifies data while another task
reads it, resulting in inconsistencies. For example, if Task 1 is writing to a shared buffer while
Task 2 is reading from it, Task 2 might read incomplete or invalid data.

These problems highlight the necessity of proper synchronization mechanisms to manage
data access and ensure data integrity in concurrent systems.

5. Inter-Process Communication (IPC) and Solutions


**Inter-Process Communication (IPC)** refers to the mechanisms that allow processes to
communicate and synchronize their actions. IPC helps avoid direct data sharing issues by
providing structured communication channels.

IPC Mechanisms:
1. **Message Queues:** Allow processes to send and receive messages in a queue format,
providing asynchronous communication. This method enables processes to operate
independently while still being able to share information.

2. **Pipes:** A unidirectional communication channel where data flows from one process to
another. Named pipes allow communication between unrelated processes, while
anonymous pipes are typically used between parent and child processes.

3. **Shared Memory:** Processes can access a common memory space for fast data
exchange. However, synchronization mechanisms (like semaphores or mutexes) are required
to manage access and avoid race conditions.

4. **Sockets:** Facilitate communication between processes over a network. They can be
used for inter-machine communication, allowing distributed systems to interact seamlessly.
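The pipe mechanism from the list above can be sketched with `multiprocessing.Pipe`: the child sends a message through the pipe and the parent receives it, with no directly shared memory. (Note that `multiprocessing.Pipe` is bidirectional by default; the payload string here is hypothetical.)

```python
# Sketch of IPC via a pipe: the child process sends a message, the parent
# receives it. No memory is shared directly between the two processes.

import multiprocessing as mp

def sender(conn):
    conn.send("sensor reading: 42")  # hypothetical payload
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=sender, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # sensor reading: 42
    p.join()
```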

Advantages of IPC:
- **Decoupling:** IPC reduces the coupling between processes, enabling them to operate
independently.
- **Scalability:** By avoiding shared data, systems can be designed to scale more effectively.
- **Resource Management:** IPC mechanisms allow efficient use of system resources by
controlling how processes interact.

6. Tight Coupling & Loose Coupling in a Multiprocessor System


In multiprocessor systems, the design can vary in how tightly processors are connected and
share resources:

a. Tight Coupling
In a **tightly coupled system**, multiple processors share a common memory and are
interconnected through a fast communication bus. Characteristics include:
- **Shared Memory:** All processors can access the same physical memory, facilitating fast
data sharing.
- **Low Latency:** Since processors share memory, communication delays are minimized.
- **Increased Bandwidth:** Tightly coupled systems often have high bandwidth due to the
proximity of processors.

**Challenges:**
- **Scalability Issues:** As more processors are added, the contention for shared resources
can lead to bottlenecks.
- **Complexity in Synchronization:** Managing access to shared data requires sophisticated
synchronization mechanisms, increasing system complexity.

b. Loose Coupling
In a **loosely coupled system**, each processor has its own local memory, and
communication occurs through message passing. Characteristics include:
- **Independent Memory:** Processors do not share a physical memory space, enhancing
data isolation.
- **Scalability:** These systems can scale more easily as processors can operate
independently without contention for shared memory.
- **Fault Tolerance:** A failure in one processor does not directly impact others, improving
system robustness.

**Challenges:**
- **Higher Latency:** Communication through message passing introduces delays compared
to direct memory access.
- **Increased Complexity in Communication:** Designing efficient communication protocols
can be challenging.

7. FSM States in a Program Model for ACVM


The program model of an **ACVM (Automatic Chocolate Vending Machine)**, a common
embedded-systems case study, is often described with a Finite State Machine (FSM). At the
task level, the program passes through the following states:
1. **Idle State:** The program is waiting for input or events. It consumes minimal resources
and is not performing any processing.
2. **Running State:** The program actively executes instructions. It may perform
computations, handle input/output operations, and interact with other components.

3. **Blocked State:** The program is waiting for a resource or event to proceed. This state
can occur due to waiting for I/O operations, locks, or signals from other tasks.

4. **Terminated State:** The program has completed execution, either successfully or due
to an error. Resources allocated to the program are released in this state.
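The four states above can be sketched as a small transition table; the event names (`start`, `wait_io`, `io_done`, `finish`) are hypothetical labels for the triggers described in each state.

```python
# Sketch: the Idle/Running/Blocked/Terminated states as an FSM transition
# table. Event names are hypothetical.

TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "wait_io"): "blocked",
    ("blocked", "io_done"): "running",
    ("running", "finish"): "terminated",
}

def step(state, event):
    # Events not defined for the current state leave it unchanged.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "wait_io", "io_done", "finish"]:
    state = step(state, event)
print(state)  # terminated
```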
