
BRANCH: AIE    REGULATION: R-22

Course: B.Tech    SUBJECT: Data Structures and Algorithms 2

Subject Code: 22AIE203    Year & Sem: II Year, I Sem

Name of the Instructor: Naresh Cherukuri

UNIT_1
Syllabus: Graphs- Representations of graphs, Adjacency and Incidence matrices, Adjacency
List, Dynamic Graphs and persistence - Sparse Matrices- Key Value and Structural
implementations, Scalability and data driven parallelism, Block and band matrices.
Generalized Matrix and Vector interface. Standard implementations in NumPy (Python) and
ND Array (Java) - Temporal manipulation and persistence

1.1. Graphs - Representations of graphs, Adjacency and Incidence matrices, Adjacency List
A graph is a non-linear data structure consisting of nodes and edges. The nodes
are sometimes also referred to as vertices, and the edges are lines or arcs that
connect any two nodes in the graph. More formally, a graph consists of a finite
set of vertices (or nodes) and a set of edges, each of which connects a pair of
nodes.

There are many types of graphs, but here we concentrate on only two:
1. Undirected Graph
2. Directed Graph

1. Undirected Graph:

• A graph is called an undirected graph if all the edges present between its
nodes are undirected.
• An undirected edge has no notion of a starting node and an ending node; it
simply connects two vertices. All the edges of a graph must be undirected
for it to be called an undirected graph, i.e., none of its edges has a direction.

Fig 1.1 Undirected Graph
• The graph displayed above is an example of an undirected graph. It has
four vertices, named vertex A, vertex B, vertex C, and vertex D.
• There are also exactly four edges between these vertices. None of these
edges is directed, which means the edges have no specific direction.
2. Directed Graph:

• Another name for a directed graph is a digraph.

• A graph is called a directed graph, or digraph, if every edge between its
vertices is directed, i.e., has a defined direction. A directed edge specifies
the node at which it starts and the node at which it ends.

Fig 1.2 Directed Graph

• All the edges of a graph must be directed for it to be called a directed graph
or digraph. Every edge of a digraph has a direction: it starts at one vertex
and ends at another.

Representations of Graph
Here are the two most common ways to represent a graph (for simplicity, we
consider only unweighted graphs in this unit):
1. Adjacency Matrix
2. Adjacency List

Adjacency Matrix
An adjacency matrix represents a graph as a matrix of booleans (0s and 1s).
Assuming there are n vertices in the graph, create a 2D matrix adjMat[n][n]
of dimension n x n.
• If there is an edge from vertex i to vertex j, set adjMat[i][j] to 1.
• If there is no edge from vertex i to vertex j, set adjMat[i][j] to 0.
Representation of Undirected Graph as Adjacency Matrix:
The figure below shows an undirected graph. Initially, the entire matrix is
initialized to 0. If there is an edge between source and destination, we insert 1
at both adjMat[source][destination] and adjMat[destination][source], because
the edge can be traversed either way.

Representation of Directed Graph as Adjacency Matrix:

The figure below shows a directed graph. Initially, the entire matrix is
initialized to 0. If there is an edge from source to destination, we insert 1 only
at adjMat[source][destination].

Develop a Java program to represent a graph using adjacency matrix representation
import java.util.Scanner;

public class AdjacencyMatrix {
public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);

// Input the number of vertices and if the graph is directed


System.out.print("Enter the number of vertices: ");
int numVertices = scanner.nextInt();
System.out.print("Is the graph directed? (true/false): ");
boolean isDirected = scanner.nextBoolean();

// Create the adjacency matrix


int[][] adjacencyMatrix = new int[numVertices][numVertices];

// Add edges
System.out.print("Enter the number of edges: ");
int numEdges = scanner.nextInt();
for (int i = 0; i < numEdges; i++) {
System.out.print("Enter the source and destination of edge " + (i + 1) + ": ");
int src = scanner.nextInt();
int dest = scanner.nextInt();

// Add the edge to the matrix


if (src >= 0 && src < numVertices && dest >= 0 && dest < numVertices) {
adjacencyMatrix[src][dest] = 1;
// If the graph is undirected, add an edge in the opposite direction
if (!isDirected) {
adjacencyMatrix[dest][src] = 1;
}
} else {
System.out.println("Invalid vertex!");
}
}

// Display the adjacency matrix


System.out.println("Adjacency Matrix:");
for (int i = 0; i < numVertices; i++) {
for (int j = 0; j < numVertices; j++) {
System.out.print(adjacencyMatrix[i][j] + " ");
}
System.out.println();
}

scanner.close();
}
}
Output:
Enter the number of vertices: 4
Is the graph directed? (true/false): false
Enter the number of edges: 3
Enter the source and destination of edge 1: 0 1
Enter the source and destination of edge 2: 1 2
Enter the source and destination of edge 3: 2 3
Adjacency Matrix:
0 1 0 0
1 0 1 0
0 1 0 1
0 0 1 0

2. Adjacency List
An array of lists is used to store the edges of the graph. The size of the array
equals the number of vertices (i.e., n). Each index in this array represents a
specific vertex in the graph: the entry at index i contains a linked list of the
vertices adjacent to vertex i.
Assuming there are n vertices in the graph, create an array of lists of size n,
adjList[n].
• adjList[0] holds all the nodes connected (neighbours) to vertex 0.
• adjList[1] holds all the nodes connected (neighbours) to vertex 1, and so on.
Representation of Undirected Graph as Adjacency List:
The undirected graph below has 3 vertices, so an array of lists of size 3 is
created, where each index represents a vertex. Vertex 0 has two neighbours
(1 and 2), so vertices 1 and 2 are inserted at index 0 of the array. Similarly,
vertex 1 has two neighbours (2 and 0), so vertices 2 and 0 are inserted at
index 1. Likewise, the neighbours of vertex 2 are inserted at index 2.

Undirected Graph to Adjacency list


Representation of Directed Graph as Adjacency List:

The directed graph below has 3 vertices, so an array of lists of size 3 is
created, where each index represents a vertex. Vertex 0 has no neighbours.
Vertex 1 has two neighbours (0 and 2), so vertices 0 and 2 are inserted at
index 1. Similarly, the neighbours of vertex 2 are inserted at index 2 of the
array.

Develop a Java program to represent a graph using adjacency list representation

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.Scanner;

public class AdjacencyListGraph {


public static void main(String[] args) {
Scanner scanner = new Scanner(System.in);

// Input the number of vertices and if the graph is directed


System.out.print("Enter the number of vertices: ");
int numVertices = scanner.nextInt();
System.out.print("Is the graph directed? (true/false): ");
boolean isDirected = scanner.nextBoolean();

// Create the adjacency list (ArrayList of LinkedLists)


ArrayList<LinkedList<Integer>> adjacencyList = new ArrayList<>();
for (int i = 0; i < numVertices; i++) {
adjacencyList.add(new LinkedList<>());
}

// Add edges
System.out.print("Enter the number of edges: ");
int numEdges = scanner.nextInt();
for (int i = 0; i < numEdges; i++) {
System.out.print("Enter the source and destination of edge " + (i + 1) + ": ");

int src = scanner.nextInt();
int dest = scanner.nextInt();

// Add the edge to the adjacency list


if (src >= 0 && src < numVertices && dest >= 0 && dest < numVertices) {
adjacencyList.get(src).add(dest);
// If the graph is undirected, add an edge in the opposite direction
if (!isDirected) {
adjacencyList.get(dest).add(src);
}
} else {
System.out.println("Invalid vertex!");
}
}

// Display the adjacency list


System.out.println("Adjacency List:");
for (int i = 0; i < numVertices; i++) {
System.out.print(i + ": ");
for (int vertex : adjacencyList.get(i)) {
System.out.print(vertex + " ");
}
System.out.println();
}

scanner.close();
}
}
Output:
Enter the number of vertices: 4
Is the graph directed? (true/false): false
Enter the number of edges: 3
Enter the source and destination of edge 1: 0 1
Enter the source and destination of edge 2: 1 2
Enter the source and destination of edge 3: 2 3
Adjacency List:
0: 1
1: 0 2
2: 1 3
3: 2

1.2 Dynamic Graphs:
A dynamic graph is like a regular graph, but it evolves over time. To break
this down, let’s first define a graph in simple terms:
1. Graph: A collection of nodes (also called vertices) connected by edges
(lines between the nodes). Think of a social network: people are nodes, and
friendships between them are edges.
2. Static Graph: This is a graph where the structure (nodes and edges)
doesn’t change once it’s set. For example, a road map where cities (nodes)
and roads (edges) are fixed and do not change.
However, in many real-world scenarios, graphs are not static. Relationships
and connections change. In a dynamic graph, nodes and edges can appear,
disappear, or change over time. This "dynamic" quality makes them powerful
for modeling complex, changing systems.
Analogies to Understand Dynamic Graphs
Imagine a train network where stations are nodes, and train routes are edges.
In a static graph, this network doesn’t change: trains always run the same
routes between fixed stations. But in reality, train routes can change daily due
to maintenance, new stations might open, and some routes might be closed
temporarily. A dynamic graph models these changes, capturing a more realistic
picture of how the train network evolves.
Another analogy is a social media network:
• When someone makes a new friend, an edge is added to the graph.
• If two friends stop communicating, the edge between them might weaken
or disappear.
• New people (nodes) join the network, and some leave, changing the graph
structure.
Why Dynamic Graphs are Necessary
Dynamic graphs are crucial because they model systems that change over
time. Let’s explore some reasons in detail:
1. Real-World Applications: Many networks in the real world are inherently
dynamic:
o Social Networks: Friendships, following relationships, and
interactions change over time.
o Communication Networks: The internet is constantly evolving;
nodes (devices) join, leave, or change their connections.

o Biological Networks: In cell biology, interactions between proteins,
genes, or neurons can fluctuate depending on environmental
conditions or internal states.
o Transportation Networks: Traffic patterns vary throughout the day,
road closures happen, and public transport routes can change.
A static graph cannot accurately capture the evolving nature of these networks.
Dynamic graphs, however, can represent the changes over time, providing
more insights into the behavior and properties of such systems.
2. Complex Pattern Detection: Many complex phenomena can only be
understood when changes are tracked over time. For instance:
o Disease Spread: To model how a disease spreads through a
population, you need to consider how interactions between
individuals (nodes) change over time. Dynamic graphs allow
researchers to simulate and study different scenarios, like how
reducing contact (cutting edges) affects the spread.
o Network Vulnerabilities: In cybersecurity, networks change as new
nodes are added or old ones removed. By studying how these
changes affect network security, one can identify weak points and
respond more effectively to threats.
3. Efficient Updates: When dealing with a large, constantly changing
network, recalculating everything from scratch every time a small change
occurs would be extremely inefficient. Dynamic graphs allow for
incremental updates, meaning you only need to adjust the parts of the
graph that change. This can significantly save computational time and
resources.
4. Prediction and Analysis: In dynamic graphs, historical changes can be
tracked, allowing us to predict future behavior. For instance, in a social
network, observing how certain relationships form or dissolve over time
might help in predicting future interactions or trends.
Technical Breakdown of Dynamic Graphs
From a technical perspective, dynamic graphs require specific algorithms and
data structures that can handle changes efficiently:
1. Data Structures: Dynamic graphs use specialized data structures to allow
quick updates:
o Dynamic adjacency lists: These lists efficiently track connections
(edges) for each node, allowing for easy insertion and deletion.
o Temporal edge lists: Instead of storing just the presence of an edge,
these store when each edge is active, enabling the tracking of
temporal changes (a small sketch appears after this list).
2. Algorithms:
o Incremental algorithms: These update only the affected part of the
graph when a change occurs. For example, if a new road opens in a

city, only the routes connected to that road need recalculating, not
the entire city's road map.
o Streaming algorithms: In cases where the graph changes
continuously (like data packets in a communication network),
streaming algorithms can process updates on the fly without
requiring the entire graph to be stored in memory.
3. Time Complexity: The efficiency of operations in dynamic graphs (adding
or removing nodes/edges) often needs to be faster than recalculating static
graphs. Therefore, researchers have developed dynamic algorithms
optimized for different types of graph changes.
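As a concrete illustration of a temporal edge list, here is a minimal, hedged Java sketch; the class and method names (TemporalEdgeList, snapshotAt) are illustrative, not from any standard library. Each edge records the interval during which it is active, so the state of the graph "as of" any time t can be reconstructed:

import java.util.ArrayList;
import java.util.List;

public class TemporalEdgeList {

    // Each edge remembers WHEN it existed: active during [from, to)
    static class Edge {
        final int u, v;
        final long from, to;
        Edge(int u, int v, long from, long to) {
            this.u = u; this.v = v; this.from = from; this.to = to;
        }
        public String toString() { return u + "-" + v; }
    }

    private final List<Edge> edges = new ArrayList<>();

    void addEdge(int u, int v, long from, long to) {
        edges.add(new Edge(u, v, from, to));
    }

    // All edges active at instant t: the graph as it looked at that time
    List<Edge> snapshotAt(long t) {
        List<Edge> active = new ArrayList<>();
        for (Edge e : edges)
            if (e.from <= t && t < e.to) active.add(e);
        return active;
    }

    public static void main(String[] args) {
        TemporalEdgeList g = new TemporalEdgeList();
        g.addEdge(0, 1, 0, 10); // edge 0-1 exists from time 0 to 10
        g.addEdge(1, 2, 5, 15); // edge 1-2 exists from time 5 to 15
        System.out.println(g.snapshotAt(3));  // [0-1]
        System.out.println(g.snapshotAt(7));  // [0-1, 1-2]
        System.out.println(g.snapshotAt(12)); // [1-2]
    }
}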
Practical Example: Dynamic Shortest Paths
Imagine Google Maps, which uses a dynamic graph of the road network to
calculate routes. Roads (edges) can close or open, and traffic (weights on
edges) changes throughout the day. A static graph approach would mean
recalculating the entire route map every time a change happens. In contrast, a
dynamic graph approach would just update the parts of the map affected by
changes, allowing for real-time route adjustments.
The Importance of Dynamic Graphs in Data Science and Beyond
In data science, dynamic graphs are used for:
• Anomaly detection: Identifying unusual patterns, such as fraudulent
transactions in a banking network, based on changes over time.
• Community detection: In social networks, dynamic graphs help reveal
how communities form, merge, or split.
In computer science, dynamic graphs are key in:
• Routing algorithms: For adapting routes in changing networks like the
internet.
• Machine learning: In graph neural networks, modeling dynamic
relationships can improve predictions for social interactions or
recommendations.

1.2.1 Write a Java program to implement a dynamic graph


// File: DynamicGraph.java

import java.util.ArrayList;
import java.util.Scanner;

public class DynamicGraph {


public static void main(String[] args) {
// Create an adjacency list to store the graph
ArrayList<ArrayList<Integer>> graph = new ArrayList<>();
Scanner scanner = new Scanner(System.in);

// Initial setup: Input the number of vertices


System.out.print("Enter the number of initial vertices: ");
int vertices = scanner.nextInt();

// Initialize the adjacency list for each vertex


for (int i = 0; i < vertices; i++) {
graph.add(new ArrayList<>());
}

// Menu-driven operations on the graph


while (true) {
System.out.println("\nChoose an operation:");
System.out.println("1. Add Edge");
System.out.println("2. Remove Edge");
System.out.println("3. Add Vertex");
System.out.println("4. Remove Vertex");
System.out.println("5. Display Graph");
System.out.println("6. Exit");
int choice = scanner.nextInt();

if (choice == 1) { // Add Edge


System.out.print("Enter source vertex: ");
int source = scanner.nextInt();
System.out.print("Enter destination vertex: ");
int destination = scanner.nextInt();

// Ensure vertices exist


if (source >= 0 && source < graph.size() && destination >= 0 && destination < graph.size()) {
graph.get(source).add(destination);
graph.get(destination).add(source); // Assuming an undirected graph
} else {
System.out.println("Invalid vertices!");
}
} else if (choice == 2) { // Remove Edge
System.out.print("Enter source vertex: ");
int source = scanner.nextInt();
System.out.print("Enter destination vertex: ");
int destination = scanner.nextInt();

// Ensure vertices exist


if (source >= 0 && source < graph.size() && destination >= 0 && destination < graph.size()) {
graph.get(source).remove((Integer) destination);
graph.get(destination).remove((Integer) source); // Assuming an undirected graph
} else {
System.out.println("Invalid vertices!");
}
} else if (choice == 3) { // Add Vertex
graph.add(new ArrayList<>());
System.out.println("Vertex " + (graph.size() - 1) + " added.");
} else if (choice == 4) { // Remove Vertex
System.out.print("Enter vertex to remove: ");
int vertex = scanner.nextInt();

if (vertex >= 0 && vertex < graph.size()) {

// Remove the vertex's own adjacency list
graph.remove(vertex);
// Remove references to it, and renumber higher-indexed vertices
// (their indices shift down by one after the removal)
for (ArrayList<Integer> adjList : graph) {
adjList.remove((Integer) vertex);
for (int k = 0; k < adjList.size(); k++) {
if (adjList.get(k) > vertex) {
adjList.set(k, adjList.get(k) - 1);
}
}
}
System.out.println("Vertex " + vertex + " removed.");
} else {
System.out.println("Invalid vertex!");
}
} else if (choice == 5) { // Display Graph
System.out.println("Graph adjacency list:");
for (int i = 0; i < graph.size(); i++) {
System.out.print(i + ": ");
for (int j : graph.get(i)) {
System.out.print(j + " ");
}
System.out.println();
}
} else if (choice == 6) { // Exit
break;
} else {
System.out.println("Invalid choice! Please try again.");
}
}

// Close the scanner


scanner.close();
}
}
Explanation of the Dynamic Operations:

• Adding an Edge: Takes source and destination vertices as input, adds them to each
other's adjacency lists (for an undirected graph).
• Removing an Edge: Takes source and destination vertices as input, removes the
corresponding entries from their adjacency lists.
• Adding a Vertex: Simply adds a new ArrayList to represent the new vertex.
• Removing a Vertex: Removes the vertex's adjacency list, deletes references to
it from every other adjacency list, and renumbers the higher-indexed vertices
(their indices shift down by one).
• Displaying the Graph: Prints the adjacency list representation of the graph.

1.3 Persistence:
Persistence in data structures refers to the ability of a data structure to
preserve its previous versions even after it has been modified. In other words,
when a data structure is persistent, you can access and use any of its past states
without affecting the current or future states. This is useful for applications
like undo functionalities, version control systems, and time-travel debugging.
There are three main types of persistence in data structures:
1. Partial Persistence:
o In partial persistence, you can access all previous versions of the data
structure, but only the latest version can be modified.
o Essentially, you can "look back" at older states of the data structure,
but you can't change them. This is simpler to implement compared
to full persistence.
2. Full Persistence:
o In full persistence, you can access and modify any previous version
of the data structure.
o Any modification to an old version results in a new version being
created, allowing you to access multiple versions and create a tree
of versions.
3. Confluent Persistence:
o This is an extension of full persistence where you can combine
different versions of the data structure into a new version. This
merging of states allows for more complex operations and histories.
How Persistence is Achieved
When a data structure is modified in a persistent way, rather than changing the
structure in place, new nodes are created to represent the changes. The
unchanged parts of the data structure are shared across versions. This is
achieved through techniques like:
• Structural Sharing: Instead of copying the entire data structure, only the
parts that are modified are copied, while the unmodified parts are shared.
This minimizes memory usage.

• Immutable Data Structures: Many persistent data structures are built
using immutable objects, which inherently do not change once created.
Any modification results in a new object, preserving the original.
Example: Persistent Linked List
A persistent linked list works similarly to a standard linked list but retains
previous versions after modifications. If you add or remove an element, a new
version of the list is created by copying only the affected nodes. The rest of
the list shares nodes with the previous versions.
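Below is a minimal, hedged Java sketch of this idea; the PNode class and prepend helper are illustrative names, not a standard API. Because nodes are immutable, prepending an element creates a new head while the entire old list is shared, so every earlier version stays intact:

// A minimal persistent (immutable) singly linked list sketch.
final class PNode<T> {
    final T value;
    final PNode<T> next; // shared across versions, never mutated
    PNode(T value, PNode<T> next) { this.value = value; this.next = next; }
}

class PersistentList {
    // Prepending returns a NEW head; the old list is untouched and shared.
    static <T> PNode<T> prepend(T value, PNode<T> head) {
        return new PNode<>(value, head);
    }

    static <T> void print(PNode<T> head) {
        for (PNode<T> n = head; n != null; n = n.next) System.out.print(n.value + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        PNode<String> v1 = prepend("C", prepend("B", prepend("A", null))); // version 1
        PNode<String> v2 = prepend("D", v1);                               // version 2
        print(v1); // old version still intact: C B A
        print(v2); // new version shares v1's nodes: D C B A
    }
}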
Use Cases of Persistent Data Structures
• Undo/Redo Operations: Text editors or graphics software can implement
undo/redo functionalities using persistent data structures.
• Version Control: Tools like Git use persistent-like data structures to
manage changes in code repositories.
• Functional Programming: Persistent data structures are common in
functional programming languages (e.g., Haskell, Clojure) where
immutability is a key concept.
Advantages
• Access to History: You can retrieve any previous state, which is useful for
debugging and historical analysis.
• Immutability: Persistent data structures promote immutability, which can
help in multithreaded environments where shared data can lead to
inconsistencies.
Disadvantages
• Overhead: Maintaining multiple versions can increase memory usage and
computational overhead.
• Complexity: Implementing persistent data structures can be more complex
than their ephemeral counterparts.
In summary, persistence in data structures allows them to maintain historical
versions, providing benefits in various applications where accessing past
states is crucial. This is typically achieved using structural sharing and
immutability, making persistence a key concept in both functional
programming and real-world applications like version control systems.

1.4. Sparse Matrix


A sparse matrix is a matrix in which most of the elements are zero. Due to
the large number of zero elements, storing sparse matrices in a standard 2D
array format results in significant memory wastage. To address this,
specialized representations are used to store only the non-zero elements of the
matrix, optimizing both space and operations.
Two common representations for sparse matrices are the 3-column
representation (triplet form) and the Compressed Sparse Row (CSR)
representation.

1.4.1. 3-Column Representation (Triplet Form)
In the 3-column (triplet) representation, a sparse matrix is stored using three
arrays to keep track of the non-zero elements:
• Row Index: The row index of each non-zero element.
• Column Index: The column index of each non-zero element.
• Value: The value of the non-zero element.
This format is easy to understand and is suitable for storing sparse matrices
where the non-zero elements are randomly distributed.
Example
Consider the following sparse matrix:
0 0 3 0
0 5 0 0
7 0 0 9

• Non-zero elements: 3, 5, 7, 9.
The 3-column representation of this matrix would be:
Row Index: 0 1 2 2
Column Index: 2 1 0 3
Value: 3 5 7 9

Here, each row in this 3-column representation corresponds to a non-zero
element, storing its row index, column index, and value. This representation
is straightforward but can become less efficient for very large matrices with
more sophisticated patterns of sparsity.

Write a Java program to represent a sparse matrix using 3-column representation
package mat;

import java.util.Scanner;

public class Matrix {

public static void main(String[] args)


{
Scanner sc=new Scanner(System.in);
int sm[][]=new int[3][3],z=0,nz=0;
System.out.println("Enter the matrix\n");
for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
sm[i][j]=sc.nextInt();
}
}

System.out.println("The sparse matrix is\n");
for(int i=0;i<3;i++)
{
System.out.println();
for(int j=0;j<3;j++)
{
System.out.printf("%d\t", sm[i][j]);
}
}

for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
if(sm[i][j]==0)
{
z++;
}
else
{
nz++;
}
}
}
if(nz>z)
{
System.out.println("The matrix is dense"); }
else
{
int k=0;
int s[][]=new int[nz][3];
for(int i=0;i<3;i++)
{
for(int j=0;j<3;j++)
{
if(sm[i][j]!=0)
{
s[k][0]=i;
s[k][1]=j;
s[k][2]=sm[i][j];
k++;
}
}
}

System.out.println("The sparse matrix is\n");


for(int i=0;i<nz;i++)
{
System.out.println();

for(int j=0;j<3;j++)
{
System.out.printf("%d\t",s[i][j]);

}
}
}
}
}

1.4.2. Compressed Sparse Row (CSR) Representation


The Compressed Sparse Row (CSR) representation is a more efficient way
to store and manipulate sparse matrices. It compresses the row information
and stores only the non-zero elements, which makes it particularly useful for
matrix-vector multiplication. CSR uses three 1-dimensional arrays:
1. Values: Stores all non-zero elements of the matrix in a row-wise manner.
2. Column Indices: Stores the column indices corresponding to each non-
zero element in the Values array.
3. Row Pointers: Stores the starting index in the Values array for each row
of the matrix.
Example
Using the same sparse matrix:

0 0 3 0
0 5 0 0
7 0 0 9

The CSR representation of this matrix would be:


• Values: [3, 5, 7, 9]
• Column Indices: [2, 1, 0, 3]
• Row Pointers: [0, 1, 2, 4]
Explanation:
1. Values: The non-zero elements listed in row-major order.
2. Column Indices: The column positions of each element in the Values
array. For example, 3 is at column index 2, 5 at column index 1, etc.
3. Row Pointers: Indicates the starting index of each row in the Values array.
o 0 means the first row starts at index 0 in the Values array.
o 1 means the second row starts at index 1.
o 2 means the third row starts at index 2.
o 4 is the end marker indicating that the third row ends at index 4 in
the Values array.
Why Use CSR?
• Memory Efficient: Stores only the non-zero elements and requires
minimal space for indexing.

• Fast Access: Enables quick traversal and matrix-vector multiplication
because of its row-wise compression.
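To make the layout concrete, the following hedged Java sketch hard-codes the three CSR arrays for the example matrix above and uses them for a matrix-vector product; the array names (values, colIndices, rowPointers) mirror the description and are not tied to any particular library:

// CSR sketch for the 3x4 example matrix above.
public class CsrDemo {
    public static void main(String[] args) {
        int[] values      = {3, 5, 7, 9};  // non-zero entries, row by row
        int[] colIndices  = {2, 1, 0, 3};  // column of each value
        int[] rowPointers = {0, 1, 2, 4};  // row i occupies values[rowPointers[i] .. rowPointers[i+1])

        double[] x = {1, 1, 1, 1};         // vector to multiply
        double[] y = new double[3];        // result, one entry per row

        for (int i = 0; i < 3; i++) {
            for (int k = rowPointers[i]; k < rowPointers[i + 1]; k++) {
                y[i] += values[k] * x[colIndices[k]];
            }
        }
        System.out.println(java.util.Arrays.toString(y)); // [3.0, 5.0, 16.0]
    }
}

Only the non-zero entries are ever touched, which is why CSR makes matrix-vector multiplication fast on sparse matrices.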

1.5 Scalability and data-driven parallelism

Scalability and data-driven parallelism are critical concepts in computer
science and software engineering, especially when dealing with
high-performance computing, big data processing, or distributed systems.
Let's dive into these concepts using easy-to-understand analogies and then
explore the technical details.
1. Scalability: Growing Efficiently
Analogy: Imagine you are running a small café. Initially, you can handle
everything on your own: taking orders, making coffee, and serving customers.
As more customers start coming in, your café becomes busier, and you find
yourself overwhelmed. To handle this growth, you decide to hire more staff: a
barista to make coffee, a cashier to take orders, and a server to deliver drinks.
Now, your café can handle more customers without you doing all the work
alone.
Scalability in computing is similar. It’s about a system’s ability to handle
increased load by adding more resources (like more servers, CPUs, or
memory) without significantly degrading performance.
Types of Scalability:
• Vertical Scaling (Scaling Up): Adding more power to your existing
machine (like upgrading to a faster CPU or adding more RAM). Think of
it like upgrading your café equipment to make coffee faster.
• Horizontal Scaling (Scaling Out): Adding more machines to handle the
load. This is like hiring more staff in your café. Each new worker helps
handle more customers.
Why Scalability Matters:
Scalability is essential because it ensures that your application can handle
increased demand smoothly. For example, if an e-commerce website can’t
scale properly, it will slow down or crash when too many people try to buy
during a sale.
2. Data-Driven Parallelism: Dividing and Conquering Work
Analogy: Imagine you’re tasked with assembling 1,000 toy cars for a toy
store. Doing it alone would take ages, but if you have a team of 10 people, you
can divide the work so that each person assembles 100 cars. Better yet, you
split the assembly into stages: one person handles the wheels, another the

body, and so on. This way, the work is done simultaneously, and the cars are
assembled much faster.
Data-Driven Parallelism works on the same principle. It’s about breaking
down a large dataset or computational task into smaller pieces and processing
them simultaneously across multiple processors or machines.
How Data-Driven Parallelism Works:
• Divide: The data or tasks are divided into smaller chunks.
• Distribute: These chunks are sent to different processors or nodes.
• Conquer: Each processor works on its chunk independently.
• Combine: The results are aggregated to form the final output.
For example, imagine sorting a massive list of numbers (a sketch follows after these steps):
1. Split the list into smaller sub-lists.
2. Each processor sorts its assigned sub-list.
3. Finally, all sub-lists are merged into one sorted list.
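In Java this pattern can be tried directly with Arrays.parallelSort, which splits the array into chunks, sorts the chunks on the common fork/join pool, and merges the results. This is a minimal sketch, not a tuned benchmark:

import java.util.Arrays;
import java.util.Random;

public class ParallelSortDemo {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(10_000_000).toArray(); // a large list of numbers

        // Divide + distribute + conquer + combine, handled internally by the JDK:
        // the array is split into chunks, each chunk is sorted on a worker thread,
        // and the sorted chunks are merged back together.
        Arrays.parallelSort(data);

        System.out.println("Sorted? " + isSorted(data));
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
        return true;
    }
}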
Key Concepts in Data-Driven Parallelism:
• Data Partitioning: Dividing data into independent chunks. Think of
slicing a pie; each slice can be eaten separately.
• Task Scheduling: Deciding which processor gets which chunk of data. It’s
like assigning each worker their part of the assembly process.
• Load Balancing: Ensuring each processor gets a fair share of work, so no
one is overwhelmed, similar to making sure no worker has too many tasks
while others have too few.
Technical Details and Challenges
1. Synchronization: Ensuring that all processors work well together. Imagine
if some café workers are too fast and others too slow—it would create a
bottleneck.
2. Communication Overhead: The cost of coordinating between processors.
It’s like the time your staff spends talking to each other instead of serving
customers.
3. Scalability Limits: Sometimes, adding more processors doesn’t help and
can even slow things down due to increased overhead. This is similar to
having too many workers in a small kitchen, where they start getting in
each other's way.
4. Fault Tolerance: Handling failures gracefully. If one worker makes a
mistake, the café shouldn’t come to a halt. Similarly, if one processor fails,
the system should continue functioning.
Applications of Data-Driven Parallelism:
• Big Data Processing: Tools like Hadoop and Spark use parallelism to
process massive datasets efficiently.
• Scientific Computing: Simulations and models (like weather forecasting)
often rely on breaking tasks into parallelizable chunks.
• Machine Learning: Training models on large datasets is sped up
significantly using data-driven parallelism.
Questions for You:
1. Scalability: Are you familiar with concepts like load balancing or
distributed systems?
2. Parallelism: Have you encountered terms like multithreading,
MapReduce, or GPU computing?
3. Synchronization: Do you know about concepts like locks, semaphores, or
barriers in parallel computing?

1.6. Block and band matrices


Block matrices are like dividing a large matrix into smaller, more manageable
chunks or "blocks." Each of these blocks can be thought of as its own smaller
matrix, and by organizing these smaller blocks, we can manipulate the larger
matrix in a more structured way.
A block matrix is a matrix that’s been partitioned into smaller matrices or
"blocks." Instead of treating every individual element of the large matrix
separately, you treat the smaller submatrices as entities within the larger
matrix. This is particularly useful when working with large matrices,
simplifying operations like matrix multiplication, inversion, and solving
systems of equations.

What does a Block Matrix Look Like?

A block matrix is typically expressed as:

Here:
• Each A_ij represents a block, which itself could be a matrix (not just a
single number).
• The whole matrix A is made up of these smaller blocks.
Imagine these blocks as neighborhoods, where each neighborhood (block)
contains its own matrix of houses (elements).
Why Use Block Matrices?
There are several practical reasons for organizing a matrix into blocks:
1. Efficiency: When working with large matrices, it's often more efficient to
treat sections (blocks) at once rather than dealing with each individual
element. For instance, multiplying two large matrices by multiplying
blocks can reduce computational complexity.

2. Structure Preservation: Sometimes, the block structure reveals
something about the underlying problem. For instance, in certain physical
systems, blocks might represent interactions between distinct subsystems.
By organizing things into blocks, we keep the interactions within the
subsystems clear and separate from the overall system.
3. Parallel Computation: In computer science, dividing a problem into
smaller blocks allows for parallel processing, where each block can be
processed independently on different processors, speeding up computation.
Example: A Simple Block Matrix

Now, each of the submatrices within the larger matrix is a block. The whole
matrix A can be manipulated at the block level, and operations like addition
and multiplication can be performed on these blocks rather than on individual
elements.
Matrix Multiplication with Blocks
Matrix multiplication is one of the operations that block matrices help
simplify. The rule for matrix multiplication still holds at the block level: the
product of two block matrices is found by multiplying corresponding blocks,
just like multiplying two numbers in a regular matrix.
Let’s say we have two block matrices:

This looks almost exactly like regular matrix multiplication, except now we
are multiplying and adding entire blocks (submatrices) instead of individual
elements.
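A hedged sketch of this idea in Java is shown below: the three outer loops walk over blocks, and each block of C accumulates the products of the corresponding A and B blocks, exactly the block-level analogue of the scalar rule C[i][j] += A[i][k] * B[k][j]. The block size B = 2 and the class name are assumptions for illustration:

// Block matrix multiplication sketch: one 2x2 block at a time.
public class BlockMultiply {
    static final int B = 2; // block size (assumed to divide n evenly)

    static int[][] multiply(int[][] a, int[][] b) {
        int n = a.length;
        int[][] c = new int[n][n];
        for (int bi = 0; bi < n; bi += B)         // block row of C
            for (int bj = 0; bj < n; bj += B)     // block column of C
                for (int bk = 0; bk < n; bk += B) // pair of blocks being multiplied
                    for (int i = bi; i < bi + B; i++)
                        for (int j = bj; j < bj + B; j++)
                            for (int k = bk; k < bk + B; k++)
                                c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static void main(String[] args) {
        int[][] a  = {{1, 2, 0, 0}, {3, 4, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
        int[][] id = {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};
        // Multiplying by the identity returns a unchanged.
        System.out.println(java.util.Arrays.deepToString(multiply(a, id)));
    }
}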

Recap

• Block matrices help organize large matrices into smaller, more manageable
submatrices.
• Each block can be thought of as its own matrix, making operations on the large matrix
easier and more structured.
• Operations like matrix multiplication can be done at the block level, which is more
efficient and can reveal more about the structure of the problem.
• Special block matrices, like block diagonal matrices, have further simplifying
properties.

1.6.1. Write a Java program to perform addition of two block matrices

package hello;

import java.util.Scanner;

class AddBlockMatrix {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);

// Input for number of blocks


System.out.println("Enter the number of blocks:");
int blocks = input.nextInt();

// Determine the matrix size based on blocks


int matrixSize = blocks * 2;
int[][] totalsubmatrix1 = new int[matrixSize][matrixSize];
int[][] totalsubmatrix2 = new int[matrixSize][matrixSize];
int[][] addblock = new int[matrixSize][matrixSize];

// Temporary 2x2 submatrices


int[][] submatrix1 = new int[2][2];
int[][] submatrix2 = new int[2][2];

// Input and placement of submatrix1 into totalsubmatrix1
for (int i = 0; i < blocks; i++) {
for (int j = 0; j < blocks; j++) {
System.out.printf("Enter the 2x2 submatrix1 at block (%d, %d):\n",
i, j);
for (int k = 0; k < 2; k++) {
for (int l = 0; l < 2; l++) {
submatrix1[k][l] = input.nextInt();
}
}
// Place the submatrix1 into totalsubmatrix1
for (int k = 0; k < 2; k++) {
for (int l = 0; l < 2; l++) {
totalsubmatrix1[i * 2 + k][j * 2 + l] = submatrix1[k][l];
}
}
}
}

// Input and placement of submatrix2 into totalsubmatrix2


for (int i = 0; i < blocks; i++) {
for (int j = 0; j < blocks; j++) {
System.out.printf("Enter the 2x2 submatrix2 at block (%d, %d):\n",
i, j);
for (int k = 0; k < 2; k++) {
for (int l = 0; l < 2; l++) {
submatrix2[k][l] = input.nextInt();
}
}
// Place the submatrix2 into totalsubmatrix2
for (int k = 0; k < 2; k++) {
for (int l = 0; l < 2; l++) {
totalsubmatrix2[i * 2 + k][j * 2 + l] = submatrix2[k][l];
}
}
}
}

// Display totalsubmatrix1
System.out.println("The reconstructed matrix1 is:");
for (int i = 0; i < matrixSize; i++) {
for (int j = 0; j < matrixSize; j++) {
System.out.print(totalsubmatrix1[i][j] + "\t");
}
System.out.println();
}

// Display totalsubmatrix2
System.out.println("The reconstructed matrix2 is:");
for (int i = 0; i < matrixSize; i++) {
for (int j = 0; j < matrixSize; j++) {
System.out.print(totalsubmatrix2[i][j] + "\t");
}
System.out.println();
}

// Calculate and store the addition of the two matrices


for (int i = 0; i < matrixSize; i++) {
for (int j = 0; j < matrixSize; j++) {
addblock[i][j] = totalsubmatrix1[i][j] + totalsubmatrix2[i][j];
}
}

// Display the resulting addition matrix


System.out.println("The reconstructed Addition matrix is:");
for (int i = 0; i < matrixSize; i++) {
for (int j = 0; j < matrixSize; j++) {
System.out.print(addblock[i][j] + "\t");
}
System.out.println();
}

input.close();
}
}

1.6.2. Band Matrices

A band matrix is a special kind of matrix where non-zero elements are
concentrated around the diagonal, with all other elements being zero. In a band
matrix, the non-zero elements form a "band" or "stripe" near the diagonal,
while everything far away from this band is zero.

What Does a Band Matrix Look Like?


A band matrix can be represented like this:

In this example:
• The non-zero elements appear near the diagonal, forming a "band."
• Farther away from the diagonal (above or below the non-zero elements),
everything is zero.
Key Properties of a Band Matrix
A band matrix is defined by two key numbers:
1. Upper bandwidth (u): The maximum distance from the main diagonal
where non-zero elements appear above the diagonal.
2. Lower bandwidth (l): The maximum distance from the main diagonal
where non-zero elements appear below the diagonal.
For example, in a matrix with upper bandwidth u = 1 and lower bandwidth
l = 1, all non-zero elements would be in the following pattern:

Here, u = 1 because the non-zero elements extend only one position above the
diagonal, and l = 1 because they extend only one position below the diagonal.

Types of Band Matrices

There are special cases of band matrices that come up frequently in practice:

1. Diagonal Matrix: This is the simplest case, where only the main diagonal
contains non-zero elements, and both bandwidths are zero.

2. Pentadiagonal Matrix: Here the band is wider: the upper and lower
bandwidths are both 2, so non-zero elements can extend up to two positions
away from the main diagonal, giving five non-zero diagonals in total.
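Because everything outside the band is zero, an n x n band matrix with lower bandwidth l and upper bandwidth u can be stored in just (l + u + 1) x n entries instead of n x n. The sketch below uses a diagonal-wise layout, band[u + i - j][j] = A[i][j]; this mapping is an assumption for illustration (real libraries, such as LAPACK-style banded routines, use similar but not identical conventions):

// Compact storage for a band matrix: only the l + u + 1 diagonals are kept.
public class BandStorage {
    final int n, l, u;
    final double[][] band; // band[u + i - j][j] holds A[i][j] inside the band

    BandStorage(int n, int l, int u) {
        this.n = n; this.l = l; this.u = u;
        this.band = new double[l + u + 1][n];
    }

    void set(int i, int j, double v) {
        if (j - i > u || i - j > l) throw new IllegalArgumentException("outside band");
        band[u + i - j][j] = v;
    }

    double get(int i, int j) {
        return (j - i > u || i - j > l) ? 0.0 : band[u + i - j][j];
    }

    public static void main(String[] args) {
        BandStorage m = new BandStorage(4, 1, 1); // tridiagonal: l = u = 1
        for (int i = 0; i < 4; i++) m.set(i, i, 2);                       // main diagonal
        for (int i = 0; i < 3; i++) { m.set(i, i + 1, -1); m.set(i + 1, i, -1); }
        System.out.println(m.get(0, 0) + " " + m.get(0, 1) + " " + m.get(0, 2)); // 2.0 -1.0 0.0
    }
}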

1. Develop a program to perform band matrix addition

package hello;

import java.util.Scanner;

class AddTwoMatrix {

    public static void main(String args[]) {
        int m, n, c, d;
        Scanner in = new Scanner(System.in);

        System.out.println("Enter the number of rows and columns of matrix");
        m = in.nextInt();
        n = in.nextInt();

        int first[][] = new int[m][n];
        int second[][] = new int[m][n];
        int sum[][] = new int[m][n];

        System.out.println("Enter the elements of first matrix");
        for (c = 0; c < m; c++)
            for (d = 0; d < n; d++)
                first[c][d] = in.nextInt();

        System.out.println("Enter the elements of second matrix");
        for (c = 0; c < m; c++)
            for (d = 0; d < n; d++)
                second[c][d] = in.nextInt();

        for (c = 0; c < m; c++)
            for (d = 0; d < n; d++)
                sum[c][d] = first[c][d] + second[c][d]; // replace '+' with '-' to subtract matrices

        System.out.println("Sum of the matrices:");
        for (c = 0; c < m; c++) {
            for (d = 0; d < n; d++)
                System.out.print(sum[c][d] + "\t");
            System.out.println();
        }

        in.close();
    }
}

2. Write a Java program to perform matrix multiplication between two band matrices

package hello;

import java.util.Scanner;

class MatrixMultiplication {

    public static void main(String args[]) {
        int m, n, p, q, sum = 0, c, d, k;
        Scanner in = new Scanner(System.in);

        System.out.println("Enter the number of rows and columns of first matrix");
        m = in.nextInt();
        n = in.nextInt();

        int first[][] = new int[m][n];

        System.out.println("Enter elements of first matrix");
        for (c = 0; c < m; c++)
            for (d = 0; d < n; d++)
                first[c][d] = in.nextInt();

        System.out.println("Enter the number of rows and columns of second matrix");
        p = in.nextInt();
        q = in.nextInt();

        if (n != p)
            System.out.println("The matrices can't be multiplied with each other.");
        else {
            int second[][] = new int[p][q];
            int multiply[][] = new int[m][q];

            System.out.println("Enter elements of second matrix");
            for (c = 0; c < p; c++)
                for (d = 0; d < q; d++)
                    second[c][d] = in.nextInt();

            for (c = 0; c < m; c++) {
                for (d = 0; d < q; d++) {
                    for (k = 0; k < p; k++)
                        sum = sum + first[c][k] * second[k][d];
                    multiply[c][d] = sum;
                    sum = 0;
                }
            }

            System.out.println("Product of the matrices:");
            for (c = 0; c < m; c++) {
                for (d = 0; d < q; d++)
                    System.out.print(multiply[c][d] + "\t");
                System.out.print("\n");
            }
        }

        in.close();
    }
}

1.7. Generalized Matrix and Vector interface.

The concept of a generalized matrix refers to matrices that go beyond the typical
square or rectangular arrangement of numbers. These matrices are used to handle
more complex mathematical structures and scenarios, where traditional matrices
might not be enough. To explain this concept, let’s break it down through a simple
analogy and then dive into some technical details.

Analogy: Boxes of Different Shapes

Imagine you're organizing a series of boxes. In most cases, each box has a
well-defined structure, like a square or rectangular box (similar to a traditional
matrix). But sometimes you need to work with more complex shapes, like boxes
that aren't rectangular, or boxes that have compartments inside them. These
complex shapes are like generalized matrices: they go beyond the standard
rectangular structure to handle more intricate patterns.

Now, in mathematics, generalized matrices extend the concept of traditional
matrices by loosening or modifying the rules. This might involve non-standard
dimensions, matrices with functional entries, or even matrices that represent
operations in non-numerical spaces. They arise in various advanced areas, like
functional analysis, graph theory, or quantum mechanics.

Key Types of Generalized Matrices

1. Matrices Over Rings and Fields


2. Block Matrices
3. Sparse Matrices
4. Operator Matrices
5. Tensor Generalizations (Multidimensional Arrays)

(Students are advised to explore items 1, 4, and 5 above, since items 2 and 3
have already been covered earlier in this unit.)

1.7.1. Vector Interfaces:

The Vector class is part of the Java Collections Framework and is used to
implement a dynamic array that can grow or shrink in size. The Vector class is
very similar to an ArrayList, but with a key difference: synchronization. While
ArrayList is not synchronized (i.e., it is not thread-safe by default), Vector is
synchronized, meaning that it is safe to use in a multithreaded environment.

Let’s break this down step by step using an analogy and then dive into the
technical details of the Vector class.

Analogy: A Stretchable Container


Imagine you have a container (like a bag) that can automatically resize based on
how many objects you put into it. Normally, a regular container has a fixed size,
so when it’s full, you can’t add any more objects without getting a bigger one.
But with a stretchable container:

• When it’s full, it automatically expands to accommodate more items.


• If you take items out, it doesn’t shrink immediately, but it still keeps room
for the maximum capacity it once held.
• If multiple people are trying to access the container (adding and removing
items) at the same time, the container has a locking mechanism to ensure
that only one person can interact with it at a time, preventing conflicts.

The Vector class works similarly:

• It stores elements in a dynamic array that automatically resizes when needed.
• It maintains synchronization, meaning it can handle multiple threads safely
by ensuring only one thread can modify the Vector at a time.

Basic Properties of Vector

1. Dynamic Array: The Vector class implements a dynamic array that grows
when the current capacity is exceeded, just like the ArrayList. However,
unlike arrays that have a fixed size, the Vector can expand dynamically,
allocating more memory when required.
2. Thread-Safe: Vector is synchronized by default, which means it can be
shared safely among multiple threads. This ensures that two threads cannot
modify the same Vector object at the same time, avoiding data corruption.
3. Growth Factor: When a Vector needs to grow, it increases its size by a
certain amount, which is typically double its current capacity. You can also
specify a custom capacity increment when creating the Vector, which
controls how much extra space is allocated each time it grows.
4. Legacy Class: Although Vector is still part of the Java Collections
Framework, it is considered a legacy class. Most modern Java applications
prefer using ArrayList and explicit synchronization via classes like
Collections.synchronizedList() or CopyOnWriteArrayList for
multithreaded scenarios because Vector's default synchronization can add
unnecessary overhead when thread safety isn’t needed.
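For comparison, here is a brief sketch of the modern alternative mentioned above: Collections.synchronizedList wraps an ordinary ArrayList so that each individual call is synchronized, though compound operations such as iteration still need an explicit lock.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListDemo {
    public static void main(String[] args) {
        // Thread-safe wrapper around an ArrayList: each call locks the list.
        List<String> safeList = Collections.synchronizedList(new ArrayList<>());
        safeList.add("Apple");
        safeList.add("Banana");

        // Iteration is a compound operation, so it must be locked manually.
        synchronized (safeList) {
            for (String s : safeList) System.out.println(s);
        }
    }
}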

Constructors in Vector

The Vector class provides several constructors that allow for different ways of
initializing it.

1. Default Constructor: Initializes an empty Vector with an initial capacity
of 10.

Vector<String> vector = new Vector<>();

2. Constructor with Initial Capacity: Allows you to specify the initial capacity
of the Vector. If you know in advance that your Vector will need a specific
size, this can avoid unnecessary resizing.

Vector<String> vector = new Vector<>(20); // Initial capacity is 20

3. Constructor with Initial Capacity and Capacity Increment: Here, you can
specify both the initial capacity and the increment by which the Vector will
grow when it needs more space.

Vector<String> vector = new Vector<>(20, 5); // Starts with capacity 20 and grows by 5 when full

Common Methods in the Vector Class

The Vector class shares many of the methods common to other classes in the
Collection Framework (such as ArrayList), but here are a few important
methods and how they work:

1. Adding Elements:
o add(E element): Adds the specified element to the end of the Vector.
o add(int index, E element): Inserts the element at the specified
position in the Vector.

Removing Elements:

• remove(Object o): Removes the first occurrence of the specified element


from the Vector.
• remove(int index): Removes the element at the specified index.

Getting Elements:

• get(int index): Retrieves the element at the specified index

Size and Capacity:

• size(): Returns the number of elements in the Vector.


• capacity(): Returns the current capacity of the Vector (the size of the
underlying array).
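A short example exercising these constructors and methods (the printed capacity value assumes the capacity-increment behaviour described above):

import java.util.Vector;

public class VectorMethodsDemo {
    public static void main(String[] args) {
        Vector<String> v = new Vector<>(2, 3); // capacity 2, grows by 3
        v.add("A");
        v.add("B");
        v.add(1, "C");                        // insert at index 1 -> [A, C, B]
        System.out.println(v.get(0));         // A
        System.out.println(v.size());         // 3
        System.out.println(v.capacity());     // 5 (2 plus one increment of 3)
        v.remove("C");                        // remove by value -> [A, B]
        v.remove(0);                          // remove by index -> [B]
        System.out.println(v);                // [B]
    }
}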

1. Develop a program to implement the List interface using the Vector class and explore all its methods

package hello;

import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.ListIterator;
import java.util.Vector;

public class MyList<T> implements List<T> {

    private Vector<T> vector;

    public MyList() {
        vector = new Vector<>();
    }

    @Override
    public int size() {
        return vector.size();
    }

    @Override
    public boolean isEmpty() {
        return vector.isEmpty();
    }

    @Override
    public boolean contains(Object o) {
        return vector.contains(o);
    }

    @Override
    public Iterator<T> iterator() {
        return vector.iterator();
    }

    @Override
    public Object[] toArray() {
        return vector.toArray();
    }

    @Override
    public <U> U[] toArray(U[] a) {
        return vector.toArray(a);
    }

    @Override
    public boolean add(T t) {
        return vector.add(t);
    }

    @Override
    public boolean remove(Object o) {
        return vector.remove(o);
    }

    @Override
    public boolean containsAll(Collection<?> c) {
        return vector.containsAll(c);
    }

    @Override
    public boolean addAll(Collection<? extends T> c) {
        return vector.addAll(c);
    }

    @Override
    public boolean addAll(int index, Collection<? extends T> c) {
        return vector.addAll(index, c);
    }

    @Override
    public boolean removeAll(Collection<?> c) {
        return vector.removeAll(c);
    }

    @Override
    public boolean retainAll(Collection<?> c) {
        return vector.retainAll(c);
    }

    @Override
    public void clear() {
        vector.clear();
    }

    @Override
    public T get(int index) {
        return vector.get(index);
    }

    @Override
    public T set(int index, T element) {
        return vector.set(index, element);
    }

    @Override
    public void add(int index, T element) {
        vector.add(index, element);
    }

    @Override
    public T remove(int index) {
        return vector.remove(index);
    }

    @Override
    public int indexOf(Object o) {
        return vector.indexOf(o);
    }

    @Override
    public int lastIndexOf(Object o) {
        return vector.lastIndexOf(o);
    }

    @Override
    public ListIterator<T> listIterator() {
        return vector.listIterator();
    }

    @Override
    public ListIterator<T> listIterator(int index) {
        return vector.listIterator(index);
    }

    @Override
    public List<T> subList(int fromIndex, int toIndex) {
        return vector.subList(fromIndex, toIndex);
    }

    @Override
    public String toString() {
        return vector.toString();
    }

    public static void main(String[] args) {
        MyList<String> list = new MyList<>();
        list.add("Apple");
        list.add("Banana");
        list.add("Cherry");
        System.out.println(list); // [Apple, Banana, Cherry]
        list.remove(0);
        System.out.println(list); // [Banana, Cherry]
    }
}

Output:

[Apple, Banana, Cherry]

[Banana, Cherry]

1.8. Temporal manipulation and persistence

Temporal manipulation and persistence in data structures is an interesting
concept that relates to how we can manage and keep track of data over time,
often allowing us to "go back" to previous states or even "travel through time"
in a computational sense. Let's break this down in a very intuitive way.

1. Temporal Manipulation:

Think of temporal manipulation in data structures like keeping a record of
every change that has ever happened to your data, so you can access or even
revert to any point in time. Imagine your data structure is a diary, where you
write down something new every day. Temporal manipulation allows you to:

• Read any entry from any past day.


• Look at what your diary looked like at any specific day in history.
• Potentially undo mistakes by returning to an earlier version of your diary.

In technical terms, temporal manipulation refers to operations that allow us to
interact with different states of the data structure across time. A common example
of this is version control, like what you see in Git. When you work on a file,
every time you commit changes, a snapshot of that file is taken. You can then go
back and retrieve the file from any previous commit. The underlying data
structure isn't just the current state of the file but rather all the states it has gone
through.

Analogy:

Imagine you’re writing an essay on a computer. Every time you hit "save," the
computer remembers that exact version of the essay. You can not only access the
most recent version but also any earlier versions. Temporal manipulation allows
you to:

• Jump back to "yesterday’s saved version."


• Compare today’s essay to what you wrote last week.
• Rewind and start again from any point in the past.

This idea is useful for programs or systems that need to handle undo functionality,
like text editors or games where you can "rewind" the state of the game to an
earlier point.

2. Persistence in Data Structures:

Persistence refers to the ability of a data structure to maintain previous
versions of itself even after changes are made. A persistent data structure
remembers all of its previous versions or states.

Here, the persistence is like keeping track of every single diary entry, even after
you’ve updated or changed something. This means:

• Every version of your diary is accessible.


• Even after you add new information, the old information doesn’t get
erased; instead, a new version is created.

There are different levels of persistence:

• Partial persistence: You can access any previous version of the data, but
you can only modify the latest version.
• Full persistence: You can access and modify any version of the data,
meaning you can go back to an old version, change it, and create a new
timeline from that point (like a "branch" in Git).

• Confluent persistence: You can merge data from two different timelines
together. This is more advanced and useful for cases where two
independent changes might need to be reconciled.

Analogy:

Let’s go back to the diary example. A persistent data structure would be like
keeping every single version of your diary in a huge filing cabinet. Even when
you make a new diary entry, all the old versions stay in the cabinet, untouched.
You could:

• Go back and read a diary entry from any day.


• Recreate a new version of the diary starting from an older one (e.g., you
didn’t like the entries from last week, so you revert to the week before and
write something new from there).

3. How It Works in Data Structures:

Now, technically speaking, how do we implement this? The key idea is to make
only partial copies of the data when changes occur, rather than copying the whole
structure every time.

Imagine you have a tree (like a binary tree). Instead of making a full copy of the
tree every time a small change is made, we only copy the parts of the tree that are
affected by the change. The rest of the tree can stay the same. So, if you change
just one leaf of the tree, we only need to make a new version of that leaf and its
direct parent nodes, while the rest of the tree remains the same across all versions.

This concept is called path copying, where only the path from the modified node
to the root is copied, and everything else is shared. It saves time and memory
because we’re not duplicating the entire structure each time we make a change.
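A hedged Java sketch of path copying for a persistent binary search tree is shown below; the Node class and insert helper are illustrative. Inserting a key copies only the nodes on the path from the root to the insertion point, while every untouched subtree is shared between the old and the new version:

// Path copying in a persistent binary search tree (illustrative sketch).
final class Node {
    final int key;
    final Node left, right; // immutable: subtrees are shared across versions
    Node(int key, Node left, Node right) { this.key = key; this.left = left; this.right = right; }
}

class PersistentBst {
    // Returns the root of a NEW version; only nodes on the search path are copied.
    static Node insert(Node root, int key) {
        if (root == null) return new Node(key, null, null);
        if (key < root.key) return new Node(root.key, insert(root.left, key), root.right);
        if (key > root.key) return new Node(root.key, root.left, insert(root.right, key));
        return root; // key already present: share the whole subtree
    }

    public static void main(String[] args) {
        Node v1 = insert(insert(insert(null, 5), 3), 8); // version 1: {3, 5, 8}
        Node v2 = insert(v1, 4);                          // version 2: {3, 4, 5, 8}
        // v1 is untouched; v2 shares v1's right subtree (the node holding 8).
        System.out.println(v1.right == v2.right);         // true: structural sharing
    }
}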

Analogy:

Let’s say you’re writing a 100-page book and decide to change just one sentence
on page 50. Instead of printing out an entirely new 100-page book just for that
one change, you only print a new version of page 50. You then staple this new
page onto the old book, leaving all the other pages intact. The next time you want
to see the latest version, you just flip to page 50 and read the updated sentence.

Example: Persistent Linked List

A persistent linked list is an example where, instead of modifying the list in
place when adding or removing elements, a new version of the list is created while
keeping the old version intact. Here's a step-by-step breakdown:

1. Let’s say we have a linked list: A -> B -> C.


2. Now, if we want to add D to the list, rather than modifying C to point to D,
we create a new version of the list: A -> B -> C -> D. However, the original
list (A -> B -> C) still exists in memory.
3. If you decide later that you want to revert to the old list, you still have that
reference!

This way, the data structure becomes immutable in a sense, because once you
create a version, it cannot be changed—only new versions can be created.

Real-World Applications:

1. Version Control Systems (Git): Each commit is essentially a snapshot
(version) of your project, allowing you to go back in time or branch off
into new versions.
2. Functional Programming: Many functional programming languages
(like Haskell) heavily use persistent data structures to maintain
immutability. Immutability is crucial for avoiding side effects, which
makes reasoning about programs easier.
3. Undo Operations in Software: Text editors, design software, and games
need to allow users to revert actions, which often rely on some form of
persistent data structure under the hood.

1.8.1. Implement temporal data structures using the standard implementation in Python (NumPy).

import numpy as np

class TemporalArray:

    def __init__(self):
        # Initialize with an empty list of states
        self.history = []  # List to store the historical states (snapshots)
        self.current_state = None  # Tracks the latest state

    def initialize(self, initial_state):
        """Initializes the temporal data structure with the first state."""
        self.current_state = np.array(initial_state)  # Convert the initial state to a NumPy array
        self.history.append(self.current_state.copy())  # Store the initial state as the first snapshot

    def update(self, new_state):
        """Updates the current state and stores the snapshot in history."""
        self.current_state = np.array(new_state)  # Update the current state
        self.history.append(self.current_state.copy())  # Store the new state in history

    def get_state(self, version):
        """Retrieves a specific version of the state."""
        if version < 0 or version >= len(self.history):
            raise IndexError("Version index out of range.")
        return self.history[version]  # Return the specified historical state

    def show_history(self):
        """Displays all recorded states in history."""
        print("History of States:")
        for idx, state in enumerate(self.history):
            print(f"Version {idx}: {state}")

# Usage Example
temporal_array = TemporalArray()

# Initialize with an initial state
temporal_array.initialize([1, 2, 3])
print("Initial State:", temporal_array.get_state(0))

# Update with new states
temporal_array.update([4, 5, 6])
temporal_array.update([7, 8, 9])

# Retrieve and display a specific past state
print("Retrieved State (Version 1):", temporal_array.get_state(1))

# Show all states stored in history
temporal_array.show_history()

Output:

Initial State: [1 2 3]
Retrieved State (Version 1): [4 5 6]
History of States:
Version 0: [1 2 3]
Version 1: [4 5 6]
Version 2: [7 8 9]

1.8.2. Implement temporal data structures using the standard implementation in NDArray (Java).

class TemporalArray {

    private int[] values;
    private long[] timestamps;
    private int size;
    private int capacity;

    public TemporalArray(int capacity) {
        this.capacity = capacity;
        this.values = new int[capacity];
        this.timestamps = new long[capacity];
        this.size = 0;
    }

    public void add(int value) {
        if (size == capacity) {
            System.out.println("Array is full. Cannot add more elements.");
            return;
        }
        values[size] = value;
        timestamps[size] = System.currentTimeMillis();
        size++;
    }

    public int getValueAtTime(long timestamp) {
        for (int i = 0; i < size; i++) {
            if (timestamps[i] == timestamp) {
                return values[i];
            }
        }
        throw new IllegalArgumentException("No value found at the given timestamp.");
    }

    public void printArray() {
        for (int i = 0; i < size; i++) {
            System.out.println("Value: " + values[i] + ", Timestamp: " + timestamps[i]);
        }
    }

    public static void main(String[] args) {
        TemporalArray temporalArray = new TemporalArray(10);

        temporalArray.add(5);
        temporalArray.add(10);
        temporalArray.add(15);

        temporalArray.printArray();

        // Example of retrieving a value at a specific timestamp
        long timestamp = temporalArray.timestamps[1];
        System.out.println("Value at timestamp " + timestamp + ": " + temporalArray.getValueAtTime(timestamp));
    }
}

*********

All The Best

Om Namah Shivaya

