DSA-2
UNIT 1
Syllabus: Graphs - Representations of graphs, Adjacency and Incidence matrices, Adjacency List, Dynamic Graphs and persistence - Sparse Matrices - Key-Value and Structural implementations, Scalability and data-driven parallelism, Block and band matrices - Generalized Matrix and Vector interface - Standard implementations in NumPy (Python) and NDArray (Java) - Temporal manipulation and persistence
There are many types of graphs, but here we concentrate only on:
1. Undirected Graph
2. Directed Graph
1. Undirected Graph:
• A graph is called an undirected graph if all the edges between its nodes are undirected.
• An undirected edge is one for which we cannot distinguish the node it starts from and the node it ends at. All the edges of a graph must be undirected for it to be called an undirected graph; none of the edges of an undirected graph has a direction.
Fig 1.1 Undirected Graph
• The graph displayed above is an example of an undirected graph. It has four vertices, named A, B, C, and D.
• There are also exactly four edges between these vertices, and none of the edges has a specific direction.
2. Directed Graph:
• A graph is called a directed graph if every edge has a direction, i.e., each edge can only be traversed from its source node to its destination node.
• Directed edges are usually drawn as arrows, with the arrowhead pointing at the destination node.
Representations of Graph
Here are the two most common ways to represent a graph. For simplicity, we consider only unweighted graphs in this section.
1. Adjacency Matrix
2. Adjacency List
Adjacency Matrix
An adjacency matrix is a way of representing a graph as a matrix of booleans (0s and 1s).
Let's assume there are n vertices in the graph. So, create a 2D matrix adjMat[n][n] of dimension n x n.
• If there is an edge from vertex i to vertex j, mark adjMat[i][j] as 1.
• If there is no edge from vertex i to vertex j, mark adjMat[i][j] as 0.
Representation of Undirected Graph as Adjacency Matrix:
The figure below shows an undirected graph. Initially, the entire matrix is initialized to 0. If there is an edge from source to destination, we set 1 in both cells (adjMat[source][destination] and adjMat[destination][source]) because the edge can be traversed either way.
import java.util.Scanner;

public class AdjacencyMatrix {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter the number of vertices: ");
        int numVertices = scanner.nextInt();
        int[][] adjMat = new int[numVertices][numVertices];
        // Add edges
        System.out.print("Enter the number of edges: ");
        int numEdges = scanner.nextInt();
        for (int i = 0; i < numEdges; i++) {
            System.out.print("Enter the source and destination of edge " + (i + 1) + ": ");
            int src = scanner.nextInt();
            int dest = scanner.nextInt();
            adjMat[src][dest] = 1; // mark the edge
            adjMat[dest][src] = 1; // undirected: mark the reverse direction too
        }
        scanner.close(); // close only after all input has been read
        // Print the matrix
        System.out.println("Adjacency Matrix:");
        for (int i = 0; i < numVertices; i++) {
            for (int j = 0; j < numVertices; j++) {
                System.out.print(adjMat[i][j]);
            }
            System.out.println();
        }
    }
}
Output:
Enter the number of vertices: 4
Enter the number of edges: 3
Enter the source and destination of edge 1: 0 1
Enter the source and destination of edge 2: 1 2
Enter the source and destination of edge 3: 2 3
Adjacency Matrix:
0100
1010
0101
0010
2. Adjacency List
An array of lists is used to store the edges between vertices. The size of the array is equal to the number of vertices (i.e., n). Each index in this array represents a specific vertex in the graph: the entry at index i contains a linked list of the vertices that are adjacent to vertex i.
Let's assume there are n vertices in the graph. So, create an array of lists of size n, called adjList[n].
• adjList[0] will hold all the nodes connected to (neighbours of) vertex 0.
• adjList[1] will hold all the nodes connected to (neighbours of) vertex 1, and so on.
Representation of Undirected Graph as Adjacency List:
The undirected graph below has 3 vertices. So, an array of lists of size 3 is created, where each index represents a vertex. Vertex 0 has two neighbours (1 and 2), so insert vertices 1 and 2 at index 0 of the array. Similarly, vertex 1 has two neighbours (2 and 0), so insert vertices 2 and 0 at index 1 of the array. Likewise, for vertex 2, insert its neighbours at index 2.
Representation of Directed Graph as Adjacency List:
The directed graph below has 3 vertices. So, an array of lists of size 3 is created, where each index represents a vertex. Vertex 0 has no outgoing neighbours. Vertex 1 has two neighbours (0 and 2), so insert vertices 0 and 2 at index 1 of the array. Likewise, for vertex 2, insert its neighbours at index 2.
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class AdjacencyList {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter the number of vertices: ");
        int numVertices = scanner.nextInt();
        List<List<Integer>> adjList = new ArrayList<>();
        for (int i = 0; i < numVertices; i++) adjList.add(new ArrayList<>()); // one list per vertex
        // Add edges
        System.out.print("Enter the number of edges: ");
        int numEdges = scanner.nextInt();
        for (int i = 0; i < numEdges; i++) {
            System.out.print("Enter the source and destination of edge " + (i + 1) + ": ");
            int src = scanner.nextInt();
            int dest = scanner.nextInt();
            adjList.get(src).add(dest); // edge src -> dest
            adjList.get(dest).add(src); // undirected: also dest -> src
        }
        scanner.close(); // close only after all input has been read
        // Print the adjacency list
        System.out.println("Adjacency List:");
        for (int i = 0; i < numVertices; i++) {
            System.out.print(i + ":");
            for (int neighbour : adjList.get(i)) System.out.print(" " + neighbour);
            System.out.println();
        }
    }
}
Output:
Enter the number of vertices: 4
Enter the number of edges: 3
Enter the source and destination of edge 1: 0 1
Enter the source and destination of edge 2: 1 2
Enter the source and destination of edge 3: 2 3
Adjacency List:
0: 1
1: 0 2
2: 1 3
3: 2
1.2 Dynamic Graphs:
A dynamic graph is like a regular graph, but it evolves over time. To break
this down, let’s first define a graph in simple terms:
1. Graph: A collection of nodes (also called vertices) connected by edges
(lines between the nodes). Think of a social network: people are nodes, and
friendships between them are edges.
2. Static Graph: This is a graph where the structure (nodes and edges)
doesn’t change once it’s set. For example, a road map where cities (nodes)
and roads (edges) are fixed and do not change.
However, in many real-world scenarios, graphs are not static. Relationships
and connections change. In a dynamic graph, nodes and edges can appear,
disappear, or change over time. This "dynamic" quality makes them powerful
for modeling complex, changing systems.
Analogies to Understand Dynamic Graphs
Imagine a train network where stations are nodes, and train routes are edges.
In a static graph, this network doesn’t change: trains always run the same
routes between fixed stations. But in reality, train routes can change daily due
to maintenance, new stations might open, and some routes might be closed
temporarily. A dynamic graph models these changes, capturing a more realistic
picture of how the train network evolves.
Another analogy is a social media network:
• When someone makes a new friend, an edge is added to the graph.
• If two friends stop communicating, the edge between them might weaken
or disappear.
• New people (nodes) join the network, and some leave, changing the graph
structure.
Why Dynamic Graphs are Necessary
Dynamic graphs are crucial because they model systems that change over
time. Let’s explore some reasons in detail:
1. Real-World Applications: Many networks in the real world are inherently
dynamic:
o Social Networks: Friendships, following relationships, and
interactions change over time.
o Communication Networks: The internet is constantly evolving;
nodes (devices) join, leave, or change their connections.
o Biological Networks: In cell biology, interactions between proteins,
genes, or neurons can fluctuate depending on environmental
conditions or internal states.
o Transportation Networks: Traffic patterns vary throughout the day,
road closures happen, and public transport routes can change.
A static graph cannot accurately capture the evolving nature of these networks.
Dynamic graphs, however, can represent the changes over time, providing
more insights into the behavior and properties of such systems.
2. Complex Pattern Detection: Many complex phenomena can only be
understood when changes are tracked over time. For instance:
o Disease Spread: To model how a disease spreads through a
population, you need to consider how interactions between
individuals (nodes) change over time. Dynamic graphs allow
researchers to simulate and study different scenarios, like how
reducing contact (cutting edges) affects the spread.
o Network Vulnerabilities: In cybersecurity, networks change as new
nodes are added or old ones removed. By studying how these
changes affect network security, one can identify weak points and
respond more effectively to threats.
3. Efficient Updates: When dealing with a large, constantly changing
network, recalculating everything from scratch every time a small change
occurs would be extremely inefficient. Dynamic graphs allow for
incremental updates, meaning you only need to adjust the parts of the
graph that change. This can significantly save computational time and
resources.
4. Prediction and Analysis: In dynamic graphs, historical changes can be
tracked, allowing us to predict future behavior. For instance, in a social
network, observing how certain relationships form or dissolve over time
might help in predicting future interactions or trends.
Technical Breakdown of Dynamic Graphs
From a technical perspective, dynamic graphs require specific algorithms and
data structures that can handle changes efficiently:
1. Data Structures: Dynamic graphs use specialized data structures to allow
quick updates:
o Dynamic adjacency lists: These lists efficiently track connections
(edges) for each node, allowing for easy insertion and deletion.
o Temporal edge lists: Instead of storing just the presence of an edge, they store when edges are active, enabling the tracking of temporal changes (see the sketch after this list).
2. Algorithms:
o Incremental algorithms: These update only the affected part of the
graph when a change occurs. For example, if a new road opens in a
city, only the routes connected to that road need recalculating, not
the entire city's road map.
o Streaming algorithms: In cases where the graph changes
continuously (like data packets in a communication network),
streaming algorithms can process updates on the fly without
requiring the entire graph to be stored in memory.
3. Time Complexity: The efficiency of operations in dynamic graphs (adding
or removing nodes/edges) often needs to be faster than recalculating static
graphs. Therefore, researchers have developed dynamic algorithms
optimized for different types of graph changes.
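As a concrete illustration of a temporal edge list, here is a minimal Java sketch (Java 16+ for records); the TemporalEdge record and its fields are illustrative names, not a standard API:

import java.util.ArrayList;
import java.util.List;

// Toy temporal edge list: each edge stores the interval [from, to)
// during which it is active, so past states of the graph can be
// queried by time instead of only the current state.
record TemporalEdge(int u, int v, long from, long to) {
    boolean activeAt(long t) { return from <= t && t < to; }
}

public class TemporalEdgeListDemo {
    public static void main(String[] args) {
        List<TemporalEdge> edges = new ArrayList<>();
        edges.add(new TemporalEdge(0, 1, 0, 10)); // edge 0-1 active for t in [0, 10)
        edges.add(new TemporalEdge(1, 2, 5, 20)); // edge 1-2 active for t in [5, 20)
        long t = 12; // query time: only edge 1-2 is still active
        for (TemporalEdge e : edges)
            if (e.activeAt(t))
                System.out.println(e.u() + " - " + e.v() + " is active at t = " + t);
    }
}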
Practical Example: Dynamic Shortest Paths
Imagine Google Maps, which uses a dynamic graph of the road network to
calculate routes. Roads (edges) can close or open, and traffic (weights on
edges) changes throughout the day. A static graph approach would mean
recalculating the entire route map every time a change happens. In contrast, a
dynamic graph approach would just update the parts of the map affected by
changes, allowing for real-time route adjustments.
The Importance of Dynamic Graphs in Data Science and Beyond
In data science, dynamic graphs are used for:
• Anomaly detection: Identifying unusual patterns, such as fraudulent
transactions in a banking network, based on changes over time.
• Community detection: In social networks, dynamic graphs help reveal
how communities form, merge, or split.
In computer science, dynamic graphs are key in:
• Routing algorithms: For adapting routes in changing networks like the
internet.
• Machine learning: In graph neural networks, modeling dynamic
relationships can improve predictions for social interactions or
recommendations.
A dynamic graph can be implemented with an adjacency-list structure that supports the following operations:
• Adding an Edge: Takes source and destination vertices as input, adds them to each
other's adjacency lists (for an undirected graph).
• Removing an Edge: Takes source and destination vertices as input, removes the
corresponding entries from their adjacency lists.
• Adding a Vertex: Simply adds a new ArrayList to represent the new vertex.
• Removing a Vertex: Removes the vertex's adjacency list and updates other vertices'
adjacency lists to remove any references to this vertex.
• Displaying the Graph: Prints the adjacency list representation of the graph.
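Below is a minimal Java sketch of these operations for an undirected graph; the class and method names are illustrative, not the original listing:

import java.util.ArrayList;
import java.util.List;

class DynamicGraph {
    private final List<List<Integer>> adj = new ArrayList<>();

    void addVertex() {
        adj.add(new ArrayList<>()); // a new vertex starts with an empty adjacency list
    }

    void addEdge(int u, int v) {
        adj.get(u).add(v); // undirected: record the edge in both lists
        adj.get(v).add(u);
    }

    void removeEdge(int u, int v) {
        adj.get(u).remove(Integer.valueOf(v)); // remove by value, not by index
        adj.get(v).remove(Integer.valueOf(u));
    }

    void removeVertex(int v) {
        adj.remove(v); // drop the vertex's own list
        for (List<Integer> list : adj) {
            list.remove(Integer.valueOf(v)); // drop references to it elsewhere
        }
        // Note: vertices numbered above v shift down by one index.
    }

    void display() {
        for (int i = 0; i < adj.size(); i++) {
            System.out.println(i + ": " + adj.get(i));
        }
    }

    public static void main(String[] args) {
        DynamicGraph g = new DynamicGraph();
        for (int i = 0; i < 3; i++) g.addVertex();
        g.addEdge(0, 1);
        g.addEdge(1, 2);
        g.display();        // 0: [1]  1: [0, 2]  2: [1]
        g.removeEdge(0, 1);
        g.display();        // 0: []   1: [2]     2: [1]
    }
}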
1.3 Persistence:
Persistence in data structures refers to the ability of a data structure to
preserve its previous versions even after it has been modified. In other words,
when a data structure is persistent, you can access and use any of its past states
without affecting the current or future states. This is useful for applications
like undo functionalities, version control systems, and time-travel debugging.
There are three main types of persistence in data structures:
1. Partial Persistence:
o In partial persistence, you can access all previous versions of the data
structure, but only the latest version can be modified.
o Essentially, you can "look back" at older states of the data structure,
but you can't change them. This is simpler to implement compared
to full persistence.
2. Full Persistence:
o In full persistence, you can access and modify any previous version
of the data structure.
o Any modification to an old version results in a new version being
created, allowing you to access multiple versions and create a tree
of versions.
3. Confluent Persistence:
o This is an extension of full persistence where you can combine
different versions of the data structure into a new version. This
merging of states allows for more complex operations and histories.
How Persistence is Achieved
When a data structure is modified in a persistent way, rather than changing the
structure in place, new nodes are created to represent the changes. The
unchanged parts of the data structure are shared across versions. This is
achieved through techniques like:
• Structural Sharing: Instead of copying the entire data structure, only the
parts that are modified are copied, while the unmodified parts are shared.
This minimizes memory usage.
• Immutable Data Structures: Many persistent data structures are built
using immutable objects, which inherently do not change once created.
Any modification results in a new object, preserving the original.
Example: Persistent Linked List
A persistent linked list works similarly to a standard linked list but retains
previous versions after modifications. If you add or remove an element, a new
version of the list is created by copying only the affected nodes. The rest of
the list shares nodes with the previous versions.
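To make this concrete, here is a minimal Java sketch of a persistent linked list; prepending creates a new head node and shares the entire old list as its tail (the class and method names are illustrative):

final class PList {
    final int value;
    final PList next;
    PList(int value, PList next) { this.value = value; this.next = next; }

    // Returns a NEW list; the old list is left untouched.
    static PList prepend(PList list, int value) {
        return new PList(value, list);
    }

    static String show(PList list) {
        StringBuilder sb = new StringBuilder("[");
        for (PList n = list; n != null; n = n.next)
            sb.append(n.value).append(n.next != null ? " " : "");
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        PList v1 = prepend(null, 3); // version 1: [3]
        PList v2 = prepend(v1, 2);   // version 2: [2 3], shares version 1's node
        PList v3 = prepend(v2, 1);   // version 3: [1 2 3]
        System.out.println(show(v1)); // [3]     -- old versions stay accessible
        System.out.println(show(v2)); // [2 3]
        System.out.println(show(v3)); // [1 2 3]
    }
}

Because nodes are never mutated, all three versions remain valid and share structure instead of duplicating it.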
Use Cases of Persistent Data Structures
• Undo/Redo Operations: Text editors or graphics software can implement
undo/redo functionalities using persistent data structures.
• Version Control: Tools like Git use persistent-like data structures to
manage changes in code repositories.
• Functional Programming: Persistent data structures are common in
functional programming languages (e.g., Haskell, Clojure) where
immutability is a key concept.
Advantages
• Access to History: You can retrieve any previous state, which is useful for
debugging and historical analysis.
• Immutability: Persistent data structures promote immutability, which can
help in multithreaded environments where shared data can lead to
inconsistencies.
Disadvantages
• Overhead: Maintaining multiple versions can increase memory usage and
computational overhead.
• Complexity: Implementing persistent data structures can be more complex
than their ephemeral counterparts.
In summary, persistence in data structures allows them to maintain historical
versions, providing benefits in various applications where accessing past
states is crucial. This is typically achieved using structural sharing and
immutability, making persistence a key concept in both functional
programming and real-world applications like version control systems.
1.4 Sparse Matrices
A sparse matrix is a matrix in which most of the elements are zero. Instead of storing every element, we store only the non-zero elements, which saves both memory and computation.
1.4.1. 3-Column Representation (Triplet Form)
In the 3-column (triplet) representation, a sparse matrix is stored using three
arrays to keep track of the non-zero elements:
• Row Index: The row index of each non-zero element.
• Column Index: The column index of each non-zero element.
• Value: The value of the non-zero element.
This format is easy to understand and is suitable for storing sparse matrices
where the non-zero elements are randomly distributed.
Example
Consider the following sparse matrix:
0 0 3 0
0 5 0 0
7 0 0 9
• Non-zero elements: 3, 5, 7, 9.
The 3-column representation of this matrix would be:
Row Index: 0 1 2 2
Column Index: 2 1 0 3
Value: 3 5 7 9
import java.util.Scanner;

public class SparseMatrixTriplet {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int[][] sm = new int[3][3];
        System.out.println("Enter the elements of the 3x3 matrix:");
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                sm[i][j] = in.nextInt();
        in.close();
        System.out.println("The sparse matrix is\n");
        for (int i = 0; i < 3; i++) {
            System.out.println();
            for (int j = 0; j < 3; j++) {
                System.out.printf("%d\t", sm[i][j]);
            }
        }
        System.out.println();
        // Count zero and non-zero elements
        int z = 0, nz = 0;
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                if (sm[i][j] == 0) z++; else nz++;
        if (nz > z) {
            System.out.println("The matrix is dense");
        } else {
            // Build the triplet table: one row (row index, column index, value) per non-zero
            int k = 0;
            int[][] s = new int[nz][3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    if (sm[i][j] != 0) {
                        s[k][0] = i;
                        s[k][1] = j;
                        s[k][2] = sm[i][j];
                        k++;
                    }
            System.out.println("The triplet representation is:");
            for (int i = 0; i < nz; i++) {
                for (int j = 0; j < 3; j++)
                    System.out.printf("%d\t", s[i][j]);
                System.out.println();
            }
        }
    }
}
1.4.2. Compressed Sparse Row (CSR) Representation
The CSR format stores the non-zero values row by row, together with their column indices and a row-pointer array that marks where each row begins. Its main advantage:
• Fast Access: Enables quick traversal and matrix-vector multiplication because of its row-wise compression.
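As a small illustration, here is a sketch of the CSR arrays for the 3x4 example matrix from the triplet section, together with a matrix-vector product; the array names are illustrative:

public class CsrDemo {
    public static void main(String[] args) {
        // Matrix: row 0 has 3 at column 2; row 1 has 5 at column 1;
        // row 2 has 7 at column 0 and 9 at column 3.
        double[] values = {3, 5, 7, 9}; // non-zero values, stored row by row
        int[] colIndex  = {2, 1, 0, 3}; // column of each stored value
        int[] rowPtr    = {0, 1, 2, 4}; // where each row starts in values
        double[] x = {1, 1, 1, 1};      // vector to multiply by
        double[] y = new double[3];     // result, one entry per row
        for (int i = 0; i < 3; i++)
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++)
                y[i] += values[k] * x[colIndex[k]];
        System.out.println(java.util.Arrays.toString(y)); // [3.0, 5.0, 16.0]
    }
}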
1.5 Scalability and Data-Driven Parallelism
Think of a car assembly line: instead of one worker building an entire car, one worker fits the wheels, another the body, and so on. This way, the work is done simultaneously, and the cars are assembled much faster.
Data-Driven Parallelism works on the same principle. It’s about breaking
down a large dataset or computational task into smaller pieces and processing
them simultaneously across multiple processors or machines.
How Data-Driven Parallelism Works:
• Divide: The data or tasks are divided into smaller chunks.
• Distribute: These chunks are sent to different processors or nodes.
• Conquer: Each processor works on its chunk independently.
• Combine: The results are aggregated to form the final output.
For example, imagine sorting a massive list of numbers:
1. Split the list into smaller sub-lists.
2. Each processor sorts its assigned sub-list.
3. Finally, all sub-lists are merged into one sorted list.
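Java's standard library already applies this divide/distribute/conquer/combine pattern; here is a minimal sketch using Arrays.parallelSort, which splits the array into chunks, sorts them on a pool of worker threads, and merges the results:

import java.util.Arrays;
import java.util.Random;

public class ParallelSortDemo {
    public static void main(String[] args) {
        int[] numbers = new Random(42).ints(1_000_000).toArray(); // the "massive list"
        Arrays.parallelSort(numbers); // divide, sort sub-ranges in parallel, merge
        System.out.println("Smallest three: " + numbers[0] + ", " + numbers[1] + ", " + numbers[2]);
    }
}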
Key Concepts in Data-Driven Parallelism:
• Data Partitioning: Dividing data into independent chunks. Think of
slicing a pie; each slice can be eaten separately.
• Task Scheduling: Deciding which processor gets which chunk of data. It’s
like assigning each worker their part of the assembly process.
• Load Balancing: Ensuring each processor gets a fair share of work, so no
one is overwhelmed, similar to making sure no worker has too many tasks
while others have too few.
Technical Details and Challenges
1. Synchronization: Ensuring that all processors work well together. Imagine a café where some workers are too fast and others too slow; it would create a bottleneck.
2. Communication Overhead: The cost of coordinating between processors. It's like the time café staff spend talking to each other instead of serving customers.
3. Scalability Limits: Sometimes, adding more processors doesn’t help and
can even slow things down due to increased overhead. This is similar to
having too many workers in a small kitchen, where they start getting in
each other's way.
4. Fault Tolerance: Handling failures gracefully. If one worker makes a
mistake, the café shouldn’t come to a halt. Similarly, if one processor fails,
the system should continue functioning.
Applications of Data-Driven Parallelism:
• Big Data Processing: Tools like Hadoop and Spark use parallelism to
process massive datasets efficiently.
• Scientific Computing: Simulations and models (like weather forecasting)
often rely on breaking tasks into parallelizable chunks.
• Machine Learning: Training models on large datasets is sped up
significantly using data-driven parallelism.
Questions for You:
1. Scalability: Are you familiar with concepts like load balancing or
distributed systems?
2. Parallelism: Have you encountered terms like multithreading,
MapReduce, or GPU computing?
3. Synchronization: Do you know about concepts like locks, semaphores, or
barriers in parallel computing?
1.6 Block Matrices
A block matrix is a large matrix partitioned into smaller submatrices, called blocks. For example, a matrix A can be partitioned as:

A = | A11 A12 |
    | A21 A22 |

Here:
• Each A_ij represents a block, which itself could be a matrix (not just a single number).
• The whole matrix A is made up of these smaller blocks.
Imagine these blocks as neighborhoods, where each neighborhood (block) contains its own matrix of houses (elements).
Why Use Block Matrices?
There are several practical reasons for organizing a matrix into blocks:
1. Efficiency: When working with large matrices, it's often more efficient to
treat sections (blocks) at once rather than dealing with each individual
element. For instance, multiplying two large matrices by multiplying
blocks can reduce computational complexity.
2. Structure Preservation: Sometimes, the block structure reveals
something about the underlying problem. For instance, in certain physical
systems, blocks might represent interactions between distinct subsystems.
By organizing things into blocks, we keep the interactions within the
subsystems clear and separate from the overall system.
3. Parallel Computation: In computer science, dividing a problem into
smaller blocks allows for parallel processing, where each block can be
processed independently on different processors, speeding up computation.
Example: A Simple Block Matrix
Each of the submatrices within the larger matrix is a block. The whole matrix A can be manipulated at the block level, and operations like addition and multiplication can be performed on these blocks rather than on individual elements.
Matrix Multiplication with Blocks
Matrix multiplication is one of the operations that block matrices help
simplify. The rule for matrix multiplication still holds at the block level: the
product of two block matrices is found by multiplying corresponding blocks,
just like multiplying two numbers in a regular matrix.
Let's say we have two block matrices:

A = | A11 A12 |        B = | B11 B12 |
    | A21 A22 |            | B21 B22 |

Their product is:

AB = | A11 B11 + A12 B21    A11 B12 + A12 B22 |
     | A21 B11 + A22 B21    A21 B12 + A22 B22 |
This looks almost exactly like regular matrix multiplication, except now we
are multiplying and adding entire blocks (submatrices) instead of individual
elements.
Recap
• Block matrices help organize large matrices into smaller, more manageable
submatrices.
• Each block can be thought of as its own matrix, making operations on the large matrix
easier and more structured.
• Operations like matrix multiplication can be done at the block level, which is more
efficient and can reveal more about the structure of the problem.
• Special block matrices, like block diagonal matrices, have further simplifying
properties.
1. Write a Java program to perform matrix addition between two block matrices:
package hello;
import java.util.Scanner;

class AddBlockMatrix {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter the number of 2x2 blocks per row/column: ");
        int blocks = input.nextInt();
        int matrixSize = blocks * 2;
        int[][] submatrix1 = new int[2][2];
        int[][] submatrix2 = new int[2][2];
        int[][] totalsubmatrix1 = new int[matrixSize][matrixSize];
        int[][] totalsubmatrix2 = new int[matrixSize][matrixSize];
        // Input and placement of submatrix1 into totalsubmatrix1
        for (int i = 0; i < blocks; i++) {
            for (int j = 0; j < blocks; j++) {
                System.out.printf("Enter the 2x2 submatrix1 at block (%d, %d):\n", i, j);
                for (int k = 0; k < 2; k++)
                    for (int l = 0; l < 2; l++)
                        submatrix1[k][l] = input.nextInt();
                // Place the submatrix1 into totalsubmatrix1
                for (int k = 0; k < 2; k++)
                    for (int l = 0; l < 2; l++)
                        totalsubmatrix1[i * 2 + k][j * 2 + l] = submatrix1[k][l];
            }
        }
        // Input and placement of submatrix2 into totalsubmatrix2
        for (int i = 0; i < blocks; i++) {
            for (int j = 0; j < blocks; j++) {
                System.out.printf("Enter the 2x2 submatrix2 at block (%d, %d):\n", i, j);
                for (int k = 0; k < 2; k++)
                    for (int l = 0; l < 2; l++)
                        submatrix2[k][l] = input.nextInt();
                for (int k = 0; k < 2; k++)
                    for (int l = 0; l < 2; l++)
                        totalsubmatrix2[i * 2 + k][j * 2 + l] = submatrix2[k][l];
            }
        }
        // Display totalsubmatrix1
        System.out.println("The reconstructed matrix1 is:");
        for (int i = 0; i < matrixSize; i++) {
            for (int j = 0; j < matrixSize; j++)
                System.out.print(totalsubmatrix1[i][j] + "\t");
            System.out.println();
        }
        // Display totalsubmatrix2
        System.out.println("The reconstructed matrix2 is:");
        for (int i = 0; i < matrixSize; i++) {
            for (int j = 0; j < matrixSize; j++)
                System.out.print(totalsubmatrix2[i][j] + "\t");
            System.out.println();
        }
        // Add the two matrices (block placement makes this element-wise addition)
        System.out.println("The sum matrix is:");
        for (int i = 0; i < matrixSize; i++) {
            for (int j = 0; j < matrixSize; j++)
                System.out.print((totalsubmatrix1[i][j] + totalsubmatrix2[i][j]) + "\t");
            System.out.println();
        }
        input.close();
    }
}
1.7 Band Matrices
A band matrix is a special kind of matrix where non-zero elements are
concentrated around the diagonal, with all other elements being zero. In a band
matrix, the non-zero elements form a "band" or "stripe" near the diagonal,
while everything far away from this band is zero.
For example, consider the matrix:

2 1 0 0
1 3 4 0
0 5 6 7
0 0 8 9

In this example:
• The non-zero elements appear near the diagonal, forming a "band."
• Farther away from the diagonal (above or below the non-zero elements),
everything is zero.
Key Properties of a Band Matrix
A band matrix is defined by two key numbers:
1. Upper bandwidth (u): The maximum distance from the main diagonal
where non-zero elements appear above the diagonal.
2. Lower bandwidth (l): The maximum distance from the main diagonal
where non-zero elements appear below the diagonal.
For example, in a matrix with upper bandwidth u = 1 and lower bandwidth l = 1 (a tridiagonal matrix, such as the one shown above), all non-zero elements lie on the main diagonal, the diagonal immediately above it, and the diagonal immediately below it.
Here, u = 1 because the non-zero elements extend only one position above the diagonal, and l = 1 because they extend only one position below the diagonal.
There are special cases of band matrices that come up frequently in practice:
1. Diagonal Matrix: This is the simplest case, where only the main diagonal contains non-zero elements and both bandwidths are zero (u = 0, l = 0).
2. Tridiagonal Matrix: u = 1 and l = 1, as in the example above.
3. Pentadiagonal Matrix: u = 2 and l = 2, so the band is wider and non-zero elements can extend up to two positions above and below the main diagonal.
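The bandwidths can be computed directly from these definitions; here is a small Java sketch (method names are illustrative), applied to the tridiagonal example above:

public class Bandwidth {
    // Largest distance j - i at which a non-zero appears above the diagonal
    static int upperBandwidth(int[][] a) {
        int u = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a[i].length; j++)
                if (a[i][j] != 0) u = Math.max(u, j - i);
        return u;
    }
    // Largest distance i - j at which a non-zero appears below the diagonal
    static int lowerBandwidth(int[][] a) {
        int l = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < i; j++)
                if (a[i][j] != 0) l = Math.max(l, i - j);
        return l;
    }
    public static void main(String[] args) {
        int[][] tri = {
            {2, 1, 0, 0},
            {1, 3, 4, 0},
            {0, 5, 6, 7},
            {0, 0, 8, 9},
        };
        System.out.println("u = " + upperBandwidth(tri)); // u = 1
        System.out.println("l = " + lowerBandwidth(tri)); // l = 1
    }
}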
1. Write a Java program to perform matrix addition between two band matrices:
package hello;
import java.util.Scanner;

class AddTwoMatrix {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Enter the number of rows and columns of the matrices");
        int m = in.nextInt(), n = in.nextInt();
        int[][] first = new int[m][n], second = new int[m][n], sum = new int[m][n];
        System.out.println("Enter the elements of first matrix");
        for (int c = 0; c < m; c++)
            for (int d = 0; d < n; d++)
                first[c][d] = in.nextInt();
        System.out.println("Enter the elements of second matrix");
        for (int c = 0; c < m; c++)
            for (int d = 0; d < n; d++)
                second[c][d] = in.nextInt();
        System.out.println("Sum of the entered matrices:");
        for (int c = 0; c < m; c++) {
            for (int d = 0; d < n; d++) {
                sum[c][d] = first[c][d] + second[c][d]; // element-wise addition
                System.out.print(sum[c][d] + "\t");
            }
            System.out.println();
        }
        in.close();
    }
}
2. Write a Java program to perform matrix multiplication between two band matrices:
package hello;
import java.util.Scanner;

class MatrixMultiplication {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.println("Enter the number of rows and columns of first matrix");
        int m = in.nextInt(), n = in.nextInt();
        int[][] first = new int[m][n];
        System.out.println("Enter elements of first matrix");
        for (int c = 0; c < m; c++)
            for (int d = 0; d < n; d++)
                first[c][d] = in.nextInt();
        System.out.println("Enter the number of rows and columns of second matrix");
        int p = in.nextInt(), q = in.nextInt();
        if (n != p) {
            System.out.println("The matrices can't be multiplied with each other.");
        } else {
            int[][] second = new int[p][q], multiply = new int[m][q];
            System.out.println("Enter elements of second matrix");
            for (int c = 0; c < p; c++)
                for (int d = 0; d < q; d++)
                    second[c][d] = in.nextInt();
            System.out.println("Product of the entered matrices:");
            for (int c = 0; c < m; c++) {
                for (int d = 0; d < q; d++) {
                    int sum = 0;
                    for (int k = 0; k < p; k++)
                        sum += first[c][k] * second[k][d]; // row of first times column of second
                    multiply[c][d] = sum;
                    System.out.print(multiply[c][d] + "\t");
                }
                System.out.print("\n");
            }
        }
        in.close();
    }
}
1.8 Generalized Matrices
The concept of a generalized matrix refers to matrices that go beyond the typical
square or rectangular arrangement of numbers. These matrices are used to handle
more complex mathematical structures and scenarios, where traditional matrices
might not be enough. To explain this concept, let’s break it down through a simple
analogy and then dive into some technical details.
Imagine you're organizing a series of boxes. In most cases, each box has a well-
defined structure, like a square or rectangular box (similar to a traditional matrix).
But sometimes, you need to work with more complex shapes, like boxes that
aren't rectangular, or boxes that have compartments inside of them. These
complex shapes are like generalized matrices—they go beyond the standard
rectangular structure to handle more intricate patterns.
(Students are advised to explore matrices 1, 4, and 5 in the list above, since we have already explored matrices 2 and 3.)
1.8.1 Vector Interfaces:
Let’s break this down step by step using an analogy and then dive into the
technical details of the Vector class.
Constructors in Vector
The Vector class provides several constructors that allow for different ways of
initializing it.
1. Default Constructor: Initializes an empty Vector with an initial capacity of 10.
2. Constructor with Initial Capacity: Allows you to specify the initial capacity of the Vector. If you know in advance that your Vector will need a specific size, this can avoid unnecessary resizing.
3. Constructor with Initial Capacity and Capacity Increment: Here, you can specify both the initial capacity and the increment by which the Vector will grow when it needs more space.
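A brief sketch showing the three constructors; capacity() reports the current capacity of a Vector:

import java.util.Vector;

public class VectorConstructors {
    public static void main(String[] args) {
        Vector<Integer> v1 = new Vector<>();       // default: initial capacity 10
        Vector<Integer> v2 = new Vector<>(50);     // initial capacity 50
        Vector<Integer> v3 = new Vector<>(50, 25); // capacity 50, grows by 25 when full
        System.out.println(v1.capacity() + " " + v2.capacity() + " " + v3.capacity()); // 10 50 50
    }
}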
The Vector class shares many of the methods common to other classes in the
Collection Framework (such as ArrayList), but here are a few important
methods and how they work:
1. Adding Elements:
o add(E element): Adds the specified element to the end of the Vector.
o add(int index, E element): Inserts the element at the specified position in the Vector.
2. Removing Elements:
o remove(Object o): Removes the first occurrence of the specified element.
o remove(int index): Removes and returns the element at the specified position.
3. Getting Elements:
o get(int index): Returns the element at the specified position.
o firstElement() / lastElement(): Return the first and last elements of the Vector.
1. Develop a program to implement the List interface using the Vector class and explore all its methods.
package hello;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.ListIterator;
import java.util.Vector;

public class MyList<T> implements List<T> {
    private final Vector<T> vector; // all List operations delegate to this Vector

    public MyList() {
        vector = new Vector<>();
    }

    @Override
    public int size() { return vector.size(); }

    @Override
    public boolean isEmpty() { return vector.isEmpty(); }

    @Override
    public boolean contains(Object o) { return vector.contains(o); }

    @Override
    public Iterator<T> iterator() { return vector.iterator(); }

    @Override
    public Object[] toArray() { return vector.toArray(); }

    @Override
    public <E> E[] toArray(E[] a) { return vector.toArray(a); }

    @Override
    public boolean add(T t) { return vector.add(t); }

    @Override
    public boolean remove(Object o) { return vector.remove(o); }

    @Override
    public boolean containsAll(Collection<?> c) { return vector.containsAll(c); }

    @Override
    public boolean addAll(Collection<? extends T> c) { return vector.addAll(c); }

    @Override
    public boolean addAll(int index, Collection<? extends T> c) { return vector.addAll(index, c); }

    @Override
    public boolean removeAll(Collection<?> c) { return vector.removeAll(c); }

    @Override
    public boolean retainAll(Collection<?> c) { return vector.retainAll(c); }

    @Override
    public void clear() { vector.clear(); }

    @Override
    public T get(int index) { return vector.get(index); }

    @Override
    public T set(int index, T element) { return vector.set(index, element); }

    @Override
    public void add(int index, T element) { vector.add(index, element); }

    @Override
    public T remove(int index) { return vector.remove(index); }

    @Override
    public int indexOf(Object o) { return vector.indexOf(o); }

    @Override
    public int lastIndexOf(Object o) { return vector.lastIndexOf(o); }

    @Override
    public ListIterator<T> listIterator() { return vector.listIterator(); }

    @Override
    public ListIterator<T> listIterator(int index) { return vector.listIterator(index); }

    @Override
    public List<T> subList(int fromIndex, int toIndex) { return vector.subList(fromIndex, toIndex); }

    @Override
    public String toString() { return vector.toString(); }

    public static void main(String[] args) {
        MyList<String> list = new MyList<>();
        list.add("Apple");
        list.add("Banana");
        list.add("Cherry");
        System.out.println(list); // [Apple, Banana, Cherry]
        list.remove(0);           // remove by index
        System.out.println(list); // [Banana, Cherry]
    }
}
Output:
[Apple, Banana, Cherry]
[Banana, Cherry]
1.9 Temporal Manipulation:
In technical terms, temporal manipulation refers to operations that allow us to
interact with different states of the data structure across time. A common example
of this is version control, like what you see in Git. When you work on a file,
every time you commit changes, a snapshot of that file is taken. You can then go
back and retrieve the file from any previous commit. The underlying data
structure isn't just the current state of the file but rather all the states it has gone
through.
Analogy:
Imagine you're writing an essay on a computer. Every time you hit "save," the computer remembers that exact version of the essay. You can not only access the most recent version but also any earlier versions. Temporal manipulation allows you to:
• go back and read any earlier version,
• compare two versions to see what changed, and
• restore an old version and continue working from it.
This idea is useful for programs or systems that need to handle undo functionality,
like text editors or games where you can "rewind" the state of the game to an
earlier point.
Here, the persistence is like keeping track of every single diary entry, even after
you’ve updated or changed something. This means:
• Partial persistence: You can access any previous version of the data, but
you can only modify the latest version.
• Full persistence: You can access and modify any version of the data,
meaning you can go back to an old version, change it, and create a new
timeline from that point (like a "branch" in Git).
• Confluent persistence: You can merge data from two different timelines
together. This is more advanced and useful for cases where two
independent changes might need to be reconciled.
Analogy:
Let's go back to the diary example. A persistent data structure would be like keeping every single version of your diary in a huge filing cabinet. Even when you make a new diary entry, all the old versions stay in the cabinet, untouched. You could:
• pull out any old version and read it,
• copy an old version and continue writing from it, or
• (with confluent persistence) combine entries from two different versions into a new one.
Now, technically speaking, how do we implement this? The key idea is to make
only partial copies of the data when changes occur, rather than copying the whole
structure every time.
Imagine you have a tree (like a binary tree). Instead of making a full copy of the
tree every time a small change is made, we only copy the parts of the tree that are
affected by the change. The rest of the tree can stay the same. So, if you change
just one leaf of the tree, we only need to make a new version of that leaf and its
direct parent nodes, while the rest of the tree remains the same across all versions.
This concept is called path copying, where only the path from the modified node
to the root is copied, and everything else is shared. It saves time and memory
because we’re not duplicating the entire structure each time we make a change.
Analogy:
Let’s say you’re writing a 100-page book and decide to change just one sentence
on page 50. Instead of printing out an entirely new 100-page book just for that
one change, you only print a new version of page 50. You then staple this new
page onto the old book, leaving all the other pages intact. The next time you want
to see the latest version, you just flip to page 50 and read the updated sentence.
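Here is a minimal Java sketch of path copying on an immutable binary search tree; the PNode class and insert method are illustrative:

final class PNode {
    final int key;
    final PNode left, right;
    PNode(int key, PNode left, PNode right) { this.key = key; this.left = left; this.right = right; }

    // Inserting copies only the nodes on the path from the root to the
    // insertion point; every untouched subtree is shared with the old version.
    static PNode insert(PNode root, int key) {
        if (root == null) return new PNode(key, null, null);
        if (key < root.key)
            return new PNode(root.key, insert(root.left, key), root.right); // share right subtree
        else
            return new PNode(root.key, root.left, insert(root.right, key)); // share left subtree
    }

    public static void main(String[] args) {
        PNode v1 = insert(insert(insert(null, 50), 30), 70); // version 1
        PNode v2 = insert(v1, 40); // version 2: copies nodes 50 and 30, shares node 70
        System.out.println(v1.left.right);        // null: version 1 is unchanged
        System.out.println(v2.left.right.key);    // 40: visible only in version 2
        System.out.println(v1.right == v2.right); // true: the right subtree is shared
    }
}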
A persistent linked list is an example where, instead of modifying the list in place when adding or removing elements, a new version of the list is created while keeping the old version intact. Here's a step-by-step breakdown:
1. To add an element at the front, create a new node whose next pointer is the old head; the new version starts at this node.
2. To remove the front element, the new version simply starts at the old second node.
3. Old versions keep their original head pointers, so they still see the unmodified list; all untouched nodes are shared between versions.
This way, the data structure becomes immutable in a sense, because once you create a version, it cannot be changed; only new versions can be created.
Real-World Applications: undo/redo in editors, version control systems such as Git, and time-travel debugging, as discussed above. The Python sketch below shows a simple temporal array that keeps every version of its state:
import numpy as np

class TemporalArray:
    def __init__(self):
        self.state = None   # current state of the array
        self.history = []   # every version ever created

    def initialize(self, initial_state):
        """Set the initial state and record it as version 0."""
        self.state = np.array(initial_state)
        self.history.append(self.state.copy())
        print("Initial State:", self.state)

    def update(self, new_state):
        """Replace the current state and append the new version to history."""
        self.state = np.array(new_state)
        self.history.append(self.state.copy())

    def get_version(self, version):
        """Return the specified historical state."""
        return self.history[version]  # Return the specified historical state

    def show_history(self):
        """Print every state stored in the history."""
        print("History of States:")
        for i, state in enumerate(self.history):
            print(f"Version {i}: {state}")

# Usage Example
temporal_array = TemporalArray()
temporal_array.initialize([1, 2, 3])
temporal_array.update([4, 5, 6])
temporal_array.update([7, 8, 9])

# Show all states stored in history
temporal_array.show_history()
Initial State: [1 2 3]
History of States:
Version 0: [1 2 3]
Version 1: [4 5 6]
Version 2: [7 8 9]
An equivalent idea in Java, which stores a timestamp alongside every value added:
import java.util.Arrays;

class TemporalArray {
    private final int capacity;
    private final int[] values;
    private final long[] timestamps;
    private int size = 0;

    TemporalArray(int capacity) {
        this.capacity = capacity;
        values = new int[capacity];
        timestamps = new long[capacity];
    }

    void add(int value) {
        if (size == capacity) {
            return; // array is full: ignore further additions
        }
        values[size] = value;
        timestamps[size] = System.currentTimeMillis(); // record when the value arrived
        size++;
    }

    int get(long timestamp) { // look up the value recorded at a given time
        for (int i = 0; i < size; i++) {
            if (timestamps[i] == timestamp) {
                return values[i];
            }
        }
        return -1; // nothing recorded at that timestamp
    }

    void printArray() {
        System.out.println(Arrays.toString(Arrays.copyOf(values, size)));
    }

    public static void main(String[] args) {
        TemporalArray temporalArray = new TemporalArray(10);
        temporalArray.add(5);
        temporalArray.add(10);
        temporalArray.add(15);
        temporalArray.printArray(); // [5, 10, 15]
    }
}
*********
Om Namah Shivaya