
UNIT-1

Data Structures

Definition

● A data structure is a way of organizing and storing data to perform operations efficiently.

● It defines a set of rules for how data is arranged in memory and how operations can be
performed on that data.
● The choice of data structure depends on the specific requirements and constraints of a
given problem.

Importance of Data Structures:

● Efficiency: Well-designed data structures allow for efficient storage and retrieval of
information. They can significantly impact the performance of algorithms and overall
system efficiency.
● Organization: Data structures help in organizing and managing data in a structured
manner. This organization is crucial for maintaining the integrity and coherence of the
data.
● Abstraction: Data structures provide a level of abstraction, allowing programmers to
work with high-level concepts without worrying about low-level implementation details.

Data Representation

Data representation is a fundamental concept in computer science that deals with how
data is organized and stored in a computer system. Data structures are the building blocks of data
representation, providing efficient ways to organize and manage data for various purposes.

Different Types of Data Representation

Data can be represented in various forms depending on its type and intended use. Some common
types of data representation include:

1. Primitive Data Types: These are basic data types that are directly supported by the
programming language, such as integers, floating-point numbers, strings, and booleans.
2. Structured Data: This type of data is organized into a hierarchical or relational structure,
such as arrays, structs, and records.
3. Unstructured Data: This type of data does not have a predefined structure, such as text,
images, and audio.
4. Data in Memory: Data is stored in memory using binary representation, where each data
element is represented as a sequence of bits.
5. Data on Storage Devices: Data is stored on storage devices using various formats, such as
binary file formats, databases, and cloud storage.

Data Type              Example
Primitive Data Types   Integers, floating-point numbers, strings, booleans
Structured Data        Arrays, structs, records
Unstructured Data      Text, images, audio

Primitive Data Types

Primitive data types are the basic building blocks of data representation in Python. They include:

● Integers: Integers are whole numbers, such as 10, -20, or 100.


● Floating-point numbers: Floating-point numbers are decimal numbers, such as 3.14159,
-123.45, or 1.0e+6.
● Strings: Strings are sequences of characters, such as "Hello, World!", "This is an example
string.", or the empty string ''.
● Booleans: Booleans are values that can be either True or False.

Structured Data

Structured data is organized into a hierarchical or relational structure. This makes it easier to
store, manage, and access data. Examples of structured data in Python include:

● Arrays: Arrays are collections of elements of the same data type that are stored in
contiguous memory locations, for example an array of integers (see the sketch after this
list).
● Structs: Structs are collections of named data elements, or fields, for example a struct
that stores information about a person.
● Records: Records are similar to structs, but they are immutable, meaning that their values
cannot be changed after they are created, for example a record that stores information
about a person.
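A minimal C++ sketch of the three forms described above (since the rest of this unit's code is written in C/C++); the Person type, its field names, and the values are illustrative, and the "record" is modeled here as a struct with const members to convey immutability.

C++
#include <iostream>
#include <string>
using namespace std;

// Struct: a collection of named fields (illustrative Person type)
struct Person {
    string name;
    int age;
};

// "Record": modeled here as a struct whose fields are const,
// so its values cannot change after the object is created
struct PersonRecord {
    const string name;
    const int age;
};

int main() {
    // Array: elements of the same type stored in contiguous memory
    int scores[5] = {90, 75, 88, 62, 79};

    Person p = {"Stacey M. Hill", 30};
    PersonRecord r = {"Stacey M. Hill", 30};

    cout << scores[0] << " " << p.name << " " << r.age << endl;
    return 0;
}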

Unstructured Data

Unstructured data does not have a predefined format or organization. This makes it more
difficult to store, manage, and access than structured data. However, unstructured data is often
more expressive and informative than structured data. Examples of unstructured data in Python
include:

● Text: Text is a sequence of characters that can be used to represent human language, such
as the text of this document.
● Images: Images are representations of visual information, such as photographs, drawings,
or paintings. They are typically stored as arrays of pixels, where each pixel is represented
by a color value.
● Audio: Audio is a representation of sound waves, such as music, speech, or
environmental sounds. It is typically stored as an array of samples, where each sample is
a representation of the sound pressure at a particular point in time.

Abstract Data Type

An abstract data type (ADT) is a mathematical model for a type of data structure that
specifies its behavior without specifying its implementation details. ADTs provide a high-level
abstraction for data structures, allowing programmers to focus on the logic of their programs
without worrying about the underlying implementation details.

1. Lists: Lists are a collection of elements that can be added, removed, accessed, and
searched. Common operations include insert, delete, search, and retrieve.
2. Stacks: Stacks are LIFO (Last In, First Out) data structures, where elements are added
and removed from the top. Common operations include push, pop, and peek.
3. Queues: Queues are FIFO (First In, First Out) data structures, where elements are added
to the rear and removed from the front. Common operations include enqueue, dequeue,
and front.
4. Trees: Trees are hierarchical data structures, where nodes are connected in a parent-child
relationship. Common operations include insert, delete, search, and traversal.
5. Graphs: Graphs are a collection of nodes connected by edges, representing relationships
between entities. Common operations include add node, add edge, remove node, remove
edge, and search.
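As an illustration of "behavior without implementation", the sketch below specifies a stack ADT in C++ as an abstract class and then gives one possible array-based implementation; the names IntStack and ArrayStack and the capacity of 100 are illustrative, not part of the original notes.

C++
#include <stdexcept>
using namespace std;

// The ADT: only the behavior (push, pop, peek) is specified here,
// not how the elements are stored.
class IntStack {
public:
    virtual ~IntStack() = default;
    virtual void push(int x) = 0;   // add an element to the top
    virtual int  pop() = 0;         // remove and return the top element
    virtual int  peek() const = 0;  // read the top element without removing it
    virtual bool isEmpty() const = 0;
};

// One possible implementation; code that uses IntStack never
// depends on these details.
class ArrayStack : public IntStack {
    int data[100];
    int top = 0;
public:
    void push(int x) override {
        if (top == 100) throw overflow_error("stack is full");
        data[top++] = x;
    }
    int pop() override {
        if (top == 0) throw underflow_error("stack is empty");
        return data[--top];
    }
    int peek() const override {
        if (top == 0) throw underflow_error("stack is empty");
        return data[top - 1];
    }
    bool isEmpty() const override { return top == 0; }
};

Code written against IntStack keeps working even if the array-based implementation is later replaced by, say, a linked-list one.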

Algorithm:

● The word Algorithm means "a set of finite rules or instructions to be followed in calculations or
other problem-solving operations", or "a procedure for solving a mathematical problem in a
finite number of steps that frequently involves recursive operations".

What are the Characteristics of an Algorithm?

Just as one would not follow arbitrary written instructions to cook a recipe, but only a proper one, not
all written instructions for programming are algorithms. In order for a set of instructions to be an
algorithm, it must have the following characteristics:

Clear and Unambiguous: The algorithm should be clear and unambiguous. Each of its steps should be
clear in all aspects and must lead to only one meaning.
Well-Defined Inputs: If an algorithm takes inputs, those inputs should be well-defined. An algorithm may
or may not take input.
Well-Defined Outputs: The algorithm must clearly define what output will be yielded and it should be
well-defined as well. It should produce at least 1 output.
Finiteness: The algorithm must be finite, i.e. it should terminate after a finite time.
Feasible: The algorithm must be simple, generic, and practical, so that it can be executed with the
available resources. It must not depend on technology or resources that do not yet exist.
Language Independent: The Algorithm designed must be language-independent, i.e. it must be just
plain instructions that can be implemented in any language, and yet the output will be the same, as
expected.

Properties of Algorithm:
· It should terminate after a finite time.
· It should produce at least one output.
· It should take zero or more input.
· It should be deterministic, meaning giving the same output for the same input case.
· Every step in the algorithm must be effective i.e. every step should do some work.

Asymptotic Notations

Asymptotic notation is a mathematical notation used in computer science to describe the
efficiency or complexity of algorithms as the input size approaches infinity. It provides a concise
way to express the upper or lower bounds of an algorithm's running time or space requirements.
The most commonly used asymptotic notations are Big O, Omega, and Theta.

1. Big O Notation (O-notation):


Big O notation represents the upper bound or worst-case scenario of an algorithm's time
complexity. It describes the maximum amount of resources (time or space) an algorithm will use
as a function of the input size.
Example:

If an algorithm has a time complexity of O(n), it means that the running time grows at most
linearly with the size of the input.

2. Omega Notation (Ω-notation):


Omega notation represents the lower bound or best-case scenario of an algorithm's time
complexity. It describes the minimum amount of resources an algorithm will use as a function of
the input size.

Example:

If an algorithm has a time complexity of Ω(n), it means that the running time grows at least as
fast as a linear function of the input.

3. Theta Notation (Θ-notation):


Theta notation represents both the upper and lower bounds of an algorithm's time complexity. It
provides a more precise estimate of the growth rate of the algorithm.
Example:

If an algorithm has a time complexity of Θ(n), it means that the running time grows in
proportion to the size of the input, bounded both above and below by linear functions.
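To see how the three notations apply to a single concrete algorithm, consider linear search (a hedged C++ sketch, not part of the original notes): in the worst case it examines all n elements, so its running time is O(n); in the best case the target sits at index 0, so the running time is Ω(1); and because the best and worst cases grow at different rates, the running time over all inputs is not Θ(n), although the worst case alone is Θ(n).

C++
// Linear search: worst case O(n), best case Omega(1).
int linearSearch(const int a[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (a[i] == target) return i; // best case: found immediately
    }
    return -1; // worst case: all n elements examined
}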

Common Time Complexities:


O(1) - Constant Time:
● Operations take a constant amount of time regardless of the input size.
O(log n) - Logarithmic Time:
● The running time grows logarithmically with the input size.
O(n) - Linear Time:
● The running time grows linearly with the input size.
O(n log n) – Linearithmic (log-linear) Time:
● Common in efficient sorting algorithms like Merge Sort and Heap Sort.
O(n^2) - Quadratic Time:
● The running time is proportional to the square of the input size.
O(2^n) - Exponential Time:
● The running time grows exponentially with the input size.
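As a small sketch of logarithmic growth (an added example, with an illustrative function name), the loop below halves the remaining value on every step, so for an input of size n it runs roughly log2(n) times; this halving pattern is what underlies binary search, and doing O(log n) work for each of n elements is what produces the O(n log n) cost of algorithms such as Merge Sort.

C++
// Counts how many times n can be halved before reaching 1:
// roughly log2(n) iterations, i.e. O(log n) time.
int halvingSteps(int n) {
    int steps = 0;
    while (n > 1) {
        n = n / 2;
        steps++;
    }
    return steps;
}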

Importance of Asymptotic Notation:


1. Algorithm Analysis: Asymptotic notation allows for a high-level analysis of algorithms
without getting bogged down in specific details.
2. Comparison of Algorithms: It provides a convenient way to compare the efficiency of
different algorithms and identify the most suitable one for a particular problem.
3. Scaling Behavior: Asymptotic notation helps to understand how an algorithm's
performance scales as the input size increases, guiding decisions about algorithm
selection in real-world applications.
4. Predicting Performance: It provides insights into how an algorithm will perform on large
inputs, helping in the prediction of resource requirements.

Algorithm Analysis, Recursion

Algorithm analysis is the process of studying the performance of algorithms. This involves
determining the time and space complexity of an algorithm.

Time complexity is a measure of how long an algorithm takes to run, while space complexity is
a measure of how much memory an algorithm uses.

There are two main approaches to algorithm analysis:

● Big O notation: Big O notation is a mathematical notation used to describe the upper
bound of an algorithm's growth rate. It specifies that the algorithm's growth rate is no
worse than a certain function, denoted by O(g(n)). This means that the algorithm grows at
a rate less than or equal to g(n) as n approaches infinity.
● Worst-case analysis: Worst-case analysis is the process of determining the maximum
time or space complexity of an algorithm. This is typically done by identifying the input
that causes the algorithm to take the longest time or use the most memory.

Recursion

Recursion is a technique in computer science where a problem is solved by a function that calls
itself. The problem is broken down into smaller subproblems of the same type, and the solution
to the original problem is found by combining the solutions to the subproblems.

Recursion can be a powerful tool for solving problems, but it can also be difficult to understand
and implement. It is important to carefully design recursive algorithms to avoid problems such as
stack overflow.

Example: Factorial

The factorial of a non-negative integer n is the product of all positive integers less than or equal
to n. For example, the factorial of 5 is 120, because 120 = 1 × 2 × 3 × 4 × 5.
Here is a recursive algorithm to calculate the factorial of a non-negative integer n in C:

C
int factorial(int n) {
if (n == 0) {
return 1;
} else {
return n * factorial(n - 1);
}
}

This algorithm works by recursively calling itself to calculate the factorial of n - 1, and then
multiplying the result by n. This algorithm has a time complexity of O(n).

Analysis of the Factorial Recursion Problem

The factorial of a number is defined as:

f(n) = n * f(n-1)   for all n > 0
f(0) = 1            for n = 0

factorial(n):
if n is 0
return 1
return n * factorial(n-1)

Time complexity

Looking at the pseudo-code again (repeated below for convenience), we notice that:
● factorial(0) takes only one comparison (1 unit of time)
● factorial(n) takes 1 comparison, 1 multiplication, 1 subtraction, plus the time for factorial(n-1)
factorial(n):
if n is 0
return 1
return n * factorial(n-1)
From the above analysis we can write:
T(n) = T(n-1) + 3
T(0) = 1
T(n) = T(n-1) + 3
= T(n-2) + 6
= T(n-3) + 9
= T(n-4) + 12
= ...
= T(n-k) + 3k
as we know T(0) = 1
we need to find the value of k for which n - k = 0, k = n
T(n) = T(0) + 3n , k = n
= 1 + 3n
that gives us a time complexity of O(n)

Space complexity

For every call to the recursive function, the state is saved onto the call stack until the value is
computed and returned to the calling function.
Here we do not allocate an explicit stack, but an implicit call stack is maintained:
f(6) → f(5) → f(4) → f(3) → f(2) → f(1) → f(0)
f(6) → f(5) → f(4) → f(3) → f(2) → f(1)
f(6) → f(5) → f(4) → f(3) → f(2)
f(6) → f(5) → f(4) → f(3)
f(6) → f(5) → f(4)
f(6) → f(5)
f(6)
The last call shown in each line is the one currently being executed. As you can see, for f(6) a
stack of 6 calls is required until the call to f(0) is made and a value is finally computed.
Hence for factorial of N, a stack of size N will be implicitly allocated for storing the state of the
function calls.
The space complexity of recursive factorial implementation is O(n)
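For comparison, here is a sketch (not part of the original notes) of an iterative factorial: it performs the same O(n) multiplications but needs only O(1) extra space, because no chain of recursive calls is kept on the call stack.

C++
// Iterative factorial: same O(n) time, but only O(1) extra space
// because no chain of recursive calls is kept on the stack.
int factorialIterative(int n) {
    int result = 1;
    for (int i = 2; i <= n; i++) {
        result = result * i; // multiply the running product by i
    }
    return result;
}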

Introduction to Data Structures

● A data structure is not only used for organizing the data. It is also used for processing,
retrieving, and storing data.
● There are different basic and advanced types of data structures that are used in almost
every program or software system that has been developed.

Basic Terminologies related to Data Structures


Data Structures are the building blocks of any software or program. Selecting the suitable data
structure for a program is an extremely challenging task for a programmer.

The following are some fundamental terminologies used whenever the data structures are
involved:
1. Data: We can define data as an elementary value or a collection of values. For example,
the Employee's name and ID are the data related to the Employee.
2. Data Items: A single unit of value is known as a Data Item.
3. Group Items: Data Items that have subordinate data items are known as Group Items.
For example, an employee's name can have a first, middle, and last name.
4. Elementary Items: Data Items that cannot be divided into sub-items are known as
Elementary Items. For example, the ID of an Employee.
5. Entity and Attribute: A class of certain objects is represented by an Entity. It consists of
different Attributes. Each Attribute symbolizes a specific property of that Entity. For
example:

Attributes   ID     Name             Gender   Job Title
Values       1234   Stacey M. Hill   Female   Software Developer

➔ Entities with similar attributes form an Entity Set. Each attribute of an entity set has a
range of values, the set of all possible values that could be assigned to the specific
attribute.
➔ The term "information" is sometimes utilized for data with given attributes of meaningful
or processed data.
6. Field: A single elementary unit of information symbolizing an Attribute of an Entity is
known as a Field.
7. Record: A collection of related data items is known as a Record. For example, if we
talk about the employee entity, then its name, ID, address, and job title can be grouped to
form the record for the employee.
8. File: A collection of different Records of one entity type is known as a File. For example,
if there are 100 employees, there will be 100 records in the related file, each containing
data about one employee.

Understanding the Need for Data Structures


● As applications become more complex and the amount of data increases every day,
problems can arise with data searching, processing speed, handling multiple requests,
and more.
● Data Structures support different methods to organize, manage, and store data efficiently.
With the help of Data Structures, we can easily traverse the data items.
● Data Structures provide Efficiency, Reusability, and Abstraction.

Why should we learn Data Structures?


1. Data Structures and Algorithms are two of the key aspects of Computer Science.
2. Data Structures allow us to organize and store data, whereas Algorithms allow us to
process that data meaningfully.
3. Learning Data Structures and Algorithms will help us become better Programmers.
4. We will be able to write code that is more effective and reliable.
5. We will also be able to solve problems more quickly and efficiently.
Understanding the Objectives and use of Data Structures
Data Structures satisfy two complementary objectives:
1. Correctness: Data Structures are designed to operate correctly for all kinds of inputs
based on the domain of interest. In other words, correctness forms the primary objective
of a Data Structure, which always depends upon the problem that the Data Structure is
meant to solve.
2. Efficiency: Data Structures also require efficiency. They should process data quickly
without consuming excessive computer resources such as memory. In real-world situations,
the efficiency of a data structure is a key factor in determining the success or failure of
a process.

Understanding some Key Features of Data Structures


Some of the Significant Features of Data Structures are:
1. Robustness: Generally, all computer programmers aim to produce software that yields
correct output for every possible input, along with efficient execution on all hardware
platforms. This type of robust software must manage both valid and invalid inputs.
2. Adaptability: Building software applications like Web Browsers, Word Processors, and
Internet Search Engines involves huge software systems that require correct and efficient
execution for many years. Moreover, software evolves due to emerging technologies and
ever-changing market conditions.
3. Reusability: Reusability and Adaptability go hand in hand. It is known that building any
software requires many resources, which makes it a costly enterprise. However, if the
software is developed in a reusable and adaptable way, then it can be applied in most
future applications. Thus, by employing quality data structures, it is possible to build
reusable software, which is cost-effective and time-saving.

Classification of Data Structures

A Data Structure delivers a structured set of variables related to each other in various ways. It forms the
basis of a programming tool that signifies the relationship between the data elements and allows
programmers to process the data efficiently.

We can classify Data Structures into two categories:


1. Primitive Data Structure
2. Non-Primitive Data Structure
Non-primitive data structures can be further classified as follows:
● Linear data structure: Data structure in which data elements are arranged
sequentially or linearly, where each element is attached to its previous and next
adjacent elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.
● Static data structure: Static data structure has a fixed memory size. It
is easier to access the elements in a static data structure.
An example of this data structure is an array.
● Dynamic data structure: In dynamic data structure, the size is not
fixed. It can be randomly updated during the runtime which may be
considered efficient concerning the memory (space) complexity of the
code.
Examples of this data structure are queue, stack, etc.
● Non-linear data structure: Data structures where data elements are not placed
sequentially or linearly are called non-linear data structures. In a non-linear data
structure, we can’t traverse all the elements in a single run only.
Examples of non-linear data structures are trees and graphs.
For example, we can store a list of items having the same data-type using the array data
structure.


Operations On Data Structure

General Operations:

● Traversing: Visiting each element in the data structure exactly once. Can be done in
different ways like preorder, inorder, postorder for trees, or iteratively for arrays and lists.
● Searching: Finding a specific element within the data structure based on a given
criterion. Different algorithms like linear search, binary search, or hash tables are used
depending on the structure.
● Insertion: Adding a new element to the data structure at a specific position or according
to some rule. Different insertion methods are used for each structure, like appending to a
list, pushing onto a stack, or inserting into a specific node in a tree.
● Deletion: Removing an element from the data structure. Similar to insertion, different
methods are used based on the structure and desired behaviour.
● Sorting: Arranging the elements in the data structure in a specific order (ascending,
descending, custom). Various sorting algorithms like bubble sort, merge sort, quick sort,
etc., are used with different time and space complexities.
● Merging: Combining two or more sorted data structures into a single sorted structure.
Used for efficiently handling large datasets, often employing merge sort algorithm.

Specific Operations:

● Stacks: Push (add to top), Pop (remove from top), Peek (access top element).
● Queues: Enqueue (add to back), Dequeue (remove from front), Peek (access front
element).
● Linked Lists: Insert new node at specific position, Delete node, Traversal based on
pointers.
● Trees: Inorder, preorder, postorder traversal, Finding specific node, Insertion with
balancing (AVL, Red-Black trees), Deletion with rebalancing.
● Graphs: Breadth-First Search (BFS), Depth-First Search (DFS), Finding shortest path,
Topological sorting.
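A minimal C++ sketch of the queue operations listed above, implemented on a fixed-size circular buffer; the class name SimpleQueue and the capacity of 100 are illustrative assumptions.

C++
class SimpleQueue {
    int data[100];
    int head = 0;   // index of the front element
    int count = 0;  // number of elements currently stored
public:
    bool enqueue(int x) {               // add to the back
        if (count == 100) return false; // queue full
        data[(head + count) % 100] = x;
        count++;
        return true;
    }
    bool dequeue(int &out) {            // remove from the front
        if (count == 0) return false;   // queue empty
        out = data[head];
        head = (head + 1) % 100;
        count--;
        return true;
    }
    bool front(int &out) const {        // read the front without removing it
        if (count == 0) return false;
        out = data[head];
        return true;
    }
};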

What is an Array?
An array is a collection of items of the same variable type that are stored at contiguous
memory locations. It’s one of the most popular and simple data structures and is often
used to implement other data structures. Each item in an array is indexed starting with 0.

Basic terminologies of array


● Array Index: In an array, elements are identified by their indexes. Array index
starts from 0.
● Array element: Elements are items stored in an array and can be accessed by their
index.
● Array Length: The length of an array is determined by the number of elements it
can contain.

Representation of Array
The representation of an array can be defined by its declaration. A declaration means
allocating memory for an array of a given size.

Example:
int arr[5]; // This array will store integer type elements
char arr[10]; // This array will store char type elements
float arr[20]; // This array will store float type elements

Types of arrays:
There are majorly two types of arrays:
● One-dimensional array (1-D array): You can imagine a 1-D array as a single row, where
elements are stored one after another.
● Two-dimensional array (2-D array): A 2-D array can be considered as an array of arrays,
or as a matrix consisting of rows and columns.
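A short C++ sketch of a 2-D array used as a matrix of rows and columns (the sizes and values are illustrative):

C++
#include <iostream>
using namespace std;

int main() {
    // 2 rows and 3 columns
    int matrix[2][3] = {
        {1, 2, 3},
        {4, 5, 6}
    };

    // matrix[row][column] accesses one element
    cout << matrix[1][2] << endl; // prints 6

    // Visiting every element row by row
    for (int r = 0; r < 2; r++) {
        for (int c = 0; c < 3; c++) {
            cout << matrix[r][c] << " ";
        }
        cout << endl;
    }
    return 0;
}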

Types of Array operations:


● Traversal: Traverse through the elements of an array.
● Insertion: Inserting a new element in an array.
● Deletion: Deleting element from the array.
● Searching: Search for an element in the array.
● Sorting: Maintaining the order of elements in the array.
1. Accessing elements:
● Example:
C++
int numbers[5] = {10, 20, 30, 40, 50};
int thirdElement = numbers[2]; // thirdElement will be 30

2. Traversing (Iterating):
● Example:
C++
for (int i = 0; i < 5; i++) {
cout << numbers[i] << " "; // Prints each element on a new line
}

● Output: 10 20 30 40 50

3. Searching (Linear Search):


● Example:
C++
int foundIndex = -1;
int target = 30;
for (int i = 0; i < 5; i++) {
if (numbers[i] == target) {
foundIndex = i; // remember where the target was found
break;
}
}
if (foundIndex != -1) {
cout << "Target found at index " << foundIndex << endl;
} else {
cout << "Target not found" << endl;
}

4. Insertion (Shifting elements):


● Example:
C++
// Insert 60 at index 2 (the array must have spare capacity,
// e.g. declared as int numbers[6] with only 5 elements in use):
for (int i = 4; i >= 2; i--) {
numbers[i + 1] = numbers[i]; // Shift elements to the right
}
numbers[2] = 60; // Insert new element

5. Deletion (Shifting elements):


● Example:
C++
// Delete element at index 3 (the logical size of the array drops from 5 to 4):
for (int i = 3; i < 4; i++) {
numbers[i] = numbers[i + 1]; // Shift the remaining elements to the left
}

Applications of Array Data Structure:

Below are some applications of arrays.


● Storing and accessing data: Arrays are used to store and retrieve data in a specific order. For
example, an array can be used to store the scores of a group of students, or the temperatures
recorded by a weather station.
● Sorting: Arrays can be used to sort data in ascending or descending order. Sorting algorithms
such as bubble sort, merge sort, and quicksort rely heavily on arrays.
● Searching: Arrays can be searched for specific elements using algorithms such as linear search
and binary search.
● Matrices: Arrays are used to represent matrices in mathematical computations such as matrix
multiplication, linear algebra, and image processing.
● Stacks and queues: Arrays are used as the underlying data structure for implementing stacks and
queues, which are commonly used in algorithms and data structures.
● Graphs: Arrays can be used to represent graphs in computer science. Each element in the array
represents a node in the graph, and the relationships between the nodes are represented by the
values stored in the array.
● Dynamic programming: Dynamic programming algorithms often use arrays to store intermediate
results of subproblems in order to solve a larger problem.
LINKED LIST

What is a Linked List?

A Linked List is a linear data structure that looks like a chain of nodes, where each node is a
different element. Unlike arrays, linked list elements are not stored at contiguous locations.
It is basically a chain of nodes; each node contains information such as data and a pointer to the
next node in the chain. A linked list has a head pointer, which points to the first element of the
list; if the list is empty, the head simply points to null (nothing).

Why is linked list data structure needed?

The following advantages of a linked list explain why it is needed:
● Dynamic data structure: Memory can be allocated or de-allocated at run time based on
insertion or deletion operations.
● Ease of insertion/deletion: Inserting and deleting elements is simpler than in arrays,
since no elements need to be shifted; only the pointers need to be updated.
● Efficient memory utilization: Because a linked list is a dynamic data structure, its size
increases or decreases as required, which avoids wasting memory.
● Implementation: Various advanced data structures can be implemented using a linked list,
such as a stack, queue, graph, hash map, etc.

Types of linked lists:

There are mainly three types of linked lists:


1. Singly linked list
2. Doubly linked list
3. Circular linked list
1. Singly-linked list

Traversal of items can be done in the forward direction only due to the linking of every node to
its next node.

Singly Linked List

Representation of a Singly linked list:

● A Node Creation:

// A Single linked list node


class Node {
public:
int data;
Node* next;
};

Commonly used operations on Singly Linked List:

The following operations are performed on a Single Linked List


● Insertion: The insertion operation can be performed in three ways. They are as follows…
● Inserting At the Beginning of the list
● Inserting At End of the list
● Inserting At Specific location in the list
● Deletion: The deletion operation can be performed in three ways. They are as follows…
● Deleting from the Beginning of the list
● Deleting from the End of the list
● Deleting a Specific Node
● Search: It is a process of determining and retrieving a specific node either from the front,
the end or anywhere in the list.
● Display: This process displays the elements of a Single-linked list.
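As a sketch of the search and display operations listed above (function names are illustrative), the following C++ code reuses the Node class defined earlier in this section:

C++
#include <iostream>
using namespace std;

// Uses the Node class defined above (fields: data, next).

// Search: returns the 0-based position of the first node holding
// 'target', or -1 if no such node exists. O(n) time.
int search(Node* head, int target) {
    int position = 0;
    for (Node* cur = head; cur != nullptr; cur = cur->next) {
        if (cur->data == target) return position;
        position++;
    }
    return -1;
}

// Display: prints every element from the head to the end of the list.
void display(Node* head) {
    for (Node* cur = head; cur != nullptr; cur = cur->next) {
        cout << cur->data << " ";
    }
    cout << endl;
}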

2. Doubly linked list


Traversal of items can be done in both forward and backward directions as every node contains an
additional prev pointer that points to the previous node.

Doubly linked list

Representation of Doubly linked list:

/* Node of a doubly linked list */


class Node {
public:
int data;
Node* next; // Pointer to next node in DLL
Node* prev; // Pointer to previous node in DLL
};
Commonly used operations on Double-Linked List:
In a double-linked list, we perform the following operations…
● Insertion: The insertion operation can be performed in the following ways:
● Inserting At the Beginning of the list
● Inserting after a given node.
● Inserting at the end.
● Inserting before a given node
● Deletion: The deletion operation can be performed in three ways as follows…
● Deleting from the Beginning of the list
● Deleting from the End of the list
● Deleting a Specific Node
● Display: This process displays the elements of a double-linked list.
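A hedged C++ sketch of inserting at the beginning of a doubly linked list, using the Node class with next and prev pointers shown above (the function name is illustrative):

C++
// Inserts a new node at the front of a doubly linked list and
// updates the head pointer. Uses the Node class (data, next, prev)
// shown above. O(1) time.
void insertAtBeginningDLL(Node** head_ref, int new_data) {
    Node* new_node = new Node();
    new_node->data = new_data;
    new_node->prev = nullptr;         // the new node becomes the first node
    new_node->next = *head_ref;       // the old head follows the new node

    if (*head_ref != nullptr) {
        (*head_ref)->prev = new_node; // the old head points back to the new node
    }
    *head_ref = new_node;
}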
#include <iostream>

using namespace std;

// Structure for a node in the linked list


struct Node {
int data;
Node* next;
};

// Function to insert a node at the beginning of the linked list


void insertAtBeginning(Node** head_ref, int new_data)
{
Node* new_node = new Node();
new_node->data = new_data;
new_node->next = (*head_ref);
(*head_ref) = new_node;
}

// Function to print the contents of the linked list


void printList(Node* node) {
while (node != nullptr) {
cout << node->data << " ";
node = node->next;
}
}

int main() {
Node* head = nullptr; // Initially empty list

insertAtBeginning(&head, 10);
insertAtBeginning(&head, 20);
insertAtBeginning(&head, 30);

cout << "Created linked list is: ";


printList(head); // Output: 30 20 10
return 0;
}
Explanation:
1. Node Structure:
○ Each node in the linked list contains two parts:
■ data: Holds the actual data value.
■ next: A pointer to the next node in the list.
2. Insertion:
○ insertAtBeginning() function:
■ Creates a new node with the given data.
■ Points the next of the new node to the current head of the list.
■ Updates the head of the list to point to the new node.
3. Printing:
○ printList() function:
■ Iterates through the list, printing the data of each node.
■ Stops when it reaches the end of the list (when node becomes nullptr).
Example Output:
Created linked list is: 30 20 10
Additional Common Linked List Operations:
● Inserting at the end: Appends a new node to the end of the list.
● Deleting a node: Removes a node from the list based on its position or value.
● Searching: Finds a node with a specific value in the list.
● Reversing: Reverses the order of nodes in the list.

3. Circular linked lists


A circular linked list is a type of linked list in which the first and the last nodes are connected to each
other to form a circle, so there is no NULL at the end.

Circular Linked List

Commonly used operations on Circular Linked List:


The following operations are performed on a Circular Linked List
● Insertion: The insertion operation can be performed in the following ways:
● Insertion in an empty list
● Insertion at the beginning of the list
● Insertion at the end of the list
● Insertion in between the nodes
● Deletion: The deletion operation can be performed in three ways:
● Deleting from the Beginning of the list
● Deleting from the End of the list
● Deleting a Specific Node
● Display: This process displays the elements of a Circular linked list.
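A hedged C++ sketch of two of these operations, insertion into an empty circular list and display; it assumes the singly linked Node class (data and next) shown earlier and the usual <iostream> setup, and the function names are illustrative:

C++
// Insertion into an empty circular list: the single node's next
// pointer refers back to the node itself.
Node* insertInEmpty(int new_data) {
    Node* node = new Node();
    node->data = new_data;
    node->next = node;   // circular link to itself
    return node;         // this node is both the first and the last
}

// Display: start at the head and stop once we come back around to it.
void displayCircular(Node* head) {
    if (head == nullptr) return;
    Node* cur = head;
    do {
        cout << cur->data << " ";
        cur = cur->next;
    } while (cur != head);
    cout << endl;
}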

Linked List vs. Array


#include <iostream>

using namespace std;

// Node structure
struct Node {
int data;
Node* next;
};

// Function to create a new node


Node* createNode(int data) {
Node* newNode = new Node();
newNode->data = data;
newNode->next = nullptr;
return newNode;
}

// Function to insert at the beginning (O(1) time complexity)


void insertAtBeginning(Node** head_ref, int new_data) {
Node* new_node = createNode(new_data);
new_node->next = (*head_ref);
(*head_ref) = new_node;
}

// Function to print the list (O(n) time complexity)


void printList(Node* node) {
while (node != nullptr) {
cout << node->data << " ";
node = node->next;
}
}

int main() {
Node* head = nullptr; // Initially empty list

insertAtBeginning(&head, 10);
insertAtBeginning(&head, 20);
insertAtBeginning(&head, 30);

cout << "Created linked list is: ";


printList(head); // Output: 30 20 10

return 0;
}

Complexity Analysis:
● Insertion at the beginning: O(1) time complexity, as it only involves creating a new node
and updating the head pointer.
● Printing the list: O(n) time complexity, as it requires iterating through each node in the
list.
● Other operations:
○ Insertion at the end: O(n) time complexity, since the list must be traversed to reach
the last node (O(1) if a separate tail pointer is maintained).
○ Deletion at the beginning: O(1) time complexity; deletion at the end: O(n) for a
singly linked list, because the node before the last one must be located.
○ Deletion at a specific position: O(n) time complexity (requires finding the node to
delete).
○ Searching for a value: O(n) time complexity even for a sorted linked list, because
nodes cannot be accessed by index the way array elements can, so binary search
does not apply.
Space Complexity:
● Linked lists have a space complexity of O(n), as each node requires additional memory
for the data and pointer fields.
Algorithm Efficiency:
● Refers to how well an algorithm uses computational resources (time and memory) to solve a
problem.
● A more efficient algorithm typically uses fewer resources for a given input size.
Time Complexity:
● Measures how the execution time of an algorithm grows as the input size increases.
● Expressed using Big O notation (e.g., O(1), O(n), O(log n), O(n^2)).
● Common time complexities:
○ O(1): Constant time, independent of input size.
○ O(log n): Logarithmic time, grows slowly with input size.
○ O(n): Linear time, grows directly with input size.
○ O(n^2): Quadratic time, grows as the square of input size.
Space Complexity:
● Measures how much memory an algorithm uses as the input size increases.
● Also expressed using Big O notation.
● Common space complexities:
○ O(1): Constant space, independent of input size.
○ O(n): Linear space, grows directly with input size.
Key Considerations:
● Algorithm Choice: Different algorithms for the same problem often have different time and space
complexities. Choosing the most efficient algorithm for a given situation is crucial.
● Input Size: The importance of efficiency becomes more evident as input sizes grow larger.
● Trade-offs: Sometimes algorithms with better time complexity have worse space complexity, or
vice versa. Determining the best balance depends on the specific problem and resource
constraints.
Example:
● Linear Search: O(n) time complexity (scans all elements in the worst case), O(1) space
complexity (only uses a few variables).
● Binary Search: O(log n) time complexity (divides the search space in half repeatedly), O(1) space
complexity.
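A short C++ sketch contrasting the two searches in the example above; binary search assumes the array is already sorted in ascending order, and the function names are illustrative:

C++
// Linear search: O(n) time, O(1) extra space.
int linearSearchArr(const int a[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (a[i] == target) return i;
    }
    return -1;
}

// Binary search on a sorted (ascending) array: O(log n) time, O(1) extra space.
int binarySearchArr(const int a[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow of low + high
        if (a[mid] == target) return mid;
        if (a[mid] < target) low = mid + 1;  // search the right half
        else high = mid - 1;                 // search the left half
    }
    return -1;
}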
Understanding time and space complexity is essential for designing efficient algorithms and choosing
appropriate data structures for different tasks.
