DS Unit-1: Introduction to Data Structures

SYLLABUS

UNIT 1: Data Structures and Algorithms Basics (8 Hours)
Introduction: Basic terminologies, elementary data organizations, data structure operations; abstract data types (ADT) and their characteristics.
Algorithms: Definition, characteristics, analysis of an algorithm, asymptotic notations, time and space trade-offs.
Array ADT: Definition, operations and representations – row-major and column-major.

UNIT 2: Sorting, Searching and Hashing (10 Hours)
Sorting: Different approaches to sorting, properties of different sorting algorithms (insertion, Shell, quick, merge, heap, counting), performance analysis and comparison.
Searching: Necessity of a robust search mechanism, searching linear lists (linear search, binary search) and complexity analysis of search methods.
Hashing: Hash functions and hash tables, closed and open hashing, randomization methods (division method, mid-square method, folding), collision resolution techniques.

UNIT 3: Stacks and Queues (8 Hours)
Stack ADT: Allowable operations, algorithms and their complexity analysis, applications of stacks – expression conversion and evaluation (algorithmic analysis), multiple stacks.
Queue ADT: Allowable operations, algorithms and their complexity analysis for simple queue and circular queue, introduction to double-ended queues and priority queues.

UNIT 4: Linked Lists (10 Hours)
Singly Linked Lists: Representation in memory, algorithms of several operations: traversing, searching, insertion, deletion, reversal, ordering, etc.
Doubly and Circular Linked Lists: Operations and algorithmic analysis.
Linked representation of stacks and queues.

UNIT 5: Trees and Graphs (8 Hours)
Trees: Basic tree terminologies, binary tree and operations, binary search tree (BST) and operations with time analysis of algorithms, threaded binary trees.
Self-balancing Search Trees: Tree rotations, AVL tree and operations.
Graphs: Basic terminologies, representation of graphs, traversals (DFS, BFS) with complexity analysis, path finding (Dijkstra's SSSP, Floyd's APSP), and spanning tree (Prim's and Kruskal's algorithms).

Text/Reference Books
1. G.A.V. Pai, Data Structures and Algorithms: Concepts, Techniques and Applications, First Edition, McGraw Hill, 2017.
2. Ellis Horowitz, Sartaj Sahni and Susan Anderson-Freed, Fundamentals of Data Structures in C, Second Edition, Universities Press, 2008.
3. Mark Allen Weiss, Data Structures and Algorithm Analysis in C, Third Edition, Pearson Education, 2007.
4. Thomas H. Cormen, Algorithms Unlocked, MIT Press, 2013.
5. Reema Thareja, Data Structures using C, Third Edition, Oxford University Press, 2023.
6. Narasimha Karumanchi, Data Structures and Algorithms Made Easy: Data Structures and Algorithmic Puzzles, Fifth Edition, Career Monk Publications, 2016.
7. Aditya Bhargava, Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People, First Edition, Manning Publications, 2016.
8. K. R. Venugopal and Sudeep R. Prasad, Mastering C, Second Edition, McGraw Hill, 2015.
9. A. K. Sharma, Data Structures using C, Second Edition, Pearson Education, 2013.
Course Outcomes
On completion of the course the student will be able to:
1. Identify different ADTs, their operations, and specify their complexities.
2. Apply linear data structures to address practical challenges and analyze their complexity.
3. Implement different sorting, searching, and hashing methods and analyze their time and space requirements.
4. Analyse non-linear data structures to develop solutions for real-world applications.
Introduction to Data Structures
• Data is a known fact that can be recorded or stored on a computer, in any format such as text, images or numerical values.
• A data structure is a mathematical way of storing data in computer memory.
• It is the way of storing and accessing data from computer memory.
• A data structure is a representation of the logical relationship existing between individual elements of data, so that a large amount of data can be processed in a small interval of time.
• A data structure is a set of a Domain D, a set of Functions F and a set of Axioms A. The triple (D, F, A) denotes the data structure.
• Algorithm + Data Structure = Program

Need of Data Structures
As applications become more complex and the amount of data grows day by day, the following problems may arise:
• Processor speed: handling very large amounts of data requires high-speed processing, but as data grows to billions of files per entity, the processor may fail to cope with that much data.
• Data search: consider an inventory of 10^6 items in a store. If an application needs to search for a particular item, it has to traverse all 10^6 items every time, which slows the search down.
• Multiple requests: if thousands of users search the data simultaneously on a web server, even a very large server can fail during that process.
To solve these problems, data structures are used. Data is organized into a data structure in such a way that not all items need to be examined and the required data can be found almost instantly.
Difference between primitive and non-primitive data structures
• A primitive data structure is generally a basic structure that is usually built into the language, such as an integer or a float.
• A non-primitive data structure is built out of primitive data structures linked together in meaningful ways, such as a linked list, binary search tree, AVL tree, graph, etc.
Linear data structures
• Linear data structures organize their data elements in a linear fashion.
• Data elements in a linear data structure are traversed one after the other, and only one element can be directly reached while traversing.
• Linear data structures are very easy to implement.
• Some commonly used linear data structures are arrays, linked lists, stacks and queues.
• An array is a collection of data elements where each element can be identified using an index.
• A stack is a list where data elements can only be added or removed from the top of the list.
• A queue is also a list, where data elements are added at one end of the list and removed from the other end.
• A linked list is a sequence of nodes, where each node is made up of a data element and a reference to the next node in the sequence.

Nonlinear data structures
• In nonlinear data structures, data elements are not organized in a sequential fashion.
• A data item in a nonlinear data structure may be attached to several other data elements to reflect a special relationship among them, and all the data items cannot be traversed in a single run.
• Data structures like trees and graphs are widely used examples of nonlinear data structures.
• A tree is a data structure made up of a set of linked nodes, which can be used to represent a hierarchical relationship among data elements.
• A graph is a data structure made up of a finite set of vertices and edges.
• Edges represent connections or relationships among the vertices that store data elements.
Difference between Linear and Nonlinear Data Structures
1. Linear: data elements are organized sequentially. Nonlinear: a data element can be attached to several other data elements to represent specific relationships that exist among them.
2. Linear: easy to implement in the computer's memory. Nonlinear: slightly more difficult to implement in the computer's memory than a linear data structure.
3. Linear: all the data items can be traversed in a single run. Nonlinear: all the data items cannot be traversed in a single run.
4. Linear data structures are implemented using arrays. Nonlinear data structures are implemented using pointers.
5. Linear: if we traverse from an element, we can strictly reach only one other element. Nonlinear: traversing from an element can lead to more than one element.
6. Commonly used linear data structures: arrays, linked lists, stacks and queues. Commonly used nonlinear data structures: trees and graphs.

Data Types & Data Structure
Possible structures: Set, Linear, Tree, Graph.
(Figure: examples of a set, a linear structure, a tree and a graph.)
Operations on Data Structures
The following are the most important operations, which play a major role in data structures:
1. Traversing: accessing each required record is called traversing. It is also known as visiting the records.
2. Searching: finding the location of a record with a given value is called searching.
3. Sorting: arranging the data in ascending or descending order is called sorting.
4. Insertion: adding a new record to the structure is called insertion.
5. Deletion: removing a record or a set of records from the data structure is called deletion.
6. Merging: combining two or more records is called merging.
7. Copying: creating a duplicate of a data item is called copying.

Abstract Data Types
• ADT stands for Abstract Data Type.
• A data type is a collection of values and a set of operations on those values.
• An abstract data type refers to the mathematical concept that defines the data type.
• An ADT is a set of operations related to a mathematical abstraction.
• The definition of an ADT involves two parts:
  1. A description of the way in which the components are related to each other.
  2. The statements of the operations that can be performed on elements of the ADT.
• A book is an ADT with attributes like name, author(s), ISBN, number of pages, subject, etc.
• For example, you might create a currency data type, which generally acts like a float but always has a precision of 2 decimal places and implements special rules about how to round off fractions of a paisa to rupees.
• A data structure is a set of a Domain D, a set of Functions F and a set of Axioms A.
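As a small illustration of the currency example above, here is a minimal sketch in C. The type name currency_t and its helper functions are our own illustrative names, not part of any standard library; the point is that the representation (whole paise) is hidden behind a fixed set of operations, which is what makes it an ADT.

    #include <stdio.h>

    /* A currency value stored as whole paise, so the 2-decimal precision
       is guaranteed by the representation itself. */
    typedef struct {
        long paise;            /* 1 rupee = 100 paise */
    } currency_t;

    /* Construct a currency value, rounding fractions of a paisa. */
    currency_t currency_from_rupees(double rupees) {
        currency_t c;
        c.paise = (long)(rupees * 100.0 + (rupees >= 0 ? 0.5 : -0.5));
        return c;
    }

    currency_t currency_add(currency_t a, currency_t b) {
        currency_t c = { a.paise + b.paise };
        return c;
    }

    void currency_print(currency_t c) {
        printf("Rs %ld.%02ld\n", c.paise / 100, c.paise % 100);
    }

    int main(void) {
        currency_t price = currency_from_rupees(99.999);   /* rounds to 100.00 */
        currency_t tax   = currency_from_rupees(5.125);    /* rounds to 5.13  */
        currency_print(currency_add(price, tax));          /* Rs 105.13 */
        return 0;
    }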

Algorithm
• To make a computer do anything, you have to write a computer program.
• To write a computer program, we have to tell the computer, step by step, exactly what we want it to do.
• The computer then "executes" the program, following each step to accomplish the end goal.
• When we are telling the computer what to do, we also get to choose how it is going to do it.
• That is where computer algorithms come in.
• The algorithm is the basic technique used to get the job done.
• An algorithm is an effective method for solving a problem using a finite sequence of instructions.
• Algorithms are used for calculation, data processing, and many other fields.
• Each algorithm is a list of well-defined instructions for completing a task.
• Starting from an initial state, the instructions describe a computation that proceeds through a well-defined series of successive states, eventually terminating in a final ending state.
• An algorithm is a step-by-step process to solve a specific problem.
• The algorithm is not only the basis of data structures but also the basis of good programming.

Example: getting home from the airport
• Say you have a friend arriving at the airport, and your friend needs to get from the airport to your house.
• Here are three different algorithms that you might give your friend for getting to your home:
• The taxi algorithm:
  1. Book/Go to the taxi stand.
  2. Get in a taxi.
  3. Give the driver my address.
• The bus algorithm:
  1. Outside the airport, catch bus number 51.
  2. Transfer to bus number 102 at the main bus stand.
  3. Get off on the Ring Road near N.I.T. Garden.
  4. Walk five blocks north to my house.
• The call-me algorithm:
  1. When your plane arrives, call my cell phone.
  2. Meet me outside baggage claim.
Algorithm (cont.)
• Each of the three algorithms above also has a different cost and a different travel time.
• Taking a taxi, for example, is probably the fastest way, but also the most expensive.
• Taking the bus is definitely less expensive, but a whole lot slower.
• You choose the algorithm based on the circumstances.
• Each algorithm has advantages and disadvantages in different situations.

Characteristics or properties of an algorithm
• Input: every algorithm must have input; it may take zero or more inputs.
• Output: the algorithm will generate a result after processing the input. This result is called the output.
• Definiteness: each and every instruction must be clear and simple, so that it can be understood by anyone. This property is called definiteness.
• Finiteness: every algorithm must have a proper start and end; i.e. the sequence of steps must terminate after giving the result.
• Effectiveness: before preparing any algorithm, it should first be sketched on paper so that every instruction is feasible.
• The running time or execution time of an algorithm (or of an operation on a data structure), expressed as a function of the length of the input, is known as time complexity.
• When an algorithm or data structure runs on your computer, it needs some amount of space/memory. The total memory space needed for its execution is known as space complexity.
• Along with the above, some other characteristics that algorithms should follow are modularity, correctness, maintainability, functionality, robustness, user-friendliness, simplicity, extensibility and reliability.
Introduction to Arrays
• Arrays form an important part of almost all programming languages.
• An array provides a powerful feature and can be used as such, or it can be used to form complex data structures like stacks and queues.
• An array can be defined as a finite collection of homogeneous (same data type) elements.
• This means that an array can store all integers, all floating-point numbers, all characters or any other complex data type, but all of the same type.
• Arrays are always stored in consecutive memory locations.
• The array name actually acts as a pointer to the first location of the memory block allocated to the array.

Types of Arrays
There are three types of arrays:
• One-dimensional arrays
• Two-dimensional arrays
• Multi-dimensional arrays

Brick analogy:
• 2-D: You have 80 bricks. You lay them in rows of 10, each row adjacent to the other. It starts with brick 0 of row 0 and ends with brick 9 of row 7, giving a 10x8 array of bricks.
• 3-D: You have 720 bricks. You lay them in rows of 10, each row adjacent to the other, creating the same 10x8 flat of bricks as in the 2-D example. Now do this 8 more times with the flats of bricks behind each other, creating 9 stacked flats and a block of bricks 10x8x9.
One-Dimensional Array
• A one-dimensional array is one in which only one subscript specification is needed to specify a particular element of the array.
• A one-dimensional array can be declared as follows:
    data_type array_name[size];
• The data type is the type of elements to be stored in the array.
• The array name specifies the name of the array; it may be given any name, like other simple variables.
• The size specifies the number of values to be stored in the array; it must be an integer value.
• The index of the array starts from 0 onwards.
• Declaration of a one-dimensional array:
    int num[5];
• Initialization of a one-dimensional array:
    int arr[5] = {10, 20, 30};
  Here, the initializers must be constant values at compile time. arr[0] to arr[2] are assigned values, while the other elements are assigned zero by default.
• The array num will store five integer values. It can be visualized as shown:
    index    data
    num[0]   10
    num[1]   20
    num[2]   30
    num[3]   40
    num[4]   50
  size of array = (upper bound - lower bound) + 1 = (4 - 0) + 1 = 5

One-Dimensional Array: Example
    #include <stdio.h>
    int main()
    {
        int arr[5] = {10, 20, 30, 40, 50};
        int i;
        for (i = 0; i < 5; i++)
        {
            printf("\n[%d] value is: %d", i, arr[i]);
            printf("\n[%d] address is: %p", i, (void *)&arr[i]);
        }
        return 0;
    }
Two-Dimensional Array
• A two-dimensional array can be thought of as a grid.
• A two-dimensional array can be declared as follows:
    data_type array_name[row size][column size];
• The first number in brackets is the number of rows, and the second number in brackets is the number of columns.
• So the upper-left corner of any grid is element [0][0].
• Declaration of a two-dimensional array:
    int arr[3][3];
• Pictorial representation of this 2-D array:
            0          1          2
    0   Arr[0][0]  Arr[0][1]  Arr[0][2]
    1   Arr[1][0]  Arr[1][1]  Arr[1][2]
    2   Arr[2][0]  Arr[2][1]  Arr[2][2]
• Initialization of a two-dimensional array:
    int arr[2][3] = { {10,20,30}, {40,50,60} };
• The data laid out in the 2-D grid:
         0    1    2
    0   10   20   30
    1   40   50   60

Two-Dimensional Array: Example
    #include <stdio.h>
    int main()
    {
        int arr[2][3] = { {10,20,30}, {40,50,60} };
        int i, j;
        for (i = 0; i < 2; i++) {
            printf("\n");
            for (j = 0; j < 3; j++) {
                printf("%d ", arr[i][j]);
            }
        }
        return 0;
    }
Two-Dimensional Arrays
For example, a two-dimensional array consists of a certain number of rows and columns:
    const int NUMROWS = 3;
    const int NUMCOLS = 7;
    int Array[NUMROWS][NUMCOLS];

         0   1   2   3   4   5   6
    0    4  18   9   3  -4   6   0
    1   12  45  74  15   0  98   0
    2   84  87  75  67  81  85  79

    Array[2][5] -> 3rd value in the 6th column
    Array[0][4] -> 1st value in the 5th column
The declaration must specify the number of rows and the number of columns, and both must be constants.

Implementation of a Two-Dimensional Array in Memory
A two-dimensional array can be implemented in two ways:
• Row-major implementation
• Column-major implementation

Two-Dimensional Arrays: Row-major Implementation
• Row-major implementation is a linearization technique in which the elements of the array are stored row-wise: the complete first row is stored, then the complete second row, and so on.
• For example, a [3][3] array is stored in memory as shown below:
    A00 A01 A02 A10 A11 A12 A20 A21 A22
• The storage can be clearly understood by arranging the array as a matrix:
            Col 1  Col 2  Col 3
    Row 1:  A00    A01    A02
    Row 2:  A10    A11    A12
    Row 3:  A20    A21    A22
Row-major Implementation: Address of Elements
• The computer does not keep track of the addresses of all elements of the array; rather, it keeps a base address and calculates the address of a required element when needed.
• The address is calculated by the following relation:
    Address of element a[i][j] = B + W*(n*(i - l1) + (j - l2))
• Here, B = base address
• W = size of each array element
• n = the number of columns, i.e. (u2 - l2 + 1)
• l1 is the lower bound of the row index and l2 is the lower bound of the column index
• u1 is the upper bound of the row index and u2 is the upper bound of the column index

Row Major Ordering, Example 1: Suppose we want to calculate the address of element A[1][2] in a matrix of size [2][4], with base address 1000 and W = 2.
It can be calculated as follows:
    B = 1000, W = 2, m = 2, n = 4, i = 1, j = 2, l1 = 0, l2 = 0
    Address of a[i][j]  = B + W*(n*(i - l1) + (j - l2))
    Address of A[1][2]  = 1000 + 2*[4*(1 - 0) + (2 - 0)]
                        = 1000 + 2*[4 + 2]
                        = 1000 + 2*6
                        = 1000 + 12
                        = 1012
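The same calculation can be written as a tiny C helper; this is just an illustrative sketch (the function name is ours, not from the slides) that re-checks Example 1 above:

    #include <stdio.h>

    /* Row-major address: B + W*(n*(i - l1) + (j - l2)),
       where n is the number of columns. */
    long addr_row_major(long B, long W, long n,
                        long i, long j, long l1, long l2) {
        return B + W * (n * (i - l1) + (j - l2));
    }

    int main(void) {
        /* Example 1: A[2][4], B = 1000, W = 2, element A[1][2] */
        printf("Address of A[1][2] = %ld\n",
               addr_row_major(1000, 2, 4, 1, 2, 0, 0));   /* prints 1012 */
        return 0;
    }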
Row Major Ordering, Example 2: Consider an array that stores the marks of 30 students in 4 subjects. Rows represent subjects and columns represent students. The base address of the array is 2500 and W = 2. The lower bound for row and column is 0.
a. Compute the address of marks[3][5] in row-major order.
It can be calculated as follows:
    B = 2500, W = 2, m = 4 (rows), n = 30 (columns), i = 3, j = 5, l1 = 0, l2 = 0
    LOC a[i][j]        = B + W*(n*(i - l1) + (j - l2))
    LOC marks[3][5]    = 2500 + 2*[30*(3 - 0) + (5 - 0)]
                       = 2500 + 2*[90 + 5]
                       = 2500 + 2*95
                       = 2500 + 190 = 2690
Row Major Ordering, Example 3: A two-dimensional array defined as a[4:7, -1:3] requires 2 bytes of storage space for each element. If the array is stored in row-major form, calculate the address of the element at location a[6][2], given that the base address is 100.
Solution:
    B = 100, l1 = 4, l2 = -1, u1 = 7, u2 = 3, W = 2, i = 6, j = 2
    n (number of columns) = (u2 - l2 + 1) = 3 - (-1) + 1 = 5
    Address of a[i][j]  = B + W*(n*(i - l1) + (j - l2))
    Address of a[6][2]  = 100 + 2*[5*(6 - 4) + (2 - (-1))]
                        = 100 + 2*[5*2 + 3]
                        = 100 + 26
                        = 126

Two-Dimensional Arrays: Column-major Implementation
• In column-major implementation, memory allocation is done column by column: the elements of the complete first column are stored first, then the elements of the complete second column, and so on.
• For example, a [3][3] array is stored in memory as shown below:
    A00 A10 A20 A01 A11 A21 A02 A12 A22
    (Col 1)      (Col 2)      (Col 3)
Column-major Implementation
• The storage can be clearly understood by arranging the array as a matrix:
            Col 1  Col 2  Col 3
    Row 1:  A00    A01    A02
    Row 2:  A10    A11    A12
    Row 3:  A20    A21    A22
• Address of elements in column-major implementation; the address is calculated by the following relation:
    Address of element a[i][j] = B + W*(m*(j - l2) + (i - l1))
• Here, B = base address
• W = size of each array element
• m = the number of rows, i.e. (u1 - l1 + 1)
• l1 is the lower bound of the row index, l2 is the lower bound of the column index
• u1 is the upper bound of the row index, u2 is the upper bound of the column index

Column Major Ordering, Example 1: Consider an array that stores the marks of 30 students in 4 subjects. Rows represent subjects and columns represent students. The base address of the array is 2500 and W = 2. The lower bound for row and column is 0.
Compute the address of marks[2][20] in column-major order.
It can be calculated as follows:
    B = 2500, W = 2, m = 4, n = 30, i = 2, j = 20
    LOC A[i][j]         = B + W*(m*(j - l2) + (i - l1))
    LOC marks[2][20]    = 2500 + 2*[4*20 + 2]
                        = 2500 + 2*[80 + 2]
                        = 2500 + 2*82
                        = 2500 + 164 = 2664

Column Major Ordering, Example 2: A two-dimensional array defined as a[-20:20, 10:35] requires one byte of storage space for each element. If the array is stored in column-major form, calculate the address of the element at location a[0][30], given that the base address is 500.
Solution:
    B = 500, l1 = -20, l2 = 10, u1 = 20, u2 = 35, W = 1 byte, i = 0, j = 30
    m (number of rows) = u1 - l1 + 1 = 20 - (-20) + 1 = 41
    Address of a[i][j]   = B + W*(m*(j - l2) + (i - l1))
    Address of a[0][30]  = 500 + 1*(41*(30 - 10) + (0 - (-20)))
                         = 500 + 1*(41*20 + 20)
                         = 500 + 1*(820 + 20)
                         = 500 + 840
                         = 1340
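A matching helper for the column-major formula, again just an illustrative sketch (not from the slides), re-checking the a[0][30] example above:

    #include <stdio.h>

    /* Column-major address: B + W*(m*(j - l2) + (i - l1)),
       where m is the number of rows. */
    long addr_col_major(long B, long W, long m,
                        long i, long j, long l1, long l2) {
        return B + W * (m * (j - l2) + (i - l1));
    }

    int main(void) {
        /* Example 2: a[-20:20, 10:35], W = 1, B = 500, element a[0][30], m = 41 */
        printf("Address of a[0][30] = %ld\n",
               addr_col_major(500, 1, 41, 0, 30, -20, 10));   /* prints 1340 */
        return 0;
    }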
Two-Dimensional Arrays: Mixed Example
An array X[-15:10, 15:40] requires one byte of storage. If the beginning location is 1500, determine the location of X[15][20].
Solution: the number of rows and columns are not given directly, so they are calculated as:
    Number of rows    M = (u1 - l1 + 1) = [10 - (-15)] + 1 = 26
    Number of columns N = (u2 - l2 + 1) = [40 - 15] + 1 = 26
(i) Column-major calculation:
    B = 1500, W = 1 byte, I = 15, J = 20, l1 = -15, l2 = 15, M = 26
    Address of A[I][J] = B + W*[M*(J - l2) + (I - l1)]
                       = 1500 + 1*[26*(20 - 15) + (15 - (-15))]
                       = 1500 + 1*[26*5 + 30] = 1500 + 160 = 1660
(ii) Row-major calculation:
    B = 1500, W = 1 byte, I = 15, J = 20, l1 = -15, l2 = 15, N = 26
    Address of A[I][J] = B + W*[N*(I - l1) + (J - l2)]
                       = 1500 + 1*[26*(15 - (-15)) + (20 - 15)]
                       = 1500 + 1*[26*30 + 5] = 1500 + 785 = 2285

Higher-Dimensional Arrays
• An array can be declared with multiple dimensions (2-dimensional, 3-dimensional, and so on), for example:
    double Coord[100][100][100];
• Multiple dimensions get difficult to visualize graphically.
Multi-Dimensional Array: Address Calculation
Row Major Order:
To calculate the address of an element stored in a 3-D array in row-major order, the following formula is used:
    Address of A[i][j][k] = B + W*(m*n*(i - l1) + n*(j - l2) + (k - l3))
Column Major Order:
To calculate the address of an element stored in a 3-D array in column-major order, the following formula is used:
    Address of A[i][j][k] = B + W*(m*n*(i - l1) + m*(k - l3) + (j - l2))
Where:
    i  = block subscript of the element whose address is to be found
    j  = row subscript of the element whose address is to be found
    k  = column subscript of the element whose address is to be found
    B  = base address
    W  = size of the data type stored in the array
    m  = total number of rows in the array
    n  = total number of columns in the array
    l1 = lower bound of the row index
    l2 = lower bound of the column index
    l3 = lower bound of the width (third) index
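As a small worked illustration of the 3-D row-major formula, here is a short sketch; the array bounds and values are made up for the example, not taken from the slides:

    #include <stdio.h>

    /* Row-major 3-D address: B + W*(m*n*(i - l1) + n*(j - l2) + (k - l3)),
       where m = number of rows and n = number of columns per block. */
    long addr3d_row_major(long B, long W, long m, long n,
                          long i, long j, long k,
                          long l1, long l2, long l3) {
        return B + W * (m * n * (i - l1) + n * (j - l2) + (k - l3));
    }

    int main(void) {
        /* Hypothetical example: A[2][3][4] (blocks x rows x columns),
           base address 1000, element size 2, all lower bounds 0. */
        long addr = addr3d_row_major(1000, 2, 3, 4, 1, 2, 3, 0, 0, 0);
        printf("Address of A[1][2][3] = %ld\n", addr);  /* 1000 + 2*(12 + 8 + 3) = 1046 */
        return 0;
    }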
Formula Summary
    2D, row major:    A[i][j]    = B + W*(n*(i - l1) + (j - l2))
    2D, column major: A[i][j]    = B + W*(m*(j - l2) + (i - l1))
    3D, row major:    A[i][j][k] = B + W*(m*n*(i - l1) + n*(j - l2) + (k - l3))
    3D, column major: A[i][j][k] = B + W*(m*n*(i - l1) + m*(k - l3) + (j - l2))

Algorithm Analysis
Algorithm
• An algorithm is a set of instructions to be followed to solve a problem.
• There can be more than one solution (more than one algorithm) to a given problem.
• An algorithm can be implemented using different programming languages on different platforms.
• An algorithm must be correct: it should correctly solve the problem. For sorting, for example, this means even if (1) the input is already sorted, or (2) it contains repeated elements.
• Once we have a correct algorithm for a problem, we have to determine the efficiency of that algorithm.

Motivations for Complexity Analysis
• There are often many different algorithms which can be used to solve the same problem.
• Thus, it makes sense to develop techniques that allow us to:
  - compare different algorithms with respect to their "efficiency"
  - choose the most efficient algorithm for the problem
• The efficiency of any algorithmic solution to a problem is a measure of:
  - Time efficiency: the time it takes to execute.
  - Space efficiency: the space (primary or secondary memory) it uses.
• We will focus on an algorithm's efficiency with respect to time.
Algorithmic Performance
There are two aspects of algorithmic performance:
• Time
  - Instructions take time.
  - How fast does the algorithm perform?
  - What affects its runtime?
• Space
  - Data structures take space.
  - What kind of data structures can be used?
  - How does the choice of data structure affect the runtime?
We will focus on time:
• How to estimate the time required for an algorithm
• How to reduce the time required

Analysis of Algorithms
• Analysis of algorithms is the area of computer science that provides tools to analyze the efficiency of different methods of solution.
• How do we compare the time efficiency of two algorithms that solve the same problem?
Analysis of Algorithms (cont.)
• Naive approach: implement the algorithms in a programming language (C/C++), and run them to compare their time requirements.
• Instead, when we analyze algorithms, we should employ mathematical techniques that analyze algorithms independently of specific implementations, computers, or data.
• To analyze algorithms:
  - First, we count the number of significant operations in a particular solution to assess its efficiency.
  - Then, we express the efficiency of the algorithm using growth functions.
General Rules for Estimation
• Loops: the running time of a loop is at most the running time of the statements inside the loop times the number of iterations.
• Nested loops: the running time of a statement in the innermost loop is the running time of that statement multiplied by the product of the sizes of all the loops.
• If/else/case: never more than the running time of the test plus the larger of the running times of the branches.
• Consecutive statements: just add the running times of the consecutive statements.

The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost: each operation takes a certain amount of time.
    count = count + 1;   takes a certain amount of time, but it is constant.
A sequence of operations:
    count = count + 1;     Cost: c1
    sum = sum + count;     Cost: c2
    Total cost = c1 + c2
The Execution Time of Algorithms: Simple If-Statement
                        Cost    Times
    if (n < 0)          c1      1
        absval = -n;    c2      1
    else
        absval = n;     c3      1
    Total cost <= c1 + max(c2, c3)

Simple Complexity Analysis: Loops
• We start by considering how to count operations in for-loops.
• First of all, we should know the number of iterations of the loop; say it is x.
• Then the loop condition is executed x + 1 times.
• Each of the statements in the loop body is executed x times.
• The loop-index update statement is executed x times.
The Execution Time of Algorithms (cont.)
A single loop:
    #include <stdio.h>
    int main()
    {
        int i = 0;
        int n = 5;
        for (i = 0; i <= n; i++)
        {
            printf("\n The value of i inside the loop is %d", i);
        }
        printf("\n\n The value of i outside the loop is %d", i);
        return 0;
    }

A doubly nested loop:
    #include <stdio.h>
    int main()
    {
        int i = 0;
        int j = 0;
        int n = 5;
        int count = 0;
        for (i = 0; i <= n; i++) {
            for (j = 0; j <= n; j++) {
                count = count + 1;
            }
        }
        printf("\nExecution count of loop is %d", count);
        return 0;
    }
The Execution Time of Algorithms (cont.)
A triply nested loop:
    #include <stdio.h>
    int main()
    {
        int i = 0;
        int j, k = 0;
        int n = 2;
        int count = 0;
        for (i = 0; i <= n; i++) {
            for (j = 0; j <= n; j++) {
                for (k = 0; k <= n; k++) {
                    count = count + 1;
                    printf("\nExecution count = %d, i = %d, j = %d, k = %d", count, i, j, k);
                }
                printf("\n");
            }
            printf("\n");
        }
        printf("\nExecution count of loop is %d", count);
        return 0;
    }

Example: Simple Loop
                          Cost    Times
    i = 0;                c1
    sum = 0;              c2
    while (i <= n) {      c3
        i = i + 1;        c4
        sum = sum + i;    c5
    }
    Total cost = ??
The Execution Time of Algorithms (cont.)
Example: Simple Loop
                          Cost    Times
    i = 0;                c1      1
    sum = 0;              c2      1
    while (i <= n) {      c3      n+1
        i = i + 1;        c4      n
        sum = sum + i;    c5      n
    }
    Total cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
The time required for this algorithm is proportional to n.

The corresponding program:
    #include <stdio.h>
    int main()
    {
        int i = 0;
        int n = 5;
        printf("\n i \t n");
        while (i <= n) {
            i = i + 1;
            printf("\n %d\t %d", i, n);
        }
        return 0;
    }
The Execution Time of Algorithms (cont.)
Example: Nested Loop
                              Cost    Times
    i = 0;                    c1      1
    sum = 0;                  c2      1
    while (i <= n) {          c3      n+1
        j = 0;                c4      n
        while (j <= n) {      c5      n*(n+1)
            sum = sum + i;    c6      n*n
            j = j + 1;        c7      n*n
        }
        i = i + 1;            c8      n
    }
    Total cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
The time required for this algorithm is proportional to n².
Simple Assignment Time Example
    // Input: int A[N], array of N integers
    // Output: sum of all numbers in array A
    int Sum(int A[], int N) {
        int s = 0;                    // operation 1
        for (int i = 0; i < N; i++)   // operations 2, 3, 4
            s = s + A[i];             // operations 5, 6, 7
        return s;                     // operation 8
    }
Counting the operations:
• 1, 2, 8: executed once each.
• 3, 5, 6, 7: executed once per iteration of the for loop, i.e. N times each.
• 4 (the comparison/update of the loop index): executed N + 1 times.
Total = 3*(1) + 4*(N) + (N + 1) = 3 + 4N + N + 1 = 5N + 4.
The complexity function of the algorithm is f(N) = 5N + 4, which is O(N).
Complexity Analysis: Loop Example
Find the exact number of basic operations in the following program fragment:
    double x, y;
    x = 2.5; y = 3.0;
    for (int i = 0; i < n; i++) {
        a[i] = x * y;
        x = 2.5 * x;
        y = y + a[i];
    }
• There are 2 assignments to double variables outside the loop => 2 operations.
• The for loop comprises:
  - one int variable assignment (i = 0) => 1 operation
  - the loop test i < n, executed n + 1 times => n + 1 operations
  - the increment i++, executed n times => 2*n operations
  - the loop body, which has three assignments, two multiplications, one addition and 2 array accesses, and iterates n times => 8*n operations
• Thus the total number of basic operations is
    8*n + 2*n + (n + 1) + 1 + 2 = 11n + 4, which is O(n).
Complexity Analysis: Loop Example
Find the exact number of basic operations in the following program fragment:
    double x, y;
    x = 2.5; y = 3.0;
    for (int i = 0; i <= n; i++) {
        a[i] = x * y;
        x = 2.5 * x;
        y = y + a[i];
    }
• There are 2 assignments to double variables outside the loop => 2 operations.
• The for loop comprises:
  - one int variable assignment (i = 0) => 1 operation
  - the loop test i <= n, executed n + 2 times => n + 2 operations
  - the increment i++, executed n + 1 times => 2*(n + 1) operations
  - the loop body (8 basic operations as before), which now iterates n + 1 times => 8*(n + 1) operations
• Thus the total number of basic operations is
    8*(n + 1) + 2*(n + 1) + (n + 2) + 1 + 2 = 11n + 15, which is still O(n).
How to Compare Formulas?
Which is better:
    50N² + 31N³ + 24N + 15    or    3N² + N + 21 + 4*3^N
The answer depends on the value of N:
    N     50N² + 31N³ + 24N + 15     3N² + N + 21 + 4*3^N
    1     120                        37
    2     511                        71
    3     1374                       159
    4     2895                       397
    5     5260                       1073
    6     8655                       3051
    7     13266                      8923
    8     19279                      26465
    9     26880                      79005
    10    36255                      236527
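The comparison table above can be reproduced with a short C program; this is a minimal sketch (not part of the original slides) showing that the cubic polynomial is larger for small N, while the 4*3^N term eventually dominates everything else:

    #include <stdio.h>

    int main(void) {
        long long pow3 = 3;   /* 3^N, updated at the end of each iteration */
        printf("%3s %22s %22s\n", "N", "50N^2+31N^3+24N+15", "3N^2+N+21+4*3^N");
        for (long long N = 1; N <= 10; N++) {
            long long f1 = 50*N*N + 31*N*N*N + 24*N + 15;
            long long f2 = 3*N*N + N + 21 + 4*pow3;
            printf("%3lld %22lld %22lld\n", N, f1, f2);
            pow3 *= 3;
        }
        return 0;
    }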
What is the contribution of a single term?
For f(N) = 3N² + N + 21 + 4*3^N, the 4*3^N term quickly accounts for almost all of the value:
    N     3N²+N+21+4*3^N    4*3^N      % of total
    1     37                12         32.4
    2     71                36         50.7
    3     159               108        67.9
    4     397               324        81.6
    5     1073              972        90.6
    6     3051              2916       95.6
    7     8923              8748       98.0
    8     26465             26244      99.2
    9     79005             78732      99.7
    10    236527            236196     99.9
• One term dominates the sum as N grows.

Order of Magnitude Analysis
Measure speed with respect to the part of the sum that grows quickest:
    50N² + 31N³ + 24N + 15  is dominated by the 31N³ term
    3N² + N + 21 + 4*3^N    is dominated by the 4*3^N term
Ordering of common growth rates:
    1 < log₂N < √N < N < N·log₂N < N·√N < N² < N³ < 2^N < 3^N < N!
Input Size Impact
What happens if we double the input size N?
    N      log₂N    5N      N·log₂N    N²       2^N
    8      3        40      24         64       256
    16     4        80      64         256      65536
    32     5        160     160        1024     ~10^9
    64     6        320     384        4096     ~10^19
    128    7        640     896        16384    ~10^38
    256    8        1280    2048       65536    ~10^76

Simple Example for Growth of 5N+3
Estimated running time (in steps) for different values of N:
    N = 10        =>  53 steps
    N = 100       =>  503 steps
    N = 1,000     =>  5,003 steps
    N = 1,000,000 =>  5,000,003 steps
As N grows, the number of steps grows in linear proportion to N for this function Sum.
What Dominates in the Previous Example 5N+3?
What about the +3 and the 5 in 5N+3?
• As N gets large, the +3 becomes insignificant.
• The 5 is inaccurate, since different operations require varying amounts of time, and it has no significant importance either.
What is fundamental is that the time is linear in N.
Asymptotic complexity: as N gets large, concentrate on the highest-order term:
• Drop lower-order terms such as +3.
• Drop the constant coefficient of the highest-order term, i.e. the 5 multiplying N.

Common Growth Rates
    f(n)       Classification
    1          Constant: run time is fixed and does not depend upon n. Most instructions are executed once, or only a few times, regardless of the amount of information being processed.
    log n      Logarithmic: when n increases, so does run time, but much more slowly. Common in programs which solve large problems by transforming them into smaller problems. Example: binary search.
    n          Linear: run time varies directly with n. Typically, a small amount of processing is done on each element. Example: linear search.
    n log n    When n doubles, run time slightly more than doubles. Common in programs which break a problem down into smaller sub-problems, solve them independently, then combine the solutions. Example: merge sort.
    n²         Quadratic: when n doubles, run time increases fourfold. Practical only for small problems; typically the program processes all pairs of the input (e.g. in a doubly nested loop). Example: insertion sort.
    n³         Cubic: when n doubles, run time increases eightfold. Example: matrix multiplication.
    2^n        Exponential: when n doubles, run time squares. This is often the result of a natural "brute force" solution. Example: brute-force search.
Typical Functions – "Grades"
    Function      Common Name
    N!            Factorial
    2^N           Exponential
    N^d, d > 3    Polynomial
    N³            Cubic
    N²            Quadratic
    N√N           N square-root N
    N log N       Log-linear
    N             Linear
    √N            Root-n
    log N         Logarithmic
    1             Constant
Running time grows "quickly" with more input towards the top of the list, and "slowly" with more input towards the bottom.

Running Times
Assume N = 100,000 and a processor speed of 1,000,000,000 operations per second:
    Function    Running Time
    2^N         3.2 x 10^30,086 years
    N^4         3171 years
    N³          11.6 days
    N²          10 seconds
    N√N         0.032 seconds
    N log N     0.0017 seconds
    N           0.0001 seconds
    √N          3.2 x 10^-7 seconds
    log N       1.2 x 10^-8 seconds
Time Complexity Analysis

M 0 U R
 There are different types of time complexities which can be analysed for an
algorithm:

PI 30 CT PU
• Best Case Time Complexity: (Omega (Ω)-notation)

E
• Average Case Time Complexity: (Theta (Θ)-notation)

PA 1 R
Worst Case Time Complexity: (Big-O (O)-notation)

IT T U G A
,N

AR
89 90

TA M
R

LK
OE
S T
Best Case Time Complexity: Big-Ω
• It is a measure of the minimum time that the algorithm will require for an input of size n.
• The running time of many algorithms varies not only for inputs of different sizes but also for different inputs of the same size.
• For example, the running time of some sorting algorithms depends on the ordering of the input data.
• Therefore, if the input data of n items is presented in sorted order, the operations performed by the algorithm will take the least time.
• Examples: searching for an element and finding it on the first comparison; sorting a list that is already in sorted order.

Average Case Time Complexity: Big-Θ
• The time that an algorithm requires to execute typical input data of size n is known as the average-case time complexity.
• The value obtained by averaging the running time of the algorithm over all possible inputs of size n determines the average-case time complexity.
• Best case depends on the input.
• Average case is difficult to compute.
• So we usually focus on worst-case analysis:
  - Easier to compute
  - Usually close to the actual running time
  - Crucial to real-time systems
Worst Case Time Complexity: Big-O
• It is a measure of the maximum time that the algorithm will require for an input of size n.
• For example, if the n input items are supplied in reverse order to a simple sorting algorithm, the algorithm will require on the order of n² operations to sort them in ascending order; this corresponds to the worst-case time complexity of the algorithm.

Exercise
    #include <stdio.h>
    int main() {
        int n = 3;
        for (int i = 0; i <= (n*n); i++)
            printf("\n Count %d", i);
        return 0;
    }
This code has a complexity of O(n²) even though it has a single simple loop.
Lesson learnt: complexity is not always directly dependent on the number of loops.
Exercise
    #include <stdio.h>
    int main() {
        int i, j, n = 3;
        for (i = 0; i*i <= n; i++) {
            for (j = 0; j*j <= n; j++) {
                printf("\nValue of i is %d, j is %d, n is %d", i, j, n);
                break;
                printf("\n \tValue of i is %d, j is %d, n is %d\n", i, j, n);
            }
        }
        return 0;
    }
This code has a complexity of O(n) even though it has nested loops.
Lesson learnt: complexity is not always directly dependent on the number of loops.

Exercise
    #include <stdio.h>
    int main()
    {
        int i, j, n = 3;
        for (i = 0; i <= n; i++) {
            for (j = 0; j <= n; j++) {
                printf("\n Value is i = %d, j = %d", i, j);
            }
            printf("\n");
        }
        printf("\n Final Value is i = %d, j = %d", i, j);
        return 0;
    }
This code has a complexity of O(n²), since both nested loops run n + 1 times, so the inner body executes (n + 1)² times.
Consider f(n) = 3n + 2 Consider f(n) = 10n2 + 4n + 2
Let us take g(n) = n and c = 4, n0 = ? Let us take g(n) = n2

M 0 U R
Let us check the above condition f(n) ≤ cg(n) c = 11, n0 = ?
3n + 2 ≤ 4n 0 + 2 ≤ 0 False Let us check the above condition f(n) ≤ cg(n)

PI 30 CT PU
n0 = 0

E
n0 = 1 3n + 2 ≤ 4n 3 + 2 ≤ 4 False 10n2 + 4n + 2 ≤ 11n2 for all n ≥ ?
3n + 2 ≤ 4n 6 + 2 ≤ 8 True

PA 1 R
n0 = 2
3n + 2 ≤ 4n for all n ≥ 2 10n2 + 4n + 2 ≤ 11n2 for all n ≥ 5

IT T U G
The condition is satisfied. The condition is satisfied.

A
Hence f(n) = O(n). Hence f(n) = O(n2).

,N
The function f(n) = O(g(n)) if and only if there exists positive The function f(n) = O(g(n)) if and only if there exists positive

AR
constants c and n0 such that f(n) ≤ cg(n) for all n ≥ n0. constants c and n0 such that f(n) ≤ cg(n) for all n ≥ n0.
97 98

TA M
R

LK
OE
S T
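The two constant-and-threshold checks above can also be verified mechanically; this short sketch (not part of the original slides) prints the first n from which each inequality holds:

    #include <stdio.h>

    int main(void) {
        int n;

        /* f(n) = 3n + 2, g(n) = n, c = 4: expect n0 = 2 */
        for (n = 0; n <= 100; n++)
            if (3*n + 2 <= 4*n) {
                printf("3n+2 <= 4n from n = %d onwards\n", n);
                break;
            }

        /* f(n) = 10n^2 + 4n + 2, g(n) = n^2, c = 11: expect n0 = 5 */
        for (n = 0; n <= 100; n++)
            if (10*n*n + 4*n + 2 <= 11*n*n) {
                printf("10n^2+4n+2 <= 11n^2 from n = %d onwards\n", n);
                break;
            }
        return 0;
    }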
Worst Case Time Complexity
(Figure: examples of f(n) and g(n).)

Best Case Time Complexity: Examples
Consider f(n) = 3n + 2. Let us take g(n) = n and c = 3. What is n0?
Check the condition f(n) ≥ c·g(n):
    3n + 2 ≥ 3n for all n ≥ 1.
The condition is satisfied. Hence f(n) = Ω(n).
The function f(n) = Ω(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
Consider f(n) = 10n² + 4n + 2. Let us take g(n) = n² and c = 10. What is n0?
Check the condition f(n) ≥ c·g(n):
    10n² + 4n + 2 ≥ 10n² for all n ≥ 0.
The condition is satisfied. Hence f(n) = Ω(n²).
The function f(n) = Ω(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.

Worst, Average, and Best-case Complexity
• Best-case complexity
  - The minimum number of steps for any possible input.
  - Used to analyse an algorithm under optimal conditions.
• Average-case complexity
  - The average of the running times over all possible inputs.
  - It specifies the expected behaviour of the algorithm when the input is randomly drawn from a given distribution.
  - Demands a definition of the probability of each input, which is usually difficult to provide and to analyze.
• Worst-case complexity
  - The maximum number of steps the algorithm takes for any possible input.
  - It is an upper bound on the running time for any input.
  - The most tractable measure.
Asymptotic Notation
• O notation: asymptotic "less than": f(n) = O(g(n)) implies f(n) "≤" g(n).
• Ω notation: asymptotic "greater than": f(n) = Ω(g(n)) implies f(n) "≥" g(n).
• Θ notation: asymptotic "equality": f(n) = Θ(g(n)) implies f(n) "=" g(n).

Why is it called Asymptotic Analysis?
• From the discussion of all three notations (worst case, best case and average case), we can see that in every case, for a given function f(n), we are trying to find another function g(n) which approximates f(n) at higher values of n.
• That means g(n) is a curve which approximates f(n) at higher values of n.
• In mathematics such a curve is called an asymptotic curve.
• In other terms, g(n) is the asymptotic curve for f(n).
• For this reason, we call algorithm analysis asymptotic analysis.

26
Big-Ω and Big-Θ Visualization
(Figures: graphical illustration of the Ω and Θ bounds.)
The function f(n) = Ω(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
f(n) = Θ(g(n)) if and only if there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Big-O Visualization
(Figure: graphical illustration of the O bound.)
The function f(n) = O(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.

Comparing Functions: Asymptotic Notation
• Big Oh notation: upper bound
• Omega notation: lower bound
• Theta notation: tighter bound
Multi-Dimensional Array
• C allows arrays of three or more dimensions; the exact limit is determined by the compiler.
• The general form of a multi-dimensional array is:
    type array_name[s1][s2][s3]...[sm];
• A 3-D array can be thought of as a group of arrays of arrays. For example:
    int a[3][3][2];
  Here the 3-D array is a collection of three 2-D arrays, each containing 3 rows and 2 columns.
• For example:
    int a[3][3][2] = { { {1,2}, {3,4}, {5,6} },
                       { {1,2}, {3,4}, {5,6} },
                       { {1,2}, {3,4}, {5,6} } };
• Memory representation of the above 3-D array:
    1 2 3 4 5 6 | 1 2 3 4 5 6 | 1 2 3 4 5 6
    (0th 2-D array) (1st 2-D array) (2nd 2-D array)
Passing Arrays To Functions: One-Dimensional Example
    void show(int a[], int s)
    {
        int i;
        for (i = 0; i < s; i++)
        {
            printf("\n[%d] value is: %d", i, a[i]);
        }
    }
    int main()
    {
        int arr[5] = {10, 20, 30, 40, 50};
        show(arr, 5);
        return 0;
    }

Passing Arrays To Functions: Two-Dimensional Example
    void show(int a[][3], int r, int c)
    {
        int i, j;
        for (i = 0; i < r; i++)
        {
            printf("\n");
            for (j = 0; j < c; j++)
            {
                printf("%d ", a[i][j]);
            }
        }
    }
Two-Dimensional Arrays:Example Pointer & Arrays

M 0 U R
void main() • An array is a collection of homogeneous type of element stored in
adjacent memory location.

PI 30 CT PU
{

E
int arr[2][3]={ {10,20,30}, {40,50,60} }; • Therefore it is used in the situation to store more than one value at a
time in a single variable.
show(arr,2,3);

PA 1 R
• Array & pointer have a close link in fact array in itself acts like a

IT T U G
} pointer, as one element in a array is stored adjacent to the previous.
• Therefore which displaying the elements of array the next adjacent

A
address is automatically called when its previous element is displayed.

,N

AR
113 114

TA M
R

LK
OE
S T
RC

Pointers & Arrays (cont.)
• When pointers are used with arrays, an element can also be accessed through the pointer from whatever location it is stored at, whenever it is needed.
• Two points must be remembered:
  - Array elements are always stored in contiguous memory locations.
  - The increment or decrement of a pointer increments or decrements the address according to the type of the pointer.

Pointer & One-Dimensional Array
• A pointer together with a single-dimensional array can be used either to access a single element or to access the whole array.
• A given pointer only points to one particular type, not to all possible types.
Pointer & One-Dimensional Array: Example
    #include <stdio.h>
    int main()
    {
        int *p, i;
        int arr[5] = {10, 20, 30, 40, 50};
        p = &arr[0];
        for (i = 0; i < 5; i++)
        {
            printf("\n element = %d", *p);
            p++;
        }
        return 0;
    }

Pointer & Two-Dimensional Array
• A pointer is used with two-dimensional arrays in the same way as with a single dimension.
• The address of the first element of the matrix can be assigned to the pointer, and the rest of the elements can then be displayed.
• For example:
    int a[3][2] = { {10,20}, {30,40}, {50,60} };
The array initialized above will be stored in memory in the following arrangement.
Pointer & Two-Dimensional Array (cont.)
• The array above is stored in memory in the following arrangement (addresses shown for a 2-byte int):
    Elements:         10        20        30        40        50        60
    Matrix location:  A[0][0]   A[0][1]   A[1][0]   A[1][1]   A[2][0]   A[2][1]
    Address:          1001      1003      1005      1007      1009      1011
• The program using a pointer to handle the 2-D array is shown in the accompanying word file.
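Since that program is not reproduced here, the following is a minimal sketch (our own, not the original word-file program) of how a single int pointer can walk through the 2-D array element by element in row-major order:

    #include <stdio.h>

    int main(void) {
        int a[3][2] = { {10, 20}, {30, 40}, {50, 60} };
        int *p = &a[0][0];          /* points to the first element of the block */
        int i;

        /* The 3x2 array occupies 6 consecutive int locations (row-major),
           so incrementing the pointer visits 10 20 30 40 50 60 in order. */
        for (i = 0; i < 3 * 2; i++) {
            printf("element = %d at address %p\n", *p, (void *)p);
            p++;
        }
        return 0;
    }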
Array of Pointers
• An array of pointers is a homogeneous collection of pointers, that is, a collection of pointers of the same data type.
• Pointers are declared in array form in the same way as other data types.
• For example, an integer pointer array of size 50 can be declared as:
    int *ptr[50];
• A pointer array may also be used with more than one array of the same data type; for example, the arrays below, of sizes 20, 20 and 10, can share one pointer array:
    int a[20], b[20], c[10];
    int *ptr[5];
• With the above declarations, we can assign the address of the first element of each array as:
    ptr[0] = &a[0];
    ptr[1] = &b[0];
    ptr[2] = &c[0];
• The program using an array of pointers is shown in the accompanying word file.

Array of Structures
• A structure is simple to define if only one or two objects are needed, but when many objects of a structure are required — for example a structure designed to hold the data of each student in a class — an array of structure objects is used. A structure declaration using an array to define the objects is given below:
    struct student
    {
        char name[15];
        int rollno;
        char result[10];
    };
    struct student data[60];
• The above declaration sets aside memory space for 60 objects.

Array within the Structure
• Above, we used arrays outside the structure, in the object declaration.
• Arrays can also be used within a structure, in the member declarations.
• In other words, a member of a structure can itself be of array type.
• A detailed example is given in the accompanying word file.
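That detailed example is not reproduced here; the following is a minimal sketch (the field names are illustrative, not taken from the original word file) of a structure whose members include arrays:

    #include <stdio.h>

    /* A structure whose members include arrays: a name string and
       an array of marks in 4 subjects. */
    struct student {
        char name[15];
        int  rollno;
        int  marks[4];
    };

    int main(void) {
        struct student s = { "Amit", 1, { 78, 82, 91, 67 } };
        int i, total = 0;

        for (i = 0; i < 4; i++)
            total = total + s.marks[i];

        printf("Roll %d (%s): total marks = %d\n", s.rollno, s.name, total);
        return 0;
    }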
Order-of-Magnitude Analysis and Big O Notation
• If Algorithm A requires time proportional to f(n), Algorithm A is said to be of order f(n), denoted O(f(n)).
• The function f(n) is called the algorithm's growth-rate function.
• Since the capital O is used in the notation, this notation is called Big O notation.
• If Algorithm A requires time proportional to n², it is O(n²).
• If Algorithm A requires time proportional to n, it is O(n).

Definition of the Order of an Algorithm
Definition: Algorithm A is order f(n) — denoted O(f(n)) — if constants k and n0 exist such that A requires no more than k*f(n) time units to solve a problem of size n ≥ n0.
• The requirement n ≥ n0 in the definition of O(f(n)) formalizes the notion of sufficiently large problems.
• In general, many values of k and n0 can satisfy this definition.

Growth-Rate Functions
    O(1)        Time requirement is constant, independent of the problem's size.
    O(log₂n)    Time requirement for a logarithmic algorithm increases slowly as the problem size increases.
    O(n)        Time requirement for a linear algorithm increases directly with the size of the problem.
    O(n·log₂n)  Time requirement for an n·log₂n algorithm increases more rapidly than for a linear algorithm.
    O(n²)       Time requirement for a quadratic algorithm increases rapidly with the size of the problem.
    O(n³)       Time requirement for a cubic algorithm increases more rapidly with the size of the problem than for a quadratic algorithm.
    O(2^n)      As the size of the problem increases, the time requirement for an exponential algorithm increases too rapidly to be practical.

Growth-Rate Functions: Example
If an algorithm takes 1 second to run with problem size 8, what is the approximate time requirement for that algorithm with problem size 16? If its order is:
    O(1)        T(n) = 1 second
    O(log₂n)    T(n) = (1*log₂16) / log₂8 = 4/3 seconds
    O(n)        T(n) = (1*16) / 8 = 2 seconds
    O(n·log₂n)  T(n) = (1*16*log₂16) / (8*log₂8) = 8/3 seconds
    O(n²)       T(n) = (1*16²) / 8² = 4 seconds
    O(n³)       T(n) = (1*16³) / 8³ = 8 seconds
    O(2^n)      T(n) = (1*2^16) / 2^8 = 2^8 seconds = 256 seconds
A Comparison of Growth-Rate Functions (cont.)
(Figure: the growth-rate functions compared graphically.)

Properties of Growth-Rate Functions
1. We can ignore low-order terms in an algorithm's growth-rate function.
   • If an algorithm is O(n³ + 4n² + 3n), it is also O(n³).
   • We only use the highest-order term as the algorithm's growth-rate function.
2. We can ignore a multiplicative constant in the highest-order term of an algorithm's growth-rate function.
   • If an algorithm is O(5n³), it is also O(n³).
3. O(f(n)) + O(g(n)) = O(f(n) + g(n))
   • We can combine growth-rate functions.
   • If an algorithm is O(n³) + O(4n²), it is also O(n³ + 4n²), and hence O(n³).
   • Similar rules hold for multiplication.
Growth-Rate Functions – Example 1
                          Cost    Times
    i = 0;                c1      1
    sum = 0;              c2      1
    while (i <= n) {      c3      n+1
        i = i + 1;        c4      n
        sum = sum + i;    c5      n
    }
    T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
         = (c3+c4+c5)*n + (c1+c2+c3)
         = a*n + b
So the growth-rate function for this algorithm is O(n).

Growth-Rate Functions – Example 2
                              Cost    Times
    i = 0;                    c1      1
    sum = 0;                  c2      1
    while (i <= n) {          c3      n+1
        j = 1;                c4      n
        while (j <= n) {      c5      n*(n+1)
            sum = sum + i;    c6      n*n
            j = j + 1;        c7      n*n
        }
        i = i + 1;            c8      n
    }
    T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
         = (c5+c6+c7)*n² + (c3+c4+c5+c8)*n + (c1+c2+c3)
         = a*n² + b*n + c
So the growth-rate function for this algorithm is O(n²).
Growth-Rate Functions – Example 3
                                  Cost    Times
    for (i=1; i<=n; i++)          c1      n+1
        for (j=1; j<=i; j++)      c2      Σ_{j=1..n} (j+1)
            for (k=1; k<=j; k++)  c3      Σ_{j=1..n} Σ_{k=1..j} (k+1)
                x = x + 1;        c4      Σ_{j=1..n} Σ_{k=1..j} k
    T(n) = c1*(n+1) + c2*Σ_{j=1..n}(j+1) + c3*Σ_{j=1..n}Σ_{k=1..j}(k+1) + c4*Σ_{j=1..n}Σ_{k=1..j} k
         = a*n³ + b*n² + c*n + d
So the growth-rate function for this algorithm is O(n³).

Growth-Rate Functions – Recursive Algorithms
                                                          Cost
    void hanoi(int n, char source, char dest, char spare) {
        if (n > 0) {                                      c1
            hanoi(n-1, source, spare, dest);              c2
            cout << "Move top disk from pole " << source  c3
                 << " to pole " << dest << endl;
            hanoi(n-1, spare, dest, source);              c4
        }
    }
• The time-complexity function T(n) of a recursive algorithm is defined in terms of itself; this is known as a recurrence equation for T(n).
• To find the growth-rate function for a recursive algorithm, we have to solve its recurrence relation.
Growth-Rate Functions – Hanoi Towers
• What is the cost of hanoi(n, 'A', 'B', 'C')?
    when n = 0:   T(0) = c1
    when n > 0:   T(n) = c1 + c2 + T(n-1) + c3 + c4 + T(n-1)
                       = 2*T(n-1) + (c1+c2+c3+c4)
                       = 2*T(n-1) + c        <- recurrence equation for the growth-rate function of the Hanoi-towers algorithm
• Now we have to solve this recurrence equation to find the growth-rate function of the Hanoi-towers algorithm.

Growth-Rate Functions – Hanoi Towers (cont.)
• There are many methods to solve recurrence equations, but we will use a simple method known as repeated substitution.
    T(n) = 2*T(n-1) + c
         = 2*(2*T(n-2) + c) + c
         = 2*(2*(2*T(n-3) + c) + c) + c
         = 2³*T(n-3) + (2² + 2¹ + 2⁰)*c          (assuming n > 2)
    when the substitution is repeated i-1 times:
         = 2^i * T(n-i) + (2^(i-1) + ... + 2¹ + 2⁰)*c
    when i = n:
         = 2^n * T(0) + (2^(n-1) + ... + 2¹ + 2⁰)*c
         = 2^n * c1 + (Σ_{i=0..n-1} 2^i)*c
         = 2^n * c1 + (2^n - 1)*c
         = 2^n*(c1 + c) - c
So the growth-rate function is O(2^n).
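As a quick numerical check of the O(2^n) result, here is a small sketch written in C to match the rest of the notes (the output statement of the original algorithm is replaced by a counter); it counts the moves performed by the recursive algorithm and compares them with 2^n - 1:

    #include <stdio.h>

    static long moves = 0;                 /* counts "move a disk" steps */

    void hanoi(int n, char source, char dest, char spare) {
        if (n > 0) {
            hanoi(n - 1, source, spare, dest);
            moves++;                       /* stands in for printing the move */
            hanoi(n - 1, spare, dest, source);
        }
    }

    int main(void) {
        int n;
        for (n = 1; n <= 10; n++) {
            moves = 0;
            hanoi(n, 'A', 'B', 'C');
            /* For every n the count equals 2^n - 1, matching the recurrence. */
            printf("n = %2d  moves = %4ld  2^n - 1 = %4ld\n",
                   n, moves, (1L << n) - 1);
        }
        return 0;
    }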
What to Analyze
• An algorithm can require different times to solve different problems of the same size.
  - E.g. searching for an item in a list of n elements using sequential search. Cost: 1, 2, ..., n.
• Worst-case analysis: the maximum amount of time that the algorithm requires to solve a problem of size n.
  - This gives an upper bound for the time complexity of the algorithm.
  - Normally, we try to find the worst-case behaviour of an algorithm.
• Best-case analysis: the minimum amount of time that the algorithm requires to solve a problem of size n.
  - The best-case behaviour of an algorithm is NOT very useful.
• Average-case analysis: the average amount of time that the algorithm requires to solve a problem of size n.
  - Sometimes it is difficult to find the average-case behaviour of an algorithm.
  - We have to look at all possible data organizations of a given size n, and their distribution probabilities.
  - Worst-case analysis is more common than average-case analysis.

What is Important?
• An array-based list retrieve operation is O(1); a linked-list-based list retrieve operation is O(n).
• But insert and delete operations are much easier on a linked-list-based list implementation.
• When selecting the implementation of an Abstract Data Type (ADT), we have to consider how frequently particular ADT operations occur in the given application.
• If the problem size is always small, we can probably ignore the algorithm's efficiency; in that case, we should choose the simplest algorithm.
What is Important? (cont.)
• We have to weigh the trade-offs between an algorithm's time requirement and its memory requirements.
• We have to compare algorithms for both style and efficiency.
• The analysis should focus on gross differences in efficiency and not reward coding tricks that save small amounts of time.
  - That is, there is no need for coding tricks if the gain is not significant.
  - An easily understandable program is also important.
• Order-of-magnitude analysis focuses on large problems.

Sequential Search
    int sequentialSearch(const int a[], int item, int n) {
        int i;
        for (i = 0; i < n && a[i] != item; i++)
            ;
        if (i == n)
            return -1;
        return i;
    }
Unsuccessful search: O(n)
Successful search:
    Best case: the item is in the first location of the array -> O(1)
    Worst case: the item is in the last location of the array -> O(n)
    Average case: the number of key comparisons is 1, 2, ..., n:
        (Σ_{i=1..n} i) / n = ((n² + n)/2) / n = (n + 1)/2  ->  O(n)
Binary Search
    int binarySearch(int a[], int size, int x) {
        int low = 0;
        int high = size - 1;
        int mid;                 // mid will be the index of the target when it is found
        while (low <= high) {
            mid = (low + high) / 2;
            if (a[mid] < x)
                low = mid + 1;
            else if (a[mid] > x)
                high = mid - 1;
            else
                return mid;
        }
        return -1;
    }

Binary Search – Analysis
• For an unsuccessful search: the number of iterations of the loop is log₂n + 1  ->  O(log₂n)
• For a successful search:
  - Best case: the number of iterations is 1 -> O(1)
  - Worst case: the number of iterations is log₂n + 1 -> O(log₂n)
  - Average case: the average number of iterations is less than log₂n -> O(log₂n)
Example, an array of size 8:
    index:            0 1 2 3 4 5 6 7
    # of iterations:  3 2 3 1 3 2 3 4
The average number of iterations = 21/8 < log₂8.
How much better is O(log₂n)?
    n                     O(log₂n)
    16                    4
    64                    6
    256                   8
    1,024 (1KB)           10
    16,384                14
    131,072               17
    262,144               18
    524,288               19
    1,048,576 (1MB)       20
    1,073,741,824 (1GB)   30

Best, Average, and Worst-case Complexities
• We are usually interested in the worst-case complexity: the most operations that might be performed for a given problem size.
• We will not discuss the other cases (best and average case) in detail.
• Best case depends on the input.
• Average case is difficult to compute.
• So we usually focus on worst-case analysis:
  - Easier to compute
  - Usually close to the actual running time
  - Crucial to real-time systems (e.g. air-traffic control)
Best, Average, and Worst-case Complexities
• Example: linear search complexity
  - Best case: item found at the beginning -> one comparison
  - Worst case: item found at the end -> n comparisons
  - Average case: the item may be found at index 0, 1, 2, ..., or n-1, so the average number of comparisons is (1 + 2 + ... + n) / n = (n + 1)/2
• Worst-case and average-case complexities of common sorting algorithms:
    Method           Worst Case    Average Case
    Selection sort   n²            n²
    Insertion sort   n²            n²
    Merge sort       n log n       n log n
    Quick sort       n²            n log n