• The arrays for which we need two or more indices are known
as multidimensional arrays.
• Linked List-
[Diagram: two linked nodes with data members 15 and 10; each node holds a
data member and a pointer, and the last node's pointer is NULL (points to nothing)]
struct node {
int data;
struct node *nextPtr;
};
• nextPtr
– Points to an object of type node
– Referred to as a link
• Ties one node to another node
• Types of linked lists:
– Singly linked list
• Begins with a pointer to the first node
• Terminates with a null pointer
• Only traversed in one direction
– Circular, singly linked
• Pointer in the last node points back to the first node
– Doubly linked list
• Two “start pointers” – first element and last element
• Each node has a forward pointer and a backward pointer
• Allows traversals both forwards and backwards
– Circular, doubly linked list
• Forward pointer of the last node points to the first node and backward
pointer of the first node points to the last node
– Header Linked List
• Linked list contains a header node that contains information regarding
complete linked list.
• Stacks- A stack, also called a last-in-first-out (LIFO) system, is a linear
list in which insertions (push operations) and deletions (pop operations)
can take place only at one end, called the top of the stack.
– Similar to a pile of dishes
– Bottom of stack indicated by a link member to NULL
– Constrained version of a linked list
• The two operations on stack are:
push
– Adds a new node to the top of the stack
pop
– Removes a node from the top
– Stores the popped value
– Returns true if pop was successful
• Queues- A queue, also called a first-in-first-out (FIFO) system, is a
linear list in which insertions can take place at one end of the list,
called the rear of the list, and deletions can take place only from the
other end, called the front of the list.
• Similar to a supermarket checkout line
• Insert and remove operations
• Tree- A tree is a non-linear data structure that represents a hierarchical
relationship between various elements. The top node of a tree is called
the root node and each subsequent node is called the child node of the
root. Each node can have one or more than one child nodes. A tree that
can have any number of child nodes is called a general tree. If there is
a maximum number N of successors for a node in a tree, then the tree
is called an N-ary tree. In particular, a binary (2-ary) tree is a tree in
which each node has either 0, 1, or 2 successors.
• Binary trees
– Binary tree can be empty without any node whereas a general tree
cannot be empty.
– All nodes contain two links
• None, one, or both of which may be NULL
– The root node is the first node in a tree.
– Each link in the root node refers to a child
– A node with no children is called a leaf node
Diagram of a binary tree (node labels A, C, and D survive; the link structure was lost in extraction)
• Binary search tree
– A type of binary tree
– Values in left subtree less than parent
– Values in right subtree greater than parent
– Facilitates duplicate elimination
– Fast searches: at most about log2 n comparisons when the tree is balanced
Example binary search tree: root 47; left subtree rooted at 25 (children 11 and 43;
11 has children 7 and 17, 43 has children 31 and 44); right subtree rooted at 77
(children 65 and 93; 65 has right child 68)
• Graph- A graph, G , is an ordered set (V,E) where V represent set of
elements called nodes or vertices in graph terminology and E
represent the edges between these elements. This data structure is
used to represent relationship between pairs of elements which are not
necessarily hierarchical in nature. Usually there is no distinguished
`first' or `last' node. A graph may or may not have cycles.
Algorithms
• A finite set of steps that specify a sequence of operations to be carried
out in order to solve a specific problem is called an algorithm
Properties of Algorithms:
1. Finiteness- Algorithm must terminate in finite number of steps
2. Absence of Ambiguity-Each step must be clear and unambiguous
3. Feasibility-Each step must be simple enough that it can be easily
translated into the required language
4. Input-These are zero or more values which are externally supplied to
the algorithm
5. Output-At least one value is produced
Conventions Used for Algorithms
• Identifying number-Each algorithm is assigned an identification
number
• Comments-Each step may contain a comment in brackets which
indicate the main purpose of the step
• Assignment statement-Assignment statement will use colon-equal
notation
– Set max:= DATA[1]
• Input/Output- Data may be input from user by means of a read
statement
– Read: Variable names
• Similarly, messages placed in quotation marks, and data in variables
may be output by means of a write statement:
– Write: messages or variable names
• Selection Logic or conditional flow-
– If condition, then:
– [end of if structure]
• Double alternative
– If condition, then:
– Else:
– [End of if structure]
• Multiple Alternatives
– If condition, then:
– Else if condition2, then:
– Else if condition3, then:
– Else:
– [End of if structure]
• Iteration Logic
• Repeat-While Loop
– Repeat while condition:
– [End of loop]
Algorithm complexity
• An algorithm is a sequence of steps to solve a problem. There can be more
than one algorithm to solve a particular problem and some of these
solutions may be more efficient than others. The efficiency of an algorithm
is determined in terms of utilization of two resources, time of execution
and memory. This efficiency analysis of an algorithm is called complexity
analysis, and it is a very important and widely-studied subject in computer
science. Performance requirements are usually more critical than memory
requirements. Thus in general, the algorithms are analyzed on the basis of
performance requirements, i.e., running-time efficiency.
• Specifically complexity analysis is used in determining how resource
requirements of an algorithm grow in relation to the size of input. The
input can be any type of data. The analyst has to decide which property of
the input should be measured; the best choice is the property that most
significantly affects the efficiency-factor we are trying to analyze. Most
commonly, we measure one of the following :
– the number of additions, multiplications etc. (for numerical algorithms).
– the number of comparisons (for searching, sorting)
– the number of data moves (assignment statements)
Based on the type of resource variation studied, there are two types of
complexities
• Time complexity
• Space complexity
Space Complexity- The space complexity of an algorithm is amount of
memory it needs to run to completion. The space needed by a program
consists of following components:
• Instruction space-space needed to store the executable version of program
and is fixed.
• Data space-space needed to store all constants, variable values and has
further two components:
– Space required by constants and simple variables. This space is fixed.
– Space needed by fixed sized structured variable such as arrays and
structures.
– Dynamically allocated space. This space usually varies.
• Environment stack space- Space needed to store information needed to
resume the suspended functions. Each time a function is invoked
following information is saved on environment stack
– Return address i.e from where it has to resume after completion of the
called function
– Values of all local variables and values of formal parameters in
function being invoked.
Time complexity- Time complexity of an algorithm is amount of time it
needs to run to completion. To measure time complexity, key operations
are identified in a program and are counted till program completes its
execution. Time taken for various key operations are:
• Execution of one of the following operations takes time 1:
1. assignment operation
2. single I/O operations
3. single Boolean operations, numeric comparisons
4. single arithmetic operations
5. function return
6. array index operations, pointer dereferences
• Running time of a selection statement (if, switch) is the time for the
condition evaluation + the maximum of the running times for the
individual clauses in the selection.
• Running time of a function call is 1 for setup + the time for any
parameter calculations + the time required for the execution of the
function body.
Expressing Space and time complexity: Big ‘O’ notation
[Plot lost in extraction: growth of log n, n, and n log n, with the vertical axis
running from 10^2 to 10^6 on a logarithmic scale]
[Figure lost in extraction: a radix-sort pass distributing the keys 13, 28, 56, 30, 36
into Radix Bins 0 1 2 3 4 5 6 7 8 9]
• Hashing-The search time of each algorithm depends on the number n of elements
in the collection S of data. Is it possible to design a search of O(1), that is, one
that has a constant search time, no matter where the element is located in the
list? In theory, this goal is not an impossible dream.
Now, another question arises: can’t we generate the hash function that
never produces a collision? The answer is that in some cases it is
possible to generate such a function. This is known as a perfect hash
function.
• Insertion into a Linked List- Together with the linked list, a special
list is maintained in memory which consists of unused memory cells.
This list, which has its own pointer, is called the list of available space,
the free-storage list, or the free pool; here it is called the Avail
List. During an insertion operation, new nodes are taken from this avail
list, which is maintained just like the normal data linked list using its own
pointer. Similar to START, the avail list also has its own start pointer,
named AVAIL, which stores the address of the first free node of the
avail list.
Array representation (INFO and LINK arrays; START = 5, AVAIL = 10):
Index  INFO     LINK
1      Kirk     7
2               6
3      Dean     11
4      Maxwell  12
5      Adams    3
6               0
7      Lane     4
8      Green    1
9      Samuels  0
10              2
11     Fields   8
12     Nelson   9
• Insertion in a Linked List- Algorithms which insert nodes into linked
lists come up in various situations. Three main cases to be discussed are:
• Inserting node at the beginning of the List
• Inserting node after the node with a given location
• Inserting node into a sorted list.
For all algorithms, variable ITEM contains the new information to be
added to the list. All cases follow some common steps:
• Checking to see if free space is available in AVAIL list. If
AVAIL=NULL, algorithm will print overflow
• Removing first node from AVAIL list. Using variable NEW to keep
track of location of new node, this step can be implemented by the pair of
assignments (in this order)
NEW:= AVAIL and AVAIL:=LINK[AVAIL]
• Copying new information into new node
INFO[NEW]:=ITEM
[Diagram lost in extraction: the first node of the AVAIL list is detached
(NEW:=AVAIL, AVAIL:=LINK[AVAIL]), ITEM is copied into it, and the new node
is linked into the list headed by START]
INSERTING BEFORE A GIVEN NODE
• Algorithm: INSLOC(INFO, LINK,START,AVAIL, ITEM, ITEM1)
This algorithm inserts ITEM so that ITEM PRECEDES the
node with data item ITEM1
• Step 1: [OVERFLOW] If AVAIL=NULL, then:
Write: OVERFLOW
Return
• Step 2: [Remove first node from AVAIL list]
Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
• Step 3: Set INFO[NEW]:= ITEM [Copies new data into new node]
• Step 4: Set PTR:=START and SAVE:=NULL
• Step 5: Repeat while INFO[PTR]≠ ITEM1
Set SAVE:=PTR and PTR:=LINK[PTR]
[End of Loop]
• Step 6:Set LOC:=SAVE
• Step 7: If LOC=NULL, then:
Set LINK[NEW]:=START and START:=NEW
Else:
Set LINK[NEW]:=LINK[LOC] and LINK[LOC]:= NEW
[End of If structure]
• Step 8: Return
INSERTING BEFORE A GIVEN NODE WITH LOCATION GIVEN
• Algorithm: INSLOC(INFO, LINK,START,AVAIL, ITEM, LOC)
This algorithm inserts ITEM so that ITEM PRECEDES the
node with location LOC
• Step 1: [OVERFLOW] If AVAIL=NULL, then:
Write: OVERFLOW
Return
• Step 2: [Remove first node from AVAIL list]
Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
• Step 3: Set INFO[NEW]:= ITEM [Copies new data into new node]
• Step 4: Set PTR:=START and SAVE:=NULL
• Step 5: Repeat while PTR≠ LOC
Set SAVE:=PTR and PTR:=LINK[PTR]
[End of Loop]
• Step 6:Set LOC:=SAVE
• Step 7: If LOC=NULL, then:
Set LINK[NEW]:=START and START:=NEW
Else:
Set LINK[NEW]:=LINK[LOC] and LINK[LOC]:= NEW
[End of If structure]
• Step 8: Return
INSERTING BEFORE NTH NODE
• Algorithm: INSLOC(INFO, LINK,START,AVAIL, ITEM, N)
This algorithm inserts ITEM so that ITEM PRECEDES the Nth node
• Step 1: [OVERFLOW] If AVAIL=NULL, then:
Write: OVERFLOW
Return
• Step 2: [Remove first node from AVAIL list]
Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
• Step 3: Set INFO[NEW]:= ITEM [Copies new data into new node]
• Step 4: Set PTR:=START , SAVE:=NULL and K:=1
• Step 5: Repeat while K≠ N
• SAVE:=PTR and PTR:=LINK[PTR]
• Set K:=K+1
• [End of Loop]
• Step 6:Set LOC:=SAVE
• Step 7: If LOC=NULL, then:
Set LINK[NEW]:=START and START:=NEW
Else:
Set LINK[NEW]:=LINK[LOC] and LINK[LOC]:= NEW
[End of If structure]
• Step 8: Return
• For insertion into a sorted list, we use two algorithms. One algorithm
finds the location of the node after which the new node has to be
inserted and returns that location to the main algorithm, which
finally does the insertion.
INSERTING INTO A SORTED LIST
• Algorithm: INSSRT(INFO,LINK,START,AVAIL,ITEM)
• Step 1: CALL FINDA(INFO,LINK, START,ITEM,LOC)
• Step 2: CALL INSLOC(INFO,LINK,START,AVAIL,LOC,ITEM)
• Step 3: Exit
• Algorithm: FINDA(INFO,LINK,START,ITEM,LOC)
This algorithm finds the location LOC of the last node in a
sorted list such that INFO[LOC ]<ITEM, or sets
LOC=NULL
• Step 1: [List Empty ?] If START=NULL, then:
Set LOC:=NULL
Return
[End of If structure]
• Step 2: [Special case] If ITEM<INFO[START], then:
Set LOC:=NULL
Return
[End of If structure]
• Step 3: Set SAVE:=START and PTR:=LINK[START]
• Step 4: Repeat while PTR≠NULL:
If ITEM< INFO[PTR], then:
Set LOC:=SAVE
Return
[End of If Structure]
Set SAVE:=PTR and PTR:=LINK[PTR]
[End of Step 4 Loop]
• Step 5: Set LOC:=SAVE
• Step 6: Return
• Algorithm: INSLOC(INFO, LINK,START,AVAIL, LOC, ITEM)
This algorithm inserts ITEM so that ITEM follows the
node with location LOC or inserts ITEM as the first node
when LOC =NULL
• Step 1: [OVERFLOW] If AVAIL=NULL, then:
• Write: OVERFLOW
• Return
• Step 2: [Remove first node from AVAIL list]
Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
• Step 3: Set INFO[NEW]:= ITEM [Copies new data into new node]
• Step 4: If LOC=NULL, then:
Set LINK[NEW]:=START and START:=NEW
Else:
Set LINK[NEW]:=LINK[LOC] and LINK[LOC]:= NEW
[End of If structure]
• Step 5: Return
Deletion of a node from a linked list
• Case 1: Deletion of node following a given node
• Algorithm: DEL(INFO,LINK,START,AVAIL,LOC,LOCP)
This algorithm deletes node N with location LOC. LOCP
is location of node which precedes N or when N is first
node, LOCP=NULL.
• Step 1: If LOC=NULL, then:
Write: ‘UNDERFLOW’
Exit
• Step 2: If LOCP=NULL, then:
Set START:=LINK[START] [Deletes first node]
Else:
Set LINK[LOCP]:=LINK[LOC]
[End of if structure]
• Step 3: [Return deleted node to AVAIL list]
Set LINK[LOC]:=AVAIL and
AVAIL:=LOC
• Step 4: Return
Deleting the node with a given item of information
For deleting with given item of information, we first need to
know the location of item to be deleted and also the location
preceding the item to be deleted. For this, one algorithm will be called
from the main algorithm to search for the two locations. Two
variables, SAVE and PTR will be used to save the location of
preceding and current node at every comparison, respectively. Once the
locations are found, deletion is done in the main algorithm.
• Algorithm: DELETE(INFO, LINK, START, AVAIL, ITEM)
• Step 1: CALL FINDB(INFO, LINK,START, ITEM, LOC, LOCP)
• Step 2: If LOC=NULL, then:
Write: ‘Underflow’
Exit
• Step 3: If LOCP=NULL, then:
Set START:= LINK[START]
Else:
Set LINK[LOCP]:=LINK[LOC]
[End of if structure]
• Step 4: Set LINK[LOC]:=AVAIL and AVAIL:=LOC
• Step 5: Return
• Algorithm: FINDB(INFO,LINK,START,ITEM,LOC,LOCP)
This algorithm finds the location LOC of first node N
which contains ITEM and location LOCP of node
preceding N. if ITEM does not appear in the list, procedure
sets LOC=NULL and if ITEM appears in first node, then it
sets LOCP=NULL
• Step 1: If START=NULL, then:
Set LOC:=NULL and LOCP:=NULL
Return
• Step 2: If INFO[START]=ITEM, then:
Set LOC:=START and LOCP:=NULL
Return
• Step 3: Set SAVE:=START and PTR:=LINK[START]
• Step 4: Repeat while PTR≠NULL:
• Step 5: If INFO[PTR]=ITEM, then:
Set LOC:=PTR and LOCP:=SAVE
Return
• Step 6: Set SAVE:=PTR and PTR:=LINK[PTR]
[End of Loop]
• Step 7:[Item to be deleted not in list] Set LOC:=NULL
• Step 8: Return
Concatenating two linear linked lists
• Algorithm: Concatenate(INFO,LINK,START1,START2)
This algorithm concatenates two linked lists with start
pointers START1 and START2
• Step 1: Set PTR:=START1
• Step 2: Repeat while LINK[PTR]≠NULL:
Set PTR:=LINK[PTR]
[End of Step 2 Loop]
• Step 3: Set LINK[PTR]:=START2
• Step 4: Return
• Circular Linked List- A circular linked list is a linked list in which the last
element or node of the list points to the first node. For a non-empty circular
linked list, there are no NULL pointers. The memory declarations for
representing the circular linked lists are the same as for linear linked
lists. All operations performed on linear linked lists can be easily
extended to circular linked lists with following exceptions:
• While inserting new node at the end of the list, its next pointer field is
made to point to the first node.
• While testing for end of list, we compare the next pointer field with
address of the first node
Circular linked list is usually implemented using header linked
list. Header linked list is a linked list which always contains a special
node called the header node, at the beginning of the list. This header
node usually contains vital information about the linked list such as
number of nodes in lists, whether list is sorted or not etc. Circular header
lists are frequently used instead of ordinary linked lists as many
operations are much easier to state and implement using header lists
• This comes from the following two properties of circular header
linked lists:
• The null pointer is not used, and hence all pointers contain valid
addresses
• Every (ordinary) node has a predecessor, so the first node does not
require a special case.
• Algorithm: (Traversing a circular header linked list)
This algorithm traverses a circular header linked list with
START pointer storing the address of the header node.
• Step 1: Set PTR:=LINK[START]
• Step 2: Repeat while PTR≠START:
Apply PROCESS to INFO[PTR]
Set PTR:=LINK[PTR]
[End of Loop]
• Step 3: Return
Searching a circular header linked list
• Algorithm: SRCHHL(INFO,LINK,START,ITEM,LOC)
• This algorithm searches a circular header linked list
• Step 1: Set PTR:=LINK[START]
• Step 2: Repeat while INFO[PTR]≠ITEM and PTR≠START:
Set PTR:=LINK[PTR]
[End of Loop]
• Step 3: If INFO[PTR]=ITEM, then:
Set LOC:=PTR
Else:
Set LOC:=NULL
[End of If structure]
• Step 4: Return
Deletion from a circular header linked list
Algorithm: DELLOCHL(INFO,LINK,START,AVAIL,ITEM)
This algorithm deletes an item from a circular header
linked list.
• Step 1: CALL FINDBHL(INFO,LINK,START,ITEM,LOC,LOCP)
• Step 2: If LOC=NULL, then:
Write: ‘item not in the list’
Exit
• Step 3: Set LINK[LOCP]:=LINK[LOC] [Node deleted]
• Step 4: Set LINK[LOC]:=AVAIL and AVAIL:=LOC
• [Memory returned to Avail list]
• Step 5: Return
Algorithm: FINDBHL(INFO,LINK,START,ITEM,LOC,LOCP)
This algorithm finds the location of the node to be deleted
and the location of the node preceding the node to be
deleted
• Step 1: Set SAVE:=START and PTR:=LINK[START]
• Step 2: Repeat while INFO[PTR]≠ITEM and PTR≠START
Set SAVE:=PTR and PTR:=LINK[PTR]
[End of Loop]
• Step 3: If INFO[PTR]=ITEM, then:
Set LOC:=PTR and LOCP:=SAVE
Else:
Set LOC:=NULL and LOCP:=SAVE
[End of If Structure]
• Step 4: Return
Insertion in a circular header linked list
Algorithm: INSRT(INFO,LINK,START,AVAIL,ITEM,LOC)
This algorithm inserts item in a circular header linked list
after the location LOC
• Step 1:If AVAIL=NULL, then
Write: ‘OVERFLOW’
Exit
• Step 2: Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
• Step 3: Set INFO[NEW]:=ITEM
• Step 4: Set LINK[NEW]:=LINK[LOC]
Set LINK[LOC]:=NEW
• Step 5: Return
Insertion in a sorted circular header linked list
Algorithm: INSSRT(INFO,LINK,START,AVAIL,ITEM)
This algorithm inserts an element in a sorted circular header
linked list
Step 1: CALL FINDA(INFO,LINK,START,ITEM,LOC)
Step 2: If AVAIL=NULL, then
Write: ‘OVERFLOW’
Return
Step 3: Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
Step 4: Set INFO[NEW]:=ITEM
Step 5: Set LINK[NEW]:=LINK[LOC]
Set LINK[LOC]:=NEW
Step 6: Return
Algorithm: FINDA(INFO,LINK,START,ITEM,LOC)
This algorithm finds the location LOC after which to
insert
• Step 1: Set PTR:=START
• Step 2: Set SAVE:=PTR and PTR:=LINK[PTR]
• Step 3: Repeat while PTR≠START
If INFO[PTR]>ITEM, then:
Set LOC:=SAVE
Return
[End of If structure]
Set SAVE:=PTR and PTR:=LINK[PTR]
[End of Loop]
• Step 4: Set LOC:=SAVE
• Step 5: Return
• One of the most important applications of linked lists is the representation
of a polynomial in memory. Although a polynomial can be represented
using a linear linked list, the common and preferred way of representing a
polynomial is a circular linked list with a header node.
• Polynomial Representation: Header linked list are frequently used for
maintaining polynomials in memory. The header node plays an
important part in this representation since it is needed to represent the
zero polynomial.
• Specifically, the information part of node is divided into two fields
representing respectively, the coefficient and the exponent of
corresponding polynomial term and nodes are linked according to
decreasing degree. List pointer variable POLY points to header node
whose exponent field is assigned a negative number, in this case -1. The
array representation of List will require three linear arrays as COEFF,
EXP and LINK. For example:
• P(x) = 2x^8 - 5x^7 - 3x^2 + 4 can be represented as:
• POLY → [0,-1] → [2,8] → [-5,7] → [-3,2] → [4,0] → (back to header)
(each node holds [COEFF, EXP]; the header node carries EXP = -1)
Operations on a Two-way list
• Traversal
• Algorithm: Traversal
This algorithm traverses a two-way list. FORW and BACK
are the two address parts of each node containing the address
of next node and previous node respectively. INFO is the
information part of each node. START contains the address
of the first node
Step 1: Set PTR:=START
Step 2: Repeat while PTR≠ NULL
Apply PROCESS to INFO[PTR]
Set PTR:=FORW[PTR]
[End of Step 2 Loop]
Step 3: Exit
• Algorithm: SEARCH(INFO,FORW,BACK,ITEM,START,LOC)
This algorithm searches the location LOC of ITEM in a two-
way list and sets LOC=NULL if ITEM is not found in the list
• Step 1: Set PTR:=START
• Step 2: Repeat while PTR≠NULL and INFO[PTR]≠ITEM
Set PTR:=FORW[PTR]
[End of Loop]
• Step 3: If PTR≠NULL, then: [ITEM found]
Set LOC:=PTR
Else:
Set LOC:=NULL
[End of If structure]
• Step 4: Return
• Algorithm: DELETE(INFO,FORW,BACK,START,AVAIL,LOC)
This algorithm deletes a node from a two-way list
• Step 1: If LOC=START , then:
START:=FORW[START]
BACK[START]:=NULL
Return
• Step 2: [Delete node] Set FORW[BACK[LOC]]:=FORW[LOC]
Set BACK[FORW[LOC]]:=BACK[LOC]
• Step 3: [Returning node to AVAIL]
Set FORW[LOC]:=AVAIL and AVAIL:=LOC
• Step 4: Return
• Algorithm:INSRT (INFO,FORW,BACK,START,AVAIL,LOCA, LOCB,
ITEM)
This algorithm inserts an item in a doubly linked list.
LOCA and LOCB location of adjacent nodes A and B
• Step 1: [OVERFLOW] If AVAIL=NULL, then:
Write: OVERFLOW
Return
• Step 2: Set NEW:=AVAIL and AVAIL:=FORW[AVAIL]
Set INFO[NEW]:=ITEM
• Step 3: Set FORW[LOCA]:=NEW and BACK[NEW]:=LOCA
Set FORW[NEW]:=LOCB and BACK[LOCB]:=NEW
• Step 4: Return
TEST QUESTIONS
• Algorithm to copy the contents of one linked list to another
• Algorithm: Copy (INFO,LINK,START,AVAIL)
This algorithm copies the contents of one linked list to
another.
• Step 1: If AVAIL=NULL,then
Write: ‘Overflow’
Return
• Step 2: Set PTR:=START
• Step 3: Set NEW:=AVAIL , START1:=NEW and AVAIL:=LINK[AVAIL]
• Step 4: Repeat while PTR≠ NULL
INFO[NEW]:=INFO[PTR] and PTR:=LINK[PTR]
If PTR≠NULL, then:
LINK[NEW]:=AVAIL and AVAIL:=LINK[AVAIL]
NEW:=LINK[NEW]
[End of If structure]
[End of Loop]
• Step 5: Set LINK[NEW]:=NULL
• Step 6: Return
• Algorithm: Copy (INFO,LINK,START,START1)
This algorithm copies the contents of one linked list to another. START AND START1 are the
start pointers of two lists
• Step 1: Set PTR:=START , PTR1:=START1 and SAVE:=NULL
• Step 2: Repeat while PTR≠ NULL and PTR1 ≠ NULL
INFO[PTR1]:=INFO[PTR]
SAVE:=PTR1
PTR1:=LINK[PTR1]
PTR:=LINK[PTR]
[End of Loop]
• Step 3: If PTR=NULL and PTR1=NULL, then:
Return
• Step 4:[Case when the list to be copied is still left]
If PTR1=NULL, then:
Repeat while PTR ≠ NULL
Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
INFO[NEW]:=INFO[PTR]
LINK[SAVE]:=NEW and SAVE:=NEW
PTR:=LINK[PTR]
[End of Loop]
Set LINK[SAVE]:=NULL
[End of If structure]
• Step 5: [Case when the list to be copied is finished, truncate the extra nodes]
If PTR=NULL, then:
Set LINK[SAVE]:=NULL
[End of If structure]
• Step 6: Return
• Algorithm to insert a node after the kth node in the circular linked list
• Algorithm: INSRT (INFO,LINK,START,AVAIL,K,ITEM)
This algorithm inserts in a circular linked list after the kth node
• Step 1: If AVAIL=NULL, then
Write: ‘OVERFLOW’
Return
• Step 2: Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
• Set INFO[NEW]:=ITEM
• Step 3: Set PTR:=START and N:=1
• Step 4: Repeat while N ≠ K
Set PTR:=LINK[PTR] and N:=N+1
[End of Loop]
• Step 5: Set LINK[NEW]:=LINK[PTR]
Set LINK[PTR]:=NEW
• Step 6: Return
• Algorithm to delete the kth node from a doubly linked list
• Algorithm: Del (INFO,FORW,BACK,FIRST,LAST,AVAIL,K)
This algorithm deletes the kth node
• Step 1: Set N:=1 and PTR:=FIRST
• Step 2: Repeat while N≠ K
Set PTR:=FORW[PTR] and N:=N+1
[End of Loop]
• Step 3: [Delete the Kth node]
If PTR=FIRST, then: Set FIRST:=FORW[PTR]
Else: Set FORW[BACK[PTR]]:=FORW[PTR]
If PTR=LAST, then: Set LAST:=BACK[PTR]
Else: Set BACK[FORW[PTR]]:=BACK[PTR]
[End of If structures]
• Step 4: [Return node to AVAIL list] Set FORW[PTR]:=AVAIL and AVAIL:=PTR
• Step 5: Return
GARBAGE COLLECTION
• In computer science, garbage collection (GC) is a form of automatic memory
management. The garbage collector, or just collector, attempts to reclaim garbage, or
memory used by objects that will never be accessed or mutated again by the
application. Garbage collection is often portrayed as the opposite of manual memory
management, which requires the programmer to specify which objects to deallocate
and return to the memory system.
• The basic principle of how a garbage collector works is:
• Determine what data objects in a program will not be accessed in the future
• Reclaim the resources used by those objects.
Reachability of an object
• Informally, a reachable object can be defined as an object for which there exists some
variable in the program environment that leads to it, either directly or through
references from other reachable objects. More precisely, objects can be reachable in
only two ways:
• A distinguished set of objects are assumed to be reachable—these are known as the
roots. Typically, these include all the objects referenced from anywhere in the call
stack (that is, all local variables and parameters in the functions currently being
invoked), and any global variables.
• Anything referenced from a reachable object is itself reachable; more formally,
reachability is a transitive closure
• The memory is traced for garbage collection using tracing collectors,
or simply collectors. Tracing collectors are called that
way because they trace through the working set of memory. These
garbage collectors perform collection in cycles. A cycle is started
when the collector decides (or is notified) that it needs to reclaim
storage, which in particular happens when the system is low on
memory. The original method involves a naive mark-and-sweep in
which the entire memory set is touched several times
• In this method, each object in memory has a flag (typically a single
bit) reserved for garbage collection use only. This flag is always
cleared (counter-intuitively), except during the collection cycle. The
first stage of collection sweeps the entire 'root set', marking each
accessible object as being 'in-use'. All objects transitively accessible
from the root set are marked, as well. Finally, each object in memory
is again examined; those with the in-use flag still cleared are not
reachable by any program or data, and their memory is freed. (For
objects which are marked in-use, the in-use flag is cleared again,
preparing for the next cycle.)
Moving vs. non-moving garbage Collection
• Once the unreachable set has been determined, the garbage collector
may simply release the unreachable objects and leave everything else
as it is, or it may copy some or all of the reachable objects into a new
area of memory, updating all references to those objects as needed.
These are called "non-moving" and "moving" garbage collectors,
respectively.
• At first, a moving garbage collection strategy may seem inefficient
and costly compared to the non-moving approach, since much more
work would appear to be required on each cycle. In fact, however, the
moving garbage collection strategy leads to several performance
advantages, both during the garbage collection cycle itself and during
actual program execution
Memory Allocation: Garbage Collection
• The maintenance of linked list in memory assumes the possibility of
inserting new nodes into the linked lists and hence requires some
mechanism which provides unused memory space for new nodes.
Analogously, some mechanism is required whereby memory space of
deleted nodes becomes available for future use.
Together with linked list, a special list is maintained in memory
which consists of unused memory cells. This list, which has its own
pointer is called the list of available space or the free-storage list or
the free pool. During insertions and deletions in a linked list, these
unused memory cells will also be linked together to form a linked list
using AVAIL as its list pointer variable.
Garbage Collection: The operating system of a computer may
periodically collect all deleted space onto the free-storage list. Any
technique which does this collection is called garbage collection.
Garbage collection is mainly used when a node is deleted from a list
or an entire list is deleted from a program.
Garbage collection usually takes place in two steps:
• First the computer runs through the whole list tagging those cells
which are currently in use,
• The computer then runs through the memory collecting all untagged
spaces onto the free storage list.
Garbage collection may take place when there is only some
minimum amount of space or no space at all left in free storage list or
when CPU is idle and has time to do the collection.
STACKS
• Stack- A stack is a linear data structure in which items may be added
or removed only at one end . Accordingly, stacks are also called last-
in-first-out or LIFO lists. The end at which element is added or
removed is called the top of the stack. Two basic operations
associated with stacks are :
• Push- Term used to denote insertion of an element onto a stack.
The order in which elements are pushed onto a stack is the reverse of the
order in which they are popped off.
[Worked example lost in extraction: converting an infix expression to postfix
gives P = A B C * D / +]
Transforming Infix Expression into Prefix Expression
• Algorithm: [Polish Notation] PREFIX (Q, P)
Suppose Q is an arithmetic expression written in infix
notation. This algorithm finds the equivalent prefix
expression P
Step 1: Reverse the input string
Step 2: Examine the next element in the input
Step 3: If it is operand, add it to output string
Step 4: If it is closing parentheses, push it on stack
Step 5: If it is operator, then:
(i) if stack is empty, push operator on stack
(ii) if top of stack is closing parentheses, push operator on the
stack
(iii) If it has same or higher priority than top of stack, push
operator on stack
Else pop the operator from the stack and add it to output
string, repeat step 5
• Step 6: If it is an opening parentheses, pop operators from stack and
add them to output string until a closing parentheses is encountered,
pop and discard the closing parentheses.
• Step 7: If there is more input go to step 2
• Step 8: If there is no more input, unstack the remaining operators and
add them to output string
• Step 9: Reverse the output string
• Consider the following arithmetic expression P written in postfix
notation
P: 12, 7, 3 -, /, 2, 1, 5, +, *, +
(a) Translate P, by inspection and hand, into its equivalent infix
expression
(b) Evaluate the infix expression
Sol: (a) Scanning from left to right, translate each operator from postfix
to infix notation
P = 12, [7-3], /, 2, 1, 5, +, *, +
= [12/[7-3]],2, [1+5],*,+
= 12/(7-3)+2*(1+5)
(b) 12/(7-3)+2*(1+5)
= [3],[2*6],+
= 3+12
= 15
Practical applications of stack
PRIORITY QUEUE
• A priority queue is a collection of elements in which each element is assigned
a priority, and an element X is processed before an element Y if X has higher
priority than Y, or when both have the same priority but X was added to the
list before Y
Algorithm:LKQINS(INFO,LINK,FRONT,PRN,AVAIL,ITEM, P)
This algorithm inserts an item in linked list implementation of priority queue
Step 1: If AVAIL=NULL,then:
Write: ‘OVERFLOW’
Exit
Step 2: Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
Step 3: [Enter the data and priority of new node] Set INFO[NEW]:=ITEM and PRN[NEW]:=P
Step 4: Set PTR:=FRONT
Step 5: If FRONT=NULL or PRN[PTR]>PRN[NEW], then: [Queue empty or new
node has the highest priority]
Set LINK[NEW]:=FRONT
Set FRONT:=NEW
Return
[End of If structure]
Step 6: Repeat while PTR≠NULL and PRN[PTR]<=PRN[NEW]
Set SAVE:=PTR
Set PTR:=LINK[PTR]
[End of Loop]
Step 7: If PTR≠NULL, then: [Insert between SAVE and PTR]
Set LINK[SAVE]:=NEW
Set LINK[NEW]:=PTR
Else: [Insert at the end of the list]
Set LINK[SAVE]:=NEW
Set LINK[NEW]:=NULL
[End of If structure]
Step 8: Return
• Another way to maintain a priority queue in memory is to use a
separate queue for each level of priority . Each such queue will appear
in its own circular array and must have its own pair of pointers,
FRONT and REAR.
• If each queue is allocated the same amount of space, a two
dimensional array QUEUE can be used instead of the linear arrays for
representing a priority queue. If K represents the row K of the queue,
FRONT[K] and REAR[K] are the front and rear indexes of the Kth
row.
Example (each row is a circular array of 6 slots with its own FRONT and REAR):
Priority 1: AAA
Priority 2: BBB, CCC, XXX
Priority 3: (empty)
Priority 4: FFF, DDD, EEE
Priority 5: GGG
Algorithm: QINSERT( QUEUE,N, FRONT, REAR,ITEM,K)
This algorithm inserts an element in a priority queue in a row with priority
K. N is the size of the Kth row.
• Step 1: [Queue already filled?]
If (FRONT[K]=1 and REAR[K]=N) or FRONT[K]=REAR[K]+1, then:
Write: ‘OVERFLOW’
Exit
• Step 2: If FRONT[K]=NULL, then: [Queue initially empty]
Set FRONT[K]:=1 and REAR[K]:=1
Else If REAR[K]=N, then:
Set REAR[K]:=1
Else:
Set REAR[K]:=REAR[K]+1
[End of If structure]
• Step 3: Set QUEUE[K][REAR[K]]:=ITEM
• Step 4: Return
• Algorithm: QDELETE(QUEUE,N,FRONT,REAR,ITEM, START, MAXP)
• This algorithm deletes an element from a priority queue. MAXP is the
• maximum priority in the array
• Step 1: Set K=1 [Priority number]
• Step 2: Repeat while K<=MAXP and FRONT[K]=NULL
Set K=K+1
[End of Loop]
• Step 3: If K>MAXP , then:
Write:’UNDERFLOW’
Exit
[End of If structure]
• Step 4: Set ITEM:=QUEUE[K][FRONT[K]]
• Step 5: If FRONT[K]=REAR[K], then: [Empty Queue]
Set FRONT[K]:=NULL and REAR[K]:=NULL
Else If FRONT[K]=N, then:
Set FRONT[K]:=1
Else:
Set FRONT[K]:=FRONT[K]+1
[End of If structure]
• Step 6: Return
TREE
• A tree is a non-linear data structure mainly used to represent data
containing hierarchical relationship between elements. In hierarchical
data we have ancestor-descendent, superior-subordinate, whole-part,
or similar relationship among data elements.
• A (general) tree T is defined as a finite nonempty set of elements such
that
• There is a special node at the highest level of hierarchy called the
root,
• and the remaining elements, if any, are partitioned into disjoint sets
T1, T2, T3, …, Tn, where each of these sets is a tree, called a subtree of T.
• In other words, one may define a tree as a collection of nodes and
each node is connected to another node through a branch. The nodes
are connected in such a way that there are no loops in the tree and
there is a distinguished node called the root of the tree.
• Tree Terminology
• Parent node- If N is a node in T with left successor S1 and right successor S2, then N
is called father or parent of S1 and S2. Similarly, S1 is called left child of N and S2 is
called the right child of N. The child node is also called the descendant of a node N
• Siblings- The child nodes with same parent are called siblings
• Level of element- Each node in tree is assigned a level number. By definition, root
of the tree is at level 0;its children, if any, are at level 1; their children, if any, are at
level 2; and so on. Thus a node is assigned a level number one more than the level
number of its parent
• Depth/Height of Tree- The height or depth of tree is maximum number of
nodes in a branch . It is one more than the maximum level number
of the tree.
• Degree of an element- The degree of a node in a tree is number of children it has.
The degree of leaf node is zero.
• Degree of Tree- The degree of a tree is the maximum degree of its nodes.
• Edge- Line drawn from a node N of T to a successor is called an edge.
• Path- A sequence of edges is called a path
• Leaf- A terminal node of a tree is called leaf node
• Branch- Path ending in a leaf is called branch of the tree
• The most common form of tree maintained in computer is binary tree.
• Binary Tree- A binary tree T is defined as a finite set of elements,
called nodes, such that either:
– T is empty (called null tree or empty tree) or,
– T contains a distinguished node, R, called root of T and remaining
nodes of T form an ordered pair of disjoint binary trees T1 and T2
• Two trees T1 and T2 are called respectively left and right subtree of R
(root node of T). If T1 is nonempty, then its root is called left
successor of R. Similarly, If T2 is nonempty, then its root is called
right successor of R
Root Node
A
├── B (left successor of A)
│   ├── D
│   └── E
│       ├── F
│       └── J
│           └── L
└── C (right successor of A)
    ├── G
    └── H
        └── K
• The nodes D,F,G,L,K are the terminal or leaf nodes
• Binary trees are used to represent algebraic expressions involving
only binary operations, such as
• E= (a-b)/((c*d)+e)
• Each operator in E appears as an internal node in T whose left and
right subtrees correspond to its operands; each variable or constant in
E appears as a leaf node
        /
      /   \
     -     +
    / \   / \
   a   b *   e
        / \
       c   d
• Before constructing a tree for an algebraic expression, we have to see
the precedence of the operators involved in the expression.
Difference between binary tree and a general tree
• Each element in binary tree has at most two sub trees whereas each
element in a tree can have any number of sub trees
• The sub trees of each element in a binary tree are ordered. That is we
can distinguish between left and right sub trees. The sub trees in a tree
are unordered.
Properties of Binary Trees
• Each node of a binary tree T can have at most two children. Thus at level r of T,
there can be at most 2^r nodes.
• The maximum number of nodes in a binary tree with n levels is
2^n - 1
• The minimum depth of a tree T with n nodes (achieved by a complete tree) is
Dn = ⌊log2 n⌋ + 1
Dn ≈ log2 n
• Complete Binary tree- A binary tree T is said to be complete if all its levels,
except possibly the last, have maximum number of possible nodes, and if all the
nodes at last level appear as far left as possible. Thus there is a unique complete tree
T with exactly n nodes.
• Extended Binary Trees: 2-Trees- A binary tree is said to be a 2-tree or an
extended binary tree if each node N has either 0 or 2 children. In such a case, nodes
with 2 children are called internal nodes, and nodes with 0 children are called external
nodes. The external and internal nodes are distinguished diagrammatically by using
circles for internal nodes and squares for external nodes
Representing Binary Trees in memory
• Binary trees can be represented
• using linked list
• using a single array called the sequential representation of tree
• Sequential representation of Binary Trees- This representation uses
only a single linear array Tree as follows:
• The root R of T is stored in TREE[1]
• If a node N occupies TREE[K], then its left child is stored in
TREE[2*K] and its right child is stored in TREE[2*K+1]
The tree:
        45
       /  \
     22    77
    /  \     \
  11    30    90
    \   /    /
    15 25   88
Its sequential representation in the array TREE:
Index: 1   2   3   4   5   6     7   8     9   10  11    12    13    14
TREE:  45  22  77  11  30  NULL  90  NULL  15  25  NULL  NULL  NULL  88
• It can be seen that a sequential representation of a binary tree requires
numbering of nodes; starting with nodes on level 1, then on level 2
and so on. The nodes are numbered from left to right .
• It is an ideal case for representation of a complete binary tree and in
this case no space is wasted. However for other binary trees, most of
the space remains unutilized. As can be seen in the figure, we require
14 locations in array even though the tree has only 9 nodes. If null
entries for successors of the terminal nodes are included, we would
actually require 29 locations instead of 14.Thus sequential
representation is usually inefficient unless binary tree is complete or
nearly complete
Linked representation of Binary Tree
• In linked representation, Tree is maintained in memory by means of
three parallel arrays, INFO, LEFT and RIGHT and a pointer variable
ROOT. Each node N of T will correspond to a location K such that
INFO[K] contains data at node N. LEFT[K] contains the location of
left child of node N and RIGHT[K] contains the location of right
child of node N. ROOT will contain location of root R of Tree. If any
subtree is empty, corresponding pointer will contain null value. If the
tree T itself is empty, then ROOT will contain null value
ROOT → A
A
├── B
│   ├── D
│   └── E
└── C
    ├── F
    └── G
(H, I and J are at the next level)
Traversing Binary Trees
There are three standard ways of traversing a binary tree T with root
R. These are preorder, inorder and postorder traversals
• Preorder
PROCESS the root R
Traverse the left sub tree of R in preorder
Traverse the right sub tree of R in preorder
• Inorder
Traverse the left sub tree of R in inorder
Process the root R
Traverse the right sub tree of R in inorder
• Postorder
Traverse the left sub tree of R in postorder
Traverse the right sub tree of R in postorder
Process the root R
• The difference between the algorithms is the time at which the root R
is processed. In pre algorithm, root R is processed before sub trees are
traversed; in the in algorithm, root R is processed between traversals
of sub trees and in post algorithm , the root is processed after the sub
trees are traversed.
(Figure fragments omitted: intermediate trees from a worked example that
rebuilds a binary tree from its preorder and inorder traversals; K is placed
to the left of C because K is to the left of C in preorder.)
• Creating the right subtree of F
• The root node is D
• From inorder, the node on the left of D is: H (left subtree of D);
the nodes on the right of D are: B, G (right subtree of D)
Thus the tree is:
(Final tree figure omitted.)
Threads: Inorder Threading
• Considering linked list representation of a binary tree, it can be seen
that half of the entries in pointer fields LEFT and RIGHT will contain
null entries. This space may be more efficiently used by replacing the
null entries by special pointers called Threads which point to nodes
higher in tree. Such trees are called Threaded trees.
• The threads in a threaded tree are usually indicated by dotted lines . In
computer memory, threads may be represented by negative integers
when ordinary pointers are denoted by positive integers.
• There are many ways to thread a binary tree T but each threading will
correspond to a particular traversal of T. Trees can be threaded using
one-way threading or two-way threading. Unless otherwise stated,
threading will correspond to inorder traversal of T.
• Accordingly, in one-way threading, a thread will appear in right null
field of a node and will point to the next node in inorder traversal of T
• In two-way threading of T, a thread will also appear in the LEFT
field of a node and will point to the preceding node in inorder
traversal of T
(Figures omitted: the tree with nodes A through K from earlier, shown first
with one-way inorder threads and then with two-way threads, and an example
binary search tree containing 2, 6, 8, 9, 10.)
BINARY SEARCH TREE
• Binary search tree is one of the most important data structures in
computer science. This structure enables one to search for and find an
element with an average running time
f(n)=O(log2 n )
• It also enables one to easily insert and delete elements. This structure
contrasts with following structures:
• Sorted linear array- here one can find the element with a
running time of O(log2 n ) but it is expensive to insert and
delete
• Linked list- Here one can easily insert and delete but searching
is expensive with running time of O(n)
Searching and Inserting in a BST
• Algorithm: This algorithm searches for ITEM in a tree and inserts it if
not present in tree
• Step 1: Compare ITEM with root node N of Tree
(i) If ITEM < N, proceed to left child of N
(ii) If ITEM >= N, proceed to right child of N
• Step 2: Repeat step 1 until one of the following occurs:
(i) If ITEM = N, then:
Write: ‘Search successful’
(ii) Empty sub tree found indicating search unsuccessful.
Insert item in place of empty sub tree
Algorithm: INSBT(INFO, LEFT, RIGHT, AVAIL, ITEM, LOC)
This algorithm finds the location LOC of an ITEM in T or adds ITEM as a new
node in T at location LOC
Step 1: Call FIND(INFO, LEFT, RIGHT, ROOT, ITEM, LOC, PAR)
Step 2: If LOC ≠ NULL, then
Return
Step 3: [Copy item into new node in AVAIL list]
(a) If AVAIL=NULL, then:
Write: ‘OVERFLOW’
Return
(b) Set NEW:=AVAIL, AVAIL:=LINK[AVAIL] and
INFO[NEW]:=ITEM
(c) Set LEFT[NEW]:=NULL and RIGHT[NEW]:=NULL
Step 4:[Add ITEM to tree]
If PAR=NULL, then:
Set ROOT:=NEW
Else If ITEM<INFO[PAR], then:
Set LEFT[PAR]:=NEW
Else:
Set RIGHT[PAR]:=NEW
[End of If structure]
Step 5: Return
Algorithm: FIND(INFO,LEFT,RIGHT,ROOT,ITEM,LOC,PAR)
This algorithm finds the location LOC of ITEM in T and also the location PAR of the parent of
ITEM. There are three special cases
(a) LOC=NULL and PAR=NULL will indicate tree is empty
(b) LOC≠ NULL and PAR=NULL will indicate that ITEM is the root of T
( c) LOC=NULL and PAR ≠ NULL will indicate that ITEM is not in T and can be added to T as
a child of node N with location PAR
Step 1: If ROOT= NULL , then:
Set LOC:=NULL and PAR:=NULL
Return
Step 2: If ITEM=INFO[ROOT], then:
Set LOC:=ROOT and PAR:=NULL
Write: ’Item is the root of the tree’
Return
Step 3: If ITEM < INFO[ROOT], then:
Set PTR:=LEFT[ROOT] and SAVE:=ROOT
Else:
Set PTR:=RIGHT[ROOT] and SAVE:= ROOT
[End of If structure]
Step 4: Repeat while PTR ≠ NULL:
If ITEM=INFO[PTR] ,then:
Set LOC:=PTR and PAR:=SAVE
Write: ‘ the location of the node in tree is’, LOC
Return
If ITEM< INFO[PTR] , then:
Set SAVE:=PTR and PTR:=LEFT[PTR]
Else:
Set SAVE:=PTR and PTR:=RIGHT[PTR]
[End of If structure]
[End of Step 4 Loop]
Step 5: [Search unsuccessful] Set LOC:=NULL and PAR:=SAVE
Step 6: Return
• Deletion in a Binary Search Tree- Deletion in a BST uses a
procedure FIND to find the location of node N which contains ITEM
and also the location of parent node P(N). The way N is deleted from
the tree depends primarily on the number of children of node N. There
are three cases:
• Case 1: N has no children. Then N is deleted from T by simply
replacing the location P(N) by null pointer
• Case 2: N has exactly one child. Then N is deleted from T by simply
replacing the location of N by location of the only child of N
• Case 3: N has two children. Let S(N) denote the inorder successor of
N. Then N is deleted from T by first deleting S(N) from T(by
using Case 1 or Case 2) and then replacing node N in T by
node S(N)
• Case 1: When node to be deleted does not have two children
Algorithm: DELA( INFO, LEFT,RIGHT,ROOT,LOC,PAR)
This procedure deletes node N at location LOC where N
does not have two children. PAR gives the location of
parent node of N or else PAR=NULL indicating N is the
root node. Pointer CHILD gives the location of only child
of N
• Step 1: If LEFT[LOC]=NULL and RIGHT[LOC]=NULL, then:
Set CHILD=NULL
Else If LEFT[LOC]≠NULL, then:
Set CHILD:=LEFT[LOC]
Else
Set CHILD:=RIGHT[LOC]
• Step 2: If PAR ≠ NULL, then:
If LOC=LEFT[PAR] , then:
Set LEFT[PAR]:=CHILD
Else:
Set RIGHT[PAR]:=CHILD
Else:
Set ROOT:=CHILD
• Step 3: Return
• Case 2: When node to be deleted has two children
• Algorithm: DELB( INFO, LEFT, RIGHT, ROOT, LOC, PAR, SUC, PARSUC)
This procedure deletes node N at location LOC where N has two children. PAR
gives the location of the parent node of N, or else PAR=NULL indicates N is the
root node. Pointer SUC gives the location of the inorder successor of N and PARSUC
gives the location of the parent of the inorder successor
Step 1: (a) Set PTR:=RIGHT[LOC] and SAVE:=LOC
(b) Repeat while LEFT[PTR]≠NULL
Set SAVE:=PTR and PTR:=LEFT[PTR]
[End of Loop]
(c ) Set SUC:=PTR and PARSUC:=SAVE
Step 2: CALL DELA(INFO,LEFT,RIGHT, ROOT,SUC,PARSUC)
Step 3: (a) If PAR ≠ NULL, then:
If LOC = LEFT [PAR], then:
Set LEFT[PAR]:=SUC
Else:
Set RIGHT[PAR]:=SUC
[End of If structure]
Else:
Set ROOT:=SUC
[End of If structure]
(b) Set LEFT[SUC]:=LEFT[LOC] and
Set RIGHT[SUC]:=RIGHT[LOC]
• Step 4: Return
Heap
• Suppose H is a complete binary tree with n elements. Then H is called
a heap or a maxheap if each node N of H has the property that value
of N is greater than or equal to value at each of the children of N.
• Example maxheap, level by level:
97
88 95
66 55 95 48
66 35 48 55 62 77 25 38
18 40 30 26 24
• Analogously, a minheap is a heap such that the value at N is less than or
equal to the value at each of its children. A heap is more efficiently
implemented through an array than through a linked list. In the array, the
location of the parent of the node at location PTR is given by ⌊PTR/2⌋
(integer division)
Inserting an element in a Heap
Suppose H is a heap with N elements, and suppose an ITEM of
information is given. We insert ITEM into the heap H as follows:
• First adjoin the ITEM at the end of H so that H is still a complete tree
but not necessarily a heap
• Then let the ITEM rise to its appropriate place in H so that H is finally
a heap
• Algorithm: INSHEAP( TREE, N, ITEM)
A heap H with N elements is stored in the array TREE and an ITEM of
information is given. This procedure inserts the ITEM as the new element
of H. PTR gives the location of ITEM as it rises in the tree and PAR
denotes the parent of ITEM
• Step 1: Set N:=N+1 and PTR:=N
• Step 2: Repeat Steps 3 to 6 while PTR > 1
• Step 3: Set PAR:=⌊PTR/2⌋
• Step 4: If ITEM ≤ TREE[PAR], then:
Set TREE[PTR]:=ITEM
Return
[End of If structure]
• Step 5: Set TREE[PTR]:=TREE[PAR] [Move the parent down]
• Step 6: Set PTR:=PAR
[End of Step 2 loop]
• Step 7: [ITEM is the new root] Set TREE[1]:=ITEM
• Step 8: Return
Deleting the root node in a heap
Suppose H is a heap with N elements and suppose we want to delete
the root R of H. This is accomplished as follows:
• Assign the root R to some variable ITEM
• Replace the deleted node R by last node L of H so that H is still a
complete tree but not necessarily a heap
• Let L sink to its appropriate place in H so that H is finally a heap
• Algorithm: DELHEAP( TREE, N , ITEM )
A heap H with N elements is stored in the array TREE.
This algorithm assigns the root TREE[1] of H to the
variable ITEM and then reheaps the remaining elements.
The variable LAST stores the value of the original last
node of H. The pointers PTR, LEFT and RIGHT give the
location of LAST and its left and right children as LAST
sinks into the tree.
Step 1: Set ITEM:=TREE[1]
Step 2: Set LAST:=TREE[N] and N:=N-1
Step 3: Set PTR:=1, LEFT:=2 and RIGHT:=3
Step 4: Repeat Steps 5 to 7 while RIGHT ≤ N:
Step 5: If LAST ≥ TREE[LEFT] and LAST ≥ TREE[RIGHT], then:
Set TREE[PTR]:=LAST
Return
[End of If structure]
Step 6: If TREE[RIGHT] ≤ TREE[LEFT], then:
Set TREE[PTR]:=TREE[LEFT] and PTR:=LEFT
Else:
Set TREE[PTR]:=TREE[RIGHT] and PTR:=RIGHT
[End of If structure]
Step 7: Set LEFT:=2*PTR and RIGHT:=LEFT+1
[End of Step 4 loop]
Step 8: [Special case: only a left child remains]
If LEFT=N and LAST < TREE[LEFT], then:
Set TREE[PTR]:=TREE[LEFT] and PTR:=LEFT
Step 9: Set TREE[PTR]:=LAST
Step 10: Return
Example heap, level by level:
90
80 85
60 50 75 70
Application of Heap
HeapSort- One of the important applications of heap is sorting of an
array using heapsort method. Suppose an array A with N elements is
to be sorted. The heapsort algorithm sorts the array in two phases:
• Phase A: Build a heap H out of the elements of A
• Phase B: Repeatedly delete the root element of H
Since the root element of a heap contains the largest element of the
heap, phase B deletes the elements in decreasing order. Similarly,
heapsort on a minheap sorts the elements in increasing order, as there
the root represents the smallest element of the heap.
• Algorithm: HEAPSORT(A,N)
An array A with N elements is given. This algorithm sorts
the elements of the array
• Step 1: [Build a heap H]
Repeat for J=1 to N-1:
Call INSHEAP(A, J, A[J+1])
[End of Loop]
• Step 2: [Sort A repeatedly deleting the root of H]
Repeat while N > 1:
(a) Call DELHEAP( A, N, ITEM)
(b) Set A[N + 1] := ITEM [Store the elements deleted from
the heap]
[End of loop]
• Step 3: Exit
• Problem: Create a Heap out of the following data:
jan feb mar apr may jun jul aug sept oct nov dec
• Solution:
(Solution figure omitted; the root of the resulting heap is sep, with oct
and jun as its children.)
(Figures omitted: a right-skewed and a left-skewed binary search tree.)
• For efficiency's sake, we would like to guarantee that h remains
O(log2 n). One way to do this is to force our trees to be
height-balanced.
• Method to check whether a tree is height balanced or not is as
follows:
– Start at the leaves and work towards the root of the tree.
– Check the height of the subtrees(left and right) of the node.
– A tree is said to be height balanced if the difference of heights of
its left and right subtrees of each node is equal to 0, 1 or -1
• Example:
• Check whether the shown tree is balanced or not
A
B C
D
Sol: Starting from the leaf nodes D and C, the height of left and right
subtrees of C and D are each 0. Thus their difference is also 0
• Check the height of subtrees of B
Height of left subtree of B is 1 and height of right subtree of B is 0.
Thus the difference of two is 1 Thus B is not perfectly balanced but
the tree is still considered to be height balanced.
• Check the height of subtrees of A
Height of left subtree of A is 2 while the height of its right subtree is
1. The difference of two heights still lies within 1.
• Thus for all nodes the tree is a balanced binary tree.
• Check whether the shown tree is balanced or not
(Tree figure omitted.)
AVL Rotations- When an insertion unbalances the tree, balance is restored by a
rotation at the closest unbalanced ancestor A. LL rotation is applied when the
new node is inserted in the left subtree of the left subtree of A; RR rotation
is applied when it is inserted in the right subtree of the right subtree of A.
(LL and RR rotation figures omitted.)
RL Rotation-This rotation occurs when the new node is inserted in left
subtree of right subtree of A. It’s a combination of LL followed by
RR
(RL rotation figure omitted.)
• LR Rotation- This rotation occurs when the new node is inserted in the right
subtree of the left subtree of A. It is the mirror image of the RL rotation,
a combination of RR followed by LL.
(LR rotation figure omitted.)
Problem: Construct an AVL search tree by inserting the following
elements in the order of their occurrence
64, 1, 14, 26, 13, 110, 98, 85
Sol: (construction figures omitted)
Deletion in an AVL search Tree
• The deletion of element in AVL search tree leads to imbalance in the
tree which is corrected using different rotations. The rotations are
classified according to the place of the deleted node in the tree.
• On deletion of a node X from AVL tree, let A be the closest ancestor
node on the path from X to the root node with balance factor of +2 or
-2 .To restore the balance, the deletion is classified as L or R
depending on whether the deletion occurred on the left or right sub
tree of A.
• Depending on value of BF(B) where B is the root of left or right sub
tree of A, the R or L rotation is further classified as R0, R1 and
R-1 or L0, L1 and L-1. The L rotations are the mirror images of their
corresponding R rotations.
R0 Rotation- This rotation is applied when the BF of B is 0 after deletion of the
node
R1 Rotation- This rotation is applied when the BF of B is 1
R-1 Rotation- This rotation is applied when the BF of B is -1
• L rotations are the mirror images of R rotations. Thus L0 will be
applied when the node is deleted from the left subtree of A and the BF
of B in the right subtree is 0
• Similarly, L1and L-1 will be applied on deleting a node from left
subtree of A and if the BF of root node of right subtree of A is either 1
or -1 respectively.
GRAPH
• Graph- A graph G consists of :
– A set V of elements called the nodes (or points or vertices)
– A set E of edges such that each edge e in E is identified with a
unique (unordered) pair [u,v] of nodes in V, denoted by e=[u,v]
The nodes u and v are called the end points of e or adjacent nodes
or neighbors.
• The edge in a graph can be directed or undirected depending on
whether the direction of the edge is specified or not.
• A graph in which each edge is directed is called a directed graph or
digraph.
• A graph in which each edge is undirected is called undirected graph.
• A graph which contains both directed and undirected edges is called
mixed graph.
• Let G=(V,E) be a graph and e є E be a directed edge associated with
ordered pair of vertices (v1,v2). Then the edge e is said to be initiating
from v1 to v2. v1 is the starting and v2 is the termination of the edge e.
• An edge in a graph that joins a vertex to itself is called a sling or a loop
• The degree of a node or vertex u, written deg(u), is the number of edges
containing u. A loop contributes 2 to the degree of its vertex
• In a directed graph for any vertex v the number of edges which have v as their
initial vertex is called the out-degree of v. The number of edges which have v as
their terminal vertex is called the in-degree of v. The sum of in-degree and out-
degree of a vertex is called the degree of that vertex. If deg(u)=0, then u is called
an isolated node and a graph containing only isolated node is called a null graph
• The maximum degree of a graph G, denoted by Δ(G), is the maximum degree of
its vertices, and the minimum degree of a graph, denoted by δ(G), is the minimum
degree of its vertices.
• A sequence of edges of a digraph such that the terminal vertex of each edge
is the initial vertex of the next edge, if it exists, is called a path, e.g.
E={(v1,v2),(v2,v3),(v3,v4)}
• A path P of length n from a vertex u to a vertex v is defined as a sequence of n+1
nodes
P = (v0, v1, v2, …, vn)
such that u=v0; vi-1 is adjacent to vi for i=1, 2, …, n; and v=vn.
• The path is said to be closed or a circular path if v0=vn.
• The path is said to be simple if all nodes are distinct, with the exception that v0 may
equal vn; that is, P is simple if the nodes v0, v1, …, vn-1 are distinct and the nodes
v1, v2, …, vn are distinct.
• A cycle is closed simple path with length 2 or more. A cycle of length k is called a
k-cycle.
• A graph G is said to be connected if and only if there is a simple path between any
two nodes in G; in other words, every pair of its vertices is joined by a path. A
graph that is not connected can be divided into connected components (disjoint
connected subgraphs). (Figure omitted: a graph made of three connected
components.)
• A graph G is said to be complete if every node u in G is adjacent to every node v in
G. A complete graph with n vertices (denoted Kn) is a graph with n vertices in
which each vertex is connected to each of the others (with one edge between each
pair of vertices). In other words, there is path from every vertex to every other
vertex. Clearly such a graph is also a connected graph.
• A complete graph with n nodes will have n(n-1)/2 edges
• A connected graph without any cycles is called a tree graph or free tree or simply
a tree.
• (Figure omitted: the five smallest complete graphs K1, K2, K3, K4, K5.)
• A graph is said to be labeled if its edges are assigned data. G is said to
be weighted if each edge e in G is assigned a non negative numerical
value w (e) called the weight or length of e.
• In such a case, each path P in G is assigned a weight or length which
is the sum of the weights of the edges along the path P. If no weight is
specified, it is assumed that each edge has the weight w (e) =1
• Multiple edges- Distinct edges e and e’ are called multiple edges if
they connect the same endpoints, that is, if e=[u, v] and e’=[u, v].
Such edges are also called parallel edges, and a graph that contains
these multiple or parallel edges is called a multigraph. A graph
containing loops is likewise not a simple graph but a multigraph.
Directed Multi Graph
Weighted Graph
• Representation of a graph-There are two main ways of representing a
graph in memory. These are:
• Sequential
• Linked List
• Sequential Representation- The graphs can be represented as matrices
in sequential representation. There are two most common matrices.
These are:
• Adjacency Matrix
• Incidence Matrix
• The adjacency matrix is a square matrix with one row and one
column devoted to each vertex. The values of the matrix are 0 or 1. A
value of 1 in row i and column j implies that edge eij exists between the
vertices vi and vj. A value of 0 implies that there is no edge between
the vertices vi and vj. Thus, for a graph with vertices v1, v2, …, vn, the
adjacency matrix A=[aij] of the graph G is the n x n matrix defined as:
aij = 1 if vi is adjacent to vj (if there is an edge between vi and vj)
aij = 0 otherwise
• Here NODE will be the name or key value of the node, NEXT
will be a pointer to the next node in the list NODE and ADJ will
be a pointer to the first element in the adjacency list of the node,
which is maintained in the list EDGE. The last rectangle indicates
that there may be other information in the record such as indegree
of the node, the outdegree of the node, status of the node during
execution etc.
• The nodes themselves will be maintained as a linked list and
hence will have a pointer variable START for the beginning of the
list and a pointer variable AVAILN for the list of available space
• Edge List- Each element in the list EDGE will correspond to an
edge of graph and will be a record of the form:
• DEST LINK
• The field DEST will point to the location in the list NODE of the
destination or terminal node of the edge. The field LINK will link
together the edges with the same initial node, that is, the nodes in the
same adjacency list. The third area indicates that there may be other
information in the record corresponding to the edge, such as a field
EDGE containing the labeled data of the edge when graph is a
labeled graph, a field weight WEIGHT containing the weight of the
edge when graph is a weighted graph and so on.
(Example graph figure omitted; its nodes are A, B, C, D, E.)
Node    Adjacency List
A       B, C, D
B       C, E
• Traversing a Graph- There are two standard ways of traversing a
graph
• Breadth-first search
• Depth-First Search
• The breadth-first search will use a queue as an auxiliary structure to
hold nodes for future processing, and analogously, the depth-first
search will use a stack
• During the execution of algorithm, each node N of G (Graph) will be
in one of the three states, called the status of N as follows:
• STATUS=1: (Ready state) The initial state of the node N
• STATUS=2: (Waiting state) The node N is on the queue or stack,
waiting to be processed
• STATUS=3: (Processed state). The node N has been processed
• The general idea behind a breadth-first search beginning at a starting
node A is as follows:
• Examine the starting node A
• Examine all neighbors of A
• Then examine all neighbors of neighbors of A and so on.
• Keep track of neighbors of node and guarantee that no node is
processed more than once. This is accomplished by using a queue to
hold nodes that are waiting to be processed and by using a field
STATUS which tells the status of any node.
• The breadth-first search algorithm helps in finding the minimum path
from source to destination node
• Algorithm: This algorithm executes a breadth-first search on a graph
G beginning at a starting node A. This algorithm can
process only those nodes that are reachable from A. To
examine all the nodes in graph G, the algorithm must be
modified so that it begins again with another node that is
still in the ready state
• Step 1: Initialize all nodes to the ready state (STATUS=1)
• Step 2: Put the starting node A in Queue and change its status to the
waiting state (STATUS=2)
• Step 3: Repeat Steps 4 and 5 until Queue is empty:
• Step 4: Remove the front node N of Queue. Process N and change the
status of N to the processed state (STATUS=3)
• Step 5: Add to the rear of Queue all the neighbors of N that are in the
ready state ( STATUS=1) , and change their status to the
waiting state (STATUS=2)
[End of Step 3 Loop]
• Step 6: Exit
• Algorithm: This algorithm executes a depth-first search on a graph G
beginning at a starting node A. This algorithm can
process only those nodes that are reachable from A. To
examine all the nodes in graph G, the algorithm must be
modified so that it begins again with another node that is
still in the ready state
Step 1: Initialize all nodes to the ready state (STATUS=1)
Step 2: Push the starting node A onto STACK and change its status to
the waiting state (STATUS=2)
Step 3: Repeat steps 4 and 5 until STACK is empty
Step 4: Pop the top node N of STACK. Process N and change its status
to the processed state (STATUS=3)
Step 5: Push onto the STACK all the neighbors of N that are still in the
ready state (STATUS=1) and change their status to the waiting
state (STATUS=2)
[End of Step 3 Loop]
Step 6: Exit
Difference between BFS and DFS
• The BFS uses a queue for its implementation whereas DFS uses a
stack
• BFS is mostly used for finding the shortest distance between the two
nodes in a graph whereas DFS is mostly used to find the nodes that
are reachable from a particular node.
• BFS is called breadth-first search because it first processes a node,
then its immediate neighbours, and so on. In other words, the FIFO
queue puts all newly generated nodes at the end, so shallow nodes are
expanded before deeper nodes: BFS traverses a graph breadth-wise.
DFS first traverses the graph to the deepest reachable node and then
backtracks to process the remaining nodes. In other words, DFS
expands the deepest unexpanded node first: it traverses the depth of
the graph.
• BFS ensures that all the nearest possibilities are explored first,
whereas DFS keeps going as far as it can and then goes back to look at
other options
• Dijkstra’s Algorithm- This technique is used to determine the shortest
path between two arbitrary vertices in a graph.
• Let a weight w(vi,vj) be associated with every edge (vi,vj) in a given
graph. Furthermore, the weights are such that the total weight from
vertex vi to vertex vk through vertex vj is w(vi,vj) + w(vj,vk). Using
this technique, the weight from a vertex vs (the start of the path) to the
vertex vt (the end of the path) in the graph G for a given path (vs,v1),
(v1,v2), (v2,v3), …, (vi,vt) is given by w(vs,v1) + w(v1,v2) + w(v2,v3) +
… + w(vi,vt)
• Dijkstra’s method is a very popular and efficient one for finding a
shortest path from a starting vertex to a terminal vertex. If there is an
edge between two vertices, then the weight of this edge is its length.
If several edges exist, use the shortest-length edge. If no edge actually
exists, set the length to infinity. Edge (vi,vj) does not necessarily have
the same length as edge (vj,vi). This allows different routes between
two vertices depending on the direction of travel
• Dijkstra’s technique is based on assigning labels to each vertex. The label is equal to the
distance (weight) from the starting vertex to that vertex. The starting vertex has the label 0.
A label can be in one of two states: temporary or permanent. A permanent label is one that is
known to lie along the shortest path, while a temporary label is one for which it is still
uncertain whether it lies along the shortest path.
• Algorithm:
• Step 1: Assign a temporary label l(vi)= ∝ to all vertices except vs (the starting
vertex)
• Step 2: [Mark vs as permanent by assigning 0 label to it]
l(vs)=0
• Step 3: [Assign value of vs to vk where vk is the last vertex to be made permanent]
vk=vs
• Step 4: For every vertex vi with a temporary label, if l(vi) > l(vk) + w(vk,vi)
[weight of the edge from vk to vi], set l(vi) := l(vk) + w(vk,vi)
• Step 5: Among the vertices with temporary labels, find the one with the smallest
label, mark it permanent, and set vk to it
• Step 6: If vt still has a temporary label, repeat Step 4 to Step 5; otherwise the
label of vt is permanent and equals the length of the shortest path from vs to vt
• Step 7: Exit
Dijkstra’s Algorithm
An Example
[Slide figures: a six-node weighted digraph worked through Dijkstra's
algorithm step by step. The slides repeat the cycle: Initialize (source
label 0, all other labels ∞); Select the node with the minimum temporary
distance label and make it permanent; Update Step — relax the edges
leaving the newly permanent node (at one point the predecessor of node 3
becomes node 2); then Choose Minimum Temporary Label and Update again,
until every label is permanent.]
[Figure: three pegs labelled A, B, and C]
• The solution for the Towers of Hanoi problem for n=3 is done in seven
moves as:
• Move top disk from peg A to peg C
• Move top disk from peg A to peg B
• Move top disk from peg C to peg B
• Move top disk from peg A to peg C
• Move top disk from peg B to peg A
• Move top disk from peg B to peg C
• Move top disk from peg A to peg C
[Slide figures: eight snapshots of the pegs — the initial position, then
the position after each of the seven moves A->C, A->B, C->B, A->C,
B->A, B->C, A->C]
Rather than finding a separate solution for each n, we use the
technique of recursion to develop a general solution
• Move top n-1 disks from peg A to peg B
• Move top disk from peg A to peg C: A->C
• Move the top n-1 disks from peg B to peg C.
• Let us introduce a general notation
TOWER(N, BEG, AUX, END)
to denote a procedure which moves the top N disks from the initial
peg BEG to the final peg END using the peg AUX as an auxiliary.
• For n=1,
• TOWER(1, BEG, AUX, END) consists of the single instruction
BEG->END
For n>1, solution may be reduced to the solution of following three
subproblems:
• TOWER(N-1, BEG, END, AUX)
• TOWER(1,BEG, AUX, END) BEG->END
• TOWER(N-1, AUX, BEG, END)
• Each of the three subproblems can be solved directly or is essentially
the same as the original problem with fewer disks. Accordingly, this
reduction process does yield a recursive solution to the Towers of
Hanoi problem.
• In general, the recursive solution requires 2^n − 1 moves for n disks.
• Algorithm: TOWER(N, BEG, AUX, END)
This procedure produces a recursive solution to the
Towers of Hanoi problem for N disks
Step 1: If N=1, then:
Write: BEG->END
Return
Step 2: [Move N -1 disks from peg BEG to peg AUX]
Call TOWER(N-1, BEG, END, AUX)
Write: BEG->END
Step 3: [Move N-1 disks from peg AUX to peg END]
Call TOWER(N-1, AUX, BEG, END)
Step 4: Return
Trace of TOWER(4,A,B,C); reading the moves from top to bottom gives the
fifteen-move solution:

TOWER(4,A,B,C)
    TOWER(3,A,C,B)
        TOWER(2,A,B,C)
            TOWER(1,A,C,B): A->B
            A->C
            TOWER(1,B,A,C): B->C
        A->B
        TOWER(2,C,A,B)
            TOWER(1,C,B,A): C->A
            C->B
            TOWER(1,A,C,B): A->B
    A->C
    TOWER(3,B,A,C)
        TOWER(2,B,C,A)
            TOWER(1,B,A,C): B->C
            B->A
            TOWER(1,C,B,A): C->A
        B->C
        TOWER(2,A,B,C)
            TOWER(1,A,C,B): A->B
            A->C
            TOWER(1,B,A,C): B->C
Operations on Graph
• Suppose a graph G is maintained in memory by the linked list
representation
• GRAPH(NODE,NEXT,ADJ,START,AVAILN,DEST,LINK,AVAIL
E)
• The various operations possible on graph are insertion, deletion and
searching a node in a graph.
• Algorithm: INSNODE( NODE, NEXT, ADJ, START, AVAIL, N )
/* Iterative preorder traversal using an explicit stack of right
   children; also counts the leaf and non-leaf nodes */
void pre()
{
int leafcount=0,nonleaf=0;
struct tree* ptr,*temp;
ptr=root;
while(ptr!=NULL)
{
printf("%d",ptr->num);
if(ptr->left==NULL&&ptr->right==NULL)
{
leafcount++;
}
else
{
nonleaf++;
}
if(ptr->right!=NULL)
{
push(ptr->right);
}
ptr=ptr->left;
}
while(top!=NULL)
{
temp=pop();
while(temp!=NULL)
{
printf("%d",temp->num);
if(temp->left==NULL&&temp->right==NULL)
{
leafcount++;
}
else
{
nonleaf++;
}
if(temp->right!=NULL)
push(temp->right);
temp=temp->left;
}
}
printf("leaf nodes are %d and nonleaf nodes are %d",leafcount,nonleaf);
}
void push(struct tree* p)
{
struct stack* new1;
new1=(struct stack*)malloc(sizeof(struct stack));
new1->tr=p;
new1->link=top;
top=new1;
}
struct tree* pop()
{
struct stack* s1;
s1=top;
top=top->link;
return s1->tr;
}
• Program to search for an item in a binary search tree
#include<stdio.h>
#include<conio.h>
struct tree
{
int num;
struct tree* left;
struct tree* right;
};
struct tree* root=NULL;
struct stack
{
struct tree* tr;
struct stack* link;
};
struct tree* pop();
struct stack* top=NULL;
void pre();
void search();
void create();
void push();
void main()
{
char ch='y';
int choice,choice1;
while(ch=='y')
{
printf("enter the type of operation 1. create 2. search");
scanf("%d",&choice);
switch(choice)
{
case 1: create();
break;
case 2: search();
break;
}
while(ptr1!=NULL)
{
if(new1->num<ptr1->num)
{
save=ptr1;
ptr1=ptr1->left;
}
else
{
save=ptr1;
ptr1=ptr1->right;
}
}
if(new1->num<save->num)
{
save->left=new1;
}
else
{
save->right=new1;
}
}
• Write an algorithm to divide a linked list into three sublists based on remainder
values of the data.
• Algorithm: Sublist(INFO, LINK, ITEM, START)
This algorithm divides the list into three sublists based on
remainder values of the data. PTR stores the address of the
current node of the base list. PTR1 stores the address of the current
node of the first new list created and PTR2 stores the address of the
current node of the second sublist created. START1, START2
and START3 store the starting addresses of the
three sublists
Step 1: START1:=AVAIL and AVAIL:=LINK[AVAIL]
START2:=AVAIL and AVAIL:=LINK[AVAIL]
START3:=AVAIL and AVAIL:=LINK[AVAIL]
Step 2: Set PTR:=START, PTR1:=START1 and PTR2:=START2 and
PTR3:=START3
Step 3: Repeat while INFO[PTR]≠ITEM
INFO[PTR1]:=INFO[PTR]
LINK[PTR1]:=AVAIL and AVAIL:=LINK[AVAIL]
PTR1:=LINK[PTR1]
PTR:=LINK[PTR]
[End of Loop]
Step 4: Repeat while INFO[PTR]≠ITEM
INFO[PTR2]:=INFO[PTR]
LINK[PTR2]:=AVAIL and AVAIL:=LINK[AVAIL]
PTR2:=LINK[PTR2]
PTR:=LINK[PTR]
[End of Loop]
Step 5: Repeat while PTR ≠ NULL
INFO[PTR3]:=INFO[PTR]
LINK[PTR3]:=AVAIL and AVAIL:=LINK[AVAIL]
PTR3:=LINK[PTR3]
PTR:=LINK[PTR]
[End of Loop]
Step 6: Exit
Problem: Let there be a doubly linked list with three elements P , Q and R.
• Write an algorithm to insert S between P and Q
• Write an algorithm to delete the head element P from list
Sol:
(A)
Algorithm: INSERT(INFO,BACK,FORW)
This algorithm inserts an element S between elements P and
Q.
Step 1: If AVAIL=NULL, then:
Write:’OVERFLOW’ and Exit
[End of If structure]
Step 2: Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
Step 3: Set INFO[NEW]:=S
Step 4: Set PTR:=START
Step 5: Repeat while INFO[PTR]≠P
PTR:=FORW[PTR]
[End of Loop]
Step 6: Set TEMP:=FORW[PTR]
Set FORW[PTR]:=NEW
Set BACK[NEW]:=PTR And FORW[NEW]:=TEMP
Set BACK[TEMP]:= NEW
Step 7: Exit
(b) Algorithm: Del (INFO, BACK,FORW,START)
This algorithm deletes the head node of a doubly linked
list
• Step 1:Set PTR:=START
• Step 2: Set START:= FORW[PTR]
• Step 3: Set BACK[FORW[PTR]]:=NULL
• Step 4: [Returning memory to avail list] Set FORW[PTR]:=AVAIL
and AVAIL:=PTR
• Step 5: Exit
• Problem: Suppose the names of few students of a class are as below:
Ram, Sham, Mohan, Sohan, Vimal, Komal
It is assumed that the names are represented as a singly linked list
(a) Write a program or algorithm to insert the name Raman between Sham and Mohan
(b) Write a routine to replace the name Vimal with Guman
Sol:
(a) Algorithm: INSRT(INFO,LINK,START)
This algorithm inserts an item Raman between Sham and Mohan
Step 1: Set PTR:=START
Step 2: If AVAIL=NULL
Write: ‘OVERFLOW’
Exit
Step 3: Set NEW:=AVAIL and AVAIL:=LINK[AVAIL]
Step 4: Set INFO[NEW]:=Raman
Step 5: Repeat while INFO[PTR]≠Sham
PTR:=LINK[PTR]
[End of Loop]
Step 6: Set LINK[NEW]:=LINK[PTR]
Set LINK[PTR]:=NEW
Step 7:Exit
(b) Algorithm: Replace(INFO,LINK,START)
This algorithm replaces the name Vimal in the linked
list with the name Guman. The START pointer stores the
starting address of the linked list
• Step 1: Set PTR:=START
• Step 2: Repeat while INFO[PTR]≠Vimal
PTR:=LINK[PTR]
[End of Loop]
• Step 3: Set INFO[PTR]:=Guman
• Step 4: Exit
• Problem: Calculate the depth of a tree with 2000 nodes
• Solution: The formula for calculating the (minimum) depth of a binary
tree with N nodes is
Depth = ⌊log2 N⌋ + 1
Here N is 2000
Find the powers of 2 between which the value 2000 lies:
2^10 = 1024 and 2^11 = 2048
Thus 2000 lies between 2^10 and 2^11
Putting this into the formula and taking the lower limit:
Depth = log2(2^10) + 1
= 10 log2 2 + 1
= 10 + 1 = 11
• Static and dynamic data structures-A static data structure in
computational complexity theory is a data structure created for an
input data set which is not supposed to change within the scope of the
problem. When a single element is to be added or deleted, the update
of a static data structure incurs significant costs, often comparable
with the construction of the data structure from scratch. In real
applications, dynamic data structures are used, which allow for
efficient updates when data elements are inserted or deleted.
• Static data structures such as arrays allow
- fast access to elements
- expensive to insert/remove elements
- have fixed, maximum size
• Dynamic data structures such as linked lists allow
- fast insertion/deletion of element
- but slower access to elements
- have flexible size
• Applications of Binary Tree/Binary Search Tree
• For making decision trees
• For expressing mathematical expressions
• Faster searching as search is reduced to half at every step.
• For sorting of an array using heap sort method
• For representation of Organization charts
• For representation of File systems
• For representation of Programming environments
• Arithmetic Expression Tree
– Binary tree associated with an arithmetic expression
– internal nodes: operators
– external nodes: operands
– Example: arithmetic expression tree for the
expression (2 × (a − 1) + (3 × b))
• Trailer node- A trailer node is like a header node in a linked list
except that it stores the address of the last node in the linked list. Its
significance shows in a doubly linked list, which can be traversed
in both directions. In a doubly linked list, we can take advantage
of the trailer node when searching a sorted doubly linked list; the
search will be more efficient.
For convenience, a doubly linked list has a header node and a
trailer node. They are also called sentinel nodes, indicating
both ends of a list.
[Figure: a doubly linked list bracketed by its header and trailer nodes]