DATA STRUCTURES
PLACEMENT TRAINING
                   15.10.2020
Dr. P. Latchoumy, Associate Professor
Department of Information Technology
BSACIST
                        SESSION - I
                          Topics
   Problem Solving Phases
   Top-Down Design
   Efficiency of Algorithms
   Analysis of Algorithms
   Time & Space Complexity
   Big O Notation
   Sample algorithms - Analysis
   Introduction to Data Structures
   Data structure Types
   Abstract Data Type
   Memory Representation of :
   Arrays
   Structures
   Unions
   Pointers
              Problem Solving
• Algorithm
    Step-by-step procedure for solving a problem
    Solution to a problem that is independent of any
    programming language
    "An algorithm is a sequence of computational steps that
    transform the input into the output."
    A correct algorithm halts with the correct output for every
    input instance.
"An algorithm is any well-defined computational procedure
that takes some value, or set of values, as input and
produces some value, or set of values, as output."
• Programs:
       Set of instructions expressed in a programming
language (C, C++, Java, VB, J2EE, .NET, etc.)
      A program is the expression of an algorithm in a
programming language.
• Data structure:
A data structure is a way to store and organize data in order to
facilitate access and modifications.
             Program=Algorithm + Data structure
 Expressing Algorithms:
           English description
               Pseudo-code
Pseudo-code:
 Like a programming language, but its rules are less strict
 Written as a combination of English and programming
         constructs
 Based on selection (if, switch) and iteration (while, repeat)
            constructs in high-level programming languages
Independent of actual programming language
   Algorithm:
   1. Start
   2. Read two numbers as n1,n2
   3. Find the sum of n1 & n2 as sum = n1+n2
   4. Print the sum value
   5. Stop
Pseudo-code:
 BEGIN
NUMBER s1, s2, sum
OUTPUT ("Input number1:")
INPUT s1
OUTPUT ("Input number2:")
INPUT s2
sum=s1+s2
OUTPUT sum
END
 The Problem-Solving Aspect:
 Requirements for solving problems by computer
 •Algorithm
 • Data Structure
 • Programming Language
                              Problem
input → "computer" (algorithm & programming language with data structure) → output
     The Problem Solving aspects
• Problem solving is a creative process and requires
  systematization and mechanization.
• Different strategies work for different people; there is no
  universal method for solving a problem.
• Hence computer problem solving is, above all, about understanding
  the problem.
• The various aspects for problem solving,
    1. Problem definition phase
    2. Getting started on a problem
    3. The use of specific examples
    4. Similarities among problems
    5. Working backwards from the solution
    6. General problem solving strategies
1. Problem Definition Phase
Understanding the problem is the key to success in solving it.
The first phase is the problem definition phase.
Think about what must be done rather than how to do it.
Try to extract a set of precisely defined tasks from the problem
statement.
Hence, a lot of care must be taken in working out precisely
what must be done.
 Example :
      Finding the square root
      Finding the greatest common divisor
From the definition develop an algorithm
Example: GCD of two integers
Algorithm:
1.   Get the two positive non-zero integers smaller and larger whose GCD is to
     be computed.
2. Repeat the following steps until a zero remainder is obtained,
         (a) Get the remainder from dividing the larger integer (n) by the
             smaller integer (m).
                    r = n mod m;
         (b) Let the smaller integer assume the role of the larger integer.
                    n = m;
         (c) Let the remainder assume the role of the divisor.
                    m = r;
//repeat until r==0
 3. Return the GCD of the original pair of integers.
                      gcd = n;
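The steps above translate directly into C as Euclid's algorithm (a minimal sketch; the function name `gcd` and its parameter names are illustrative):

```c
/* Euclid's algorithm, following the steps above:
 * repeatedly replace (n, m) by (m, n mod m) until the remainder is 0;
 * the last non-zero value held in n is the GCD. */
int gcd(int n, int m)
{
    while (m != 0) {
        int r = n % m;   /* (a) remainder of dividing n by m          */
        n = m;           /* (b) smaller integer takes the larger role */
        m = r;           /* (c) remainder takes the role of divisor   */
    }
    return n;            /* step 3: gcd = n */
}
```

Note that the loop also works if the arguments arrive in the "wrong" order: the first iteration simply swaps them.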
    2.Getting started on a problem
• There may be many ways to solve the problem and
  also many solutions to most problems.
  • It is difficult to tell in advance which solutions are likely to be
    productive.
  • If you are concerned about the details of the
    implementation before you completely understand the
    problem, you will hit a mental block and be unable to
    move forward.
  • Work out an implementation-independent solution first.
  • Gather more detail about the problem.
  • Only then start coding.
 3.The use of specific examples
An approach to start the problem – pick a specific example of the
general problem and try to work out the mechanism to solve this
problem.
E.g., Find out the maximum number from the given set of
numbers.
- Choose a particular set of numbers like 2,3,6,9,12 and work out
the mechanism to find the maximum in this set.
Use some geometrical or schematic diagrams representing
certain aspects of the problem.
The specification for a particular problem needs to be examined
very carefully to check whether the proposed algorithm can meet
those requirements.
If the specifications are difficult to formulate, a chosen set of test
cases can help us to find the solution.
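Working the specific example 2, 3, 6, 9, 12 through in code makes the mechanism concrete (a sketch; the function name `max_of` is illustrative, not part of the original material):

```c
/* Scan the set once, remembering the largest value seen so far. */
int max_of(const int a[], int n)
{
    int max = a[0];               /* the first element is the max so far */
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];           /* a larger value replaces the old max */
    return max;
}
/* For the sample set {2, 3, 6, 9, 12} this returns 12. */
```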
 4. Similarities among problems
 Another way to solve a problem is to see if there are
  any similarities between the current problem and
  other problems that we have solved or we have seen
  solved.
 Make an effort to be aware of the similarities among
  problems.
 But, experience may not always be helpful.
  Sometimes, it may block us from discovering a better
  solution to a problem.
 To get a better solution to a problem, try to solve the
  problem independently.
 Skill to be developed in problem solving – Ability to
  view a problem from a variety of angles.
      5. Working backwards from the
                 solution
• If we do not know where to start on a problem,
     we can work backwards to the starting conditions (if
     the expected result and initial conditions are known).
   Whatever attempts we make to get started on a
   problem, write down the various steps and
   explorations made.
   Once we have solved a problem, remember
   the steps by which we discovered the solution.
   The most important thing is developing problem-
   solving skills through practice.
 6.General Problem solving strategies
• General & powerful computational strategies that
  are repeatedly used are,
       Divide and Conquer
       Binary doubling
       Dynamic programming
           Backtracking
           Branch and Bound
           Greedy method
Divide and Conquer:
•          One large, complex problem is divided into a number
  of sub-problems, each of which is solved separately. The
  sub-problem solutions are then combined to
  form the solution to the large problem.
• Example: Merge sort algorithm
Dynamic programming:
      Dynamic programming is an algorithm that can be used when
the solution to a problem can be viewed as the result of a sequence
of decisions.
Example: TRAVELING SALESMAN PROBLEM
           Backtracking: During the search if infeasible solution is
            sensed then backtrack to previous node.
              Example: 8-queens problem
           Branch and Bound:
                 Branch- splitting procedure
                 Bound- computes upper and lower bounds
              Example: Knapsack problem
           Greedy method: Builds a feasible solution to the given
            problem step by step, choosing the best-looking option
            at each step
                Example: job scheduling
               Top Down Design
Once the problem is defined, we have an idea of how to solve it.
 Top down design - Technique for Designing algorithm
 Another name for top down design is Stepwise Refinement
Top-down design is a strategy that develops the solution of a
 problem from a vague outline into a precisely defined algorithm and
 program implementation.
 It provides a way of handling inherent logical complexity.
 It allows us to build our solutions to a problem in a stepwise
fashion
  2.1 Breaking Problem into Sub problems
 Divide the task into subtasks.
 After problem solving phase, we get the general outlines of a
  solution- may consists of single statement or set of statements.
 Take the general statements one by one and break them down
  into a set of more precisely defined subtasks. (i.e.) Divide the
  task into subtasks.
 The subtasks accurately describe how the final goal is to be
  reached.
 The interaction between the subtasks has to be precisely
  defined.
 The process of repeatedly breaking a task into subtasks, and then
  each subtask into still smaller subtasks, must continue until we end
  up with subtasks that can be implemented as program
  statements.
• Most algorithms require two or three levels.
• Large software projects require more levels.
2.2 Choice of a suitable Data Structure
All programs operate on data and the way data is
organized has a significant effect on the final solution.
An inappropriate choice of data structure leads to clumsy, inefficient
  and difficult implementations. An appropriate choice leads to
  simple, transparent and efficient implementations.
 Effective problem solving strategy – Making appropriate
  choices about the associated data structures.
 Data structures and algorithms are highly linked to each
  other.
 A small change in data organization can have a
  significant influence on the algorithm required to solve
  the problem.
Choice of a suitable Data Structure
Consider the following things when setting up data
  structure :
   1. Can the data structure be easily searched?
   2. Can the data structure be easily updated?
   3. Does the data structure provide a way of
      recovering an earlier state in the computation ?
   4. Does the data structure involve the excessive use
      of storage?
   5. Can the problem be formulated in terms of one of
      the common data structures (e.g. array, queue,
      stack, tree, graph, list)?
   6. How can we arrange the intermediate results
      so that the information can be accessed faster?
   2.3 Construction of Loops
   To implement subtasks (realized as computations), a series
   of iterative constructs, or loops, is needed.
   The loops, along with input/output statements,
   computable expressions and assignments, make up the
   program.
Loop: a construct that repeatedly executes a set of
   instructions a defined number of times.
   E.g. while, for
To construct any loop, consider 3 things:
         Initial conditions that need to apply before the loop begins to execute
         (i=0;)
         The invariant relation that must apply after each iteration of the loop
         (i<=10; i++)
         The conditions under which the iterative process must terminate
         for(i=0; i<=10; i++)
       Trouble in constructing loops is usually due to:
         Getting the initial conditions correct.
         Getting the loop to execute the right number of times.
2.4 Establishing initial conditions for loops
–   Set the loop variables.
–   Variable: its value can be changed at run time.
    The loop variables take the values needed to solve the smallest
    problem associated with the loop.
–   Set the number of iterations 'n' in the range i = 0 to n-1.
  Example : Find the sum of a set of numbers
•     Solution :
        – Set the loop variable as 'i'
        – Set the sum variable as 'S'
        – The sum of zero numbers is zero, so the initial
  values of 'i' and 'S' are zero:
        i := 0; S := 0;
  2.5 Finding the iterative construct
   Once we know the conditions for solving the smallest
   problem, the next step is to try to extend it to the next
   smallest problem (when i = 1).
Solution to summation problem for n>=0
         i=0;
          S=0;
     while i < n do
                 begin
                         i= i+1;
                         S=S+a[i];
               end
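With C's 0-based array indexing, the same summation loop can be sketched as follows (the name `sum_array` is illustrative):

```c
/* Sum the first n elements of a, extending the i = 0, S = 0 base case.
 * The loop invariant: after each iteration, S holds the sum of the
 * first i elements. */
int sum_array(const int a[], int n)
{
    int S = 0;
    for (int i = 0; i < n; i++)   /* terminates after exactly n iterations */
        S = S + a[i];
    return S;
}
```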
   We also need to consider the termination of loops.
• 2.6 Termination of loops
Loops can be terminated in a number of ways. The
  termination condition is based on the nature of the problem.
Simple condition- Number of iterations are known in advance
Example :
  for i = 1 to n do
    begin
          …
    end
The above loop terminates unconditionally after ‘n’ iterations
 Other way - Terminates only When some conditional
  expression becomes false
Example :
  while (x>0) and (x<10) do
    begin
    …
    end
The remaining phases of program development:
•   Debugging
•   Testing
•   Documentation
•   Verification
      3. The Efficiency of Algorithms
Efficiency of algorithm mainly depends upon
 Design
 Implementation
 Analysis of algorithms
Every algorithm must use some of a computer’s resources to
complete its task    (Ex : CPU time, internal memory,
Network B/W).
Suggestions to design efficient algorithms
 Redundant computations
 Referencing array elements
 Inefficiency due to late termination
 Early detection of desired output conditions
 Trading storage for efficiency gains
3.1. Redundant computations
The inefficiency problem here is caused by
    redundant computations or unnecessary storage.
    The effect is more serious when the redundancy is embedded within a loop.
The most common mistake is
    repeatedly recalculating part of an expression that remains constant
      throughout the entire execution of the loop.
Example :
 x:=0;
  a:=5;
 for i:=1 to n do
          begin
                    x:=x+1;
                    y:=(a*a)+x;
                    writeln(‘x=‘,x,’y=‘,y)
          end
  The unnecessary multiplications and additions can be removed
 by precomputing a*a before executing the loop:
a:=5
M := a*a;
x:=0;
for i:=1 to n do
 begin
       x:=x+1;
       y:=M+x;
       writeln(‘x=‘,x,’y=‘,y)
 end
  **Eliminate redundancies in the innermost loops of
 computation for efficient algorithms.
3.2. Referencing Array elements
Example : To Find the maximum and its position in an array
Version 1 :
p:=1;
for i:=2 to n do
    if a[i]>a[p] then p:=i;
 max:=a[p]
 Version 2 :
 p:=1;
 max:=a[1];
 for i:=2 to n do
     if a[i]>max then
     max:=a[i];
 //p:=i
 //end
 //max : = a[p]
 • Version 2 is preferred because the
   conditional test (a[i] > max) is more efficient
   than the test in version 1:
 •       use of the variable max needs only one
   memory reference,
 •       and introducing the variable max makes it
   clear what task is to be accomplished,
 whereas in version 1 the expression a[p] requires 2
memory references.
3.3. Inefficiency due to late termination
 Inefficiency due to more tests done than required to solve a
  problem.
 Example, perform linear search on an alphabetically ordered list of
   names for some particular name
 In an inefficient implementation, all names are examined,
even past the point in the list where it is known that the
   name cannot occur later.
    // Inefficient Algorithm
    while name sought <> current name and not end-of-file do
             get next name from list
A more efficient implementation would be
     while name sought > current name and not end-of-file do
              get next name from list
         test if current name is equal to name sought
An inefficient example of search implementation would be when search
  through the dictionary is continued even after a point when it is
  certain that the name cannot be found. Worse is when the loop
  continues even after it is found.
i = 0;   /* inefficient: scans the whole dictionary, even past the
            point where the name can no longer occur, and even
            after it has been found (strcmp needs <string.h>) */
while (i < dictSize)
{
  if (strcmp(name, dict[i]) == 0)
    printf("Found !!");
  i++;
}
More efficient code exits early, both on a match and as soon as the
sorted order guarantees the name cannot occur later:
i = 0;
while (i < dictSize && strcmp(name, dict[i]) >= 0)
{
  if (strcmp(name, dict[i]) == 0)
  {
     printf("Found !!");
     break;
  }
  i++;
}
3.4. Early detection of desired output conditions
It sometimes happens, due to the nature of the input data, that the
    algorithm establishes the desired output condition before the
    general conditions for termination have been met.
e.g., Bubble sort used to sort a set of data that is already almost in
    sorted order.
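The almost-sorted bubble sort case can be handled with the standard "swapped" flag, which detects the desired output condition early (a sketch, not the only variant):

```c
/* Bubble sort that stops as soon as a full pass makes no swap:
 * on already-sorted input only one pass is needed, instead of the
 * n-1 passes the general termination condition would demand. */
void bubble_sort(int a[], int n)
{
    for (int pass = 0; pass < n - 1; pass++) {
        int swapped = 0;
        for (int i = 0; i < n - 1 - pass; i++) {
            if (a[i] > a[i + 1]) {       /* out of order: swap the pair */
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)    /* desired output condition already established */
            break;
    }
}
```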
3.5. Trading storage for efficiency gains
 There is a trade-off between storage and efficiency: extra storage
  can improve the performance of an algorithm.
 The trade-off: precompute or save intermediate results, and
  avoid unnecessary tests and computations later on.
 Speed up the algorithm by using the least number of loops.
    Analysis of Algorithms
The complexity of an algorithm is a function describing the efficiency
of the algorithm in terms of the amount of data the algorithm must
process/use and the amount of time it takes for execution.
There are two main complexity measures of the efficiency of an
algorithm: time and space complexity.
Time complexity is a function describing the amount of time an
algorithm takes in terms of the amount of input to the algorithm.
"Time" can mean the number of memory accesses performed, the
number of comparisons between integers, or the number of times
some inner loop is executed.
Space complexity is a function describing the amount of memory
(space) an algorithm takes in terms of the amount of input to the
algorithm.
       Analysis of Algorithms
TIME COMPLEXITY
   Time complexity estimates the time to run an algorithm.
What’s the running time of the following algorithm?
// Compute the maximum element in the array a.
Algorithm max(a):
   max ← a[0]
   for i = 1 to len(a)-1
      if a[i] > max
          max ← a[i]
   return max
The answer depends on factors such as input, programming
language and runtime, coding skill, compiler, operating
system, and hardware.
       Analysis of Algorithms
Time complexity
• We often want to reason about execution time in a way that
   depends only on the algorithm and its input.
• This can be achieved by choosing an elementary operation, which
   the algorithm performs repeatedly, and
• define the time complexity T(n) as the number of such operations
   the algorithm performs given an array of length n.
For the algorithm above we can choose the comparison a[i] > max as
an elementary operation.
This captures the running time of the algorithm well, since comparisons
dominate all other operations in this particular algorithm.
Also, the time to perform a comparison is constant: it doesn’t depend
on the size of a.
The time complexity, measured in the number of comparisons, then
becomes T(n) = n - 1.
            Analysis of Algorithms
 Algorithm contains(a, x): // Tell whether the array a contains x.
   for i = 0 to len(a)-1
      if x == a[i]
          return true
   return false
The comparison x == a[i] can be used as an elementary operation in this
case. The number of comparisons depends on the number of elements, n,
in the array and also the value of x & the values in a.
If x isn’t found in a the algorithm makes n comparisons, but if x equals a[0]
there is only one comparison. Because of this, we often choose to study
worst-case time complexity: Let T1(n), T2(n), … be the execution times for
all possible inputs of size n.
The worst-case time complexity W(n) is then defined as
W(n) = max(T1(n), T2(n), …).
The worst-case time complexity for the contains algorithm thus becomes
W(n) = n.
             Analysis of Algorithms
List of common runtimes, from
fastest to slowest:
O           Complexity
O(1)        constant       fast
O(log n)     logarithmic
O(n)         linear
O(n * log n) log linear
O(n^2)       quadratic
O(n^3)       cubic
O(2^n)       exponential
O(n!)        factorial     slow
         Analysis of Algorithms
What is Big O Notation?
  Big O is a notation for measuring the complexity of an algorithm.
  We measure the rate of growth of an algorithm in the number of
  operations it takes to complete or the amount of memory it
  consumes.
  Big O notation is used to define the upper bound, or worst-case
  scenario, for a given algorithm.
  O(1), or constant time complexity, is the rate of growth in which the
  size of the input does not affect the number of operations
  performed.
The formal definition for big-O analysis:
A function T(n) is O(f(n)) if T(n) <= c * f(n) for some constant c > 0
and for all n >= n0.
   The way to read the above statement is as follows:
   n is the size of the data set.
   f(n) is a function that is calculated using n as the parameter.
   O(f(n)) means that the curve described by f(n) is an upper bound for the
   resource needs of a function.
  Analysis of Algorithms
Linear Search
Binary search
Bubble sort
                  Linear Search
Best Case
   The element being searched may be found at the first position.
  In this case, the search terminates in success with just one
  comparison.
  Thus in best case, linear search algorithm takes O(1) operations.
Worst Case
  The element being searched may be present at the last position or
  not present in the array at all.
  In the former case, the search terminates in success with n
  comparisons.
  In the latter case, the search terminates in failure with n
  comparisons.
  Thus in worst case, linear search algorithm takes O(n) operations.
   Thus, we have-Time Complexity of Linear Search Algorithm is O(n).
  Here, n is the number of elements in the linear array.
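The best and worst cases above can be seen directly in a C linear search (a sketch; returns the index of the key, or -1 if absent):

```c
/* Return the index of key in a[0..n-1], or -1 if it is absent.
 * Best case:  key at index 0 -> 1 comparison, O(1).
 * Worst case: key last or absent -> n comparisons, O(n). */
int linear_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;   /* success: terminate immediately */
    return -1;          /* failure after n comparisons */
}
```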
               Binary Search
Binary Search time complexity analysis
   In each iteration or in each recursive call, the
  search gets reduced to half of the array.
  So, for n elements in the array, there are log₂n
  iterations or recursive calls.
  Thus, we have-
  Time Complexity of Binary Search Algorithm is
  O(log₂n).
  Here, n is the number of elements in the sorted
  linear array.
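The halving step can be sketched in C as follows (illustrative; returns the index of the key in a sorted array, or -1):

```c
/* Binary search on an ascending sorted array: each iteration halves
 * the remaining range [lo, hi], so at most about log2(n) iterations
 * are needed. Returns the index of key, or -1 if it is absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;
}
```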
                Bubble sort
Time Complexity Analysis of Bubble Sort
  In Bubble Sort, n-1 comparisons will be done in
  the 1st pass, n-2 in 2nd pass, n-3 in 3rd pass
  and so on. So the total number of comparisons
  will be,
  (n-1) + (n-2) + (n-3) + ..... + 3 + 2 + 1
  Sum = n(n-1)/2
  i.e. O(n^2)
  Hence the time complexity of Bubble Sort is
  O(n^2).
  The main advantage of Bubble Sort is the
  simplicity of the algorithm.
                 Bubble sort
The space complexity for Bubble Sort is O(1), because
only a single additional memory space is required i.e. for
temp variable.
Also, the best case time complexity will be O(n), it is
when the list is already sorted.
Following are the Time and Space complexity for the
Bubble Sort algorithm.
Worst Case Time Complexity [Big-O]: O(n^2)
Best Case Time Complexity [Big-Omega]: O(n)
Average Time Complexity [Big-Theta]: O(n^2)
Space Complexity: O(1)
Introduction to Data Structures
What is Data Structure?
 Data structure is a representation of data and the
 operations allowed on that data.
 A data structure is a way to store and organize data in
 order to facilitate the access and modifications.
 Data structures are methods of representing the logical
 relationships between individual data elements related to
 the solution of a given problem.
What are Data Structures and
        Algorithms?
Data Structures are methods of organizing large
amounts of data.
An algorithm is a procedure consisting of a finite set of
instructions which, given an input from some set of
possible inputs, enables us to obtain an output
(if such an output exists, or else nothing at all if
there is no output for that particular input) through a
systematic execution of the instructions.
(Diagram: problems enter as inputs, instructions are executed by
computers, and answers emerge as outputs; programming languages,
data structures, algorithms and software systems all contribute.)
                Basic Data Structures
  Linear Data Structures: Arrays, Linked Lists, Stacks, Queues
  Non-Linear Data Structures: Trees, Graphs, Hash Tables
    Types of Data Structure
Linear & Non-Linear Data Structures
Linear: In a linear data structure, values are
arranged in a linear fashion.
    Array: Fixed-size
    Linked-list: Variable-size
    Stack: Add to top and remove from top
    Queue: Add to back and remove from front
    Priority queue: Add anywhere, remove the
    highest priority
   Types of Data Structure
Non-Linear: The data values in this structure
are not arranged in order.
  Hash tables: Unordered lists which use a
  ‘hash function’ to insert and search
  Tree: Data is organized in branches.
  Graph: A more general branching structure,
  with less strict connection conditions than for
  a tree
       Types of Data Structure
Homogenous or Non-Homogenous
Homogenous:
In this type of data structures, values of the same types of
data are stored.
     Array
Non-Homogenous:
In this type of data structures, data values of different types
are grouped and stored.
     Structures
     Classes
(Diagram: array, linked list, queue, stack, tree)
                   Stacks
A stack is a list with the restriction that
inserts and deletes can be performed in
only one position, namely the end of the
list called the top.
Last in first out (LIFO)
Operations:
  insert/push
  remove/pop
  top
  make empty
(Diagram: Data1 at the bottom of the stack up to Data4 at the top)
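The operations above can be sketched as an array-based stack in C (illustrative names; fixed capacity with simple return codes for overflow and underflow):

```c
#define STACK_MAX 100

typedef struct {
    int data[STACK_MAX];
    int top;              /* index of the top element; -1 when empty */
} Stack;

void stack_init(Stack *s)        { s->top = -1; }           /* make empty */
int  stack_empty(const Stack *s) { return s->top == -1; }

int stack_push(Stack *s, int x)  /* insert at the top (LIFO) */
{
    if (s->top == STACK_MAX - 1) return 0;   /* overflow */
    s->data[++s->top] = x;
    return 1;
}

int stack_pop(Stack *s, int *x)  /* remove from the top */
{
    if (stack_empty(s)) return 0;            /* underflow */
    *x = s->data[s->top--];
    return 1;
}
```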
                       Queues
  Like stacks, queues are lists. With a queue,
  however, insertion is done at one end, whereas
  deletion is performed at the other end.
• First in, first out (FIFO)
  Operations: enqueue, dequeue, front
  Types : priority queues and
          deque (double-ended queue)
     Front                             Back
     Data1     Data2      Data3     Data4
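Insertion at one end and deletion at the other can be sketched as a circular array queue in C (illustrative; `Queue`, `enqueue` and `dequeue` follow the operation names above):

```c
#define QUEUE_MAX 100

typedef struct {
    int data[QUEUE_MAX];
    int front, count;     /* front index and number of stored items */
} Queue;

void queue_init(Queue *q) { q->front = 0; q->count = 0; }

int enqueue(Queue *q, int x)   /* insert at the back */
{
    if (q->count == QUEUE_MAX) return 0;     /* full */
    q->data[(q->front + q->count) % QUEUE_MAX] = x;
    q->count++;
    return 1;
}

int dequeue(Queue *q, int *x)  /* remove from the front (FIFO) */
{
    if (q->count == 0) return 0;             /* empty */
    *x = q->data[q->front];
    q->front = (q->front + 1) % QUEUE_MAX;   /* wrap around */
    q->count--;
    return 1;
}
```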
                              List
The general list is of the form a1, a2, a3, . . . , an.
The size of this list is n.
For any list except the null list, ai+1 follows (or succeeds) ai (i < n)
and ai-1 precedes ai (i > 1).
The first element of the list is a1, and the last element is an.
The position of element ai in a list is i.
Some popular operations are:
print_list and make_null, which do the obvious things;
 find, which returns the position of the first occurrence of a key;
insert and delete, which generally insert and delete some key from
some position in the list; and
find_kth, which returns the element in some position (specified as
an argument).
                     Linked List
  A flexible structure, because it can grow and
  shrink on demand.
Elements can be:
 Inserted
 Accessed
 Deleted
At any position
(Diagram: a chain of nodes linked from first to last)
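The grow-on-demand behaviour comes from allocating one node at a time; a minimal C sketch (the names `Node`, `push_front` and `list_length` are illustrative):

```c
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *next;    /* NULL marks the end of the list */
} Node;

/* Insert a new node at the front; the list grows on demand,
 * which is what makes the structure flexible. Returns the new head. */
Node *push_front(Node *head, int value)
{
    Node *n = malloc(sizeof *n);
    if (n == NULL) return head;   /* allocation failed: list unchanged */
    n->value = value;
    n->next = head;
    return n;
}

/* Walk the chain from first to last, counting nodes. */
int list_length(const Node *head)
{
    int len = 0;
    for (; head != NULL; head = head->next)
        len++;
    return len;
}
```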
                       Tree
A Tree is a collection of elements called nodes.
One of the nodes is distinguished as the root, along with a
relation ("parenthood") that places a hierarchical
structure on the nodes.
(Diagram: a tree with the root at the top)
Selection of Data Structure
The choice of a particular data model depends on
two considerations:
  It must be rich enough in structure to represent the
  relationship between data elements
  The structure should be simple enough that one can
  effectively process the data when necessary
  Abstract Data Type and Data Structure
Abstract Data Types (ADTs) store data and allow various
operations on the data to access and change it.
     A mathematical model, together with various operations
     defined on the model
     An ADT is a collection of data and associated operations
     for manipulating that data
Data Structures
     Physical implementation of an ADT
     data structures used in implementations are provided in a
     language (primitive or built-in) or are built from the
     language constructs (user-defined)
     Each operation associated with the ADT is implemented
     by one or more subroutines in the implementation
        Abstract Data Type
ADTs support abstraction, encapsulation, and
information hiding.
Abstraction is the structuring of a problem into well-
defined entities by defining their data and operations.
The principle of hiding the used data structure and to
only provide a well-defined interface is known as
encapsulation.
Abstract Data Types (ADTs)
One of the basic rules concerning programming is to
break the program down into modules.
Each module is a logical unit and does a specific job. Its
size is kept small by calling other modules.
Modularity has several advantages.
(1) It is much easier to debug small routines than large
routines;
(2) It is easier for several people to work on a modular
program simultaneously;
(3) A well-written modular program places certain
dependencies in only one routine, making changes
easier.
Abstract Data Types (ADTs)
An abstract data type (ADT) is a set of operations.
Abstract data types are mathematical abstractions;
nowhere in an ADT’s definition is there any mention of
how the set of operations is implemented.
Objects such as lists, sets, and graphs, along with their
operations, can be viewed as abstract data types, just
as integers, reals, and booleans are data types.
Integers, reals, and booleans have operations
associated with them, and so do ADTs.
Abstract Data Types (ADTs)
The basic idea is that the implementation of the
operations related to ADTs is written once in the
program, and any other part of the program that needs
to perform an operation on the ADT can do so by
calling the appropriate function.
If for some reason implementation details need to be
changed, it should be easy to do so by merely
changing the routines that perform the ADT operations.
There is no rule telling us which operations must be
supported for each ADT; this is a design decision.
    The Core Operations of ADT
Every Collection ADT should provide a way to:
    add an item
    remove an item
    find, retrieve, or access an item
Many, many more possibilities
    is the collection empty
    make the collection empty
    give me a subset of the collection
No single data structure works well for all purposes,
and so it is important to know the strengths and
limitations of several of them
                    Array
• Collection of related data items/elements
  e.g., int RRN[60];
        float python_marks[60];
        char name[20];
        char name[5][10];
                  Structures
• Collection of different data items/elements
   struct student
   {
    int rrn; // 2 bytes
    float tot_mark; // 4 bytes
    char name[20]; // 20 bytes
   };
 // total size: 26 bytes (assuming 2-byte int and no padding)
• struct student s[60];
• s[30].rrn; // rrn of the student at index 30
                        Union
Similar to a structure, but in a structure each member has its
own storage, whereas in a union all members share one storage
area sized for the largest member.
union student
  {
   int rrn; // 2 bytes
   float tot_mark; // 4 bytes
   char name[20]; // 20 bytes
  };
eg. structure : 26 bytes
    union : 20 bytes, since space is allocated for the largest member only
For efficient storage usage, we can use a union.
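The storage claim can be checked with `sizeof`. A minimal sketch, assuming the member types above; note that exact sizes are compiler dependent (modern compilers typically use a 4-byte int and may add padding, so the struct is usually 28 bytes rather than 26):

```c
/* Struct vs union storage: the struct needs room for all members,
 * while the union's members overlap in one storage area sized for
 * the largest member (char name[20]). Exact sizes are compiler
 * dependent; the slide's figures assume a 2-byte int and no padding. */
struct student_s { int rrn; float tot_mark; char name[20]; };
union  student_u { int rrn; float tot_mark; char name[20]; };

/* On any conforming compiler, sizeof(union student_u) is at least 20
 * (the largest member) and smaller than sizeof(struct student_s). */
```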
                  Pointers
• A pointer is a variable that holds the address of
  another variable.
int a;
a = 10;    // a is an ordinary variable
int *p;    // p is a pointer variable
p = &a;    // p now holds the address of a (say, 5000)
*p = 10;   // the content at that address, i.e. the value of a
                      SESSION-II
                            Topics
   List ADT
   Array implementation of List
   Stack ADT
   Array implementation of Stack
   Queue ADT
   Array implementation of Queue
   Circular Queue
   Array implementation of Circular Queue
   Double Ended Queue
   Priority Queue
   Singly Linked List
   Doubly Linked Lists
   Stack using Linked List
   Queue using Linked List
                The List ADT
The form of a general list: A1, A2, A3, …, AN;
The size of this list is N;
An empty list is a special list of size 0;
For any list except the empty list, we say that Ai+1 follows
(or succeeds) Ai (i<N) and that Ai-1 precedes Ai (i>1);
The first element of the list is A1, and the last element is
AN. We will not define the predecessor of A1 or the
successor of AN.
The position of element Ai in a list is i.
             The List ADT
  There is a set of operations that we would like to
perform on the list ADT:
   PrintList
   MakeEmpty
   Find: return the position of the first occurrence of a
    key
   Insert and Delete: insert and delete some key from
    some position in the list
   FindKth: return the element in some position
   Next and Previous: take a position as argument and
    return the position of the successor and predecessor
              The List ADT
Example: The list is 34, 12, 52, 16, 13
   Find(52)
   Insert(X, 3)
   Delete(52)
The interpretation of what is appropriate for a function is
entirely up to the programmer.
Array Implementation of Lists
    Disadvantages:
   An estimate of the maximum size of the list is required
    before any space is allocated. Usually this requires a high
    overestimate, which wastes considerable space.
   Insertion and deletion are expensive.
   For example, inserting at position 0 requires first pushing
    the entire array down one spot to make room,
   whereas deleting the first element requires shifting all
    the elements in the list up one,
   so the worst case of these operations is O(n).
    Simple Array Implementation
              of Lists
   Because the running time for insertions and deletions is so
    slow and the list size must be known in advance, simple
    arrays are generally not used to implement lists.
Sample Input & Output
 If the list is 34, 12, 52, 16, 12,
   then find(52) might return 2;
 insert(x,3) might make the list into 34, 12, 52, x, 16, 12
(if we insert after the position given); and
 delete(3) might turn that list into 34, 12, 52, 16, 12
     Basic Operations on LIST
1. Create a LIST
2. Insert an element onto a LIST
3. Delete an element from a LIST
4. Print the entire LIST
5. Find the element in the LIST
6. Make an empty LIST
INSERT()          LIST Operations
READ pos
 FOR (i=n-1;i>=pos;i--)
   b[i+1]=b[i];
 READ element to be inserted as, no
  b[pos]=no;
 n++;
DELETE()
READ pos
FOR (i=pos+1;i<n;i++)
  b[i-1]=b[i];
n--;
SEARCH()
READ element to be searched as e
f=0                  // found flag
FOR (i=0;i<n;i++)
 IF (b[i]==e)
    PRINT e along with its Position
    f=1
IF (f==0)
  PRINT e is not in the LIST
// ARRAY IMPLEMENTATION OF LIST ADT
#include<stdio.h>
#include<conio.h>
#include<stdlib.h> /* for exit() */
#define MAX 10
void create();
void insert();
void deletion();
void search();
void display();
int a,b[20], n, p, e, f, i, pos;
void main()
{
//clrscr();
int ch;
char g='y';
do
{
printf("\n main Menu");
printf("\n 1.Create \n 2.Delete \n 3.Search \n 4.Insert \n 5.Display\n 6.Exit \n");
printf("\n Enter your Choice");
scanf("%d", &ch);
// IMPLEMENTATION OF LIST ADT
 switch(ch)
{
case 1: create();
         break;
case 2: deletion();
         break;
case 3: search();
         break;
case 4: insert();
         break;
case 5: display();
        break;
case 6: exit(0);
         break;
default: printf("\n Enter the correct choice:");
 }
// end of switch-case
   printf("\n Do u want to continue:::");
   scanf("\n%c", &g);
 } while(g=='y'||g=='Y'); // end of do-while
getch();
} // end of main()
void create()
{
printf("\n Enter the number of elements");
scanf("%d", &n);
   for(i=0;i<n;i++)
      {
         printf("\n Enter the Element:%d",i+1);
        scanf("%d", &b[i]);
      }
}
void deletion()
{
printf("\n Enter the position u want to delete::");
scanf("%d", &pos);
if(pos>=n)
{
printf("\n Invalid Location::");
}
else
{
 for(i=pos+1;i<n;i++)
{
b[i-1]=b[i];
}
n--;
}
printf("\n The Elements after deletion");
for(i=0;i<n;i++)
{ printf("\t%d", b[i]); }
}
void search()
{
printf("\n Enter the Element to be searched:");
scanf("%d", &e);
f=0;
for(i=0;i<n;i++)
{
if(b[i]==e)
{
printf("Value is in the %d Position", i);
f=1;
}
}
if(f==0)
printf("Value %d is not in the list::", e);
}
void insert()
{
printf("\n Enter the position u need to insert::");
scanf("%d", &pos);
if(pos>n)
 {
 printf("\n invalid Location::");
 }
 else
 {
 for(i=n-1;i>=pos;i--)
 { b[i+1]=b[i]; }
 printf("\n Enter the element to insert::\n");
 scanf("%d",&p);
 b[pos]=p;
 n++;
 }
 printf("\n The list after insertion::\n");
display();
}
void display()
{
printf("\n The Elements of The list ADT are:");
for(i=0;i<n;i++)
{
printf("\n\n%d", b[i]);
}
}
SAMPLE INPUT & OUTPUT:
size: 4
index:    0     1     2     3
b[]  :   11    24    18    56
insert: 33 at position 2
index:    0     1     2     3     4
b[]  :   11    24    33    18    56
size: 5
        Applications of LIST
- to manage financial accounts
- to maintain the Aadhaar card details of people
- to manage the list of products in a supermarket
- to manage the programs in a computer
- to maintain student / employee details
                    Stack
A stack is a list with the restriction that
inserts and deletes can be performed in
only one position, namely the end of the
list called the top.
Last in first out (LIFO)
Operations: insert/push, remove/pop, top, make empty
(Picture: Data1 ... Data4 stacked, with Data4 at the Top)
                 Stack Model
Stacks are sometimes known as LIFO (last in, first out)
lists.
       Stack model: only the top element (here, 7) is accessible
                       Stack ADT
The fundamental operations on a stack are:
  Push, which is equivalent to an insert,
 (We perform a Push by inserting at the front of the list),
  Pop, which deletes the most recently inserted element (We perform
  a Pop by deleting the element at the front of the list).
   The most recently inserted element can be examined prior to
   performing a Pop by use of the Top routine.
  (A Top operation merely examines the element at the
  front of the list, returning its value)
• Running out of space when performing a Push is an implementation
  error but not an ADT error.
• A Pop or Top on an empty stack is generally considered an error in
  the stack ADT.
                       Stack ADT
Operations
  push()-to insert the element at the top of the stack
  pop()-to delete the element from the top of the stack
  top()-display the top element of the stack
  make_empty()-to make the stack as empty
  isfull()-check whether the stack is full or not (use before push())
  isempty()-check whether the stack is empty or not (use before
  pop())
  display()-to print/display the content of the stack
     Implementation of Stack
  Since a stack is a list, any list implementation will do.
  There are two implementations:
 Array implementation.
 Pointers implementation.
  In either case, if we use good programming
  principles, the calling routines do not need to know which
  method is being used.
     Array Implementation of Stack
  To push some element X onto the stack,
1. increment TopOfStack and
2. then set Stack[TopOfStack]=X,
3. where Stack is the array representing the actual stack.
  To pop,
1. set the return value to Stack[TopOfStack] and
2. then decrement TopOfStack.
             Stack Operations
BASIC OPERATION ON A STACK
1. Create a stack
2. Push an element onto a stack
3. Pop an element from a stack
4. Print the entire stack
5. Read the top of the stack
6. Check whether the stack is empty or full
               Stack Operations
PUSH()
IF (TOP>=N)           // Check for stack overflow
  CALL STACK_FULL
ELSE
  TOP<-TOP+1       // Increment TOP
  STACK [TOP]<-Item // Insert element
End PUSH
POP()
IF (TOP<=0)     // Check for underflow on stack
  CALL STACK_EMPTY
ELSE
  Item<-STACK [TOP] // Return top element of the stack
  TOP<-TOP-1 // Decrement top pointer
PRINT()            Stack Operations
for(i=0;i<=top;i++)
 printf("%d\t",s[i]);
TOP()
if(top!=-1)
printf("%d\t",s[top]);
EMPTY()
if(top==-1)
  printf("\nstack is empty");
FULL()
if(top==size-1)
  printf("\nStack is full");
// ARRAY IMPLEMENTATION OF STACK ADT
#include<stdio.h>
#include<conio.h>
#include<stdlib.h> /* for exit() */
#define size 5
int item;
int s[10];
int top=-1;
void display()
{
int i;
if(top==-1)
{
printf("\nstack is empty");
return;
}
printf("\nContent of stack is:\n");
for(i=0;i<=top;i++)
printf("%d\t",s[i]);
}
void push()
{
if(top==size-1)
{
printf("\nStack is full");
return;
}
printf("\nEnter item:\n");
scanf("%d",&item);
s[++top]=item;
}
void pop()
{
if(top==-1)
{
printf("\nstack is empty");
return;
}
printf("\nDeleted item is: %d",s[top]);
top--;
}
void main()
{
int ch;
//clrscr();
printf("\n1.push\t\t2.pop\n3.display\t4.exit\n");
do{
printf("\nEnter your choice:\n");
scanf("%d",&ch);
switch(ch)
{
case 1: push();
          break;
case 2: pop();
         break;
case 3: display();
          break;
case 4: exit(0);
default: printf("\nWrong entry ! try again");
}
} while(ch!=4);
getch();
}
          Stack Applications
The various applications of a stack are
 Infix to postfix conversion
 Evaluation of postfix expressions
 Implementation of recursion
 Factorial
 Quick sort
 Tower of Hanoi
                       Queues
  Like stacks, queues are lists. With a queue,
  however, insertion is done at one end, whereas
  deletion is performed at the other end.
• First in, first out: a queue is a First In First Out (FIFO) list.
  Types : priority queue, circular queue and
          deque (double-ended queue)
                    Queue Model
     Front                             Back
     Data1     Data2      Data3     Data4
             The Queue ADT
The basic operations on a queue are:
  enqueue(), which inserts an element at the end of the
  queue (called the rear).
  dequeue(), which deletes (and returns) the element at
  the start of the queue (known as the front).
  front(), which returns the element at the beginning of
  the queue.
  qcreate(), which creates a queue
  qdisplay(), which prints the content of the queue
  qempty() - checks whether the queue is empty or not
  qfull() - checks whether the queue is full or not
Array Implementation of
        Queues
For each queue data structure, we keep an array,
Queue[], and the positions Front and Rear, which
represent the ends of the queue.
We also keep track of the number of elements that
are actually in the queue, Size. All this information is
part of one structure.
The following figure shows a queue in some
intermediate state. The cells that are blanks have
undefined values in them:
                   5     2   7    1
                 Front           Rear
Array Implementation of
        Queues
To Enqueue an element X, we increment Size and
Rear, then set Queue[Rear]=X.
To Dequeue an element, we set the return value to
Queue[Front], decrement Size, and then increment
Front.
Whenever Front or Rear gets to the end of the array,
it is wrapped around to the beginning. This is known
as a circular array implementation.
   Initial State: the array holds 2 (at Front) and 4 (at Rear)
                  in its last two cells.
   Enqueue (1):   Rear wraps around to the first cell;
                  the array holds 1, _, 2, 4.
   Enqueue (3):   the array holds 1, 3, 2, 4; Rear is at 3.
   Dequeue returns 2; Front advances to 4.
   Dequeue returns 4; Front wraps around to 1.
   Dequeue returns 1; Front advances to 3.
   Dequeue returns 3 and makes the Queue empty.
Linked List Implementation of
           Queues
          Front points to the header node; Rear points to the
          last cell, whose Next pointer is NULL (/).
Linked List Implementation of
           Queues
Empty Queue:  Front = Rear = NULL
Enqueue x:    Front -> x / ;      Rear points to x
Enqueue y:    Front -> x -> y / ; Rear points to y
Dequeue x:    Front advances to y; Rear still points to y
                   Queues
BASIC OPERATIONS INVOLVED IN A QUEUE
1.Create a queue
2.Check whether a queue is empty or full
3.Add an item at the rear end
4.Remove an item at the front end
5.Read the front of the queue
6.Print the entire queue
             The Queue ADT
The basic operations on a queue are:
  enqueue(), which inserts an element at the end of the
  queue (called the rear).
  dequeue(), which deletes (and returns) the element at
  the start of the queue (known as the front).
  front(), which returns the element at the beginning of
  the queue.
  qdisplay(), which displays/prints the current status of the queue
  qempty(), which checks whether the queue is empty: it returns
  true if the queue is empty, else returns false
  qfull(), which checks whether the queue is full: it returns true
  if the queue is full, else returns false
                          Queues
INSERTION
Algorithm
1. Check whether the queue is full before attempting to insert
   another element.
2. Increment the rear pointer &
3. Insert the element at the rear pointer of the queue.
Pseudo Code
    Begin INSERT
      If (Rear=N) [Overflow?]
         Then Call QUEUE_FULL
      Else
         Rear<-Rear+1 [Increment rear pointer]
         Read Item
         Q[Rear]<-Item [Insert element]
     End INSERT
                          Queues
 Deletion operation involves the following algorithmic steps:
1. Check whether the queue is empty.
2. Increment the front pointer.
3. Remove the element.
Pseudo Code
Begin DELETE
  if(Front=Rear) [Underflow?]
     Then print “ queue is empty”
 Else
    Front<-Front+1 [Incrementation]
    Item<-Q [Front] [Delete element]
    print Item
End DELETE
// ARRAY IMPLEMENTATION OF QUEUE ADT
#include<stdio.h>
#include<conio.h>
#define SIZE 5          /* Size of Queue */
int Q[SIZE],f=0,r=-1;      // Global declarations
int qfull(void); int qempty(void); void qdisplay(void); // prototypes
void enqueue (int elem)
{                 /* Function for Insert operation */
  if(qfull())
      printf("\n\n Overflow!!!!\n\n");
  else
  {
      ++r;
      Q[r]=elem;
  }
}
int dequeue()
{                /* Function for Delete operation */
   int elem;
   if(qempty())
   {
     printf("\n\nUnderflow!!!!\n\n");
     return(-1);
   }
   else
   {
      elem=Q[f];
      f=f+1;
      return(elem);
   }
}
int qfull()
{
   /* Function to Check Queue Full */
   if(r==SIZE-1) return 1;
   return 0;
}
int qempty()
{
   /* Function to Check Queue Empty */
   if(f > r) return 1;
   return 0;
}
void qdisplay()
{             /* Function to display status of Queue */
  int i;
  if(qempty()) printf(" \n Empty Queue\n");
  else
  {
     printf("Front->");
     for(i=f;i<=r;i++)
         printf("%d ",Q[i]);
     printf("<-Rear");
  }
}
void main()
{                  /* Main Program */
  int opn,elem;
  do
  {
     clrscr();
     printf("\n ### Queue Operations using Arrays### \n\n");
     printf("\n Press 1-Insert, 2-Delete,3-Display,4-Exit\n");
     printf("\n Your option ? ");
     scanf("%d",&opn);
switch(opn)
      {
      case 1: printf("\n\nRead the element to be Inserted ?"); scanf("%d",&elem);
              enqueue(elem);
              break;
      case 2: elem=dequeue();
               if( elem != -1)
                  printf("\n\nDeleted Element is %d \n",elem);
               break;
      case 3: printf("\n\nStatus of Queue\n\n");
               qdisplay();
               break;
      case 4: printf("\n\n Terminating \n\n");
               break;
      default: printf("\n\nInvalid Option !!! Try Again !! (1-4)\n\n");
               break;
      }
      printf("\n\n\n\n Press a Key to Continue . . . ");
      getch();
  } while(opn != 4);
getch();
}
Example Applications
When jobs are submitted to a printer, they are
arranged in order of arrival.
Every real-life line is a queue. For instance, lines at
ticket counters are queues, because service is first-
come first-served.
A whole branch of mathematics, known as queueing
theory, deals with computing, probabilistically, how
long users expect to wait on a line, how long the line
gets, and other such questions.
                      Circular Queue
  Circular queue is another form of a linear queue in which
  the last position is connected to the first position of the list.
Procedure ADDQ (item, Q, n, front, rear)
/* insert item in the circular queue stored in Q (0: n-1);
rear points to the last item and front is one position counterclockwise
     from the first item in Q */
begin ADDQ
    rear = (rear+1) % n          //advance rear clockwise
    if front = rear
    then call QUEUE-FULL
    else
  Q(rear)= item              //insert new item
end ADDQ
                   Circular Queue
 Procedure DELETEQ (item, Q, n, front, rear)
//removes the front element of the queue Q (0: n-1)
begin DELETEQ
 if front = rear
 then call QUEUE-EMPTY
 else
   front = (front + 1) mod n //advance front clockwise
   item = Q(front)       //set item to front of queue
end DELETEQ
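A minimal C rendering of ADDQ and DELETEQ under the same convention (front sits one position counterclockwise from the first item, so one array slot is always left empty and the queue holds at most n-1 items); the names addq/deleteq are just this sketch's:

```c
#include <stdio.h>

#define N 5                 /* capacity is N-1 with this convention */
int Q[N], front = 0, rear = 0;

int addq(int item)          /* returns 0 on success, -1 if full */
{
    if ((rear + 1) % N == front) return -1;  /* QUEUE-FULL */
    rear = (rear + 1) % N;                   /* advance rear clockwise */
    Q[rear] = item;                          /* insert new item */
    return 0;
}

int deleteq(int *item)      /* returns 0 on success, -1 if empty */
{
    if (front == rear) return -1;            /* QUEUE-EMPTY */
    front = (front + 1) % N;                 /* advance front clockwise */
    *item = Q[front];                        /* set item to front of queue */
    return 0;
}
```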
    The Double-Ended Queue ADT
• The Double-Ended Queue, or Deque, ADT stores
  arbitrary objects. (Pronounced ‘deck’)
• Richer than stack or queue ADTs. Supports insertions
  and deletions at both the front and the end.
   The Double-Ended Queue (Contd..)
• Another variation of the queue is known as the deque. Unlike a
  queue, in a deque both insertion and deletion operations are
  made at either end of the structure,
• i.e., the element at either the front or the rear of the queue can
  be deleted, and an element can be added at either the front or
  the rear of the queue.
The implementation can be restricted in two ways:
  a) An input-restricted deque allows insertions at one
  end only, say the rear, but allows deletions at both ends
  b) An output-restricted deque allows deletions at
  one end only, say the front, but allows insertions at both ends
The Double-Ended Queue (Contd..)
Main dqueue operations:
 insertFirst(object o): inserts   element   o   at   the
 beginning of the deque
  insertLast(object o): inserts element o at the end of
  the deque
  RemoveFirst(): removes and returns the element at
  the front of the deque
  RemoveLast(): removes and returns the element at
  the end of the deque
The Double-Ended Queue (Contd..)
Auxiliary queue operations:
  first(): returns the element at the front without
  removing it
  last(): returns the element at the end without
  removing it
  size(): returns the number of elements stored
  isEmpty(): returns a Boolean value indicating whether
  no elements are stored
Exceptions
  Attempting to access or remove an element from an
  empty deque throws an EmptyDequeException
                   Priority Queue
• A regular queue is a first-in and first-out data structure.
• Elements are appended to the end of the queue and are
 removed from the beginning of the queue.
• In a priority queue, elements are assigned with priorities. When
  accessing elements, the element with the highest priority is
  removed first.
• A priority queue has a largest-in, first-out behavior.
• For example, the emergency room in a hospital assigns
  patients with priority numbers; the patient with the highest
  priority is treated first.
                 Priority Queue
• A priority queue is a collection of elements where each
  element has an associated priority.
• Elements are added and removed from the list in a manner
  such that the element with the highest (or lowest) priority is
  always the next to be removed.
• When using a heap mechanism, a priority queue is quite
  efficient for this type of operation.
• Two Types:
 Ascending Priority Queue
 Descending Priority Queue
Ascending Priority Queue
• The elements are inserted arbitrarily but the smallest element is
  deleted first
Descending Priority Queue
• The elements are inserted arbitrarily but the biggest element is
  deleted first.
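As an illustration of an ascending priority queue, here is an unsorted-array sketch: insertion is arbitrary (O(1)) and deletion scans for the smallest element (O(n)); the heap mentioned above would make deletion O(log n). The names pq_insert/pq_delete_min are assumptions for this sketch:

```c
#include <stdio.h>

#define MAXPQ 20
int pq[MAXPQ], pq_n = 0;

int pq_insert(int x)            /* arbitrary-order insert */
{
    if (pq_n == MAXPQ) return -1;
    pq[pq_n++] = x;
    return 0;
}

int pq_delete_min(int *out)     /* removes the smallest element */
{
    int i, min = 0;
    if (pq_n == 0) return -1;
    for (i = 1; i < pq_n; i++)  /* linear scan for the minimum */
        if (pq[i] < pq[min]) min = i;
    *out = pq[min];
    pq[min] = pq[--pq_n];       /* fill the hole with the last element */
    return 0;
}
```

A descending priority queue is the same sketch with the comparison reversed.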
                    SESSION-III
                    Linked Lists
    In order to avoid the linear cost of insertion and
    deletion, we need to ensure that the list is not
    stored contiguously, since otherwise entire parts
    of the list will need to be moved.
          A1          A2           A3           A4         A5
                                A linked list
   The linked list consists of a series of structures, which are not
    necessarily adjacent in memory.
   Each structure contains the element and a pointer to a structure
    containing its successor. We call this the Next pointer.
   The last cell’s Next pointer points to NULL;
                       Linked Lists
    If P is declared to be a pointer to a structure, then
    the value stored in P is interpreted as the
    location, in main memory, where a structure can
    be found.
    A field of that structure can be accessed by
    P->FieldName, where FieldName is the name of
    the field we wish to examine.
          A1 |800     A2 |712     A3 |992     A4 |692     A5 |0
         @1000       @800        @712        @992        @692
                Linked list with actual pointer values (0 = NULL)
   In order to access this list, we need to know where the first cell
    can be found. A pointer variable can be used for this purpose.
                Linked Lists
    To execute PrintList(L) or Find(L, Key), we merely
    pass a pointer to the first element in the list and
    then traverse the list by following the Next
    pointers.
    The Delete command can be executed in one pointer
    change.
          A1        A2       A3        A4        A5
   The Insert command requires obtaining a new cell
    from the system by using a malloc call and then
    executing two pointer maneuvers.
     A1        A2                 A3        A4        A5
                         X
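The malloc call and the two pointer maneuvers can be sketched in C (the Node type and the insert_after name are assumptions for illustration):

```c
#include <stdlib.h>

struct Node {
    int element;
    struct Node *next;
};

/* Insert X after position P: one malloc and two pointer changes. */
struct Node *insert_after(struct Node *P, int X)
{
    struct Node *tmp = malloc(sizeof *tmp);
    if (tmp == NULL) return NULL;   /* out of space */
    tmp->element = X;
    tmp->next = P->next;            /* first maneuver: take over P's successor */
    P->next   = tmp;                /* second maneuver: link the new cell in   */
    return tmp;
}
```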
                Linked Lists
  There are several places where you are likely to
go wrong:
(1) There is no really obvious way to insert at the front
of the list from the definitions given;
(2) Deleting from the front of the list is a special case,
because it changes the start of the list; careless coding
will lose the list;
(3) A third problem concerns deletion in general.
Although the pointer moves above are simple, the
deletion algorithm requires us to keep track of the cell
before the one that we want to delete.
                       Linked Lists
              One simple change solves all three problems. We
              will keep a sentinel node, referred to as a header
              or dummy node.
    Header       A1        A2           A3           A4   A5
L                        Linked list with a header
             To avoid the problems associated with deletions, we
              need to write a routine FindPrevious, which will return
             the position of the predecessor of the cell we wish to
             delete. If we use a header, then if we wish to delete
             the first element in the list, FindPrevious will return
             the position of the header.
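A sketch of FindPrevious and a deletion routine that uses it; with the header, deleting the first element needs no special case (the node type is the same assumption as in the earlier sketch):

```c
#include <stdlib.h>

struct Node { int element; struct Node *next; };

/* Return the cell before the first occurrence of X
   (the header itself if X is the first real element). */
struct Node *find_previous(int X, struct Node *L)
{
    struct Node *P = L;                 /* L points to the header */
    while (P->next != NULL && P->next->element != X)
        P = P->next;
    return P;
}

void delete_key(int X, struct Node *L)
{
    struct Node *P = find_previous(X, L);
    if (P->next != NULL) {              /* found: bypass the cell */
        struct Node *tmp = P->next;
        P->next = tmp->next;
        free(tmp);
    }
}
```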
         Doubly Linked Lists
    Sometimes it is convenient to traverse lists
    backwards. The solution is simple. Merely add an
    extra field to the data structure, containing a
    pointer to the previous cell. The cost of this is an
    extra link, which adds to the space requirement
    and also doubles the cost of insertions and
    deletions because there are more pointers to fix.
    A1        A2            A3            A4   A5
                   A doubly linked list
   How to implement doubly linked lists?
    Doubly-Linked Lists
 It is a way of going both directions in a linked list,
  forward and reverse.
 Many applications require a quick access to the
 predecessor node of some node in list.
Circularly Linked Lists
A popular convention is to have the last cell keep
a pointer back to the first. This can be done with
or without a header. If the header is present, the
last cell points to it.
It can also be done with doubly linked lists, the
first cell’s previous pointer points to the last cell.
  A1         A2            A3             A4   A5
             A double circularly linked list
    Advantages over Singly-
         linked Lists
• Quick update operations:
  such as: insertions, deletions at both ends (head and
  tail), and also at the middle of the list.
• A node in a doubly-linked list store two references:
  • A next link; that points to the next node in the list,
    and
  • A prev link; that points to the previous node in the
    list.
             Doubly Linked List
A doubly linked list provides a natural implementation of
the List ADT
Nodes implement Position and store:         prev            next
 • element
 • link to the previous node
 • link to the next node
Special trailer and header nodes                   elem      node
header                                   nodes/positions trailer
                                               elements
               Sentinel Nodes
• To simplify programming, two special nodes have been
  added at both ends of the doubly-linked list.
• Head and tail are dummy nodes, also called sentinels,
  do not store any data elements.
• Head: header sentinel has a null-prev reference (link).
• Tail: trailer sentinel has a null-next reference (link).
What we see from a Doubly-
      linked List
A doubly-linked list object would need to store the
    following:
1.   Reference to sentinel head-node;
2.   Reference to sentinel tail-node; and
3.   Size-counter that keeps track of the number of
     nodes in the list (excluding the two sentinels).
Inserting into a Doubly Linked
              List
                   Insertion
•     We visualize operation AddAfter(p, X), which
    returns position q
    Before:   A    B    C         (p is at B)
    After:    A    B    X    C    (q is at the new node X)
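AddAfter(p, X) touches four links in total; a hedged sketch (DNode and add_after are illustrative names, and p is assumed not to be the trailer sentinel, so p->next exists):

```c
#include <stdlib.h>

struct DNode {
    int element;
    struct DNode *prev, *next;
};

/* Insert X after position p and return the new position q. */
struct DNode *add_after(struct DNode *p, int X)
{
    struct DNode *q = malloc(sizeof *q);
    if (q == NULL) return NULL;
    q->element = X;
    q->prev = p;              /* wire the new node first ...        */
    q->next = p->next;
    p->next->prev = q;        /* ... then repoint the old neighbors */
    p->next = q;
    return q;
}
```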
                Performance
• In the implementation of the List ADT by means of a
  doubly linked list,
  • The space used by a list with n elements is O(n)
  • All the operations of the List ADT run in O(1) time
LINKED LIST IMPLEMENTATION OF
STACK ADT
#include <stdio.h>
#include <conio.h>
#include <stdlib.h> /* for malloc() and exit() */
void pop();
void push(int value);
void display();
struct node
{
   int data;
   struct node *link;
};
struct node *temp;
struct node *top = NULL; /* top of the stack */
void main()
{
int choice,data;
   while(1)           // while true
   {
      printf("\n1.Push\n2.Pop\n3.Display\n4.Exit\n");
      printf("\nEnter ur choice:");
      scanf("%d",&choice);
      switch(choice)
      {
      case 1:printf("Enter a new element :"); //To push a new element into stack
                scanf("%d",&data);
               push(data);
               break;
      case 2: pop();                // pop the element from stack
                 break;
      case 3: display(); // Display the stack elements
                  break;
      case 4: exit(0); // To exit
      }
    }
getch();
}
void display()
{
      temp=top;
         if(temp==NULL)
         {
             printf("\nStack is empty\n");
             return;
         }
        printf("\n The Contents of the Stack are...");
         while(temp!=NULL)
         {
             printf(" %d ->",temp->data);
             temp=temp->link;
         }
}
void push(int data)
{
   // create space for the new node and link it in front of the top
       temp= (struct node *) malloc(sizeof(struct node));
       temp->data=data;
        temp->link=top;
        top=temp;
       display();
}
void pop()
{
        struct node *old;
        if(top!=NULL)
        {
            printf("The popped element is %d",top->data);
            old=top;
            top=top->link;
            free(old); /* release the popped node */
        }
        else
        {
            printf("\nStack Underflow");
        }
display();
}
/**** Program to Implement Queue using Linked List ****/
#include<stdio.h>
#include<conio.h>
#include<stdlib.h> /* for malloc(), free() and exit() */
struct node
{
int info;
struct node *link;
};
struct node *front = NULL, *rear = NULL;
void insert();
void delete();
void display();
int item;
 void main()
{
int ch;
do
{
printf("\n\n1.\tEnqueue\n2.\tDequeue\n3.\tDisplay\n4.\tExit\n");
printf("\nEnter your choice: ");
scanf("%d", &ch);
switch(ch)
{
case 1: insert();
         break;
case 2: delete();
         break;
case 3: display();
          break;
case 4: exit(0);
default: printf("\n\nInvalid choice. Please try again...\n");
}
} while(1);
getch();
}
 void insert()
{
printf("\n\nEnter ITEM: ");
scanf("%d", &item);
if(rear == NULL)
{
rear = (struct node *)malloc(sizeof(struct node));
rear->info = item;
rear->link = NULL;
front = rear;
}
else
{
rear->link = (struct node *)malloc(sizeof(struct node));
rear = rear->link;
rear->info = item;
rear->link = NULL;
}
}
 void delete()
{
struct node *ptr;
if(front == NULL)
    printf("\n\nQueue is empty.\n");
else
{
ptr = front;
item = front->info;
front = front->link;
free(ptr);
printf("\nItem deleted: %d\n", item);
if(front == NULL)
rear = NULL;
}
}
 void display()
{
struct node *ptr = front;
if(rear == NULL)
   printf("\n\nQueue is empty.\n");
else
{
printf("\n\n");
while(ptr != NULL)
{
printf("%d\t",ptr->info);
ptr = ptr->link;
}
}
}
SESSION-IV-Tree-Preliminaries
   The root node of a tree is the parent of the root nodes
   of its subtrees, which are its children. Children of the
   same parent are siblings.
   The number of subtrees a node has is called its
   degree. A node of degree 0 is a leaf.
   A non-empty list of nodes N0, …, Nk-1 in which each
   node, Ni is parent of the next node Ni+1, for 0 ≤ i <
   k-1, is a path.
   If N0, …, Nk-1 is a path, its length is k-1.
   Thus the length is the number of edges traversed in
   going from N0 to Nk-1.
   A trivial path consisting of a single node has length 0.
   Note that there is a unique path from the root to every
   node in the tree.
         Tree-Preliminaries
• The depth of a node is the length of the path from the
  root to the node.
• The height of a node is the length of a longest path
  from the node to a leaf.
• The root has depth 0, and a leaf has height 0.
• The height of the tree is the height of the root.
• The set of nodes at the same depth comprise a level.
   Siblings are always at the same level, but not all
  nodes at a given level are siblings.
                              Terms
Path - refers to sequence of nodes along the edges of a tree.
Root − Node at the top of the tree is called root. There is only one root per
tree and one path from root node to any node.
Parent − Any node except root node has one edge upward to a node called
parent.
Child − Node below a given node connected by its edge downward is called
its child node.
Leaf − Node which does not have any child node is called leaf node.
                                     Terms
Subtree − Subtree represents descendents of a node.
Visiting − Visiting refers to checking value of a node when control is on the
node.
Traversing − Traversing means passing through nodes in a specific order.
Levels − Level of a node represents the generation of a node. If root node
is at level 0, then its next child node is at level 1, its grandchild is at level 2
and so on.
keys − Key represents a value of a node based on which a search
operation is to be carried out for a node.
Tree represents nodes
 connected by edges
                       Binary Tree
• Binary Tree is a special datastructure used for data storage
  purposes.
• A binary tree has a special condition that each node can have two
  children at maximum.
• A binary tree have benefits of both an ordered array and a linked
  list as search is as quick as in sorted array and insertion or deletion
  operation are as fast as in linked list.
• A binary tree is a finite set of nodes that is either empty, or consists
  of a root node and two disjoint binary trees, called the left and right
  subtrees of the root.
            Binary Tree
It is easy to see that a binary tree can have at most
2^n nodes at level n.
A binary tree of height n can have as many as 2^(n+1) - 1
nodes, and as few as n+1.
We call a binary tree full if every level, except
possibly the last, has as many nodes as possible.
The fundamental importance of binary trees is due
largely to the fact that we can construct binary trees
containing n nodes in which the length of the longest
path is bounded by log2 n.
    Binary Tree Representations
• Linear Representation using array
          A         B          C       D         E         F
          1         2          3       4         6         7
node at index i: left child at 2i, right child at 2i+1
(so D is the left child of B, and E, F are the children of C)
• Linked Representation using Linked List
Node
A tree node should look like the below structure. It has data part
and references to its left and right child nodes.
struct node
{
   int data;
   struct node *leftChild;
   struct node *rightChild;
};
            Binary Search Tree
• Binary Search tree exhibits a special behaviour.
• A node's left child must have a value less than its parent's
  value, and a node's right child must have a value greater than
  its parent's value.
           BST Basic Operations
The basic operations that can be performed on a binary search tree are
the following:
Insert − insert an element in a tree / create a tree.
Search − search for an element in a tree.
Delete − delete an element from the tree.
Tree Traversals
Preorder Traversal − traverse a tree in a preorder manner.
Inorder Traversal − traverse a tree in an inorder manner.
Postorder Traversal − traverse a tree in a postorder manner.
         Insert & Search in BST
Insert Operation
To insert X into the tree, proceed down the tree with a find. If X is
found, do nothing. Otherwise, insert X at the last spot on the path
traversed.
• The very first insertion creates the tree.
   Afterwards, whenever an element is to be inserted,
   first locate its proper location:
   start the search from the root node; if the data is less than the key value,
   search for an empty location in the left subtree and insert the data there.
   Otherwise, search for an empty location in the right subtree and insert the data there.
Search Operation
  Whenever an element is to be searched,
  start the search from the root node; if the data is less than the key
  value, search for the element in the left subtree, otherwise search in
  the right subtree.
  Follow the same algorithm for each node.
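The insert and search steps above can be sketched in C using the node structure from the slides; insertBST and searchBST are illustrative names, not a fixed API.

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *leftChild;
    struct node *rightChild;
};

/* Insert: walk down as in a find; if the key is absent, a new node
   is created at the empty spot reached. An equal key does nothing. */
struct node *insertBST(struct node *root, int data) {
    if (root == NULL) {                       /* empty spot found */
        struct node *n = malloc(sizeof *n);
        n->data = data;
        n->leftChild = n->rightChild = NULL;
        return n;                             /* first insert creates the tree */
    }
    if (data < root->data)
        root->leftChild = insertBST(root->leftChild, data);
    else if (data > root->data)
        root->rightChild = insertBST(root->rightChild, data);
    return root;
}

/* Search: the same walk; returns the matching node or NULL. */
struct node *searchBST(struct node *root, int key) {
    if (root == NULL || root->data == key)
        return root;
    return key < root->data ? searchBST(root->leftChild, key)
                            : searchBST(root->rightChild, key);
}
```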
             Delete in BST
   If the node is leaf, it can be deleted immediately.
(case 1)
             Delete in BST
   If the node has 1 child, the node can be deleted after
   its parent adjusts a pointer to bypass the node.
(case 2)
           Delete in BST
The complicated case deals with a node with 2
children:
the node's value (e.g., 3) is replaced with the smallest value
in its right subtree (e.g., 6), and then that node is deleted. (case 3)
// replace with the smallest in the right subtree
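The three delete cases above can be sketched as follows; deleteBST and findMin are illustrative names, and a minimal insert helper is included only so the fragment stands alone.

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *leftChild;
    struct node *rightChild;
};

/* Minimal insert helper so this fragment compiles alone. */
struct node *insertBST(struct node *root, int data) {
    if (root == NULL) {
        struct node *n = malloc(sizeof *n);
        n->data = data;
        n->leftChild = n->rightChild = NULL;
        return n;
    }
    if (data < root->data)
        root->leftChild = insertBST(root->leftChild, data);
    else if (data > root->data)
        root->rightChild = insertBST(root->rightChild, data);
    return root;
}

/* Smallest value in a subtree: keep going left. */
struct node *findMin(struct node *t) {
    while (t->leftChild != NULL)
        t = t->leftChild;
    return t;
}

struct node *deleteBST(struct node *root, int key) {
    if (root == NULL)
        return NULL;
    if (key < root->data)
        root->leftChild = deleteBST(root->leftChild, key);
    else if (key > root->data)
        root->rightChild = deleteBST(root->rightChild, key);
    else if (root->leftChild != NULL && root->rightChild != NULL) {
        /* case 3: two children - copy the smallest value of the
           right subtree here, then delete that node instead */
        root->data = findMin(root->rightChild)->data;
        root->rightChild = deleteBST(root->rightChild, root->data);
    } else {
        /* cases 1 and 2: leaf or one child - bypass the node */
        struct node *child = root->leftChild ? root->leftChild
                                             : root->rightChild;
        free(root);
        return child;
    }
    return root;
}
```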
                Tree Traversals
• Traversal is a process to visit all the nodes of a tree and
  may print their values too.
• Because all nodes are connected via edges (links), we
  always start from the root (head) node. That is, we cannot
  randomly access a node in a tree.
•   There are three ways in which we traverse a tree −
   In-order Traversal
   Pre-order Traversal
   Post-order Traversal
• Generally, we traverse a tree to search for or locate a given item
  or key in the tree, or to print all the values it contains.
                    Inorder Traversal
                    (Left,Root,Right)
     • In this traversal method, the left sub-tree is visited first, then root and
       then the right sub-tree.
     • We should always remember that every node may represent a subtree
       itself.
     • If a binary tree is traversed inorder, the output will produce sorted key
       values in ascending order.
We start from A, and following in-order traversal, we
move to its left subtree B. B is also traversed in-order.
The process goes on until all the nodes are visited.
The output of in-order traversal of this tree will be −
D→B→E→A→F→C→G
     Algorithm:
     Until all nodes are traversed −
     Step 1 − Recursively traverse left subtree.
     Step 2 − Visit root node.
     Step 3 − Recursively traverse right subtree.
        Preorder Traversal
   (Root,Left,Right)
  • In this traversal method, the root node is visited first, then left subtree
    and finally right sub-tree.
   • We start from A, and following pre-order traversal, we first visit A itself
     and then move to its left subtree B. B is also traversed pre-order.
     The process goes on until all the nodes are visited.
  • The output of pre-order traversal of this tree will be −
    A→B→D→E→C→F→G
Algorithm
Until all nodes are traversed −
Step 1 − Visit root node.
Step 2 − Recursively traverse left subtree.
Step 3 − Recursively traverse right subtree.
                Postorder Traversal
                (Left, Right, Root)
   • In this traversal method, the root node is visited last, hence
     the name. First we traverse left subtree, then right subtree
     and finally root.
    • We start from A, and following post-order traversal, we first visit
      the left subtree B. B is also traversed post-order. The
      process goes on until all the nodes are visited.
    • The output of post-order traversal of this tree will be −
       D→E→B→F→G→C→A
Algorithm
Until all nodes are traversed −
Step 1 − Recursively traverse left subtree.
Step 2 − Recursively traverse right subtree.
Step 3 − Visit root node.
                Inorder, Preorder & Postorder Routines

void inorder (Tree T)
{
   if (T != NULL)
   {
      inorder(T->left);
      printelement(T->element);
      inorder(T->right);
   }
}

void preorder (Tree T)
{
   if (T != NULL)
   {
      printelement(T->element);
      preorder(T->left);
      preorder(T->right);
   }
}

void postorder (Tree T)
{
   if (T != NULL)
   {
      postorder(T->left);
      postorder(T->right);
      printelement(T->element);
   }
}
                        AVL Tree
• Named after Adelson-Velskii and Landis, AVL trees are height-balanced
  binary search trees.
• An AVL tree checks the height of the left and right subtrees and ensures
  that the difference is not more than 1. This difference is called the
  Balance Factor.
• Here we see that the first tree is balanced and the next two trees are
  not balanced −
     BalanceFactor = height(left subtree) − height(right subtree)
                       AVL Tree
• In the second tree, the left subtree of C has height 2 and the right
  subtree has height 0, so the difference is 2.
• In the third tree, the right subtree of A has height 2 and the left is
  missing, so it is 0, and the difference is 2 again.
• An AVL tree permits the difference (balance factor) to be only 1, 0 or -1.
• If the difference in the height of the left and right subtrees is more
  than 1, the tree is balanced using rotation techniques.
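The balance factor above can be computed directly from subtree heights. A minimal sketch, counting height in nodes (empty = 0) to match the slide's example; height and balanceFactor are illustrative names.

```c
#include <stddef.h>

struct node {
    int data;
    struct node *leftChild;
    struct node *rightChild;
};

/* Height counted in nodes: an empty tree has height 0,
   a single node has height 1 (matches the slide's example). */
int height(struct node *t) {
    if (t == NULL)
        return 0;
    int hl = height(t->leftChild);
    int hr = height(t->rightChild);
    return 1 + (hl > hr ? hl : hr);
}

/* BalanceFactor = height(left subtree) - height(right subtree);
   the tree at t is AVL-balanced here only if this is -1, 0 or 1. */
int balanceFactor(struct node *t) {
    return t ? height(t->leftChild) - height(t->rightChild) : 0;
}
```

A left-leaning chain of three nodes, for instance, gives the root a balance factor of 2, which is exactly the unbalanced case the slides describe.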
                         AVL Rotations
To make itself balanced, an AVL tree may perform four kinds of
rotations −
• Left rotation
  An insertion into the right subtree of the right child of node i
• Right rotation
  An insertion into the left subtree of the left child of node i
• Left-Right rotation
  An insertion into the right subtree of the left child of node i
• Right-Left rotation
  An insertion into the left subtree of the right child of node i
• The first two rotations are single rotations, and
• the next two are double rotations.
                 Left Rotation
• Single Rotation
• If a tree becomes unbalanced when a node is inserted into
  the right subtree of the right subtree, then we perform a single
  left rotation −
 Node A has become unbalanced as a node is inserted in the
 right subtree of A's right subtree.
 Perform a left rotation by making A the left subtree of B.
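The single left rotation described above amounts to three pointer moves. A minimal sketch; leftRotate is an illustrative name and the node structure is the one used earlier.

```c
#include <stddef.h>

struct node {
    int data;
    struct node *leftChild;
    struct node *rightChild;
};

/* Single left rotation around A: A's right child B becomes the new
   subtree root, A becomes B's left subtree, and B's old left subtree
   is reattached as A's right subtree. Returns the new root. */
struct node *leftRotate(struct node *A) {
    struct node *B = A->rightChild;
    A->rightChild = B->leftChild;
    B->leftChild = A;
    return B;
}
```

The right rotation is the mirror image (swap left and right everywhere), and the two double rotations below are just one of each applied in sequence.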
               Left-Right Rotation
Double rotations are slightly more complex versions of the single
rotations already explained. To understand them better, we should take
note of each action performed during rotation. Let's first check how to
perform a Left-Right rotation: a left-right rotation is a combination of a
left rotation followed by a right rotation.
 A node has been inserted into the right subtree of the left subtree. This
 makes C an unbalanced node. These scenarios cause the AVL tree to perform
 a left-right rotation.
 We first perform a left rotation on the left subtree of C. This makes A the left subtree of B.
                  Left-Right Rotation
Node C is still unbalanced, but now it is because of the left subtree of its left subtree.
     We shall now right-rotate the tree, making B the new root node of this
     subtree. C now becomes the right subtree of its own left subtree.
       The tree is now balanced.
       Right-Left Rotation
The second type of double rotation is the Right-Left Rotation. It is a
combination of a right rotation followed by a left rotation.
 A node has been inserted into the left subtree of the right
 subtree. This makes A an unbalanced node, with
 balance factor 2.
 First, we perform a right rotation on node C, making C the
 right subtree of its own left subtree B. Now, B becomes the right
 subtree of A.
 Right-Left Rotation
Node A is still unbalanced because of the right subtree of
its right subtree, and requires a left rotation.
A left rotation is performed by making B the new
root node of the subtree. A becomes the left subtree
of its right subtree B.
The tree is now balanced.