UNIT-1
Visual Process Modeling
● Effective problem-solving requires accurately defining
the problem as a transition from an undesirable starting
state to a desirable goal state.
● Undesirable states are characterized by inefficiencies,
obstacles, or gaps, while desirable goals are specific,
measurable, and actionable.
● Identifying undesirable states and setting actionable
goals is crucial for effective problem-solving.
Strategies for Understanding the Problem
● Rephrasing the problem ensures clarity and shared
understanding among stakeholders.
● Visual representations like diagrams and flowcharts
can help understand complex problems.
● Listing known and unknown factors helps in
structuring the approach to problem-solving.
Defining the Goal
● Goals should have clear and measurable criteria,
adhering to the SMART framework (Specific,
Measurable, Achievable, Relevant, Time-bound).
● The solution specification should state the functionalities
that the solution must deliver.
● Feature specifications should describe the desired
features in detail to avoid ambiguity during
development.
Visualizing the Solution
● High-level descriptions provide a broad overview of
the solution without excessive detail.
● Specification development involves outlining desired
functionalities, performance metrics, and constraints.
● Prototyping and visualization create mock-ups to
gather feedback before full-scale implementation.
Continuous Validation
● Regular testing is essential to identify and rectify any
misalignments or unforeseen issues.
● Iterative refinement uses feedback loops to
continuously improve the solution.
● Measuring success involves using predefined metrics
to ensure the solution delivers the desired outcomes.
Decomposition in Problem Solving: An Elaboration
● Decomposition breaks down a large problem into
smaller, more manageable sub-problems or tasks.
● Solving each of these smaller problems individually
will ultimately lead to solving the entire problem.
● Decomposition simplifies complex problems,
improves time management, optimizes resource
allocation, enhances focus, and fosters collaboration.
Steps in Decomposition
● Clearly understand the problem, defining goals and
constraints.
● Divide the problem into smaller, manageable sub-
problems.
● Solve each sub-problem independently using
appropriate tools and techniques.
Steps in Decomposition (Continued)
● Combine the solutions to address the overall problem.
● Evaluate the effectiveness of the overall solution and
re-evaluate sub-problems if needed.
● Examples include planning a family vacation,
completing a school project, and building a computer,
each broken down into manageable steps.
Advantages of Decomposition in Problem Solving
● Decomposition leads to a clearer understanding of the
problem by breaking it into smaller components.
● It increases efficiency and speed by allowing
simultaneous work on multiple parts.
● Focus is improved as you give your full attention to
one part of the problem at a time.
Challenges in Decomposition
● Over-splitting the problem into too many sub-
problems can make it difficult to manage.
● Ignoring interdependencies between sub-problems
can lead to ineffective solutions.
● Focusing too much on individual components might
cause you to lose sight of the overall goal.
Example of Problem Decomposition: Organizing a
Family Reunion
● The main problem is to organize a fun and
memorable family reunion.
● Decompose the problem into sub-tasks such as
choosing a date and time, selecting a venue, creating
a guest list, planning food and beverages, organizing
activities, arranging decorations, ensuring safety and
creating a schedule.
● Solve each sub-problem independently by polling
family members, visiting venues, sending invitations,
deciding on catering, planning activities for different
age groups, and making seating arrangements.
Example of Problem Decomposition: Organizing a
Family Reunion (Continued)
● Combine the solutions, ensuring the venue is
prepared, invitations are sent, food is organized, and
a schedule is set.
● Review all components to confirm attendance, food
arrangements, bookings, and safety measures.
● Decomposition helps reduce stress and increases the
likelihood of a successful reunion.
Logical Sequencing in Problem Solving: An In-Depth
Exploration
● Logical sequencing involves arranging tasks in a
specific order to achieve a desired outcome.
● It ensures tasks are completed in the right order,
maximizes efficiency, and minimizes errors.
● Logical sequencing transforms a complex process
into a more predictable and manageable one.
Steps Involved in Logical Sequencing
● Understand the problem and define the objective by
asking key questions and clarifying the ultimate goal.
● Break down the problem into smaller, manageable
sub-problems.
● Establish logical relationships and sequence of
actions, identifying dependencies and whether tasks
can be parallel or sequential.
Steps Involved in Logical Sequencing (Continued)
● Set milestones and monitor progress to ensure tasks
are completed before moving to the next step.
● Ensure task completeness and avoid overlaps,
reviewing all steps and checking for dependencies
between tasks.
● Final execution and feedback loop involve
continuously assessing progress and making
adjustments as needed.
Example of Logical Sequencing: Organizing a Picnic
● The objective is to organize a successful family picnic.
● Steps include choosing a date and location, preparing
a guest list, making a food plan, shopping for
supplies, arranging transportation, setting up at the
location, preparing food, starting the picnic, and
cleaning up.
● Each step depends on the previous one, ensuring a
logical and efficient process.
Applying Logical Sequencing to More Complex
Problems
● For launching a new product, steps include defining
the objective, researching, developing a strategy,
designing, testing, launching, and providing post-
launch support.
● The ability to logically order tasks is essential for
maintaining efficiency and ensuring successful
outcomes.
● Incorporating logical sequencing ensures every step
is purposeful and contributes to the overall solution.
Algorithms: A Comprehensive Explanation
● An algorithm is a set of instructions designed to solve
a problem or perform a task.
● Key characteristics include clarity, finiteness, input
and output, and effectiveness.
● Algorithms are encountered in many aspects of daily
life, such as following a recipe.
Examples of Algorithms in Daily Life
● Making a cup of tea involves clear, finite steps with
water and a tea bag as input, and a cup of tea as
output.
● Sorting a list of books alphabetically involves
comparing and swapping books until they are
arranged, with the unordered list as input and the
ordered list as output.
● Finding the best route involves comparing available
routes and choosing the shortest, with location and
routes as input, and the best route as output.
Examples of Algorithms in Daily Life (Continued)
● Solving a math problem like addition involves taking
two numbers as input and producing their sum as
output.
● A morning routine involves a series of steps from
waking up to leaving the house, with the waking state
as input and being ready for the day as output.
● Algorithms are structured processes used daily to
solve problems, make decisions, and complete tasks
efficiently.
Pseudocode: A Non-Technical Explanation
● Pseudocode is a simple, informal way to express
algorithms using plain language.
● It helps understand the logic, simplifies complex
problems, is cross-platform, and facilitates
communication.
● Basic principles include using simple language,
structuring steps, being specific, and avoiding
technical details.
Examples of Pseudocode
● Pseudocode for making a cup of tea uses simple
language to describe the steps involved.
● Pseudocode for sorting books alphabetically outlines
the logic of the bubble sort algorithm.
● Pseudocode for finding the largest number uses
simple comparisons to find the largest number in the
list.
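The "find the largest number" pseudocode translates almost line-for-line into a program. A minimal sketch (in Python, used here purely for illustration; the notes do not prescribe a language):

```python
def find_largest(numbers):
    # Assume the list is non-empty; start with the first element.
    largest = numbers[0]
    # Compare every remaining element against the current largest.
    for value in numbers[1:]:
        if value > largest:
            largest = value
    return largest
```

For example, `find_largest([3, 7, 2, 9, 4])` returns 9.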
Examples of Pseudocode (Continued)
● Pseudocode for a morning routine is a clear
representation of how to get ready for the day.
● Advantages of using pseudocode include clarifying
the problem-solving process, easy modification,
universal understanding, and efficient communication.
● Pseudocode helps break down complex tasks into
manageable steps, making it easier to understand
and solve problems.
Flowchart
● A flowchart is a graphical representation of an
algorithm or process using symbols and arrows.
● Flowcharts consist of input, process, and output
steps.
● Standard symbols include start/end, process,
decision, input/output, connector, and flow arrow.
Elements of Flowchart
● Terminal indicates Start, Stop, and Halt.
● Input/Output denotes any function of input/output
type.
● Processing represents arithmetic instructions.
Elements of Flowchart (Continued)
● Decision represents a decision point.
● Flow lines indicate the exact sequence in which
instructions are executed.
● The document includes flowcharts for adding two
numbers, calculating the area of a circle, converting
temperature from Fahrenheit to Celsius, finding the
greatest from 2 numbers, printing even numbers, and
printing odd numbers.
Elements of Flowchart (Continued)
● The document includes flowcharts for calculating the
average from 25 exam scores, finding the largest
number among three numbers, and finding the area
and perimeter of a square.
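The flowcharts listed above map directly onto simple programs: decision symbols become if/else branches and process symbols become assignments. A sketch of two of them (in Python, an illustrative choice):

```python
def largest_of_three(a, b, c):
    # Each decision box of the flowchart becomes a branch.
    if a >= b and a >= c:
        return a
    elif b >= a and b >= c:
        return b
    return c

def square_area_perimeter(side):
    # Process boxes: area = side * side, perimeter = 4 * side.
    return side * side, 4 * side
```

For example, `largest_of_three(3, 9, 5)` returns 9, and `square_area_perimeter(4)` returns `(16, 16)`.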
UNIT-2
2.1 ANALYSIS AND VERIFICATION
● An algorithm is a sequence of unambiguous
instructions for solving a problem in a finite amount of
time.
● Analyzing algorithms helps evaluate suitability for
applications and can lead to improvements.
● Computational complexity classifies algorithms by
efficiency and problems by difficulty, focusing on
order-of-growth worst-case performance.
Analysis of Algorithms
● A complete analysis involves implementing the
algorithm, determining time for basic operations, and
developing a realistic input model.
● Basic analysis includes Time Complexity Analysis,
Space Complexity Analysis, and Correctness
Analysis.
● Time complexity uses notations like Big O, Omega,
and Theta; Space complexity evaluates memory
usage.
Worst-case, Best-case, and Average-case Analysis
● Worst-case analysis evaluates maximum time or
space required, while best-case assesses the
minimum.
● Average-case analysis considers the expected
performance over all possible inputs.
● Asymptotic analysis focuses on behavior as input size
approaches infinity, aiding in scalability
understanding.
Order of Growth
● Order of growth, expressed using Big O notation,
describes algorithm behavior as input size increases.
● Types of asymptotic notations include Big O, Omega,
and Theta, allowing analysis and comparison of
algorithm efficiency.
● Asymptotic notation simplifies algorithm analysis by
focusing on growth rates without constant factors.
Asymptotic Notations
● Asymptotic notations are mathematical tools for
expressing the time complexity of algorithms,
including Best Case, Average Case and Worst Case.
● Three main asymptotic notations are Big Oh (O),
Omega (Ω), and Theta (θ).
● Big O notation represents the upper bound
(worst-case complexity), where f(n) is O(g(n)) if there
exist constants c > 0 and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Big O Asymptotic Notation, Ο
● Big O notation describes the upper bound of an
algorithm's growth rate, representing the worst-case
scenario.
● Common orders of growth include O(1) constant,
O(log n) logarithmic, O(n) linear, O(n log n) log-linear,
O(n^2) quadratic, O(2^n) exponential and O(n!)
factorial.
● Asymptotic analysis with order of growth helps in
algorithm selection, optimization, and scalability for
efficient problem-solving.
Omega Notation Ω
● Omega notation represents the lower bound of an
algorithm’s running time, providing the best-case
complexity.
● It indicates the condition allowing an algorithm to
complete execution in the shortest time.
● Function f is Ω(g) if there is a constant c > 0
and a natural number n0 such that c*g(n) ≤
f(n) for all n ≥ n0.
Theta Notation θ
● Theta notation bounds the function from above and
below, giving a tight bound on an algorithm's running
time; it is commonly used when stating average-case
complexity.
● It represents both the upper and lower bounds of an
algorithm's running time.
● Function f is Θ(g) if there are constants c1, c2
> 0 and a natural number n0 such that c1*
g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
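These bounds can be checked numerically for a concrete function. As an example (the function and constants are our illustrative choice, not from the notes), f(n) = 3n + 2 is Θ(n) with c1 = 3, c2 = 4, and n0 = 2:

```python
def f(n):
    return 3 * n + 2   # hypothetical running-time function

def g(n):
    return n           # the growth rate we compare against

# Illustrative constants: c1 = 3, c2 = 4, n0 = 2.
c1, c2, n0 = 3, 4, 2

# Verify c1*g(n) <= f(n) <= c2*g(n) over a range of n >= n0.
holds = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
```

Here 3n ≤ 3n + 2 always, and 3n + 2 ≤ 4n as soon as n ≥ 2, so `holds` is true.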
Space and Time Complexity
● Algorithm analysis involves estimating efficiency
based on space and time complexity.
● Space complexity is the amount of space required by
an algorithm.
● Time complexity is the amount of time needed by an
algorithm to run to completion.
Generally, we make three types of analysis, which are
as follows:
● Analyzing algorithms involves finding the best in
terms of time and memory usage before
implementation.
● Three types of analysis include worst-case, average-
case, and best-case time complexity.
● Worst-case is the maximum time, average-case is the
average time, and best-case is the minimum time
needed for execution.
Complexity of Algorithm
● Algorithm complexity measures the number of steps
required to solve a problem, evaluating the order of
operations.
● O(f) notation represents the algorithm's complexity,
also termed Asymptotic or "Big O" notation.
● Complexity can be constant, logarithmic, linear,
n*log(n), quadratic, cubic, or exponential, indicating
the order of steps encountered.
Typical Complexities of an Algorithm
● Constant Complexity: O(1) involves a constant
number of steps, independent of input size.
● Logarithmic Complexity: O(log(N)) involves log(N)
steps, often with a base of 2, e.g., 20 steps for N =
1,000,000.
● Linear Complexity: O(N) involves a number of steps
directly proportional to the number of elements N,
e.g., 500 steps for 500 elements.
Typical Complexities of an Algorithm (cont.)
● Linearithmic complexity has a runtime of O(n log n),
where the number of steps grows as N log(N).
● Quadratic complexity: O(n^2) involves N^2 operations
on N elements, e.g., 10,000 steps if N = 100.
● Exponential complexity: O(2^n) involves a number of
operations that grow exponentially with input data
size.
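The step counts above can be made concrete with small sketch functions (Python, illustrative):

```python
def constant_steps(items):
    # O(1): one step regardless of input size.
    return items[0]

def linear_steps(items):
    # O(N): touches each element exactly once.
    total = 0
    for x in items:
        total += x
    return total

def quadratic_steps(items):
    # O(N^2): pairs every element with every element,
    # e.g. 10,000 iterations when N = 100, as in the text.
    pairs = 0
    for i in range(len(items)):
        for j in range(len(items)):
            pairs += 1
    return pairs
```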
Space and time complexity
● Time complexity measures the time an algorithm
takes as a function of input size, expressed in Big O
notation.
● Linear Search has O(n) time complexity, as it iterates
through each element of the array once.
● Binary Search has O(log n) time complexity, halving
the search space in each iteration.
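Both searches can be sketched in a few lines (Python, illustrative): linear search scans every element once, while binary search repeatedly halves a sorted array.

```python
def linear_search(arr, target):
    # O(n) time: examine each element in turn.
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def binary_search(arr, target):
    # O(log n) time: arr must be sorted; halve the range each step.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```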
Space Complexity
● Space complexity is the amount of memory required
by an algorithm as a function of input size, expressed
in Big O notation.
● Linear Search has O(1) space complexity because it
uses constant space, regardless of input size.
● Recursive Factorial Calculation has O(n) space
complexity due to the memory consumed by recursive
calls on the call stack.
2.2 BRUTE FORCE
● Brute force algorithms explore every option to solve a
problem, suitable for small-scale issues due to high
temporal complexity.
● Features include methodical listing of potential
solutions, relevance to small problem spaces, and no
optimization or heuristics.
● Pros: Guaranteed correct solution, generic method,
ideal for simpler problems; Cons: Inefficient for real-
time problems, compromises system power.
Applications of Brute Force Algorithm
● String Matching: Brute force matches the pattern with
the text, character by character, sliding the pattern
along the text.
● The running time of the string matching algorithm is
O(nm).
● Traveling Salesman Problem: Find the shortest route
to visit all cities exactly once and return to the starting
point.
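The character-by-character sliding match described above can be sketched as follows (Python, illustrative):

```python
def brute_force_match(text, pattern):
    # Slide the pattern one position at a time along the text;
    # O(n*m) in the worst case for text length n, pattern length m.
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i  # index of the first match
    return -1         # no occurrence found
```

For example, `brute_force_match("abcabcd", "abcd")` returns 3.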
Steps to Solve the Problem
● Solve TSP by building the adjacency matrix of the graph,
performing the shortest_path algorithm, and
understanding next_permutation.
● To build the adjacency matrix, create a multidimensional
array edges_list having dimension num_nodes *
num_nodes.
● Run a loop over the edges, take inputs first_node
and second_node, and set edges_list[first_node]
[second_node] equal to 1.
Step - 2 - Performing The Shortest Path Algorithm
● Core steps include considering a starting city,
generating permutations of the remaining cities, and
calculating the edge sum.
● After calculating the edge sum, track the minimum
path sum and return the minimum edge cost.
● The next_permutation() function rearranges objects in
lexicographical order and returns true if a greater
arrangement exists.
Complexity
● TSP's time complexity is proportional to n! (factorial of
n), making it O(n!).
● TSP space complexity is O(n^2) due to the n × n
adjacency matrix.
● Most of the space in this graph algorithm is taken by
the adjacency matrix.
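The whole brute-force TSP can be sketched compactly (Python, illustrative; `itertools.permutations` plays the role that `next_permutation` plays in the C++ description, and the cost matrix below is a made-up example):

```python
from itertools import permutations

def tsp_brute_force(dist, start=0):
    # dist is an n x n cost matrix (the adjacency matrix from the text).
    # Try every permutation of the remaining cities: O(n!) time.
    cities = [c for c in range(len(dist)) if c != start]
    best = float("inf")
    for perm in permutations(cities):
        route = [start] + list(perm) + [start]  # return to the start
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        best = min(best, cost)
    return best
```

For the 4-city matrix `[[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]]` this returns 80.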
2.3 DIVIDE AND CONQUER
● Divide and Conquer involves breaking a problem into
smaller pieces, solving each independently, and
merging solutions.
● The steps are: Divide the problem, Conquer each
subproblem recursively, and Combine the solutions.
● Applications include Binary Search, Quicksort, Merge
Sort, Closest Pair of Points, and Strassen's Algorithm.
Applications of Divide and Conquer Approach (cont.)
● Further applications include Cooley-Tukey Fast
Fourier Transform (FFT) and Karatsuba algorithm for
fast multiplication.
● Advantages include solving complex problems,
efficient cache memory usage, and parallelism.
● Disadvantages include high memory management
needs due to recursion and potential system crashes
from excessive recursion.
Merge Sort
● Merge sort divides an array into equal halves until
atomic values are reached, without hampering
element order.
● It merges atomic values back, comparing elements to
form new sorted lists.
● The process repeats until all data values are sorted.
Merge Sort (cont.)
● Merge sort involves dividing the list until it can no
more be divided.
● The algorithm conquers by sorting each subarray
individually.
● The sorted subarrays are then merged back together
in sorted order until all elements are merged.
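The divide, conquer, and combine steps above can be sketched as (Python, illustrative):

```python
def merge_sort(arr):
    # Divide until single-element (atomic) lists remain.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves, comparing elements.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # Append whatever remains of either half.
    return merged + left[i:] + right[j:]
```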
Efficiency
● Divide and conquer efficiency is evaluated by time
and space complexity.
● Time complexity depends on subproblem count, size,
and cost of dividing and combining.
● Merge Sort has O(nlog n) time complexity and Binary
Search has O(log n) time complexity.
Efficiency (cont.)
● Space complexity in divide and conquer depends on
auxiliary space and recursive stack space.
● Merge Sort requires O(n) auxiliary space and O(log n)
stack space, while Quick Sort requires O(log n) stack
space in the best case.
● Understanding time and space complexity is crucial
for evaluating algorithm efficiency and scalability.
2.4 GREEDY ALGORITHM
● A Greedy algorithm selects the best option at each
step, aiming for the optimal result but not guaranteed
to find it.
● It's easy to understand and implement but requires
properties like Greedy Choice Property and Optimal
Substructure.
● Steps include finding the best subproblem,
determining solution components, and creating an
iterative process for an optimum solution.
Example of Greedy Algorithm
● Alex's task problem involves maximizing the
number of tasks completed within a time limit.
● A greedy solution involves sorting tasks by time,
selecting tasks iteratively, and updating current time
until it reaches the limit.
● A route-finding problem is solved by selecting vertices
with minimum edge weight and adding them to a tree
structure without creating cycles.
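The sort-then-select strategy for the task problem can be sketched as (Python, illustrative; the task durations in the example are made-up values):

```python
def max_tasks(durations, time_limit):
    # Greedy choice: always take the shortest remaining task first.
    completed, current = 0, 0
    for d in sorted(durations):
        if current + d > time_limit:
            break            # the time limit would be exceeded
        current += d         # update the current time
        completed += 1
    return completed
```

For example, with durations `[4, 2, 1, 3]` and a limit of 6, the tasks of length 1, 2, and 3 fit, so `max_tasks` returns 3.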
Limitations of Greedy Algorithm
● Greedy algorithms make decisions without
considering the broader problem, not guaranteeing
the best answer.
● Analyzing the accuracy of a greedy algorithm is
problematic; even with a proper solution,
demonstrating its accuracy is difficult.
● Optimization problems with negative graph edges
cannot be solved using a greedy algorithm.
Applications of Greedy Algorithm
● The Greedy Algorithm is used for constructing
Minimum Spanning Trees using Prim’s and Kruskal’s
Algorithms.
● It is used to implement Huffman Encoding for
compressing files.
● The Greedy Algorithm is also used to solve
optimization problems, such as Graph - Map Coloring
and the Knapsack Problem.
Characteristics of a Greedy Method
● The greedy method involves making locally optimal
choices at each stage, but it’s not always guaranteed
to find the best solution.
● It is relatively easy to implement and understand.
● The greedy method can be quite slow when the
selection step at each stage is expensive.
Components of a Greedy Algorithm
● Key components include candidate solutions (typically
a graph), ranking criteria, a selection function, and a
pruning method.
● The selection function picks the candidate with the
highest ranking.
● The pruning step ensures the algorithm doesn't
consider solutions that are worse than the best one
found so far.
Pseudo Code of Greedy Algorithms
● Pseudo code involves setting the current state to the
initial state and looping until the current state equals
the goal state.
● Within the loop, it chooses the next state using the
chooseNextState() function and sets the current state
to the next state.
● The chooseNextState() function finds all possible next
states and selects the one closest to the goal state.
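The pseudocode above can be sketched as a higher-order function (Python, illustrative; `next_states` and `distance` are assumed to be supplied by the caller, mirroring chooseNextState()):

```python
def greedy_search(initial, goal, next_states, distance):
    # Loop until the current state equals the goal state,
    # always moving to the candidate state closest to the goal.
    current = initial
    path = [current]
    while current != goal:
        # chooseNextState(): pick the successor nearest the goal.
        current = min(next_states(current), key=lambda s: distance(s, goal))
        path.append(current)
    return path
```

Note this sketch assumes the locally best move always makes progress toward the goal; in other state spaces a greedy loop like this can stall or cycle, which is exactly the limitation discussed above.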
Disadvantages of Using Greedy Algorithms
● A major disadvantage of greedy algorithms is that
they may not find the optimal solution and can be very
sensitive to changes in input data.
● Proving that a greedy algorithm actually produces an
optimal solution can be difficult, even when the
algorithm itself is simple.
● The Fractional Knapsack Problem is solvable using a
greedy approach by calculating and sorting the
profit/weight ratio for each item.
Fractional Knapsack Problem
● To solve the Fractional Knapsack Problem calculate
the ratio (profit/weight) for each item.
● Sort all items in decreasing order of the ratio, then
initialize res = 0, curr_cap = given_cap.
● Iterate and if the weight of the current item is less
than or equal to remaining capacity, add the value to
result, otherwise add as much as possible.
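The steps above can be sketched directly (Python, illustrative; items are given as (profit, weight) pairs):

```python
def fractional_knapsack(items, capacity):
    # Sort items by profit/weight ratio in decreasing order.
    res, curr_cap = 0.0, capacity
    for profit, weight in sorted(items,
                                 key=lambda pw: pw[0] / pw[1],
                                 reverse=True):
        if weight <= curr_cap:
            res += profit                      # take the whole item
            curr_cap -= weight
        else:
            res += profit * curr_cap / weight  # take as much as fits
            break
    return res
```

For example, items `[(60, 10), (100, 20), (120, 30)]` with capacity 50 yield a maximum profit of 240.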
2.5 BACKTRACKING ALGORITHM
● Backtracking is a brute-force algorithm that explores
all possible solutions and backtracks if the current
solution is not suitable.
● Backtracking algorithms are used when sufficient
information isn't available to make the best choice.
● It uses recursion, and each decision leads to a new set
of choices that may require backtracking.
How Does a Backtracking Algorithm Work?
● Backtracking seeks a path to a feasible solution
through intermediate checkpoints.
● If checkpoints don't lead to a viable solution, the
algorithm returns to checkpoints and takes another
path.
● The algorithm exhausts all possible combinations until
a viable solution is found, making it a brute-force
technique.
Applications of Backtracking
● Backtracking is applied in the N-queen problem, sum
of subset problem, graph coloring, and Hamilton
cycle.
● The N-Queens problem is to place N queens on an N
x N chessboard so that no queens attack each other.
● The queens cannot be in the same row, column, or
diagonal.
4 - Queens Problem
● The queen can move any number of steps in any
direction (vertical, horizontal, diagonal), but can’t
change direction while moving.
● To solve it, use the backtracking approach.
● Backtracking starts with one possible move, then
backtracks and selects another if the move doesn't
lead to a solution.
Algorithm for N-Queens Problem using Backtracking
● Place queens row-wise from the left-most cell.
● Return true and print the solution matrix if all queens
are placed.
● If placing the queen doesn't lead to a solution, unmark
the cell, backtrack, and try other rows.
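The row-wise placement with backtracking can be sketched as (Python, illustrative; it returns the first solution found, and for n = 4 that places the queens in columns 2, 4, 1, 3 of rows 1–4, matching the worked example in these notes):

```python
def solve_n_queens(n):
    cols = []  # cols[r] = column index (0-based) of the queen in row r

    def safe(col):
        row = len(cols)  # the row we are about to fill
        for r, c in enumerate(cols):
            # Attacked if same column or same diagonal.
            if c == col or abs(c - col) == row - r:
                return False
        return True

    def place(row):
        if row == n:
            return True  # all queens placed: solution found
        for col in range(n):
            if safe(col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()  # backtrack: unmark and try the next column
        return False

    return cols if place(0) else None
```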
The Working of an Algorithm to solve the 4-Queens
Problem
● Place a queen in a position and check all possibilities
for attacks.
● Ensure only one queen in each row/column and
backtrack if attacked in all positions.
● Repeat the process until all queens are successfully
placed.
Step 1
● Create a 4×4 chessboard for the 4-Queens problem.
● Place the Queen Q1 at the left-most position (row 1,
column 1).
● Mark the cells under attack from Q1 with cross marks.
Step 3
● The possible safe cells for Queen Q2 in row 2 are
columns 3 and 4, because these cells are not under
attack from queen Q1.
● So, here we place Q2 at the first possible safe cell
which is row 2 and column 3.
● Mark the cells of the chessboard with cross marks
that are under attack from a queen Q2.
Step 4
● No safe cell remains for Queen Q3 if we place Q2 at
position [2, 3]; therefore, mark position [2, 3] false
and backtrack.
● We place Q2 at the second possible safe cell which is
row 2 and column 4.
● Mark the cells of the chessboard with cross marks
that are under attack from a queen Q2.
Step 6
● The only possible safe cell for Queen Q3 is remaining
in row 3 and column 2.
● Therefore, we place Q3 at the only possible safe cell
which is row 3 and column 2.
● Mark the cells of the chessboard with cross marks
that are under attack from a queen Q3.
Step 7
● No safe cell remains for Queen Q4 if we place Q3 at
position [3, 2]; therefore, mark position [3, 2] false
and backtrack.
● Backtrack to the first queen Q1.
● Place the Queen Q1 at column 2 of row 1.
Step 9
● The only possible safe cell for Queen Q2 is remaining
in row 2 and column 4.
● Place the Queen Q2 at column 4 of row 2.
● Mark the cells of the chessboard with cross marks
that are under attack from a queen Q2.
Step 10
● The only possible safe cell for Queen Q3 is remaining
in row 3 and column 1.
● Place the Queen Q3 at column 1 of row 3.
● Mark the cells of the chessboard with cross marks
that are under attack from a queen Q3.
Step 11
● The only possible safe cell for Queen Q4 is remaining
in row 4 and column 3.
● Therefore, place Q4 at row 4 and column 3.
● A solution to the 4-queens problem is achieved, with
exactly one queen placed in each row and column.
Working of an Algorithm
● One solution involves placing queens according to
rows, while another involves moving each queen one
step forward clockwise.
● Instead of placing queens according to rows, the
same process can be done column-wise.
● The N - Queens problem is to place N - queens in
such a manner on an N x N chessboard that no
queens attack each other.
Space and time complexity: Backtracking algorithm
● Backtracking time complexity depends on the number
of recursive calls, e.g., O(2^N) if the function calls
itself twice per step, and O(K^N) in general for K calls.
● For the N-Queens problem, the time complexity is
O(n!) because the queen function iterates n times and
calls the place(row) function for each iteration.
● N-Queens has a space complexity of O(N^2), where
N is the number of queens.
UNIT-3
3.1 Elementary Data Organization
● Data structures organize data logically.
● Choosing a data model depends on mirroring real-
world relationships and processing data effectively.
● Basic terminology includes data, data items (group
and elementary), entities, fields, records, and files.
Need of Data Structure
● Data structure modification should be easy and
require less time.
● They need to save storage memory space and data
representation needs to be easy.
● They need to provide easy access to large databases.
Data Type and Data Structure
● Data types define variable forms, while data
structures collect different kinds of data using objects.
● Data types hold individual values, while data
structures hold collections of data, possibly of
multiple types.
● Data types have abstract implementations, whereas
data structures have concrete implementations, and
time complexity is more important in data structures.
Categories of Data Structure
● Linear data structures arrange elements sequentially
(e.g., array, stack, queue, linked list).
● Static data structures have fixed memory size (e.g.,
array), while dynamic data structures can be updated
during runtime (e.g., queue, stack).
● Non-linear data structures do not arrange elements
sequentially (e.g., trees, graphs), and traversal
requires multiple runs.
3.2 Abstract Data Type – ADT
● ADTs provide a logical view of data objects with
specified operations, enabling reuse and modularity.
● ADTs consist of data declaration, operation
declaration, and encapsulation.
● Key features of ADTs include abstraction, modularity,
and separation of concerns.
Components of ADTs
● Data defines the type of information stored (e.g.,
integers, strings).
● Operations specify actions on the data, including
insertions, deletions, searching, and sorting.
● The interface is the user-facing contract, while the
implementation is the internal mechanism.
Common Abstract Data Types
● List ADT: Ordered collection allowing duplicates, with
operations like insert, delete, and traverse.
● Stack ADT: LIFO structure with push, pop, and peek
operations.
● Queue ADT: FIFO structure with enqueue and
dequeue operations, including variants like circular
queue and priority queue.
● Set ADT: Collection of unique elements with union,
intersection, and difference operations.
● Map (Dictionary) ADT: Collection of key-value pairs
with insert, delete, and search operations.
Abstract Data Models
● Abstract Data Models describe how ADTs can be
used to design data-centric systems, separating
logical functionality from implementation details.
● Benefits include flexibility, scalability, and code
reusability.
● ADTs can be implemented using arrays, linked lists,
or trees, depending on performance requirements.
Applications of ADTs
● ADTs facilitate modular design and reusable
components in software development.
● They enable efficient data retrieval and management
in database systems.
● ADTs are used for optimal path calculation in network
routing.
3.3 Fundamentals of Linear Data Structures
● Linear data structures arrange data elements
sequentially.
● Characteristics include linear sequence, easy memory
implementation, and single-level traversal.
● Types include arrays, queues, stacks, and linked lists.
Types of Linear Data Structures
● Arrays store homogeneous elements at contiguous
memory locations, accessed via indexing.
● Linked lists store objects sequentially with each object
containing data and a reference to the next object.
● Stacks follow LIFO/FILO principle with push and pop
operations.
Queue
● Queue is a type of data structure where the elements
to be stored follow the rule of First In First Out (FIFO).
● Variants include the input-restricted queue, output-
restricted queue, circular queue, double-ended queue,
and priority queue.
● The two main operations governing the structure of
the queue are enqueue and dequeue.
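The enqueue and dequeue operations can be sketched with Python's `collections.deque` (an illustrative choice; any FIFO container works):

```python
from collections import deque

queue = deque()

# enqueue: add elements at the rear.
queue.append("a")
queue.append("b")
queue.append("c")

# dequeue: remove from the front (First In, First Out).
first = queue.popleft()   # "a", the earliest element enqueued
```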
Pros and Cons of Linear Data Structures
● Advantages include sequential element access,
efficient data insertion/deletion, and easy
implementation.
● Disadvantages include fixed-size arrays, inefficient
search operations, and increased memory usage for
linked lists.
3.3.1 Arrays and Array Operations
● Arrays store elements of the same data type in
contiguous memory locations, accessed via an index
system.
● Arrays help solve high-level problems and make it
easier to manipulate and maintain the data.
● There are One-Dimensional Arrays, Two-Dimensional
Arrays, and Three-Dimensional Arrays.
How Do You Initialize an Array?
● Method 1: int a[6] = {2, 3, 5, 7, 11, 13};
● Method 2: int arr[]= {2, 3, 5, 7, 11};
● Method 3: dynamic input using loops.
Array Operations
● Common operations include traversing, insertion,
deletion, searching, and sorting.
● Traversing involves visiting each element once, e.g.,
printing each element.
● Insertion can occur at the beginning, end, or a specific
index, each with corresponding code examples.
Array Operations Cont.
● Deletion removes elements from the beginning or
end.
● Searching for an element can be done linearly or
using a binary search (for sorted arrays).
● Sorting arranges elements in a user-defined order.
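These operations can be sketched on a Python list (an illustrative stand-in for a fixed-size array):

```python
arr = [2, 3, 5, 7, 11]

# Traversal: visit each element once (here, summing them).
total = sum(arr)          # 28

# Insertion at a specific index (later elements shift right).
arr.insert(2, 4)          # arr is now [2, 3, 4, 5, 7, 11]

# Deletion from the end and from the beginning.
arr.pop()                 # removes 11
arr.pop(0)                # removes 2

# Searching (linear) and sorting in a user-defined order.
found = 7 in arr          # True
arr.sort(reverse=True)    # descending: [7, 5, 4, 3]
```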
3.3.2 List and Operations in Linked List
● A linked list stores data elements dynamically,
connected by pointers.
● Linked lists are needed because their size can be
increased or decreased at runtime as required.
● Common operations include traversal, insertion,
deletion, searching, and sorting.
Representation of linked lists
● We can visualize a linked list as a chain of nodes;
where every node contains a data field and a
reference link to the next node in the list.
● The first node in the sequence is termed the ‘head’.
● Each node consists of a data field and a reference to
the next node.
Insertion after specified node
● Insertion involves creating a new node and linking it
into the list after the specified node.
● Doing this will place the new node in the middle of the
two nodes, as we wanted.
● Insertion at the beginning and end of the linked list
likewise involves updating pointers.
Deletion
● Just like insertion, deletion is also a multi-step
process.
● First, we locate the node to be removed by using a
searching algorithm.
● Once it is located, we change the reference link of the
previous node so that it points to the node after the
target node.
Searching and Sorting
● Searching uses a loop to find a node in the list.
● Sorting elements uses Bubble Sort, comparing and
swapping adjacent nodes.
● To display all the nodes in the linked list, traverse with
a temporary pointer; when temp becomes null, every
node has been visited.
3.3.3 Doubly and Circularly Linked List
● A doubly linked list allows each element to link to both
the next and previous nodes, enhancing navigation.
● Doubly Linked List can manage the user's history,
allowing easy forward (redo) and backward (undo)
movements through previously visited pages.
● Each node in a doubly linked list includes data, a next
pointer, and a prev pointer.
How It Works?
● Head and tail pointers enhance operations such as
adding or removing elements at the ends of the list.
● Traversal can start from the head to move forward
through the list, or from the tail to move backward.
● Insertions and deletions can be done efficiently at
any point in the list.
Difference between Singly and Doubly Linked List
● Traversal: a singly linked list can be traversed only
forwards, while a doubly linked list can be traversed
both forwards and backwards.
● Memory usage: a singly linked list uses less memory
per node, while a doubly linked list uses more memory
per node.
● A singly linked list is typically slower for operations at
the end of the list, while a doubly linked list is faster
there thanks to the tail pointer.
Doubly Linked List Operations
● Inserting nodes involves adjusting pointers from both
directions, including at the beginning, end, or after a
given node.
● Removing nodes requires pointer adjustments from
preceding and succeeding nodes, handling the head
and tail.
● Searching involves traversing the list from either the
head or tail.
Advantages and Disadvantages of Doubly Linked
Lists
● Advantages include bidirectional navigation, easier
insertion/deletion, and efficient operations at both
ends.
● Disadvantages include increased memory usage,
complexity in managing pointers, and slower
individual operations.
Circular Lists
● A circular linked list has each node pointing to the
next, with the last node pointing back to the first.
● Node: each element in the list is contained in a
"node".
● Head: a pointer or reference to the first node in the
list.
Types of Circular Linked List
● In a singly circular linked list, each node contains a
single pointer that points to the next node in the
sequence.
● A doubly circular linked list enhances the singly
circular linked list by adding an additional pointer to
each node.
● Each type has its own advantages and use cases.
Circular Linked List Operations
● The search operation in a circular linked list involves
traversing the list from the head and checking each
node's data against the search value.
● Deleting nodes from a circular linked list can be more
complex, especially handling the head and ensuring
the circular nature of the list is maintained.
Applications of linked list in computer science:
● Implementation of stacks, queues, and graphs.
● Dynamic memory allocation and Maintaining a
directory of names.
● Performing arithmetic operations on long integers and
Manipulation of polynomials.
Applications of linked list in the real world:
● Image viewer and Previous and next page in a web
browser.
● Music Player and GPS navigation systems.
● Robotics and Task Scheduling.
Applications of Circular Linked Lists
● Useful for the implementation of a queue.
● Used in database systems to implement linked data
structures and Traffic light control systems.
● Video games use circular linked lists to manage sprite
animations and Circular Doubly Linked Lists are used
for the implementation of advanced data structures
like the Fibonacci Heap.
Application of Doubly Linked Lists:
● Redo and undo functionality and Use of the Back and
forward button in a browser.
● The most recently used section of a cache is
represented by a doubly linked list; doubly linked lists
also underlie structures such as the stack, hash table,
and binary tree.
● Operating systems use doubly linked lists to manage
the process scheduling and Used in networking.
3.3.4 Stacks
● Stacks are LIFO/FILO linear data structures used for
managing function calls, expression evaluation, and
backtracking.
● Stacks are also used in parsing algorithms, undo
functionality, and memory management.
● Basic features include ordered list, LIFO structure,
push and pop functions, and overflow/underflow
states.
Basic operations of Stack:
● push() inserts an element into the stack.
● pop() removes an element from the stack.
● top() returns the top element of the stack.
Basic operations of Stack cont.:
● Push algorithm adds an item to the stack.
● Pop algorithm removes an item from the stack.
● The top and isEmpty algorithms return the top
element and whether the stack is empty, respectively.
Types of Stacks:
● Fixed-size stack: has a fixed size and cannot grow or
shrink dynamically.
● Dynamic-size stack: can grow or shrink dynamically.
● Specialized stacks include the infix-to-postfix stack
and the expression evaluation stack.
Implementation of Stack:
● The basic operations that can be performed on a
stack include push, pop, and peek.
● There are two ways to implement a stack – Using
array and Using linked list.
● Implementing a stack using arrays requires the push,
pop, peek, and isEmpty functions.
Applications of the Stack:
● Infix to Postfix /Prefix conversion and Redo-undo
features.
● Forward and backward features in web browsers and
Used in many algorithms.
● Backtracking is used to solve problems and In Graph
Algorithms.
3.3.5 Queues
● Queues are FIFO linear data structures operated on
at both ends: the rear (insertion) and the front (deletion).
● Queue representation: a queue behaves like the line
at a movie ticket counter, where the first person to
arrive is served first.
● The queue can be implemented using arrays, linked
lists, pointers, and structures.
Basic Operations for Queue in Data Structure
● Elements can be operated on only at the two data
pointers, front and rear.
● Enqueue() - Insertion of elements to the queue.
● Dequeue() - Removal of elements from the queue.
Enqueue() Operation
● The following steps should be followed to insert
(enqueue) a data element into a queue.
● Check whether the queue is full; if it is, report an
overflow error.
● Increment the rear pointer to point to the next
available empty space and Add the data element to
the queue location where the rear is pointing.
Dequeue() Operation
● Check whether the queue is empty; if it is, report an
underflow error.
● If the queue is not empty, access the data where the
front pointer is pointing.
● Increment front pointer to point to the next available
data element.
Functions Cont.
● Peek() function extracts the data element where the
front is pointing without removing it from the queue.
● isFull() checks whether the rear pointer has reached
MAXSIZE to determine that the queue is full.
● isNull() checks whether the rear and front pointers
refer to no element (i.e., are -1).
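The operations above can be sketched with a simple array-based queue (a minimal illustration, assuming a fresh queue; a production version would wrap the indices circularly):

```c
#include <stdbool.h>

#define MAXSIZE 100

static int queue[MAXSIZE];
static int front = -1, rear = -1;   /* -1, -1 means the queue is empty */

bool isFull(void) { return rear == MAXSIZE - 1; }
bool isNull(void) { return front == -1 && rear == -1; }

/* enqueue: advance rear and store the element there. */
bool enqueue(int x) {
    if (isFull())
        return false;                /* overflow */
    if (isNull())
        front = 0;                   /* first element: front joins in */
    queue[++rear] = x;
    return true;
}

/* dequeue: read the element at front, then advance front. */
int dequeue(void) {
    if (isNull() || front > rear)
        return -1;                   /* underflow */
    return queue[front++];
}

/* peek: look at the front element without removing it. */
int peek(void) { return queue[front]; }
```

Usage: after enqueue(1); enqueue(2);, peek() yields 1 and dequeue() removes it.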
Applications of Queue
● Printers use queues to maintain the order of pages
while printing.
● Interrupts in computers are handled in the same
order as they arrive.
● Customer service systems develop call-center phone
systems using the concepts of queues.
3.4 Non-Linear Data Structures
● Non-linear data structures store data elements non-
sequentially, allowing complex relationships.
● Data elements can be present at multiple levels, as in
a tree.
● Multiple passes are required to traverse all the
elements completely.
Types of Non-Linear Data Structures
● Tree: Hierarchical structure where each node is
connected to one or more nodes in a parent-child
relationship.
● Graph: Collection of nodes (vertices) and edges
representing complex relationships.
Graph
● A graph can be defined as a group of vertices and the
edges that are used to connect these vertices.
● A graph G can be defined as an ordered set G(V, E)
where V(G) represents the set of vertices and E(G)
represents the set of edges which are used to
connect these vertices.
● A graph can be directed or undirected.
Graph Terminology:
● Path: the sequence of nodes followed in order to
reach some terminal node V from the initial node U.
● Cycle: a path with no repeated edges or vertices
except that the first and last vertices are the same.
● Digraph is a directed graph in which each edge of the
graph is associated with some direction and the
traversing can be done only in the specified direction.
Representations of Graph
● Here are the two most common ways to represent a
graph : Adjacency Matrix and Adjacency List.
● An adjacency matrix is a way of representing a graph
as a matrix of booleans (0s and 1s).
● Initially, the entire matrix is initialized to 0.
Graph Traversal in Data Structure
● We can traverse a graph in two ways: BFS (Breadth
First Search) and DFS (Depth First Search).
● The Breadth First Search (BFS) algorithm traverses a
graph level by level, using a queue to remember the
next vertex at which to start a search when a dead
end occurs in any iteration.
● The Depth First Search (DFS) algorithm traverses a
graph depthward, using a stack to remember the next
vertex at which to start a search when a dead end
occurs in any iteration.
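BFS over an adjacency matrix can be sketched as follows (a minimal illustration; `order` and `visited_count` are our own bookkeeping for the visit order):

```c
#include <stdbool.h>

#define V 5

int adj[V][V];              /* adjacency matrix, 0-initialized */
int order[V];               /* vertices in the order BFS visits them */
int visited_count = 0;

/* Breadth First Search from 'start', using an array-based queue to
   remember which vertex to explore next. */
void bfs(int start) {
    bool visited[V] = {false};
    int queue[V], front = 0, rear = 0;
    visited[start] = true;
    queue[rear++] = start;                   /* enqueue the start vertex */
    while (front < rear) {
        int u = queue[front++];              /* dequeue the next vertex */
        order[visited_count++] = u;
        for (int v = 0; v < V; v++)          /* enqueue unvisited neighbours */
            if (adj[u][v] && !visited[v]) {
                visited[v] = true;
                queue[rear++] = v;
            }
    }
}
```

DFS follows the same sketch with the queue replaced by a stack (or by recursion).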
3.4.1 Trees
● Tree data structures organize data hierarchically with
parent-child relationships.
● Examples include folder structures in operating
systems and tag structures in HTML/XML documents.
● Basic terms include parent node, child node, root
node, leaf node, ancestor, descendant, sibling, and
level.
Representation of Tree Data Structure:
● A tree consists of a root node, and zero or more
subtrees.
● There is an edge from the root node of the tree to the
root node of each subtree.
Types of Tree data structures:
● In a binary tree, each node can have a maximum of
two children linked to it.
● A Ternary Tree is a tree data structure in which each
node has at most three child nodes.
● Generic trees are a collection of nodes where each
node is a data structure that consists of records and a
list of references to its children.
Basic Operations of Tree Data Structure:
● Basic operations of the tree data structure include
Create, which builds a tree.
● Insert adds data to a tree, and Search looks for
specific data in a tree to check whether it is present
or not.
● Tree traversal means traversing, or visiting, each
node of a tree.
Tree Traversal
● Tree traversal techniques include preorder, inorder,
and postorder traversal.
● Preorder traversal follows 'root left right' policy, visiting
the root node before its subtrees.
● Postorder traversal follows 'left-right root' policy,
visiting the root node after its subtrees.
Inorder traversal
● This technique follows the 'left root right' policy. It
means that first left subtree is visited.
● After that root node is traversed, and finally, the right
subtree is traversed.
● As the root node is traversed between the left and
right subtree, it is named inorder traversal.
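The three traversal policies can be sketched as recursive functions (a minimal illustration; the helper `leaf` and the output-array convention are ours):

```c
#include <stdlib.h>

struct TNode {
    int data;
    struct TNode *left, *right;
};

/* Allocate a childless node holding 'v'. */
struct TNode *leaf(int v) {
    struct TNode *n = malloc(sizeof *n);
    n->data = v;
    n->left = n->right = NULL;
    return n;
}

/* Inorder ('left root right'): visited values are appended to out[],
   with *n tracking how many have been collected. */
void inorder(const struct TNode *r, int out[], int *n) {
    if (r == NULL) return;
    inorder(r->left, out, n);
    out[(*n)++] = r->data;
    inorder(r->right, out, n);
}

/* Preorder ('root left right'): root first. */
void preorder(const struct TNode *r, int out[], int *n) {
    if (r == NULL) return;
    out[(*n)++] = r->data;
    preorder(r->left, out, n);
    preorder(r->right, out, n);
}

/* Postorder ('left right root'): root last. */
void postorder(const struct TNode *r, int out[], int *n) {
    if (r == NULL) return;
    postorder(r->left, out, n);
    postorder(r->right, out, n);
    out[(*n)++] = r->data;
}
```

For a root 2 with children 1 and 3, inorder yields 1 2 3, preorder 2 1 3, and postorder 1 3 2.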
3.4.2 Binary Trees
● Binary trees organize data hierarchically with each
node having at most two children.
● A full binary tree is a tree where every node has either
0 or 2 children.
● A complete binary tree is a tree where all levels are
fully filled except possibly the last level, which is filled
from left to right.
Types of Binary Tree in Data Structure
● A perfect binary tree is a tree where all internal nodes
have exactly two children and all leaf nodes are at the
same level.
● A balanced binary tree is a tree where the height of
the left and right subtrees of any node differ by at
most one.
● A degenerate binary tree is a tree where each parent
node has only one child. This makes the tree look like
a linked list.
Binary Tree Operations with Examples
● Binary trees support insertion, deletion, searching,
and traversal.
● Insertion adds a new node, typically at the first
available position in level order.
● Deletion removes a node and replaces it with the
deepest and rightmost node.
Searching and Traversal cont
● Searching finds a node with a given value using any
traversal method.
● Traversal visits all nodes in a specific order: in-order,
pre-order, post-order, or level-order.
Time and Space Complexity of Binary Tree
● Worst-case time complexity of insertion is O(n), while
space complexity is O(n).
● Worst-case time complexity of deletion is O(n), while
space complexity is O(n).
● Worst-case time complexity of traversal (in-order, pre-
order, post-order) is O(n), while space complexity is
O(h), where h is the height of the tree.
Applications of Binary Trees with Examples
● Binary Search Trees (BST): Efficient searching,
insertion, and deletion.
● Heaps: Implement priority queues with efficient
maximum or minimum retrieval.
● Expression Trees: Represent and evaluate arithmetic
expressions.
UNIT-4
Data Storage
● Data is classified as structured, semi-structured, or
unstructured.
● A database is an organized collection of data used for
easy access, manipulation, and updates.
● A flat file database is a single table with rows of
information, suitable for simple data storage.
Flat File Database
● Flat file databases have a single table structure with
no data relationships, using simple formats like plain
text or CSV.
● These databases don't support complex querying and
have limited scalability, making them ideal for small
datasets.
● Flat-file databases are easy to use, require minimal
overhead, and offer portability, making them
accessible for non-technical users.
Uses of Flat-File Database
● Flat-file databases are used for simple data storage,
inventory management, address books, and task
management.
● They also suit financial tracking, personal collection
management, small-scale data analysis, and logging.
● Flat-file databases are useful for recipe management,
data import/export, and simple client databases.
Advantages and Disadvantages of Flat-File Database
● Advantages include simplicity, low resource
requirements, and no need for specialized software,
making them cost-effective and easy to port.
● Disadvantages include difficulty in updating and
changing data formats, poor performance with
complex queries, and increased redundancy.
● Flat file databases are suited to storing data in a
single table structure, whereas relational databases
store data across multiple related tables.
Relational Databases
● A database table is a collection of related data entries
organized in columns and rows.
● RDBMS is based on the relational model and uses
tables with primary keys to store data.
● Key terminologies include relations (tables), rows
(records/tuples), and columns (attributes).
RDBMS Terminologies and Properties
● A row contains specific information for each entry in
the table.
● A column contains all information associated with a
specific field.
● The degree of a table is the total number of attributes,
while cardinality is the total number of tuples.
Domains, Null Values, and Integrity in RDBMS
● A domain refers to the possible values each attribute
can contain, specified using standard data types.
● NULL values indicate a field was left blank during
record creation.
● Data integrity includes entity, domain, referential, and
user-defined integrity.
Characteristics and Constraints of Relational Model
● The relational model organizes data in tables with
atomic values and unique keys, ensuring data
independence and consistency.
● Integrity constraints enforce rules such as domain
constraints, entity integrity (no NULL primary keys),
and referential integrity.
● Constraints are checked before any operation
(insertion, deletion, or update), and the operation
fails if it violates any of the constraints.
Relational Model Operations
● Relational model operations include selection,
projection, and join to manipulate and retrieve data.
● Selection retrieves rows based on a condition, while
projection retrieves specific columns.
● Join combines rows from two or more tables based on
related columns, with types like inner, left outer, right
outer, and full outer join.
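As an illustration, the three operations can be written in SQL against hypothetical students and departments tables (the table and column names are ours, for illustration only):

```sql
-- Selection: retrieve rows matching a condition
SELECT * FROM students WHERE marks > 80;

-- Projection: retrieve only specific columns
SELECT name, marks FROM students;

-- Inner join: combine rows from two tables on a related column
SELECT s.name, d.dept_name
FROM students s
INNER JOIN departments d ON s.dept_id = d.dept_id;
```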
Set Operations in Relational Model
● Union combines the results of two queries, excluding
duplicates, while intersection returns only the rows
that appear in both relations.
● Difference returns rows that appear in the first relation
but not in the second.
● The relational database defines relationships in the
form of tables related to each other based on data
common to each.
DBMS vs. RDBMS
● DBMS stores data as files, whereas RDBMS stores
data in a tabular form.
● RDBMS supports normalization and ACID properties,
providing better security than DBMS.
● RDBMS supports distributed databases and handles
large amounts of data, while DBMS is meant for
smaller organizations.
Benefits of Relational Databases
● Relational databases offer flexibility, ACID
compliance, ease of use with SQL, and support
collaboration.
● They provide built-in security with role-based access
control and use normalization to reduce data
redundancy and improve integrity.
Data Read & Write in Local Storage
● Local storage involves storing data directly on a
device for accessible data without network reliance.
● Mechanisms involve opening, writing, and closing files
for writing, and opening, reading, and processing files
for reading.
● Local storage offers high performance, privacy, cost-
effectiveness, and offline accessibility.
Limitations and Types of Computer Memory
● Local storage has limited capacity, hardware
dependency, lack of remote accessibility, and risk of
hardware failure.
● Primary memory (RAM, ROM), secondary memory
(hard disks, SSDs), and tertiary memory are the main
types of computer memory.
● Primary storage devices are used for immediate
processing, while secondary storage is used for
permanent storage.
Types of Computer Storage Devices
● Primary storage devices include RAM (SRAM, DRAM,
SDRAM) and ROM (PROM, EPROM).
● Magnetic storage devices include floppy disks, hard
disks, magnetic cards, tape cassettes, and
SuperDisks.
● Flash memory devices include pen drives, SSDs, SD
cards, memory cards, and multimedia cards.
Optical Storage and Hard Disk Drives
● Optical storage devices include CDs (CD-R, CD-RW),
DVDs (DVD-R, DVD-RW), and Blu-ray Discs.
● Hard disk drives (HDDs) store data on tracks and
sectors, with read-write heads performing read and
write operations.
● Key factors affecting HDD performance include seek
time, rotational latency, data transfer time, and
controller time.
Data Read & Write in Server Storage
● A server provides functionality for clients, sharing data
or performing computations.
● Server storage allows users to store and access data
from multiple devices and locations via network
protocols.
● Server storage offers centralization, security,
collaboration, and customization, but has network and
security concerns.
Types of Servers
● Common server types include web servers, database
servers, file servers, application servers, and mail
servers.
● Proxy servers, DNS servers, game servers, virtual
servers, and FTP servers also exist.
● Data is stored on a server through a combination of
hardware and software systems, including physical
storage, RAID configurations, and file systems.
Data Storage Architecture and File Systems
● Physical storage includes HDDs and SSDs, often
grouped into RAID configurations for redundancy and
performance.
● File systems like NTFS, ext4, and XFS are used to
organize and manage data on the drives.
● Servers employ data redundancy and backup
strategies to prevent data loss, including RAID,
regular backups, and replication.
Security and Server Characteristics
● Security measures for data storage include
encryption, access control, and auditing.
● Key server characteristics are high performance,
reliability, always-on availability, and networked
operation.
● Servers provide resources and services while clients
request them, with applications including hosting
websites, managing data, and running enterprise
applications.
Data Read & Write in Cloud Storage
● Cloud storage is a virtual locker for remotely storing
data, copied over the Internet into data servers.
● Key features include resource availability, easy
maintenance, large network access, and security.
● Cloud storage systems include block-based, file-
based, and object-based storage.
Cloud Storage Architecture and Types
● Cloud storage architecture involves distributed
resources functioning as one, ensuring durability
through replication.
● Public cloud storage is provided by third-party
providers like AWS, Google Cloud, or Azure.
● Private cloud storage is dedicated to a single
organization, while hybrid cloud storage combines
public and private options.
Types of Cloud Storage and How It Works
● Hybrid Cloud Storage combines public and private
cloud storage to balance security and scalability.
● Uploading data involves using a client to send files to
the cloud provider, who manages storage,
redundancy, backups, and encryption.
● Data is accessed via the cloud provider’s API, web
interface, or integrated applications.
Benefits of Cloud Storage
● Cloud storage offers scalability, accessibility, cost-
effectiveness, and data redundancy.
● It also provides reliability, security with encryption and
access controls, and disaster recovery capabilities.
Cloud Storage Providers
● Leading cloud storage providers include Amazon Web
Services (AWS), Google Cloud, Microsoft Azure,
Dropbox, and Box.
● Apple iCloud and pCloud are also notable cloud
storage providers.
Database Query Methods (DDL, DML)
● SQL commands are categorized into Data Definition
Language (DDL), Data Manipulation Language
(DML), Data Control Language (DCL), Data Query
Language (DQL), and Transaction Control Language
(TCL).
● DDL is used for creating, modifying, and deleting
database objects like tables and indices.
● DDL statements describe, comment on, and label
database objects, imposing constraints on tables.
DDL Commands: CREATE, DROP, ALTER,
TRUNCATE, COMMENT, RENAME
● CREATE is used to create databases or objects,
while DROP deletes objects.
● ALTER modifies the structure of the database,
TRUNCATE removes all records from a table.
● COMMENT adds comments to the data dictionary,
and RENAME renames objects.
SQL CREATE TABLE Statement
● CREATE TABLE is used to create a new table in the
database, defining column names, data types, and
constraints.
● A table’s structure, including column names, data
types, and constraints like NOT NULL, PRIMARY
KEY, and CHECK, is defined when it is created in
SQL.
● It aids in ensuring data integrity.
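A minimal sketch of such a statement (the students table and its columns are hypothetical):

```sql
CREATE TABLE students (
    student_id INT PRIMARY KEY,
    name       VARCHAR(50) NOT NULL,
    marks      INT CHECK (marks BETWEEN 0 AND 100)
);
```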
SQL DROP Command
● In SQL, the DROP command is used to permanently
remove an object from a database, such as a table,
database, index, or view.
● When you DROP a table, the table structure and its
data are permanently deleted, leaving no trace of the
object.
● General form: DROP object_type object_name; for
example, DROP TABLE table_name;
TRUNCATE Command
● The TRUNCATE command removes all rows from a
table but preserves the structure of the table for future
use.
● The key difference between the DROP and
TRUNCATE statements is that DROP removes both
the data and the definition (the full structure), while
TRUNCATE preserves the structure and deletes only
the data.
SQL ALTER AND RENAME TABLE STATEMENT
● The ALTER TABLE statement in SQL is used to add,
remove, or modify columns in an existing table.
● It allows for structural changes like adding new
columns, modifying existing ones, deleting columns,
and renaming columns within a table.
● Syntax: ALTER TABLE table_name clause
[column_name] [datatype].
Common Use Cases for SQL ALTER TABLE
● You can add new columns to an existing table and
remove columns from the table if it's no longer
needed.
● You can change the data type or size of an existing
column, and rename an existing column in the table.
● Add constraints such as PRIMARY KEY, FOREIGN
KEY, UNIQUE, or CHECK to enforce rules on the
table and remove an existing constraint from the
table.
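These use cases can be sketched as follows (hypothetical students table; note that the exact keywords vary by dialect, e.g. MySQL/Oracle MODIFY versus SQL Server ALTER COLUMN):

```sql
ALTER TABLE students ADD email VARCHAR(100);        -- add a new column
ALTER TABLE students MODIFY marks SMALLINT;         -- change a column's type (MySQL/Oracle syntax)
ALTER TABLE students RENAME COLUMN marks TO score;  -- rename a column
ALTER TABLE students DROP COLUMN email;             -- remove a column
```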
Data Manipulation Language (DML) in SQL
● The SQL commands that deal with the manipulation
of data present in the database belong to DML.
● INSERT adds fresh data to a table, UPDATE changes
data that is already in a table, and DELETE takes a
record out of a table.
● DML statements operate on the data itself rather than
on the schema; DML is the dialect used to select,
insert, delete, and update data in a database.
SQL INSERT INTO Statement
● The INSERT INTO statement in SQL is used to add
new rows of data to a table in a database.
● There are two main ways to use the INSERT INTO
statement: by specifying the columns and values
explicitly, or by inserting values for all columns
without specifying them.
● There are two primary syntaxes of INSERT INTO
statements depending on the requirements.
SQL UPDATE Statement
● The UPDATE statement in SQL is used to modify
existing records in a table.
● We can update single columns as well as multiple
columns using the UPDATE statement as per our
requirement.
● Here, table_name is the name of the table; column1,
column2, ... are the columns to change; value1,
value2, ... are their new values; and condition selects
the rows to be updated.
SQL DELETE Statement
● The SQL DELETE statement removes one or more
rows from a database table based on a condition
specified in the WHERE clause.
● It's a DML (Data Manipulation Language) operation
that modifies the data within the table without altering
its structure.
● Syntax: DELETE FROM table_name WHERE
some_condition;
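The three DML statements can be sketched together (hypothetical students table and values):

```sql
-- INSERT: add a new row, naming the columns explicitly
INSERT INTO students (student_id, name, marks) VALUES (1, 'Asha', 92);

-- UPDATE: change existing data in the rows selected by WHERE
UPDATE students SET marks = 95 WHERE student_id = 1;

-- DELETE: remove the rows selected by WHERE
DELETE FROM students WHERE student_id = 1;
```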
UNIT-5
Networking Essentials
● Data communication involves exchanging data
between devices through a transmission medium,
relying on hardware and software components. Key
characteristics include delivery, accuracy, timeliness,
and jitter-free transmission.
● Components of data communication are: message,
sender, receiver, transmission medium, and protocol.
Protocols govern data communication by providing
the rules for communication.
● Computer network hardware includes servers, clients,
peers, transmission media, and connecting devices
like routers and switches. Networking software
consists of network operating systems and protocol
suites such as OSI and TCP/IP.
Data Flow and Network Connections
● Communication modes include simplex
(unidirectional), half-duplex (transmit or receive), and
full-duplex (simultaneous transmit and receive).
Connection types are point-to-point (dedicated link)
and multipoint (shared link).
● Physical topologies define the layout of a network.
Common topologies include mesh (dedicated links to
every device), star (central hub), bus (shared
backbone cable), and ring (devices connected in a
circle).
● Network types are LAN (local area network) and WAN
(wide area network), differing in geographical span.
WANs can be point-to-point or switched.
IP Addressing
● An IP address uniquely identifies a device on a
network, enabling data transmission and reception. IP
addresses facilitate device access to the internet via
Internet Service Providers (ISPs).
● IPv4 addresses use a 32-bit format while IPv6
addresses use a 128-bit format to accommodate
more devices. IP addresses are classified based on
usage as public (globally unique) or private (local
network).
● IP addresses can be assigned as static (permanent)
or dynamic (temporary, via DHCP). IPv4 addresses
are further classified into Classes A, B, C, D, and E,
each designated for different network sizes and
purposes.
Configuring and Managing the Campus Network
● A campus network interconnects LANs across
multiple buildings, emphasizing reliability and
scalability. The design involves core, distribution, and
access layers.
● Key network design considerations are high speed,
redundancy, minimal policy application at the core
layer, and efficient data flow. Media selection
depends on desired speed, distance, and budget.
● Load balancing distributes network traffic across
multiple interfaces to prevent overloads. IP
assignment can be static or dynamic (DHCP).
Network Services and VLAN Configuration
● DNS (Domain Name System) translates domain
names to IP addresses and vice versa. DHCP
automates IP address allocation.
● VLANs logically segment a network, enhancing
security and reducing broadcast traffic. VLAN types
include port-based, tag-based (802.1Q), and protocol-
based.
● Network monitoring with SNMP (Simple Network
Management Protocol) gathers performance data.
Traffic analysis identifies usage patterns and potential
issues.
Network Security
● Network security protects data and network integrity
through various layers of defense. Aspects of network
security include privacy, message integrity, endpoint
authentication, and non-repudiation.
● Security types are email security, network
segmentation, access control, sandboxing, cloud
network security, web security, firewalls, application
security, and mobile device security. Security tools
include firewalls, VPNs, intrusion prevention systems,
and behavioral analysis.
● Network security involves physical, technical, and
administrative measures. Firewalls are crucial for
monitoring and filtering network traffic based on
defined security rules.
Firewalls
● Firewalls filter packets between an organization's
internal network and the Internet. They can be packet-
filter firewalls (filtering based on network and transport
layer headers) or proxy-based firewalls (filtering
based on application layer content).
● Packet-filter firewalls use filtering tables to decide
which packets to discard, based on IP addresses, port
addresses, and protocols. Proxy firewalls examine
message content at the application level to determine
if a request is legitimate before forwarding it.
● Proxy firewalls can implement policies based on URL
content, distinguishing between packets arriving at
the same port. They act as intermediaries between
client and server, providing an additional layer of
security.