🔥 High Priority (Must-Prepare)
These are essential questions you should master for high scores:
1. Minimize the DFA shown in the given figure.
2. Show that the language L = {aⁿbⁿ | n ≥ 0} is not regular.
3. Construct a PDA accepting the set of all strings over {a, b} with an equal number of a's and b's.
4. Design a Turing Machine over {1, b} which can compute the concatenation function over Σ = {1}.
5. Differentiate between deterministic and non-deterministic finite automata. Convert
NDFA to DFA.
6. Explain Greibach and Chomsky Normal form.
7. Explain the procedure for minimization of finite automata with example.
8. Explain Hamiltonian path problem.
9. Design a DFA to accept the language L = {w ∈ {a, b, c}* where w starts and ends with the same symbol}.
10. Design a Turing Machine which recognizes L = {1ⁿ 2ⁿ 3ⁿ | n > 0}.
11. Write an unrestricted grammar to accept L = {aⁱ bʲ cᵏ dⁿ | i, j, k, n ≥ 1}.
12. Pushdown Automata for a given language—explanation with example.
13. Chomsky classification of languages with a suitable example.
⚡ Medium Priority (Prepare Selectively)
These strengthen your understanding but don’t appear as often:
1. Explain leftmost & rightmost derivation, sentential forms, and null production.
2. E-R Model to Relational Model conversion (Library Management System).
3. SQL Queries (Doctor-Patient System).
4. Shadow paging & log-based recovery techniques—Advantages & Disadvantages.
5. Normalization & Normal Forms with examples.
6. Concurrency Control (Lock-based & Timestamp-based protocols).
7. Pushdown Automaton model with diagram.
8. Explain regular languages & list applications (Compiler/Web Design).
9. Explain Recursive & Recursively Enumerable Language.
10. Properties of Context-Free Grammar.
11. Variations of the Turing Machine.
Low Importance Questions (Rarely Asked or Less Critical)
These questions are still important but appear less frequently in exams:
1. Explain the relationship between Hamiltonian Path and Euler Path.
2. Unrestricted grammar construction for a given language.
3. Shadow paging & log-based recovery techniques comparison.
4. Normalization forms with examples—rarely asked in depth in computation.
5. Write short note on concurrency control schemes (Lock-based and
Timestamp-based protocols).
6. Difference between DFA and NDFA conversion (this is sometimes covered under a
broader question).
7. Explain the representation methods of Turing Machine with examples.
8. Regular Language applications beyond computation theory—more conceptual
than problem-solving.
9. Find doctors specializing in ‘NEURO’—a SQL-related question, not entirely
computation-focused.
🔥 High Priority Part-C Answers
1. Minimization of Deterministic Finite Automaton (DFA)
Minimizing a DFA means reducing the number of states while keeping the accepted language
unchanged. This makes the DFA simpler and more efficient.
Steps for Minimization
1. Remove unreachable states – These are states that cannot be reached from the start
state.
2. Merge equivalent states – Group states that behave the same for all inputs.
3. Construct the minimized DFA – Redefine the state transitions with fewer states while
maintaining the language.
Example: Minimizing a DFA
Given DFA:

| State | 0 | 1 | Final State |
|-------|----|----|-------------|
| q0 | q1 | q2 | No |
| q1 | q0 | q2 | Yes |
| q2 | q4 | q3 | No |
| q3 | q0 | q4 | Yes |
| q4 | q2 | q3 | No |

Step 1: Partition States
● Final states: {q1, q3}
● Non-final states: {q0, q2, q4}
Step 2: Merge Equivalent States (after refining: on input 0, q0 reaches a final state while q2 and q4 do not, so q0 stays separate)
● {q1, q3} → Q1
● {q2, q4} → Q2
● {q0} → Q0
Minimized DFA

| New State | 0 | 1 |
|-----------|----|----|
| Q0 | Q1 | Q2 |
| Q1 | Q0 | Q2 |
| Q2 | Q2 | Q1 |
✅ Result: Optimized DFA with fewer states, making computation more efficient.
Conclusion
Minimizing a DFA reduces complexity, improves efficiency, and simplifies automata
representation without altering its behavior.
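The refinement steps above can be sketched in a few lines of Python. This is an illustrative implementation of the partition-refinement idea, run on a small five-state example DFA with final states {q1, q3}; the state names and transition table are chosen for illustration.

```python
# Step 1 of refinement: split into final and non-final blocks, then keep
# splitting any block whose members disagree on where their transitions lead.
def minimize(states, alphabet, delta, finals):
    partition = [set(finals), set(states) - set(finals)]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for group in partition:
            # Two states stay together only if, for every input symbol,
            # their successors lie in the same current block.
            buckets = {}
            for s in group:
                key = tuple(
                    next(i for i, g in enumerate(partition) if delta[s][a] in g)
                    for a in alphabet
                )
                buckets.setdefault(key, set()).add(s)
            new_partition.extend(buckets.values())
            if len(buckets) > 1:
                changed = True
        partition = new_partition
    return partition

delta = {
    "q0": {"0": "q1", "1": "q2"},
    "q1": {"0": "q0", "1": "q2"},
    "q2": {"0": "q4", "1": "q3"},
    "q3": {"0": "q0", "1": "q4"},
    "q4": {"0": "q2", "1": "q3"},
}
blocks = minimize(list(delta), "01", delta, {"q1", "q3"})
print([sorted(b) for b in blocks])  # three blocks: {q1, q3}, {q0}, {q2, q4}
```

Each returned block becomes one state of the minimized DFA.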
2. Proving L = {aⁿbⁿ | n ≥ 0} is Not Regular Using the Pumping Lemma
Introduction
To prove that L = {aⁿbⁿ | n ≥ 0} is not regular, we use the Pumping Lemma, a key tool to show
that certain languages cannot be expressed using a finite automaton.
Pumping Lemma Statement
If L is a regular language, then there exists a pumping length p such that any string s in L,
where |s| ≥ p, can be split as s = xyz, satisfying:
1. |xy| ≤ p
2. |y| > 0 (y is not empty)
3. xyⁱz ∈ L for all i ≥ 0 (pumping y must always result in a valid string)
Proof by Contradiction
Step 1: Assume L is Regular
● Suppose L = {aⁿbⁿ | n ≥ 0} is regular.
● By the Pumping Lemma, there exists a pumping length p.
Step 2: Choose a String from L
Select s = aⁿbⁿ, where n ≥ p (so the string is long enough to be pumped).
Example: If p = 4, take s = a⁴b⁴ → aaaa bbbb.
Step 3: Split into xyz (Pumping Lemma Conditions)
Divide s into parts xyz, ensuring |xy| ≤ p:
● Let x = aᵏ, y = aʳ, z = aⁿ⁻⁽ᵏ⁺ʳ⁾bⁿ
● Here, y consists only of a’s.
Step 4: Pumping y Should Still Be in L
● Consider xy²z (i.e., repeat y once more):
○ New string = aⁿ⁺ʳbⁿ
○ Number of a's is greater than the number of b's.
Step 5: Contradiction
● The new string does not follow L’s condition (equal a’s and b’s).
● Since the Pumping Lemma requires all pumped versions to stay in L, but this does not,
it proves L cannot be regular.
Since the Pumping Lemma leads to a contradiction, the initial assumption (L is regular) must be false.
✅ Therefore, L is not a regular language.
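The pumping argument can be checked numerically. This snippet is an illustration (not a proof): it pumps the all-a block y of s = aᵖbᵖ for one valid split and confirms that every pumped version except the original leaves L.

```python
# Membership test for L = {a^n b^n | n >= 0}.
def in_L(s):
    half = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * half + "b" * half

p = 4
s = "a" * p + "b" * p              # s = aaaabbbb
x, y, z = "a", "aa", s[3:]         # one split with |xy| <= p and |y| > 0
results = {i: in_L(x + y * i + z) for i in range(4)}
print(results)                     # only i == 1 (the unpumped string) stays in L
```

Because pumping y changes only the number of a's, every i ≠ 1 produces a string with unequal counts, exactly as the proof argues.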
3. Constructing a Pushdown Automaton (PDA) for Strings with Equal ‘a’s
and ‘b’s
A Pushdown Automaton (PDA) is a type of finite automaton with a stack, enabling it to
recognize context-free languages. Here, we design a PDA that accepts strings containing
equal numbers of ‘a’s and ‘b’s.
PDA Design Principles
● States: q0 (Start), q1 (Processing), q_accept (Accept).
● Stack Operations:
○ Push a symbol for each ‘a’ or ‘b’ that adds to the current surplus.
○ Pop when the input symbol cancels one unit of the opposite surplus.
● Acceptance Condition: The string is valid if only the bottom marker remains on the stack at the end.
Formal PDA Definition
A PDA is defined as a 7-tuple (Q, Σ, Γ, δ, q0, Z, F):
1. Q = {q0, q1, q_accept} (Set of states)
2. Σ = {a, b} (Input alphabet)
3. Γ = {A, B, Z} (Stack symbols, where Z is the initial stack marker)
4. δ (Transition function):
○ (q0, a, Z) → (q1, AZ) and (q0, b, Z) → (q1, BZ) → Record the first symbol read
○ (q1, a, A) → (q1, AA) and (q1, b, B) → (q1, BB) → Grow the current surplus
○ (q1, b, A) → (q1, ε) and (q1, a, B) → (q1, ε) → Cancel one surplus symbol
○ (q1, ε, Z) → (q_accept, Z) → Accept when only the marker remains
5. q0 = Start state
6. F = {q_accept} (Final state)
Example Strings Accepted by PDA
✅ aabb
1. Push ‘A’, Push ‘A’ → Stack = AAZ
2. Pop ‘A’, Pop ‘A’ → Stack = Z ✅ Accepted
✅ abab
1. Push ‘A’ → Stack = AZ
2. Pop ‘A’ → Stack = Z
3. Push ‘A’ → Stack = AZ
4. Pop ‘A’ → Stack = Z ✅ Accepted
This PDA recognizes strings with equal numbers of ‘a’s and ‘b’s by using its stack as a counter. Unbalanced strings such as aaa or abb leave the stack holding more than the marker and are rejected.
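The stack behaviour can be simulated directly. This sketch includes symmetric rules for a b-surplus (pushing B when b's run ahead), an assumption needed so that strings like "ba" and "bbaa", which also have equal counts, are accepted.

```python
# Simulate the PDA's stack; Z is the bottom-of-stack marker.
def pda_accepts(w):
    stack = ["Z"]
    for ch in w:
        if ch not in "ab":
            return False
        opposite = "B" if ch == "a" else "A"
        if stack[-1] == opposite:
            stack.pop()                 # cancel one unit of the opposite surplus
        else:
            stack.append("A" if ch == "a" else "B")
    return stack == ["Z"]               # accept iff only the marker remains

print([pda_accepts(w) for w in ["aabb", "abab", "ba", "aaa", "abb"]])
```

Accepted strings empty the surplus back to the marker Z; rejected strings leave extra symbols behind.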
4. Designing a Turing Machine for Concatenation
A Turing Machine (TM) for concatenation takes two words w₁ and w₂ and joins them together
by shifting w₂ next to w₁ on the tape. The "#" symbol is used as a separator.
Turing Machine Components
1. Tape: Stores w₁, #, w₂ and provides space for movement.
2. States:
○ q0 (Start) – Reads w₁ and moves right.
○ q1 (Find #) – Skips to the separator #.
○ q2 (Shift w₂ Left) – Moves w₂ to overwrite #.
○ q_accept (Final State) – Ensures w₁ and w₂ are joined correctly.
Transitions

| Current State | Read Symbol | Write Symbol | Move | Next State |
|---|---|---|---|---|
| q0 | x | x | → | q0 |
| q0 | y | y | → | q0 |
| q0 | # | # | → | q1 |
| q1 | z | ε (erase) | ← | q2 |
| q2 | y | z | ← | q2 |
| q2 | x | y | ← | q2 |
| q2 | ε | x | → | q_accept |
Example Execution
Input: xy#yz
Steps:
1. Move right to find the separator # (q1).
2. Erase # and shift w₂ (yz) one cell to the left (q2).
3. Final Tape Output: xyyz ✅
The Turing Machine successfully concatenates w₁ and w₂, demonstrating fundamental
computation principles in automata theory.
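The machine's net effect is easy to simulate: erase the separator and copy each symbol of w₂ one cell left. This sketch captures that effect (note that concatenating w₁ = xy with w₂ = yz yields xyyz); the cell-by-cell loop mirrors the left-shift phase.

```python
# Erase '#' and shift everything after it one cell to the left.
def concat_tape(tape):
    cells = list(tape)
    sep = cells.index("#")
    for j in range(sep, len(cells) - 1):
        cells[j] = cells[j + 1]   # copy the next w2 symbol one cell left
    cells.pop()                   # the last cell becomes blank
    return "".join(cells)

print(concat_tape("xy#yz"))  # xyyz
```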
5. DFA vs. NDFA & NDFA to DFA Conversion
The table below compares Deterministic Finite Automata (DFA) and Non-Deterministic Finite Automata (NDFA), highlighting the key differences:
| Feature | DFA (Deterministic Finite Automaton) | NDFA (Non-Deterministic Finite Automaton) |
|---|---|---|
| Transitions per input | Exactly one transition for each input | Can have multiple transitions for the same input |
| ε-transitions | Not allowed | Allowed |
| State behavior | Each state has a fixed next state | Multiple possible paths exist |
| Determinism | Fully predictable | Explores multiple paths simultaneously |
| Complexity | Easier to implement | More flexible but harder to construct |
| Acceptance of string | One unique path determines acceptance | Any one accepting path suffices |
| Conversion | Already deterministic (trivially an NDFA) | Can be converted to an equivalent DFA |
| Use cases | Lexical analyzers, regular expressions | AI decision-making, theorem proving |
✅ Key Takeaway:
● DFAs are structured, predictable, and efficient.
● NDFAs provide more flexibility but need conversion for practical applications.
NDFA to DFA Conversion
Converting a Non-Deterministic Finite Automaton (NDFA) into a Deterministic Finite
Automaton (DFA) ensures a structured, predictable computational model. The process
involves removing non-determinism by defining unique transitions for each input.
Steps for NDFA to DFA Conversion
1. Identify States in NDFA
List all states and their possible transitions.
2. Create DFA State Sets
● Group NDFA states into combined DFA states.
● Each set of NDFA states forms one DFA state.
3. Define DFA Transitions
● Construct transition table ensuring each DFA state follows a unique path.
● Resolve ε-transitions (empty moves) by merging states.
4. Remove Unreachable States
Identify states that cannot be reached from the start state and eliminate them.
5. Finalize DFA Transition Diagram
Ensure the DFA follows the one transition per input rule.
Example: NDFA to DFA Conversion
Given NDFA:

| State | Input = a | Input = b | ε-transition |
|---|---|---|---|
| q0 | {q1, q2} | - | {q3} |
| q1 | q2 | q3 | - |
| q2 | q3 | - | - |
| q3 | - | q_accept | - |
Step 1: Create DFA State Sets (starting from the ε-closure of q0)
● {q0, q3} → Q0
● {q1, q2} → Q1
● {q2, q3} → Q2
● {q3} → Q3
● {q_accept} → Q4
Step 2: Define DFA Transitions

| DFA State | Input = a | Input = b |
|---|---|---|
| Q0 = {q0, q3} | Q1 | Q4 |
| Q1 = {q1, q2} | Q2 | Q3 |
| Q2 = {q2, q3} | Q3 | Q4 |
| Q3 = {q3} | - | Q4 |
| Q4 = {q_accept} | - | - |
✅ Result: The DFA eliminates non-determinism while preserving language acceptance.
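The subset construction itself fits in a short sketch. For brevity this version has no ε-moves (the worked example folds its ε-transition into the start set by hand); each DFA state is a frozenset of NDFA states, and an empty target set means "no move".

```python
from collections import deque

def ndfa_to_dfa(delta, start, alphabet):
    start_set = frozenset([start])
    dfa, queue = {}, deque([start_set])
    while queue:
        S = queue.popleft()
        if S in dfa:
            continue                     # already expanded
        dfa[S] = {}
        for a in alphabet:
            # union of all NDFA moves from the states in S on symbol a
            T = frozenset(t for s in S for t in delta.get(s, {}).get(a, ()))
            dfa[S][a] = T
            if T:
                queue.append(T)
    return dfa

nd = {                                   # an ε-free version of the example NDFA
    "q0": {"a": {"q1", "q2"}},
    "q1": {"a": {"q2"}, "b": {"q3"}},
    "q2": {"a": {"q3"}},
    "q3": {"b": {"q_accept"}},
}
dfa = ndfa_to_dfa(nd, "q0", "ab")
print(sorted(tuple(sorted(S)) for S in dfa))
```

A DFA state is accepting whenever its set contains an NDFA final state.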
6. Explain Greibach and Chomsky Normal form.
Greibach & Chomsky Normal Forms in Context-Free Grammars (CFGs)
1. Chomsky Normal Form (CNF)
A Context-Free Grammar (CFG) is in Chomsky Normal Form if all production rules follow
these rules:
1. A → BC → Where A, B, C are non-terminals (B, C ≠ start symbol).
2. A → a → Where a is a single terminal.
✅ Example:
Original Rule: S → AB | aB | ε
CNF Equivalent (after removing the ε-production):
● S → AB | XB
● X → a
✅ Usage: CNF is heavily used in parsing algorithms like CYK parsing, as it simplifies CFGs
into a structured form that can be efficiently processed.
2. Greibach Normal Form (GNF)
A CFG is in Greibach Normal Form if all production rules begin with a terminal, followed by
non-terminals:
1. A → aBC → Where a is a terminal and B, C are non-terminals.
2. No ε-productions (empty strings).
✅ Example:
Original Rules: S → AB, A → a, B → b
GNF Equivalent (substitute A's terminal into S):
● S → aB
● A → a
● B → b
✅ Usage: GNF is useful for recursive descent parsing, as each derivation starts directly
with a terminal.
Comparison Between CNF & GNF
Feature Chomsky Normal Form (CNF) Greibach Normal Form (GNF)
Structure Rules involve two non-terminals Rules start with a terminal,
or a single terminal followed by non-terminals
Use Case Used in CYK parsing, efficient Used in recursive descent
grammar analysis parsing, simplifies syntax
ε-productions Allowed Not allowed
Conversion Easier to convert grammars into GNF conversion requires extra
Complexity CNF steps
7. DFA Minimization Procedure with Example
Minimizing a Deterministic Finite Automaton (DFA) reduces the number of states while
preserving the language it recognizes. This process improves efficiency and makes
computation simpler.
Steps for DFA Minimization
Step 1: Remove Unreachable States
● Start from the initial state and find all states reachable through transitions.
● Eliminate states that are never reached.
Step 2: Merge Indistinguishable States (State Partitioning Method)
● Partition states into groups based on their behavior.
● Initially, separate final states and non-final states.
● Then, refine partitions by checking transition behavior.
Step 3: Construct the Final Minimized DFA
● Define new merged states based on the partitioning.
● Update the transition table with the reduced states.
● Draw the optimized DFA diagram ensuring correctness.
Example: DFA Minimization Using Partitioning
Given DFA:

| State | Input = 0 | Input = 1 | Final State |
|---|---|---|---|
| q0 | q1 | q2 | ❌ No |
| q1 | q0 | q2 | ✅ Yes |
| q2 | q4 | q3 | ❌ No |
| q3 | q0 | q4 | ✅ Yes |
| q4 | q2 | q3 | ❌ No |

Step 1: Partition States
● Final states: {q1, q3}
● Non-final states: {q0, q2, q4}
Step 2: Refine and Merge Equivalent States
● On input 0, q0 moves to a final state (q1) while q2 and q4 do not, so q0 splits away from {q2, q4}; further refinement produces no new splits.
● {q1, q3} → Q1
● {q2, q4} → Q2
● {q0} → Q0
Step 3: Minimized DFA

| New State | Input = 0 | Input = 1 |
|---|---|---|
| Q0 | Q1 | Q2 |
| Q1 | Q0 | Q2 |
| Q2 | Q2 | Q1 |
✅ Final Result: The minimized DFA has fewer states but retains the same recognition ability.
DFA minimization simplifies state transitions, reduces computation time, and improves
efficiency while preserving the accepted language.
8. Explain Hamiltonian path problem.
The Hamiltonian Path problem is a fundamental problem in graph theory. It determines
whether there exists a path in a given graph that visits each vertex exactly once. If this path
cycles back to the starting vertex, it is known as a Hamiltonian Cycle.
Definition & Characteristics
A Hamiltonian Path in a graph G(V, E) is a sequence of vertices v1, v2, ..., vn such that:
1. Each vertex is visited exactly once.
2. Edges exist between consecutive vertices.
3. It may or may not end at the starting vertex (forming a cycle).
✅ Hamiltonian Cycle: If the last vertex reconnects to the starting vertex, forming a closed
loop, the path becomes a Hamiltonian Cycle.
Complexity & NP-Completeness
The Hamiltonian Path problem is classified as NP-complete, meaning:
● No known polynomial-time algorithm exists to solve it efficiently for all cases.
● Finding Hamiltonian paths requires exponential time in the worst case.
● The problem is often solved using backtracking, dynamic programming, or
heuristics.
Example: Checking Hamiltonian Paths in a Graph
Consider the graph:
A —— B —— C
| | |
D —— E —— F
Possible Hamiltonian Paths:
● A → B → C → F → E → D
● D → A → B → C → F → E
✅ The graph can contain multiple Hamiltonian paths, but finding one in general requires systematically checking vertex sequences.
Hamiltonian Path problems are crucial in route optimization, network analysis, and computational biology. Since the problem is NP-complete, solving it for large graphs requires advanced computational techniques.
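A brute-force search makes the exponential cost concrete. This sketch enumerates every vertex ordering of the six-vertex grid graph above and keeps those whose consecutive vertices are adjacent; the edge set matches the drawing (top row A–B–C, bottom row D–E–F, verticals A–D, B–E, C–F).

```python
from itertools import permutations

edges = {("A", "B"), ("B", "C"), ("A", "D"), ("B", "E"),
         ("C", "F"), ("D", "E"), ("E", "F")}

def adjacent(u, v):
    return (u, v) in edges or (v, u) in edges

# Try all 6! orderings; keep those that form a path in the graph.
paths = [p for p in permutations("ABCDEF")
         if all(adjacent(p[i], p[i + 1]) for i in range(5))]
print(len(paths), paths[0])  # counts directed paths (each undirected path twice)
```

For six vertices this is instant; for n vertices it is n! orderings, which is why heuristics and backtracking with pruning are used in practice.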
9. Design a DFA to accept the language L = {w ∈ {a, b, c}* : w starts and ends with the same symbol}.
A Deterministic Finite Automaton (DFA) is required to recognize strings over {a, b, c} where
the first and last symbols are the same.
State Definitions
1. q0 (Start State) – Before reading any input.
2. q1, q2, q3 (Tracking Start Symbol) – To remember if the string starts with ‘a’, ‘b’, or ‘c’.
3. q_accept (Final State) – If the last symbol matches the first, the string is accepted.
Transitions
● If the first symbol is ‘a’, transition to q1 and keep track.
● If the first symbol is ‘b’, transition to q2.
● If the first symbol is ‘c’, transition to q3.
● If the last symbol matches the initial symbol, move to q_accept.
✅ Example Strings Accepted: "aa", "aba", "cac", "bbb"
❌ Example Strings Rejected: "abc", "bca", "cca" (the last symbol differs from the first)
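The state idea above can be simulated in a few lines. This sketch tracks which of q1/q2/q3 the first symbol selected, plus a flag for whether the most recent symbol matches it; unrolling the flag into the states gives the literal 7-state DFA.

```python
def dfa_accepts(w):
    state = "q0"
    accepting = False
    for c in w:
        if c not in "abc":
            return False
        if state == "q0":
            state = {"a": "q1", "b": "q2", "c": "q3"}[c]
            accepting = True            # a one-symbol string accepts
        else:
            first = {"q1": "a", "q2": "b", "q3": "c"}[state]
            accepting = (c == first)    # does the symbol just read match the first?
    return state != "q0" and accepting

print([dfa_accepts(w) for w in ["aa", "aba", "cac", "bbb", "abc", "bca", "cca"]])
```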
10. Design a Turing Machine which recognizes L = {1ⁿ 2ⁿ 3ⁿ | n > 0}.
A Turing Machine (TM) needs to recognize strings where the number of 1s, 2s, and 3s are
equal (1ⁿ 2ⁿ 3ⁿ, where n > 0). This ensures sequential matching of symbols before
accepting or rejecting the input.
Working Principle
1. Mark the first unmarked 1, then find a corresponding 2, mark it.
2. Locate a matching 3, mark it.
3. Repeat until all symbols are processed correctly.
4. If all 1s, 2s, and 3s match in count, accept; otherwise, reject.
Turing Machine Transitions

| Current State | Read Symbol | Write Symbol | Move | Next State |
|---|---|---|---|---|
| q0 | 1 | X (mark) | → | q1 |
| q1 | 1 | 1 | → | q1 |
| q1 | 2 | Y (mark) | ← | q2 |
| q2 | 1 | 1 | ← | q2 |
| q2 | X | X | → | q3 |
| q3 | 3 | Z (mark) | → | q4 |
| q4 | Blank | Blank | → | q_accept |
✅ Stepwise Execution Example:
For input 111222333:
● Mark 1 → 2 → 3 → continue processing
● If the counts match, accept ✅
● If the counts differ, reject ❌
This Turing Machine systematically validates whether a string belongs to 1ⁿ 2ⁿ 3ⁿ,
ensuring equal distribution of symbols before accepting or rejecting.
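The marking strategy can be checked with a small simulation. This sketch follows the working principle above (each round marks one 1 as X, one 2 as Y, one 3 as Z, in left-to-right order) rather than the literal transition table, and accepts only when every symbol has been marked.

```python
def tm_accepts(s):
    tape = list(s)
    while "1" in tape:
        try:
            i = tape.index("1"); tape[i] = "X"
            j = tape.index("2"); tape[j] = "Y"
            k = tape.index("3"); tape[k] = "Z"
        except ValueError:
            return False               # not enough 2s or 3s to match
        if not (i < j < k):
            return False               # symbols appear out of order
    # reject the empty string, leftover 2s/3s, and any foreign symbol
    return s != "" and all(c in "XYZ" for c in tape)

print([tm_accepts(s) for s in ["123", "111222333", "112233", "1223", "132", ""]])
```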
11. Write an unrestricted grammar to accept L = {aⁱ bʲ cᵏ dⁿ | i, j, k, n ≥ 1}.
An Unrestricted Grammar is a Type-0 grammar in the Chomsky hierarchy, meaning it has no
restrictions on production rules. It allows any transformation from the left-hand side (LHS) to
the right-hand side (RHS), making it the most powerful form of grammar.
Production Rules
To generate strings in L = {aⁱ bʲ cᵏ dⁿ | i, j, k, n ≥ 1}, the following unrestricted grammar can
be defined:
Grammar Definition
● Start Symbol → S
● Production Rules:
1. S → ABCD
2. A → aA | a
3. B → bB | b
4. C → cC | c
5. D → dD | d
(Each non-terminal generates at least one copy of its terminal, so every symbol appears at least once. The grammar happens to be context-free, and every context-free grammar is also a valid Type-0 grammar.)
Example: Derivation of a Valid String
For aabbccdd, apply the rules:
1. S → ABCD
2. ABCD → aABCD → aaBCD (A → aA, then A → a)
3. aaBCD → aabBCD → aabbCD (B → bB, then B → b)
4. aabbCD → aabbcCD → aabbccD (C → cC, then C → c)
5. aabbccD → aabbccdD → aabbccdd (D → dD, then D → d)
✅ Final output: aabbccdd (Valid)
This grammar generates all required strings recursively, allowing any values of i, j, k, n as long as each symbol appears at least once.
12. Pushdown Automata for Given Language
A Pushdown Automaton (PDA) is a computational model that extends a Finite Automaton
with a stack, enabling it to process Context-Free Languages (CFLs). The stack helps handle
recursive structures like balanced parentheses or equal occurrences of symbols.
PDA Working Principle
1. Push symbols onto the stack when processing input.
2. Pop matching symbols to ensure the language’s constraints are met.
3. Accept the string if the stack is empty at the end, indicating all symbols were correctly
processed.
Formal Definition of PDA
A PDA is defined as a 7-tuple (Q, Σ, Γ, δ, q0, F), where:
● Q → Set of states {q0, q1, q_accept}
● Σ → Input alphabet {a, b}
● Γ → Stack symbols {A, B, Z} (Z is the initial stack marker)
● δ → Transition function:
○ (q0, a, Z) → (q1, AZ) and (q0, b, Z) → (q1, BZ) → Record the first symbol read
○ (q1, a, A) → (q1, AA) and (q1, b, B) → (q1, BB) → Grow the current surplus
○ (q1, b, A) → (q1, ε) and (q1, a, B) → (q1, ε) → Cancel one surplus symbol
○ (q1, ε, Z) → (q_accept, Z) → Accept when only the marker remains
● q0 → Initial state
● F → Accepting state {q_accept}
Example: PDA for Balanced Strings (Equal a’s and b’s)
✅ Input: "aabb"
1. Push a’s: Stack → AAZ
2. Pop for b’s: Stack → Z ✅ Accepted
✅ Input: "abab"
1. Push 'a' → Pop 'b' → Push 'a' → Pop 'b'
2. Only the marker Z remains → ✅ Accepted
🚫 Rejected Example: "aaa" → Stack not empty → ❌ Rejected
A PDA successfully processes context-free languages using a stack to track symbols and
ensure balanced structure. It plays a critical role in parsing programming languages and
validating structured input formats.
13. Chomsky classification of languages with a suitable example.
The Chomsky Hierarchy, proposed by Noam Chomsky, categorizes formal languages based
on their complexity and computational power. This hierarchy helps understand how different
types of grammars relate to computational models.
The Four Language Types
1. Type 3 - Regular Languages
● Recognized by Finite Automata (FA).
● Grammar Rules: Right-linear or Left-linear production rules.
● Example:
○ S → aS | b
● Usage: Lexical Analysis, Regular Expressions, Pattern Matching.
2. Type 2 - Context-Free Languages (CFLs)
● Processed using Pushdown Automata (PDA).
● Grammar Rules:
○ Productions must have a single non-terminal on the left-hand side.
● Example:
○ S → aSb | ε (Generates balanced strings like aaabbb).
● Usage: Programming Languages (CFGs define syntax), Compiler Design.
3. Type 1 - Context-Sensitive Languages (CSLs)
● Needs Linear Bounded Automata (LBA) for recognition.
● Grammar Rules:
○ Productions can have context conditions (i.e., length must increase or stay the
same).
● Example:
○ CA → CaB (A is rewritten to aB only in the context of a preceding C).
● Usage: Natural Language Processing (NLP), Advanced Parsing.
4. Type 0 - Unrestricted Languages
● Recognized by Turing Machines (TM).
● Grammar Rules:
○ No restrictions—can have any production rule.
● Example:
○ S → aSb, A → cAd, A → ε (productions may take any form, including length-decreasing ones).
● Usage: Mathematical Computations, AI Reasoning, Recursive Functions.
Comparison Table

| Type | Recognized By | Grammar Restrictions | Examples |
|---|---|---|---|
| Type 3 (Regular) | Finite Automata (FA) | Right-linear or left-linear rules | S → aS \| b |
| Type 2 (Context-Free) | Pushdown Automata (PDA) | Single non-terminal on LHS | S → aSb \| ε |
| Type 1 (Context-Sensitive) | Linear Bounded Automata (LBA) | Rules depend on context | CA → CaB |
| Type 0 (Unrestricted) | Turing Machine (TM) | No restrictions | S → aSb, A → cAd |
The Chomsky Hierarchy provides a structured classification for languages, helping in
computability theory, programming language design, and artificial intelligence.
⚡ Medium Priority (Prepare Selectively)
1. Explain leftmost & rightmost derivation, sentential forms, and null production
Derivation refers to the step-by-step application of grammar rules to generate a string in a
language. It helps in parsing and understanding grammar structure.
Leftmost Derivation
● Expands the leftmost non-terminal first in each step.
● Ensures a structured progression from left to right.
Rightmost Derivation
● Expands the rightmost non-terminal first in each step.
● Used in reverse parsing techniques like LR parsing.
✅ Example Given Grammar:
S → ABC
A→a
B→b
C→c
For "abc":
Leftmost Derivation:
S → ABC
ABC → aBC
aBC → abC
abC → abc
Rightmost Derivation:
S → ABC
ABC → ABc
ABc → Abc
Abc → abc
2. Sentential Forms
Sentential forms are the intermediate strings in a derivation before reaching the final string.
✅ Example (Leftmost Sentential Forms for "abc"):
S → ABC → aBC → abC → abc
3. Null Production
A null production (ε-production) is a grammar rule in which a non-terminal derives ε (the empty string).
✅ Example Rule:
A→ε
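A leftmost derivation is mechanical enough to script. This tiny illustration assumes the grammar S → ABC, A → a, B → b, C → c (a minimal grammar that actually derives "abc") and prints every sentential form by always expanding the leftmost non-terminal.

```python
rules = {"S": "ABC", "A": "a", "B": "b", "C": "c"}

form, steps = "S", ["S"]
while any(ch in rules for ch in form):
    # find the leftmost non-terminal and apply its rule
    i = next(i for i, ch in enumerate(form) if ch in rules)
    form = form[:i] + rules[form[i]] + form[i + 1:]
    steps.append(form)
print(" -> ".join(steps))  # S -> ABC -> aBC -> abC -> abc
```

Replacing "leftmost" with "rightmost" in the search yields the rightmost derivation instead.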
2. E-R Model to Relational Model conversion (Library Management System).
Step 1: E-R Model Components
The Entity-Relationship (E-R) model consists of:
● Entities:
○ Book – Represents library books.
○ Member – Represents library users.
○ Loan – Tracks book borrowing transactions.
● Attributes:
○ Book: Book_ID (PK), Title
○ Member: Member_ID (PK), Name
○ Loan: Loan_ID (PK), Member_ID (FK), Book_ID (FK), Issue_Date,
Return_Date
● Relationships:
○ "Borrow" relationship between Member and Book, linking them through Loan
transactions.
Step 2: Convert to Relational Model
The Relational Model organizes data into tables, ensuring proper relationships and
normalization.
| Entity | Attributes |
|---|---|
| Book | Book_ID (PK), Title |
| Member | Member_ID (PK), Name |
| Loan | Loan_ID (PK), Member_ID (FK), Book_ID (FK), Issue_Date, Return_Date |
Primary Keys (PK) & Foreign Keys (FK):
● Book_ID uniquely identifies each book.
● Member_ID uniquely identifies each member.
● Loan_ID tracks each borrowing transaction.
● Member_ID and Book_ID in Loan act as Foreign Keys, linking members and books.
✅ Final Outcome: This conversion ensures data integrity, normalization, and efficient
querying in database management.
3. SQL Queries (Doctor-Patient System)
We have three tables representing a doctor-patient management system:
● DOCTOR(regno, name, telno, city, specialization) → Stores doctor details.
● PATIENT(pname, street, city) → Stores patient details.
● VISIT(pname, regno, date_of_visit, fee) → Records doctor visits by patients.
Sample SQL Queries
1. Get Doctors in Kota
Retrieve names and registration numbers of doctors located in Kota.
SELECT name, regno FROM DOCTOR WHERE city = 'Kota';
✅ Filters the DOCTOR table based on city.
2. Find Patients Who Visited on 12-Aug-2022
Identify patients who had an appointment on August 12, 2022.
SELECT pname, city FROM PATIENT
WHERE pname IN (SELECT pname FROM VISIT WHERE date_of_visit = '2022-08-12');
✅ Uses a nested query to match patients from the VISIT table.
3. Find Doctors Whose Names Start with ‘N’
Retrieve doctors whose names begin with ‘N’.
SELECT name FROM DOCTOR WHERE name LIKE 'N%';
✅ Uses LIKE wildcard filtering for pattern matching.
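The three queries can be tried end to end with Python's built-in sqlite3 module. The table and column names follow the schema above; the sample rows are invented purely for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE DOCTOR(regno INTEGER PRIMARY KEY, name TEXT, telno TEXT,
                    city TEXT, specialization TEXT);
CREATE TABLE PATIENT(pname TEXT, street TEXT, city TEXT);
CREATE TABLE VISIT(pname TEXT, regno INTEGER, date_of_visit TEXT, fee REAL);
-- invented sample data
INSERT INTO DOCTOR VALUES (1, 'Neha', '111', 'Kota', 'NEURO'),
                          (2, 'Ravi', '222', 'Jaipur', 'ENT');
INSERT INTO PATIENT VALUES ('Amit', 'MG Road', 'Kota');
INSERT INTO VISIT VALUES ('Amit', 1, '2022-08-12', 500);
""")
print(cur.execute("SELECT name, regno FROM DOCTOR WHERE city='Kota'").fetchall())
print(cur.execute("""SELECT pname, city FROM PATIENT WHERE pname IN
    (SELECT pname FROM VISIT WHERE date_of_visit='2022-08-12')""").fetchall())
print(cur.execute("SELECT name FROM DOCTOR WHERE name LIKE 'N%'").fetchall())
```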
4. Shadow paging & log-based recovery techniques—Advantages & Disadvantages.
1. Shadow Paging
Shadow Paging is a database recovery technique that maintains a copy (shadow) of pages
before modification.
Working Principle:
● The original pages remain unchanged, while shadow pages store temporary changes.
● If a transaction fails, the system reverts to shadow pages, ensuring consistency.
● After a successful transaction, shadow pages replace the original ones.
✅ Advantages:
● No need for redo operations, making recovery faster.
● Efficient rollback mechanism, maintaining consistency.
🚫 Disadvantages:
● High storage consumption, as shadow copies require extra space.
● Complex page mapping, increasing system overhead.
2. Log-Based Recovery
Log-Based Recovery records transaction changes in logs before modifying the database,
allowing recovery in case of failure.
Working Principle:
● Every transaction logs its pre-change (UNDO log) and post-change (REDO log)
information.
● If a system crashes, these logs are used to undo or redo transactions.
✅ Advantages:
● Reliable rollback ensures accurate data restoration.
● Supports concurrent transactions, improving system robustness.
🚫 Disadvantages:
● Requires large storage for logs, increasing overhead.
● Processing log recovery takes time, especially for frequent updates.
Comparison Table

| Feature | Shadow Paging | Log-Based Recovery |
|---|---|---|
| Storage | Requires extra shadow pages | Requires large log storage |
| Performance | Fast recovery (no redo needed) | Recovery depends on log size |
| Complexity | Complex page mapping | Requires detailed logging |
| Rollback Method | Direct page swap | Uses UNDO logs |
| Use Case | Small databases, less frequent updates | Large databases, frequent transactions |
5. Normalization & Normal Forms with Examples
Normalization is the process of organizing a database to minimize redundancy and
dependency by structuring data efficiently. It ensures data integrity, consistency, and
reduces anomalies during insert, update, or delete operations.
Types of Normal Forms (NF)
1NF (First Normal Form) – No Repeating Groups
● Each column holds atomic values (no sets or lists).
● Each row is uniquely identifiable using a primary key.
✅ Example (before 1NF):

| Student_ID | Name | Courses |
|---|---|---|
| 101 | Alex | DBMS, OS | ❌ (repeating group)

🔹 After 1NF: Split courses into separate rows.

| Student_ID | Name | Course |
|---|---|---|
| 101 | Alex | DBMS |
| 101 | Alex | OS |
2NF (Second Normal Form) – Remove Partial Dependency
● Each non-key attribute must be fully dependent on the primary key.
● Removes partial dependencies, ensuring proper table relationships.
✅ Example Before 2NF:
| Student_ID | Name | Course | Instructor |
|------------|-------|---------|------------|
| 101 | Alex | DBMS | John |
| 101 | Alex | OS | Smith |
🔹 Issue: Instructor is dependent only on Course, not Student_ID.
🔹 After 2NF: Separate Instructor into a new table.

| Course | Instructor |
|---|---|
| DBMS | John |
| OS | Smith |
Now, Courses & Instructor are properly linked, avoiding redundancy.
3NF (Third Normal Form) – Remove Transitive Dependency
● All non-key attributes must be directly dependent on the primary key (no indirect
relationships).
✅ Example Before 3NF:
| Student_ID | Name | Course | Instructor | Department |
|------------|-------|---------|------------|------------|
| 101 | Alex | DBMS | John | CS |
| 101 | Alex | OS | Smith | CS |
🔹 Issue: Department depends on Instructor, not directly on Student_ID.
🔹 After 3NF: Separate Department into a new table.

| Instructor | Department |
|---|---|
| John | CS |
| Smith | CS |
6. Concurrency Control (Lock-Based & Timestamp-Based Protocols)
1. Lock-Based Protocol
Locking ensures controlled access to database transactions, preventing conflicts in
multi-user environments.
Types of Locks:
1. Exclusive Lock (X-Lock):
○ Prevents simultaneous write operations.
○ A transaction must acquire an exclusive lock before modifying data.
○ Other transactions cannot read or write until the lock is released.
2. Shared Lock (S-Lock):
○ Allows multiple read operations simultaneously.
○ Transactions can read but cannot modify the data while a shared lock exists.
✅ Example Use Case:
● An X-Lock ensures only one transaction writes at a time, preventing inconsistent
updates.
● An S-Lock allows multiple users to read customer details without interference.
2. Timestamp-Based Protocol
This protocol assigns timestamps to transactions, ensuring older transactions execute first,
preventing conflicts.
How It Works:
● Each transaction gets a timestamp when it starts.
● The system orders execution based on these timestamps.
● Transactions follow rules:
1. If an older transaction requests a read/write, it proceeds.
2. If a younger transaction tries to overwrite an older one, it waits or rolls
back.
✅ Example Use Case:
● In an online banking system, timestamp ordering ensures consistent processing
without overwriting older transaction requests.
Comparison Table: Lock-Based vs. Timestamp-Based Protocols

| Feature | Lock-Based Protocol | Timestamp-Based Protocol |
|---|---|---|
| Conflict Handling | Uses locks to prevent conflicts | Uses timestamps to order transactions |
| Read Operations | Allows Shared Locks (S-Lock) | Oldest transaction executes first |
| Write Operations | Requires Exclusive Locks (X-Lock) | Newer transactions may wait or roll back |
| Concurrency Level | Moderate, depends on locks | Higher, prevents deadlocks |
| Use Case | Database write operations | Banking, financial systems |
7. Pushdown Automaton (PDA) Model with Diagram
A Pushdown Automaton (PDA) is a computational model designed to process context-free
languages using a stack. Unlike a Deterministic Finite Automaton (DFA), a PDA can store
and recall symbols dynamically, making it capable of handling nested and recursive
structures.
Components of a PDA
A PDA is formally defined as a 7-tuple (Q, Σ, Γ, δ, q₀, F):
1. Q → Set of states {q₀, q₁, q_accept}.
2. Σ → Input alphabet {a, b, c} (symbols from the language).
3. Γ → Stack alphabet (symbols that can be pushed/popped).
4. δ (Transition Function) → Defines how the machine moves between states based on
input and stack operations.
5. q₀ → Initial state.
6. F → Final (accepting) state.
7. Stack Operations:
○ Push → Place a symbol on the stack (e.g., Push 'A' for 'a').
○ Pop → Remove a symbol when conditions are met (e.g., Pop 'A' for 'b').
Example: PDA for Recognizing Equal Numbers of 'a' and
'b'
Goal: Accept strings where the number of 'a's matches the number of 'b's, like "aabb" or
"abab".
Transition Steps
1. Push 'A' onto the stack for each 'a'.
2. Pop 'A' for each 'b'.
3. Accept if the stack is empty at the end (ensuring equal count).
✅ Accepted Strings: "aabb", "abab"
🚫 Rejected Strings: "aaa", "abb"
Importance of PDA
● Used in compiler design for syntax parsing.
● Helps process nested structures like balanced parentheses.
● Extends finite automata capabilities to handle recursive
patterns.
8. Regular Languages & Their Applications
Regular languages are a class of formal languages that can be recognized by Finite
Automata (FA). They are defined using regular expressions and follow simple, structured
rules without recursion or complex dependencies.
✅ Example Regular Expression:
● a*b* matches strings like aaabbb, aabb, bbb, and aaaa.
● ^a.*b$ ensures a string starts with 'a' and ends with 'b'.
Applications in Software & Web Design
1. Lexical Analysis (Compilers)
● Used in programming language parsing to identify tokens like keywords, operators,
and identifiers.
● Example: Regex in lexical analyzers detects variable names or reserved words
efficiently.
2. Search Engines (Pattern Matching)
● Search engines use regular expressions to filter search queries, match keywords,
and rank results.
● Example: grep, sed, and awk use regex for text processing.
3. Form Validations (Emails, Passwords, URLs)
● Websites use regular expressions to validate user input, ensuring proper formatting
for emails, passwords, and usernames.
● Example:
○ Email validation regex: ^[a-zA-Z0-9._]+@[a-z]+\.[a-z]{2,3}$
○ Password validation regex ensures at least one uppercase, one number, and a
special character.
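As an illustration, the email pattern above can be tested directly in Python; the password pattern here is a hypothetical example built with lookahead assertions, not a rule taken from the text:

```python
import re

# Email pattern from the notes above
EMAIL = re.compile(r"^[a-zA-Z0-9._]+@[a-z]+\.[a-z]{2,3}$")
# Assumed password rule: >= 8 chars with at least one uppercase
# letter, one digit, and one special character (via lookaheads)
PASSWORD = re.compile(r"^(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$")

print(bool(EMAIL.match("alice.01@example.com")))  # True
print(bool(EMAIL.match("bad@@example.com")))      # False
print(bool(PASSWORD.match("Secret#42")))          # True
print(bool(PASSWORD.match("weakpass")))           # False
```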
Regular languages play a vital role in software development, text processing, AI-driven
search algorithms, and cybersecurity. Their structured nature makes them perfect for fast
and efficient data validation and recognition.
9. Recursive & Recursively Enumerable Languages
1. Recursive Language
A recursive language is one that is decidable by a Turing Machine (TM). This means that a
TM will always halt for any input string, either accepting or rejecting it.
Characteristics:
● Fully decidable by a TM.
● Can be recognized & verified in finite time.
● Closure properties: Recursive languages are closed under union, intersection,
complement, and concatenation.
✅ Example:
L = {aⁿbⁿ | n ≥ 0} → A TM can match each 'a' against a 'b' and always halts with a
definite accept or reject answer.
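A decider for this language is easy to write; the point is that it terminates on every input with a yes/no answer, which is exactly what makes the language recursive (a minimal Python sketch standing in for the TM):

```python
def decide_anbn(s):
    """Decider for L = {a^n b^n | n >= 0}: always halts with a
    definite accept (True) or reject (False) answer."""
    n = len(s)
    if n % 2:          # odd length can never be a^n b^n
        return False
    half = n // 2
    return s[:half] == "a" * half and s[half:] == "b" * half

print(decide_anbn(""))      # True (n = 0)
print(decide_anbn("aabb"))  # True
print(decide_anbn("abab"))  # False
```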
2. Recursively Enumerable (RE) Language
An RE language is recognized by a Turing Machine, but it may not be decidable. The TM
may enter an infinite loop for some inputs without halting.
Characteristics:
● Semi-decidable (TM may recognize but not decide).
● TM halts for accepted strings but may loop indefinitely for rejected ones.
● Closure properties: Closed under union & intersection, but not complement.
✅ Example:
The Halting Problem gives an RE language: {⟨M, w⟩ | TM M halts on input w}. A TM can
simulate M on w and accept if the simulation halts, but no TM can decide, for every pair,
whether M halts or runs forever.
Key Differences: Recursive vs. Recursively Enumerable Languages

| Feature | Recursive Language | Recursively Enumerable Language |
| --- | --- | --- |
| Decidability | Decidable | Semi-decidable |
| TM Behavior | Always halts | May loop indefinitely |
| Closure Properties | Closed under union, intersection, complement, and concatenation | Not closed under complement |
| Example | {aⁿbⁿ \| n ≥ 0} | Halting Problem |
10. Properties of Context-Free Grammar (CFG)
1. Closure Properties
Context-free languages are closed under several operations, meaning that applying these
operations to context-free languages yields another context-free language.
✅ CFGs are closed under:
● Union → If L₁ and L₂ are context-free, L₁ ∪ L₂ is also context-free.
● Concatenation → If L₁ and L₂ are context-free, L₁L₂ is context-free.
● Kleene Star (L*) → If L is context-free, L* (repetition) is also context-free.
🚫 CFGs are NOT closed under:
● Intersection → L₁ ∩ L₂ may not be context-free.
● Complementation → L' may not be context-free.
2. Pumping Lemma for CFGs
✅ The Pumping Lemma is a method to prove that a language is NOT context-free.
Key Idea:
● If a language is context-free, any sufficiently long string can be split into parts and
"pumped" (repeated multiple times) while still remaining in the language.
● If pumping fails for a given string, the language is not context-free.
✅ Use Case:
It helps prove languages like {aⁿbⁿcⁿ | n ≥ 0} are not context-free (since we cannot
pump substrings without breaking balance).
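For a small pumping length (p = 4 here is an assumption for illustration; the lemma only guarantees that some p exists), a brute-force search can confirm that no valid split of aᵖbᵖcᵖ survives pumping:

```python
def in_lang(s):
    """Membership test for L = {a^n b^n c^n | n >= 0}."""
    n = len(s) // 3
    return s == "a" * n + "b" * n + "c" * n

def pumpable(s, p):
    """True if SOME split s = u v w x y with |vwx| <= p and
    |vx| >= 1 keeps u v^i w x^i y in L for every tried i."""
    n = len(s)
    for a in range(n + 1):                        # vwx starts at a
        for b in range(a, n + 1):                 # v = s[a:b]
            for c in range(b, n + 1):             # w = s[b:c]
                for d in range(c, min(a + p, n) + 1):  # x = s[c:d], |vwx| <= p
                    if (b - a) + (d - c) == 0:
                        continue                  # need |vx| >= 1
                    u, v, w, x, y = s[:a], s[a:b], s[b:c], s[c:d], s[d:]
                    if all(in_lang(u + v * i + w + x * i + y) for i in (0, 2)):
                        return True
    return False

p = 4                              # assumed pumping length
s = "a" * p + "b" * p + "c" * p
print(pumpable(s, p))              # False: every split breaks the balance
```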
3. Parse Trees – Representation of Derivations
✅ A Parse Tree is a hierarchical structure used to represent CFG derivations.
Features:
● Shows how grammar rules expand step-by-step.
● Each node represents a grammar symbol (non-terminal or terminal).
● Useful in syntax analysis during compiler design.
✅ Example: Parse Tree for S → AB
  S
 / \
A   B
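To make the tree concrete, suppose (as an assumed extension of the example) the grammar also has the rules A → a and B → b; a tiny recursive-descent parser can then build the parse tree for "ab" as nested tuples:

```python
# Toy grammar (assumed for illustration): S -> A B, A -> 'a', B -> 'b'

def parse_A(s, i):
    """Match terminal 'a' at position i, return (subtree, next position)."""
    if i < len(s) and s[i] == "a":
        return ("A", ["a"]), i + 1
    raise SyntaxError("expected 'a'")

def parse_B(s, i):
    """Match terminal 'b' at position i, return (subtree, next position)."""
    if i < len(s) and s[i] == "b":
        return ("B", ["b"]), i + 1
    raise SyntaxError("expected 'b'")

def parse_S(s, i=0):
    """Expand S -> A B: parse an A, then a B, and combine the subtrees."""
    a_tree, i = parse_A(s, i)
    b_tree, i = parse_B(s, i)
    return ("S", [a_tree, b_tree]), i

tree, _ = parse_S("ab")
print(tree)  # ('S', [('A', ['a']), ('B', ['b'])])
```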
CFGs are fundamental in programming languages, compiler design, and formal language
theory. Their closure properties, pumping lemma, and parse trees make them essential for
processing structured text.
11. Variation of Turing Machine
1. Multi-Tape Turing Machine
A Multi-Tape Turing Machine (MTM) has multiple tapes, each with its own read/write head,
improving efficiency.
✅ Key Features:
● Separate read/write heads on each tape simplify algorithm design.
● Faster than a single-tape TM: simulating k tapes on one tape can incur a quadratic
slowdown.
✅ Example Use Case:
● Recognizing palindromes takes O(n) steps with two tapes (copy, then compare in
reverse) but requires quadratically many steps on a single tape.
2. Non-Deterministic Turing Machine (NDTM)
A Non-Deterministic Turing Machine can explore multiple computation paths
simultaneously. It recognizes exactly the same class of languages as a deterministic TM,
but it expresses "guess-and-verify" solutions to hard problems far more directly.
✅ Key Features:
● Instead of following one path, it considers multiple possibilities at once.
● Used in NP problems, such as Graph Coloring and Hamiltonian Path Problem.
✅ Example Use Case:
● The decision version of the Traveling Salesman Problem (TSP) can be solved in
nondeterministic polynomial time: the NDTM guesses a tour, then verifies its cost.
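The NDTM "guess-and-verify" idea can be mimicked deterministically by trying every guess in turn; this sketch does so for the Hamiltonian Path Problem mentioned above (the function name and graph encoding are illustrative assumptions):

```python
from itertools import permutations

def hamiltonian_path(n, edges):
    """NDTM-style guess-and-verify: 'guess' an ordering of the n
    vertices, then verify every consecutive pair is an edge. An NDTM
    explores all guesses at once; this simulation tries them one by one."""
    adj = {frozenset(e) for e in edges}
    for order in permutations(range(n)):      # the "guesses"
        if all(frozenset((u, v)) in adj
               for u, v in zip(order, order[1:])):
            return True                       # some branch accepts
    return False                              # every branch rejects

print(hamiltonian_path(4, [(0, 1), (1, 2), (2, 3)]))  # True: 0-1-2-3
print(hamiltonian_path(3, [(0, 1)]))                  # False
```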
3. Universal Turing Machine (UTM)
A Universal Turing Machine (UTM) is capable of simulating any other Turing Machine,
making it foundational to computability theory.
✅ Key Features:
● Encodes other Turing Machines within its tape.
● Can simulate algorithms, acting like a general-purpose processor.
✅ Example Use Case:
● The UTM underlies the stored-program computer: the program (a machine description)
and its data share the same memory (the tape).
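The UTM idea of one machine driven by another machine's description can be sketched as a table-driven simulator. The step bound max_steps is an artifact of this sketch (a real UTM has none), and the example machine is an assumed toy that accepts binary strings containing an even number of 1s:

```python
def run_tm(transitions, start, accept, tape_str, max_steps=10_000):
    """Simulate a single-tape TM from its transition table, the way a
    UTM simulates a machine from its encoding.
    transitions: {(state, symbol): (new_state, write, move)}."""
    tape = dict(enumerate(tape_str))   # sparse tape, '_' = blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, "_"))
        if key not in transitions:
            return False               # no rule applies: halt and reject
        state, write, move = transitions[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return False  # sketch-only step bound exceeded

# Assumed toy machine: accept binary strings with an even number of 1s
even_ones = {
    ("even", "0"): ("even", "0", "R"),
    ("even", "1"): ("odd",  "1", "R"),
    ("odd",  "0"): ("odd",  "0", "R"),
    ("odd",  "1"): ("even", "1", "R"),
    ("even", "_"): ("accept", "_", "R"),
}

print(run_tm(even_ones, "even", "accept", "1010"))  # True
print(run_tm(even_ones, "even", "accept", "1000"))  # False
```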
Conclusion
These Turing Machine variations extend computational power and efficiency. MTM speeds up
tasks, NDTM explores multiple solutions, and UTM establishes fundamental computing
principles.