
Pushdown Automata

Reading: Chapter 6

1
Pushdown Automata (PDA)

• Informally:
– A PDA is an NFA-ε with an infinite stack.
– Transitions are modified to accommodate stack operations.

• Questions:
– What is a stack?
– How does a stack help?

• A DFA can “remember” only a finite amount of information, whereas a PDA can
“remember” an infinite amount of (certain types of) information.

2
• Example:

{0^n1^n | n ≥ 0} is not regular

{0^n1^n | 0 ≤ n ≤ k, for some fixed k} is regular, for any fixed k.

• For k=3:
L = {ε, 01, 0011, 000111}

[DFA transition diagram for k = 3, with states q0 through q7; q7 is a trap (dead) state.]
3
• In a DFA, each state remembers a finite amount of information.

• To get {0^n1^n | n ≥ 0} with a DFA would require an infinite number of states
using the preceding technique.

• An infinite stack solves the problem for {0^n1^n | n ≥ 0} as follows:

– Read all 0’s and push them onto the stack
– Read all 1’s and match each with a corresponding 0 on the stack
(a stack-based Java sketch of this idea follows below)

• Only two states are needed to do this in a PDA

• Similarly for {0^n1^m0^{n+m} | n, m ≥ 0}
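A hedged Java sketch of the stack-based idea above (class and method names are illustrative only, not part of the formal development):

import java.util.ArrayDeque;
import java.util.Deque;

public class ZerosOnes {
    // Accepts strings of the form 0^n 1^n (n >= 0) using an explicit stack,
    // mirroring the two-phase strategy described above.
    static boolean accepts(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        int i = 0;
        // Phase 1: push every leading 0
        while (i < s.length() && s.charAt(i) == '0') {
            stack.push('0');
            i++;
        }
        // Phase 2: match every 1 against a 0 on the stack
        while (i < s.length() && s.charAt(i) == '1') {
            if (stack.isEmpty()) return false;   // more 1's than 0's
            stack.pop();
            i++;
        }
        // Accept only if all input is consumed and the stack is empty
        return i == s.length() && stack.isEmpty();
    }
}

For example, accepts("000111") is true, while accepts("0011101") and accepts("10") are false.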

4
Formal Definition of a PDA

• A pushdown automaton (PDA) is a seven-tuple:

M = (Q, Σ, Г, δ, q0, z0, F)

Q A finite set of states


Σ A finite input alphabet
Г A finite stack alphabet
q0 The initial/starting state, q0 is in Q
z0 A starting stack symbol, is in Г
F A set of final/accepting states, which is a subset of Q
δ A transition function, where

δ: Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*
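To make the 7-tuple concrete, here is a minimal and hedged Java sketch of a nondeterministic PDA simulator that accepts when all input is consumed and the stack is empty (the "empty stack" acceptance used informally in the examples that follow); all class, record, and method names are illustrative assumptions, not part of the formal definition:

import java.util.*;

public class Pda {
    // Each choice in δ(q, a, z): a next state and a string γ that replaces the stack top.
    record Move(String state, String push) {}

    // δ encoded as a map from "state,inputSymbol,stackTop" to a finite set of moves;
    // the symbol 'ε' is used for transitions that read no input.
    final Map<String, Set<Move>> delta = new HashMap<>();

    void addRule(String q, char a, char z, String p, String push) {
        delta.computeIfAbsent(q + "," + a + "," + z, k -> new HashSet<>()).add(new Move(p, push));
    }

    // Backtracking search over the finitely many choices at each step.
    // Caution: a naive sketch; ε-cycles that never shrink the stack would loop forever.
    boolean accepts(String state, String input, String stack) {
        if (input.isEmpty() && stack.isEmpty()) return true;   // accept: input used up, stack empty
        if (stack.isEmpty()) return false;                      // stuck: nothing to pop
        char z = stack.charAt(0);                                // leftmost symbol is the top
        for (Move m : delta.getOrDefault(state + ",ε," + z, Set.of()))
            if (accepts(m.state(), input, m.push() + stack.substring(1))) return true;
        if (!input.isEmpty())
            for (Move m : delta.getOrDefault(state + "," + input.charAt(0) + "," + z, Set.of()))
                if (accepts(m.state(), input.substring(1), m.push() + stack.substring(1))) return true;
        return false;
    }
}

A caller would register the rules of δ with addRule and start the search from the initial configuration (q0, w, z0).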

5
• Consider the various parts of δ:

Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*

– Q on the LHS means that at each step in a computation, a PDA must consider its
current state.
– Г on the LHS means that at each step in a computation, a PDA must consider the
symbol on top of its stack.
– Σ U {ε} on the LHS means that at each step in a computation, a PDA may or may
not consider the current input symbol, i.e., it may have epsilon transitions.

– “Finite subsets” on the RHS means that at each step in a computation, a PDA may
have several options.
– Q on the RHS means that each option specifies a new state.
– Г* on the RHS means that each option specifies zero or more stack symbols that
will replace the top stack symbol.

6
• Two types of PDA transitions:

δ(q, a, z) = {(p1,γ1), (p2,γ2),…, (pm,γm)}

– Current state is q
– Current input symbol is a
– The symbol currently on top of the stack is z
– Move to state pi from q
– Replace z with γi on the stack (leftmost symbol on top)
– Move the input head to the next input symbol

[Transition diagram: from state q, arcs labeled a/z/γ1, a/z/γ2, …, a/z/γm lead to states p1, p2, …, pm respectively.]
7
• Two types of PDA transitions:

δ(q, ε, z) = {(p1,γ1), (p2,γ2),…, (pm,γm)}

– Current state is q
– Current input symbol is not considered
– The symbol currently on top of the stack is z
– Move to state pi from q
– Replace z with γi on the stack (leftmost symbol on top)
– No input symbol is read

[Transition diagram: from state q, arcs labeled ε/z/γ1, ε/z/γ2, …, ε/z/γm lead to states p1, p2, …, pm respectively.]
8
• Example PDA #1: (balanced parentheses)

() (()) (())() ()((()))(())() ε

Question: How could we accept the language with a stack-based Java program?
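One possible stack-based Java answer, given purely as a hedged sketch (names are illustrative); the PDA below captures the same idea, with the symbol L standing in for each unmatched left parenthesis:

import java.util.ArrayDeque;
import java.util.Deque;

public class BalancedParens {
    // Accepts strings of balanced parentheses with an explicit stack:
    // push on '(', pop on ')', accept when the input is used up and the stack is empty.
    static boolean accepts(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char ch : s.toCharArray()) {
            if (ch == '(') {
                stack.push('(');
            } else if (ch == ')') {
                if (stack.isEmpty()) return false;   // too many right parens
                stack.pop();                          // match a left and right paren
            } else {
                return false;                         // not in the input alphabet
            }
        }
        return stack.isEmpty();                       // otherwise, too many left parens
    }
}

For example, accepts("(())()") is true, while accepts("())") is false.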

M = ({q1}, { ( , ) }, {L, #}, δ, q1, #, Ø)

δ: (1) δ(q1, (, #) = {(q1, L#)} // push a left paren


(2) δ(q1, ), #) = Ø // too many right parens, reject
(3) δ(q1, (, L) = {(q1, LL)} // push a left paren
(4) δ(q1, ), L) = {(q1, ε)} // match a left and right paren
(5) δ(q1, ε, #) = {(q1, ε)} // empty the stack; accept
(6) δ(q1, ε, L) = Ø // too many left parens

• Goal: (acceptance)
– Terminate in a state
– Read the entire input string
– Terminate with an empty stack

• Informally, a string is accepted if there exists a computation that uses up all the input and leaves the
stack empty.

9
• Transition Diagram:

[Transition diagram: a single state with four self-loops, labeled (, # | L#   (, L | LL   ), L | ε   and ε, # | ε.]

• Note that the above is not particularly illuminating.

• This is true for just about all PDAs, and consequently we don’t typically draw the
transition diagram.

* More generally, states are not particularly important in a PDA.

10
• Example Computation:

M = ({q1}, { ( , ) }, {L, #}, δ, q1, #, Ø)

δ:
(1) δ(q1, (, #) = {(q1, L#)} // push a left paren
(2) δ(q1, ), #) = Ø // too many right parens, reject
(3) δ(q1, (, L) = {(q1, LL)} // push a left paren
(4) δ(q1, ), L) = {(q1, ε)} // match a left and right paren
(5) δ(q1, ε, #) = {(q1, ε)} // empty the stack; accept
(6) δ(q1, ε, L) = Ø // too many left parens

Current Input | Stack | Rules Applicable | Rule Applied
(()) | # | (1), (5) | (1) -- why not (5)?
()) | L# | (3), (6) | (3)
)) | LL# | (4), (6) | (4)
) | L# | (4), (6) | (4)
ε | # | (5) | (5)
ε | ε | - | -

• Note that from this point forward, rules such as (2) and (6) will not be listed or referenced in any
computations.

11
• Example PDA #2: For the language {x | x = wcw^r and w in {0,1}*}

01c10 1101c1011 0010c0100 c

Question: How could we accept the language with a stack-based Java program?
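One possible stack-based Java answer, again only a hedged sketch with illustrative names: push the symbols of w until the marker c, then pop and compare against the rest of the input.

import java.util.ArrayDeque;
import java.util.Deque;

public class MarkedPalindrome {
    // Accepts strings of the form w c w^R with w in {0,1}*, using an explicit stack:
    // push until the marker c, then pop and match the remaining input.
    static boolean accepts(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        int i = 0;
        // Phase 1: push every symbol before the marker
        while (i < s.length() && s.charAt(i) != 'c') {
            stack.push(s.charAt(i));
            i++;
        }
        if (i == s.length()) return false;    // no marker c found
        i++;                                   // skip the marker
        // Phase 2: each remaining symbol must match the popped symbol
        while (i < s.length()) {
            if (stack.isEmpty() || stack.pop() != s.charAt(i)) return false;
            i++;
        }
        return stack.isEmpty();                // w^R must use up all of w
    }
}

For example, accepts("01c10") and accepts("c") are true, while accepts("01c01") is false.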

M = ({q1, q2}, {0, 1, c}, {B, G, R}, δ, q1, R, Ø)

δ:
(1) δ(q1, 0, R) = {(q1, BR)} (9) δ(q1, 1, R) = {(q1, GR)}
(2) δ(q1, 0, B) = {(q1, BB)} (10) δ(q1, 1, B) = {(q1, GB)}
(3) δ(q1, 0, G) = {(q1, BG)} (11) δ(q1, 1, G) = {(q1, GG)}
(4) δ(q1, c, R) = {(q2, R)}
(5) δ(q1, c, B) = {(q2, B)}
(6) δ(q1, c, G) = {(q2, G)}
(7) δ(q2, 0, B) = {(q2, ε)} (12) δ(q2, 1, G) = {(q2, ε)}
(8) δ(q2, ε, R) = {(q2, ε)}

• Notes:
– Rule #8 is used to pop the final stack symbol off at the end of a computation.

12
• Example Computation:

(1) δ(q1, 0, R) = {(q1, BR)} (9) δ(q1, 1, R) = {(q1, GR)}


(2) δ(q1, 0, B) = {(q1, BB)} (10) δ(q1, 1, B) = {(q1, GB)}
(3) δ(q1, 0, G) = {(q1, BG)} (11) δ(q1, 1, G) = {(q1, GG)}
(4) δ(q1, c, R) = {(q2, R)}
(5) δ(q1, c, B) = {(q2, B)}
(6) δ(q1, c, G) = {(q2, G)}
(7) δ(q2, 0, B) = {(q2, ε)} (12) δ(q2, 1, G) = {(q2, ε)}
(8) δ(q2, ε, R) = {(q2, ε)}

State | Input | Stack | Rules Applicable | Rule Applied
q1 | 01c10 | R | (1) | (1)
q1 | 1c10 | BR | (10) | (10)
q1 | c10 | GBR | (6) | (6)
q2 | 10 | GBR | (12) | (12)
q2 | 0 | BR | (7) | (7)
q2 | ε | R | (8) | (8)
q2 | ε | ε | - | -

13
• Example Computation:

(1) δ(q1, 0, R) = {(q1, BR)} (9) δ(q1, 1, R) = {(q1, GR)}


(2) δ(q1, 0, B) = {(q1, BB)} (10) δ(q1, 1, B) = {(q1, GB)}
(3) δ(q1, 0, G) = {(q1, BG)} (11) δ(q1, 1, G) = {(q1, GG)}
(4) δ(q1, c, R) = {(q2, R)}
(5) δ(q1, c, B) = {(q2, B)}
(6) δ(q1, c, G) = {(q2, G)}
(7) δ(q2, 0, B) = {(q2, ε)} (12) δ(q2, 1, G) = {(q2, ε)}
(8) δ(q2, ε, R) = {(q2, ε)}

State | Input | Stack | Rules Applicable | Rule Applied
q1 | 1c1 | R | (9) | (9)
q1 | c1 | GR | (6) | (6)
q2 | 1 | GR | (12) | (12)
q2 | ε | R | (8) | (8)
q2 | ε | ε | - | -

• Questions:
– Why isn’t δ(q2, 0, G) defined?
– Why isn’t δ(q2, 1, B) defined?

14
• Example PDA #3: For the language {x | x = ww^r and w in {0,1}*}

Without the “c” in the middle, switching from LHS processing to RHS processing is a challenge,
because the PDA only “inputs” one symbol at a time.

Assuming the string is in the above language, where is the middle?

0….
01…
010…
0101…
01011…
010110…
0101100…

Two adjacent, identical symbols might indicate the middle position, but not necessarily.

The best the PDA can do is “guess” when it is in the middle.
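For contrast, a program that can see the whole string at once does not need to guess: x = ww^r exactly when x is an even-length palindrome. A hedged Java sketch (names are illustrative):

public class EvenPalindrome {
    // A whole-string check for x = w w^R: x must be an even-length palindrome.
    // Unlike the PDA, this sketch sees the entire input at once, so it never
    // has to guess where the middle is.
    static boolean accepts(String x) {
        if (x.length() % 2 != 0) return false;      // w w^R always has even length
        int i = 0, j = x.length() - 1;
        while (i < j) {                             // compare mirrored positions
            if (x.charAt(i) != x.charAt(j)) return false;
            i++;
            j--;
        }
        return true;
    }
}

For example, accepts("0110") is true, while accepts("010") is false (odd length).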

15
• Example PDA #3: For the language {x | x = ww^r and w in {0,1}*}

M = ({q1, q2}, {0, 1}, {R, B, G}, δ, q1, R, Ø)

δ:
(1) δ(q1, 0, R) = {(q1, BR)} (7) δ(q2, 0, B) = {(q2, ε)}
(2) δ(q1, 1, R) = {(q1, GR)} (8) δ(q2, 1, G) = {(q2, ε)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (9) δ(q1, ε, R) = {(q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (10) δ(q2, ε, R) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)}
(6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}

• Notes:
– Rules #3 and #6 are non-deterministic.
– Rules #9 and #10 are used to pop the final stack symbol off at the end of a computation.

16
• Example Computation:

(1) δ(q1, 0, R) = {(q1, BR)} (7) δ(q2, 0, B) = {(q2, ε)}


(2) δ(q1, 1, R) = {(q1, GR)} (8) δ(q2, 1, G) = {(q2, ε)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (9) δ(q1, ε, R) = {(q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (10) δ(q2, ε, R) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)}
(6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}

State | Input | Stack | Rules Applicable | Rule Applied
q1 | 000000 | R | (1), (9) | (1)
q1 | 00000 | BR | (3), both options | (3), option #1
q1 | 0000 | BBR | (3), both options | (3), option #1
q1 | 000 | BBBR | (3), both options | (3), option #2
q2 | 00 | BBR | (7) | (7)
q2 | 0 | BR | (7) | (7)
q2 | ε | R | (10) | (10)
q2 | ε | ε | - | -

• Questions:
– What is rule #10 used for?
– What is rule #9 used for?
– Why do rules #3 and #6 have options?
– Why don’t rules #4 and #5 have similar options?

17
• Example Computation:

(1) δ(q1, 0, R) = {(q1, BR)} (7) δ(q2, 0, B) = {(q2, ε)}


(2) δ(q1, 1, R) = {(q1, GR)} (8) δ(q2, 1, G) = {(q2, ε)}
(3) δ(q1, 0, B) = {(q1, BB), (q2, ε)} (9) δ(q1, ε, R) = {(q2, ε)}
(4) δ(q1, 0, G) = {(q1, BG)} (10) δ(q2, ε, R) = {(q2, ε)}
(5) δ(q1, 1, B) = {(q1, GB)}
(6) δ(q1, 1, G) = {(q1, GG), (q2, ε)}

State | Input | Stack | Rules Applicable | Rule Applied
q1 | 010010 | R | (1), (9) | (1)
q1 | 10010 | BR | (5) | (5)
q1 | 0010 | GBR | (4) | (4)
q1 | 010 | BGBR | (3), both options | (3), option #2
q2 | 10 | GBR | (8) | (8)
q2 | 0 | BR | (7) | (7)
q2 | ε | R | (10) | (10)
q2 | ε | ε | - | -

• Exercises:
– 0011001100
– 011110
– 0111
18
Exercises:

• Develop PDAs for any of the regular or context-free languages that we have discussed.

• Note that for regular languages an NFA that simply "ignores" its stack will work.

• For languages which are context-free but not regular, first try to envision a Java (or other high-level
language) program that uses a stack to accept the language, and then convert it to a PDA.

• For example, for the set of all strings of the form a^i b^j c^k such that either i ≠ j or j ≠ k (a Java
counting sketch for this one follows below), or the set of all strings not of the form ww.
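For the first of these, a hedged Java counting sketch (illustrative names) that a PDA would then have to mimic by guessing which inequality to check and using its stack as a counter:

public class UnequalCounts {
    // Checks membership in { a^i b^j c^k | i != j or j != k } (i, j, k >= 0)
    // by counting each block; a high-level starting point before thinking about
    // how a PDA would verify one of the inequalities with its stack.
    static boolean accepts(String s) {
        int i = 0, j = 0, k = 0, pos = 0;
        while (pos < s.length() && s.charAt(pos) == 'a') { i++; pos++; }
        while (pos < s.length() && s.charAt(pos) == 'b') { j++; pos++; }
        while (pos < s.length() && s.charAt(pos) == 'c') { k++; pos++; }
        if (pos != s.length()) return false;   // not of the form a* b* c*
        return i != j || j != k;
    }
}

For example, accepts("aabbc") is true (j ≠ k), while accepts("abc") is false.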

19
Formal Definitions for PDAs
• Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA.

• Definition: An instantaneous description (ID) is a triple (q, w, γ), where q is in Q, w is


in Σ* and γ is in Г*.
– q is the current state
– w is the unused input
– γ is the current stack contents

• Example: (for PDA #3)

(q1, 111, GBR) (q1, 11, GGBR)

(q1, 111, GBR) (q2, 11, BR)

(q1, 000, GR) (q2, 00, R)

20
• Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA.

• Intuitively, if I and J are instantaneous descriptions, then I |— J means that J follows


from I by one transition.

• Formally: Let a be in Σ U {ε}, w be in Σ*, z be in Г, and α and β both be in Г*. Then:

(q, aw, zα) |— (p, w, βα)

if δ(q, a, z) contains (p, β).

21
• Examples: (PDA #3)

(q1, 111, GBR) |— (q1, 11, GGBR) (6) option #1, with a=1, z=G, β=GG, w=11, and
α= BR

(q1, 111, GBR) |— (q2, 11, BR) (6) option #2, with a=1, z=G, β= ε, w=11, and
α= BR

(q1, 000, GR) |— (q2, 00, R) is not true, for any a, z, β, w, and α

• Examples: (PDA #1)

(q1, (())), L#) |— (q1, ())),LL#) (3)

22
• A computation by a PDA can be expressed using this notation (PDA #3):

(q1, 010010, R) |— (q1, 10010, BR) (1)


|— (q1, 0010, GBR) (5)
|— (q1, 010, BGBR) (4)
|— (q2, 10, GBR) (3), option #2
|— (q2, 0, BR) (8)
|— (q2, ε, R) (7)
|— (q2, ε, ε) (10)

(q1, ε, R) |— (q2, ε, ε) (9)

23
• Intuitively, if I and J are instantaneous descriptions, then I |—* J means that J follows
from I by zero or more transitions.

• Formally: |—* is the reflexive and transitive closure of |—.


– I |—* I for each instantaneous description I
– If I |— J and J |—* K then I |—* K

• Alternatively:
– I |—* I for each instantaneous description I
– If I |—* J and J |— K then I |—* K

24
• Examples: (PDA #3)

(q1, 010010, R) |—* (q2, 10, GBR)

(q1, 010010, R) |—* (q2, ε, ε)

(q1, 111, GBR) |—* (q1, ε, GGGGBR)

(q1, 01, GR) |—* (q1, 1, BGR)

(q1, 101, GBR) |—* (q1, 101, GBR)

25
• Definition: Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA. The language accepted by empty
stack, denoted LE(M), is the set

{w | (q0, w, z0) |—* (p, ε, ε) for some p in Q}

• Definition: Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA. The language accepted by final
state, denoted LF(M), is the set

{w | (q0, w, z0) |—* (p, ε, γ) for some p in F and γ in Г*}

• Definition: Let M = (Q, Σ, Г, δ, q0, z0, F) be a PDA. The language accepted by empty
stack and final state, denoted L(M), is the set

{w | (q0, w, z0) |—* (p, ε, ε) for some p in F}

• Questions:
– Does the book define string acceptance by empty stack, final state, both, or neither?
– As an exercise, convert the preceding PDAs to other PDAs with different acceptance criteria.

26
• Lemma 1: Let L = LE(M1) for some PDA M1. Then there exists a PDA M2 such that L =
LF(M2).

• Lemma 2: Let L = LF(M1) for some PDA M1. Then there exists a PDA M2 such that L =
LE(M2).

• Theorem: Let L be a language. Then there exists a PDA M1 such that L = LF(M1) if and
only if there exists a PDA M2 such that L = LE(M2).

• Corollary: The PDAs that accept by empty stack and the PDAs that accept by final state
define the same class of languages.

• Notes:
– Similar lemmas and theorems could be stated for PDAs that accept by both final state and empty stack.
– Part of the lesson here is that one can define “acceptance” in many different ways, e.g., a string is accepted by a
DFA if you simply pass through an accepting state, or if you pass through an accepting state exactly twice.

27
The Relationship Between PDAs and CFLs

• Definition: Let G = (V, T, P, S) be a CFG. If every production in P is of the form

A –> aα

where A is in V, a is in T, and α is in V*, then G is said to be in Greibach Normal Form


(GNF).

• Example:

S –> aAB | bB
A –> aA | a
B –> bB | c

28
• Theorem: Let L be a CFL. Then L – {ε} is a CFL.

• Theorem: Let L be a CFL not containing ε. Then there exists a GNF grammar G such
that L = L(G).

29
• Lemma 1: Let L be a CFL. Then there exists a PDA M such that L = LE(M).

• Proof: Assume without loss of generality that ε is not in L. The construction can be modified to include
ε later.

Let G = (V, T, P, S) be a CFG, where L = L(G), and assume without loss of generality that G is in GNF.

Construct M = (Q, Σ, Г, δ, q, z, Ø) where:

Q = {q}
Σ=T
Г=V
z=S

δ: for all a in T, A in V and γ in V*, if A –> aγ is in P then δ(q, a, A) will contain (q, γ)

Stated another way:


δ(q, a, A) = {(q, γ) | A –> aγ is in P}, for all a in T and A in V

• Huh?

• As we will see, for a given string x in Σ*, M will attempt to simulate a leftmost derivation of x with G.
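To see the construction in code form, here is a hedged Java sketch (all names are illustrative) that builds δ(q, a, A) = {(q, γ) | A –> aγ is in P} from a list of GNF productions:

import java.util.*;

public class GnfToPda {
    // A GNF production A -> a gamma: a variable, a terminal, and a (possibly empty)
    // string of variables gamma.
    record Production(char lhs, char terminal, String gamma) {}

    // Builds δ as a map from "terminal/stackTopVariable" to the set of strings γ
    // that may replace the stack top; the single state q is left implicit.
    static Map<String, Set<String>> buildDelta(List<Production> grammar) {
        Map<String, Set<String>> delta = new HashMap<>();
        for (Production p : grammar) {
            String key = p.terminal() + "/" + p.lhs();           // e.g. "a/S"
            delta.computeIfAbsent(key, k -> new HashSet<>()).add(p.gamma());
        }
        return delta;
    }

    public static void main(String[] args) {
        // The grammar S -> aS | a
        List<Production> g = List.of(
            new Production('S', 'a', "S"),
            new Production('S', 'a', ""));
        // Prints something like {a/S=[S, ]}, i.e. δ(q, a, S) = {(q, S), (q, ε)},
        // where the empty string stands for ε (popping S).
        System.out.println(buildDelta(g));
    }
}

Example #1 below works through this same grammar by hand.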

30
• Example #1: Consider the following CFG in GNF.

S –> aS G is in GNF
S –> a L(G) = a+

Construct M as:

Q = {q}
Σ = T = {a}
Г = V = {S}
z=S

δ(q, a, S) = {(q, S), (q, ε)}


δ(q, ε, S) = Ø

• Question: Is that all? Is δ complete? Recall that δ: Q x (Σ U {ε}) x Г –> finite subsets of
Q x Г*

31
• Example #2: Consider the following CFG in GNF.

(1) S –> aA
(2) S –> aB
(3) A –> aA G is in GNF
(4) A –> aB L(G) = a+b+
(5) B –> bB
(6) B –> b

Construct M as:
Q = {q}
Σ = T = {a, b}
Г = V = {S, A, B}
z=S

S -> aγ : How many productions are there of this form?

(1) δ(q, a, S) = ?
(2) δ(q, a, A) = ?
(3) δ(q, a, B) = ?
(4) δ(q, b, S) = ?
(5) δ(q, b, A) = ?
(6) δ(q, b, B) = ?
(7) δ(q, ε, S) = ?
(8) δ(q, ε, A) = ?
(9) δ(q, ε, B) = ? Why 9? Recall δ: Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*

32
• Example #2: Consider the following CFG in GNF.

(1) S –> aA
(2) S –> aB
(3) A –> aA G is in GNF
(4) A –> aB L(G) = a+b+
(5) B –> bB
(6) B –> b

Construct M as:
Q = {q}
Σ = T = {a, b}
Г = V = {S, A, B}
z=S

S -> aγ : How many productions are there of this form?

(1) δ(q, a, S) = {(q, A), (q, B)} From productions #1 and 2, S->aA, S->aB
(2) δ(q, a, A) = ?
(3) δ(q, a, B) = ?
(4) δ(q, b, S) = ?
(5) δ(q, b, A) = ?
(6) δ(q, b, B) = ?
(7) δ(q, ε, S) = ?
(8) δ(q, ε, A) = ?
(9) δ(q, ε, B) = ? Why 9? Recall δ: Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*

33
• Example #2: Consider the following CFG in GNF.

(1) S –> aA
(2) S –> aB
(3) A –> aA G is in GNF
(4) A –> aB L(G) = a+b+
(5) B –> bB
(6) B –> b

Construct M as:
Q = {q}
Σ = T = {a, b}
Г = V = {S, A, B}
z=S

(1) δ(q, a, S) = {(q, A), (q, B)} From productions #1 and 2, S->aA, S->aB
(2) δ(q, a, A) = {(q, A), (q, B)} From productions #3 and 4, A->aA, A->aB
(3) δ(q, a, B) = Ø
(4) δ(q, b, S) = Ø
(5) δ(q, b, A) = Ø
(6) δ(q, b, B) = {(q, B), (q, ε)} From productions #5 and 6, B->bB, B->b
(7) δ(q, ε, S) = Ø
(8) δ(q, ε, A) = Ø
(9) δ(q, ε, B) = Ø Recall δ: Q x (Σ U {ε}) x Г –> finite subsets of Q x Г*

34
• For a string w in L(G) the PDA M will simulate a leftmost derivation of w.

– If w is in L(G) then (q, w, z0) |—* (q, ε, ε)

– If (q, w, z0) |—* (q, ε, ε) then w is in L(G)

• Consider generating a string using G. Since G is in GNF, each sentential form in a leftmost derivation
has form:

=> t1t2…ti A1A2…Am          (t1…ti are terminals; A1…Am are non-terminals)

• And each step in the derivation (i.e., each application of a production) adds a terminal and some non-terminals.

A1 –> ti+1α

=> t1t2…ti ti+1 αA2…Am

• Each transition of the PDA simulates one derivation step. Thus, the ith step of the PDA’s computation
corresponds to the ith step in a corresponding leftmost derivation.

• After the ith step of the computation of the PDA, t1t2…ti are the symbols that have already been read
by the PDA and A1A2…Am are the stack contents.
35
• For each leftmost derivation of a string generated by the grammar, there is an equivalent
accepting computation of that string by the PDA.

• Each sentential form in the leftmost derivation corresponds to an instantaneous


description in the PDA’s corresponding computation.

• For example, the PDA instantaneous description corresponding to the sentential form:

=> t1t2…ti A1A2…Am

would be:

(q, ti+1ti+2…tn , A1A2…Am)

36
• Example: Using the grammar from example #2:

S => aA (p1)
  => aaA (p3)
  => aaaA (p3)
  => aaaaB (p4)
  => aaaabB (p5)
  => aaaabb (p6)

(p1) S –> aA
(p2) S –> aB
(p3) A –> aA
(p4) A –> aB
(p5) B –> bB
(p6) B –> b

(t1) δ(q, a, S) = {(q, A), (q, B)} productions p1 and p2
(t2) δ(q, a, A) = {(q, A), (q, B)} productions p3 and p4
(t3) δ(q, a, B) = Ø
(t4) δ(q, b, S) = Ø
(t5) δ(q, b, A) = Ø
(t6) δ(q, b, B) = {(q, B), (q, ε)} productions p5 and p6
(t7) δ(q, ε, S) = Ø
(t8) δ(q, ε, A) = Ø
(t9) δ(q, ε, B) = Ø

• The corresponding computation of the PDA:

(q, aaaabb, S) |— ?
37
• Example: Using the grammar from example #2:

S => aA (p1)
  => aaA (p3)
  => aaaA (p3)
  => aaaaB (p4)
  => aaaabB (p5)
  => aaaabb (p6)

(p1) S –> aA
(p2) S –> aB
(p3) A –> aA
(p4) A –> aB
(p5) B –> bB
(p6) B –> b

(t1) δ(q, a, S) = {(q, A), (q, B)} productions p1 and p2
(t2) δ(q, a, A) = {(q, A), (q, B)} productions p3 and p4
(t3) δ(q, a, B) = Ø
(t4) δ(q, b, S) = Ø
(t5) δ(q, b, A) = Ø
(t6) δ(q, b, B) = {(q, B), (q, ε)} productions p5 and p6
(t7) δ(q, ε, S) = Ø
(t8) δ(q, ε, A) = Ø
(t9) δ(q, ε, B) = Ø

• The corresponding computation of the PDA:

(q, aaaabb, S) |— (q, aaabb, A) (t1)/1
              |— (q, aabb, A) (t2)/1
              |— (q, abb, A) (t2)/1
              |— (q, bb, B) (t2)/2
              |— (q, b, B) (t6)/1
              |— (q, ε, ε) (t6)/2

– String is read
– Stack is emptied
– Therefore the string is accepted by the PDA

38
• Another Example: Using the PDA from example #2:

(q, aabb, S) |— (q, abb, A) (t1)/1
            |— (q, bb, B) (t2)/2
            |— (q, b, B) (t6)/1
            |— (q, ε, ε) (t6)/2

(p1) S –> aA
(p2) S –> aB
(p3) A –> aA
(p4) A –> aB
(p5) B –> bB
(p6) B –> b

(t1) δ(q, a, S) = {(q, A), (q, B)} productions p1 and p2
(t2) δ(q, a, A) = {(q, A), (q, B)} productions p3 and p4
(t3) δ(q, a, B) = Ø
(t4) δ(q, b, S) = Ø
(t5) δ(q, b, A) = Ø
(t6) δ(q, b, B) = {(q, B), (q, ε)} productions p5 and p6
(t7) δ(q, ε, S) = Ø
(t8) δ(q, ε, A) = Ø
(t9) δ(q, ε, B) = Ø

• The corresponding derivation using the grammar:

S => ?
39
• Another Example: Using the PDA from example #2:

(q, aabb, S) |— (q, abb, A) (t1)/1
            |— (q, bb, B) (t2)/2
            |— (q, b, B) (t6)/1
            |— (q, ε, ε) (t6)/2

(p1) S –> aA
(p2) S –> aB
(p3) A –> aA
(p4) A –> aB
(p5) B –> bB
(p6) B –> b

(t1) δ(q, a, S) = {(q, A), (q, B)} productions p1 and p2
(t2) δ(q, a, A) = {(q, A), (q, B)} productions p3 and p4
(t3) δ(q, a, B) = Ø
(t4) δ(q, b, S) = Ø
(t5) δ(q, b, A) = Ø
(t6) δ(q, b, B) = {(q, B), (q, ε)} productions p5 and p6
(t7) δ(q, ε, S) = Ø
(t8) δ(q, ε, A) = Ø
(t9) δ(q, ε, B) = Ø

• The corresponding derivation using the grammar:

S => aA (p1)
  => aaB (p4)
  => aabB (p5)
  => aabb (p6)
40
• Example #3: Consider the following CFG in GNF.

(1) S –> aABC


(2) A –> a G is in GNF
(3) B –> b
(4) C –> cAB
(5) C –> cC

Construct M as:

Q = {q}
Σ = T = {a, b, c}
Г = V = {S, A, B, C}
z=S

(1) δ(q, a, S) = {(q, ABC)} S->aABC (9) δ(q, c, S) = Ø


(2) δ(q, a, A) = {(q, ε)} A->a (10) δ(q, c, A) = Ø
(3) δ(q, a, B) = Ø (11) δ(q, c, B) = Ø
(4) δ(q, a, C) = Ø (12) δ(q, c, C) = {(q, AB), (q, C)} C->cAB|cC
(5) δ(q, b, S) = Ø (13) δ(q, ε, S) = Ø
(6) δ(q, b, A) = Ø (14) δ(q, ε, A) = Ø
(7) δ(q, b, B) = {(q, ε)} B->b (15) δ(q, ε, B) = Ø
(8) δ(q, b, C) = Ø (16) δ(q, ε, C) = Ø

41
• Notes:
– Recall that the grammar G was required to be in GNF before the construction could be applied.
– As a result, it was assumed that ε was not in the context-free language L.

• Suppose ε is in L:

1) First, let L’ = L – {ε}

By an earlier theorem, if L is a CFL, then L’ = L – {ε} is a CFL.

By another earlier theorem, there is GNF grammar G such that L’ = L(G).

2) Construct a PDA M such that L’ = LE(M)

How do we modify M to accept ε?

Add δ(q, ε, S) = {(q, ε)}? No!

42
• Counter Example:

Consider L = {ε, b, ab, aab, aaab, …} Then L’ = {b, ab, aab, aaab, …}

• The GNF CFG for L’:

(1) S –> aS
(2) S –> b

• The PDA M Accepting L’:

Q = {q}
Σ = T = {a, b}
Г = V = {S}
z=S

δ(q, a, S) = {(q, S)}


δ(q, b, S) = {(q, ε)}
δ(q, ε, S) = Ø

• If δ(q, ε, S) = {(q, ε)} is added then:

LE(M) = {ε, a, aa, aaa, …, b, ab, aab, aaab, …}


43
3) Instead, add a new start state q’ with transitions:

δ(q’, ε, S) = {(q’, ε), (q, S)}

where q is the start state of the machine from the initial construction.

• Lemma 1: Let L be a CFL. Then there exists a PDA M such that L = LE(M).

• Lemma 2: Let M be a PDA. Then there exists a CFG G such that LE(M) = L(G). (Note that we did not prove this.)

• Theorem: Let L be a language. Then there exists a CFG G such that L = L(G) iff there
exists a PDA M such that L = LE(M).

• Corollary: The PDAs define the CFLs.

44
• A (proposed) PDA for {0^i1^j2^k | i ≠ j or j ≠ k}

• For simplicity, assume that i, j, k ≥ 1

• There are four cases: i > j, i < j, j > k, j < k

• The PDA uses epsilon transitions to guess which case holds

(1) δ(q0, ε, #) = {(q1, #), (q2, #), (q3, #), (q4, #)} // Guess which of the four cases applies
(4) δ(q1, 0, #) = {(q1, 0#)} // This begins case 1, start by pushing all the 0’s
(5) δ(q1, 0, 0) = {(q1, 00)}
(6) δ(q1, 1, 0) = {(q5, ε)} // Match the 1’s on input with the 0’s on the stack
(7) δ(q5, 1, 0) = {(q5, ε)}
(8) δ(q5, 2, 0) = {(q6, 0)} // 1’s run out first, so look for a 2 and eat them up
(9) δ(q6, 2, 0) = {(q6, 0)}
(10) δ(q6, ε, 0) = {(q7, ε)} // Once 2’s run out, empty the stack, and accept
(11) δ(q7, ε, 0) = {(q7, ε)}
: // Cases 2-4 are similar

45
