
UNIT – III

1. Context-free Grammars: Definition:

Formally, a context-free grammar G is a 4-tuple G = (V, T, P, S), where:


1. V is a finite set of variables (or nonterminals). These describe sets of “related” strings.
2. T is a finite set of terminals (i.e., tokens).
3. P is a finite set of productions, each of the form
A → α
where A ∈ V is a variable and α ∈ (V ∪ T)* is a string of terminals and non-terminals.
4. S ∈ V is the start symbol.
Example of CFG:

E → EAE | (E) | -E | id
A → + | - | * | /

where E, A are the non-terminals and id, +, -, *, /, (, ) are the terminals.

2. Syntax analysis:

In the syntax analysis phase the source program is analyzed to check whether it conforms to the source
language's syntax, and to determine its phrase structure. This phase is often separated into two phases:

• Lexical analysis, which produces a stream of tokens.

• Parsing, which determines the phrase structure of the program based on the context-free
grammar for the language.

PARSING:

Parsing is the activity of checking whether a string of symbols is in the language of some grammar,
where this string is usually the stream of tokens produced by the lexical analyzer. If the string is in
the grammar, we want a parse tree, and if it is not, we hope for some kind of error message explaining
why not.

There are two main kinds of parsers in use, named for the way they build the parse trees:
• Top-down: A top-down parser attempts to construct a tree from the root, applying
productions forward to expand non-terminals into strings of symbols.
• Bottom-up: A bottom-up parser builds the tree starting with the leaves, using productions
in reverse to identify strings of symbols that can be grouped together.

In both cases the construction of the derivation is directed by scanning the input sequence from left to
right, one symbol at a time.

Parse Tree:

[Figure: the parser within the compiler front end. The lexical analyzer supplies tokens to the parser, which passes its output to the rest of the front end; both consult the symbol table.]
A parse tree is the graphical representation of the structure of a sentence according to its grammar.

Example:
Let the productions P be:

E → T | E + T
T → F | T * F
F → V | (E)
V → a | b | c | d

The parse tree may be viewed as a representation for a derivation that filters out the choice regarding
the order of replacement.

Parse tree for a * b + c

[Parse tree diagram omitted]
Parse tree for (a * b) * (c + d)
[Parse tree diagram omitted]

SYNTAX TREES:

A parse tree can be presented in a simplified form with only the relevant structure information by:
• leaving out chains of derivations (whose sole purpose is to give operators different
precedence);
• labeling the nodes with the operators in question rather than with a non-terminal.

The simplified parse tree is sometimes called a structural tree or syntax tree.

[Figure: syntax trees for a * b + c and (a + b) * (c + d)]

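To make the parse-tree/syntax-tree distinction concrete, here is a minimal Python sketch (an illustration, not from the original notes) of syntax-tree nodes: interior nodes hold operators, leaves hold operands, and the single-child chains E → T → F → V of the parse tree simply disappear.

```python
# Minimal syntax-tree nodes: interior nodes are operators, leaves are operands.
class Leaf:
    def __init__(self, name):
        self.name = name                    # operand, e.g. 'a'

class Op:
    def __init__(self, op, left, right):
        self.op = op                        # operator, e.g. '+' or '*'
        self.left, self.right = left, right

# Syntax tree for a * b + c: '+' is the root because '*' binds tighter.
tree = Op('+', Op('*', Leaf('a'), Leaf('b')), Leaf('c'))
```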
Syntax Error Handling:

If a compiler had to process only correct programs, its design and implementation would be greatly
simplified. But programmers frequently write incorrect programs, and a good compiler should assist
the programmer in identifying and locating errors. Programs can contain errors at many different
levels.
For example, errors can be:

1) Lexical – such as misspelling an identifier, keyword or operator


2) Syntactic – such as an arithmetic expression with un-balanced parentheses.
3) Semantic – such as an operator applied to an incompatible operand.
4) Logical – such as an infinitely recursive call.

Much of the error detection and recovery in a compiler is centered around the syntax analysis phase. The
goals of the error handler in a parser are:
• It should report the presence of errors clearly and accurately.
• It should recover from each error quickly enough to be able to detect subsequent errors.
• It should not significantly slow down the processing of correct programs.

Ambiguity:

Several derivations may generate the same sentence, perhaps by applying the same productions in
a different order. This alone is fine, but a problem arises if the same sentence has two distinct
parse trees. A grammar is ambiguous if there is any sentence with more than one parse tree.
Any parser for an ambiguous grammar has to choose somehow which tree to return. There
are a number of solutions to this; the parser could pick one arbitrarily, or we could provide
some hints about which to choose. Best of all is to rewrite the grammar so that it is not ambiguous.
There is no general method for removing ambiguity. Ambiguity is acceptable in spoken
languages; ambiguous programming languages are useless unless the ambiguity can be
resolved.

Fixing some simple ambiguities in a grammar:

      Ambiguous                                          Unambiguous

(i)   A → B | AA       lists of one or more B's          A → BC
                                                         C → A | ε
(ii)  A → B | A;A      lists of one or more B's,         A → BC
                       with punctuation                  C → ;A | ε
(iii) A → B | AA | ε   lists of zero or more B's         A → BA | ε

Under the ambiguous forms, any sentence with more than two list elements, such as (arg, arg, arg),
will have multiple parse trees.

Left Recursion:

If there is any non-terminal A such that there is a derivation A ⇒+ Aα for some string α, then the
grammar is left recursive.

Algorithm for eliminating left recursion:

1. Group all the A-productions together like this:
   A → Aα1 | Aα2 | … | Aαm | β1 | β2 | … | βn
   where A is the left-recursive non-terminal, each αi is any string of terminals and
   non-terminals, and each βi is any string of terminals and non-terminals that does not begin with A.
2. Replace the above A-productions by the following:
   A → β1 A' | β2 A' | … | βn A'
   A' → α1 A' | α2 A' | … | αm A' | ε
   where A' is a new non-terminal.

Top-down parsers cannot handle left-recursive grammars.

If our expression grammar is left recursive:

• This can lead to non-termination in a top-down parser.
• For a top-down parser, any recursion must be right recursion.
• We would like to convert the left recursion to right recursion.

Example 1:
Remove the left recursion from the production: A → Aα | β

Applying the transformation yields:

A → β A'
A' → α A' | ε

(α is the part remaining after the leading A; β is the non-left-recursive alternative.)

Example 2:
Remove the left recursion from the productions:

E → E + T | T
T → T * F | F

Applying the transformation yields:

E → T E'            T → F T'
E' → + T E' | ε     T' → * F T' | ε
Example 3:
Remove the left recursion from the productions:
E → E + T | E - T | T
T → T * F | T / F | F
Applying the transformation yields:

E → T E'                   T → F T'
E' → + T E' | - T E' | ε   T' → * F T' | / F T' | ε

Example 4 (indirect left recursion): Consider the grammar S → Aa | b, A → Ac | Sd | ε.

1. The non-terminal S is left recursive because S ⇒ Aa ⇒ Sda, but
   it is not immediately left recursive.
2. Substitute the S-productions in A → Sd to obtain:
   A → Ac | Aad | bd | ε
3. Eliminating the immediate left recursion yields:
   A → bdA' | A'
   A' → cA' | adA' | ε
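The transformation is mechanical enough to code directly. The following sketch is an illustration under an assumed grammar encoding (a dict from a non-terminal to a list of alternatives, each alternative a tuple of symbols); the function name and encoding are hypothetical, not from the notes.

```python
# Remove immediate left recursion from non-terminal a:
#   A -> A a1 | ... | A am | b1 | ... | bn
# becomes
#   A  -> b1 A' | ... | bn A'
#   A' -> a1 A' | ... | am A' | epsilon   (epsilon = the empty tuple)
def eliminate_immediate_left_recursion(grammar, a, new_nt):
    alphas = [alt[1:] for alt in grammar[a] if alt and alt[0] == a]
    betas = [alt for alt in grammar[a] if not alt or alt[0] != a]
    if not alphas:
        return                                   # nothing to do
    grammar[a] = [beta + (new_nt,) for beta in betas]
    grammar[new_nt] = [alpha + (new_nt,) for alpha in alphas] + [()]

g = {'E': [('E', '+', 'T'), ('T',)], 'T': [('T', '*', 'F'), ('F',)]}
eliminate_immediate_left_recursion(g, 'E', "E'")
eliminate_immediate_left_recursion(g, 'T', "T'")
# g now encodes: E -> T E', E' -> + T E' | eps, T -> F T', T' -> * F T' | eps
```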

Left Factoring:
Left factoring is a grammar transformation that is useful for producing a grammar suitable for
predictive parsing.
When it is not clear which of two alternative productions to use to expand a non-terminal A, we may
be able to rewrite the productions to defer the decision until we have seen enough of the input to make
the right choice.

Algorithm:
For each non-terminal A, find the longest prefix α that occurs in two or more right-hand sides of A.
If α ≠ ε, then replace all of the A-productions
A → αβ1 | αβ2 | … | αβn | γ
with
A → α A' | γ
A' → β1 | β2 | … | βn
where A' is a new non-terminal (a βi may be ε). Repeat until no common prefixes remain.
It is easy to remove common prefixes by left factoring, creating a new non-terminal.
For example consider:
V → αβ | αγ
Change to:
V → α V'
V' → β | γ

Example 1:
Eliminate left factoring in the grammar:
S → V := int
V → alpha '[' int ']' | alpha

Becomes:
S → V := int
V → alpha V'
V' → '[' int ']' | ε
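One round of left factoring can be coded the same way; a sketch under the same assumed grammar encoding (the helper names are hypothetical):

```python
def common_prefix(a, b):
    # longest common prefix of two alternatives (tuples of symbols)
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

def left_factor_once(grammar, nt, new_nt):
    # A -> a b1 | a b2 | g   becomes   A -> a A' | g ,  A' -> b1 | b2
    alts, best = grammar[nt], ()
    for i in range(len(alts)):
        for j in range(i + 1, len(alts)):
            p = common_prefix(alts[i], alts[j])
            if len(p) > len(best):
                best = p
    if not best:
        return False                    # no two alternatives share a prefix
    factored = [alt[len(best):] for alt in alts if alt[:len(best)] == best]
    others = [alt for alt in alts if alt[:len(best)] != best]
    grammar[nt] = [best + (new_nt,)] + others
    grammar[new_nt] = factored          # a suffix may be (), i.e. epsilon
    return True

g = {'V': [('alpha', '[', 'int', ']'), ('alpha',)]}
left_factor_once(g, 'V', "V'")
# g now encodes: V -> alpha V' ;  V' -> [ int ] | epsilon
```

Repeated calls with fresh non-terminal names implement the "repeat until no common prefixes remain" step.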
TOP DOWN PARSING:

Top-down parsing is the construction of a parse tree by starting at the start symbol and "guessing" each
derivation until we reach a string that matches the input. That is, construct the tree from root to leaves.
The advantage of top-down parsing is that a parser can directly be written as a program. Table-driven
top-down parsers are of minor practical relevance. Since bottom-up parsers are more powerful than
top-down parsers, bottom-up parsing is more practically relevant.
For example, let us consider the following grammar to see how a top-down parser works:

S → if E then S else S | while E do S | print
E → true | false | id

The input token string is: if id then while true do print else print.
1. Tree: S
   Input: if id then while true do print else print.
   Action: guess for S.

2. Tree: S expanded to "if E then S else S".
   Input: if id then while true do print else print.
   Action: if matches; guess for E.

3. Tree: E expanded to id.
   Input: id then while true do print else print.
   Action: id matches; then matches; guess for S.

4. Tree: the then-branch S expanded to "while E do S".
   Input: while true do print else print.
   Action: while matches; guess for E.

5. Tree: E expanded to true.
   Input: true do print else print.
   Action: true matches; do matches; guess for S.

Recursive Descent Parsing:

Top-down parsing can be viewed as an attempt to find a leftmost derivation for an input string.
Equivalently, it can be viewed as an attempt to construct a parse tree for the input starting from the root
and creating the nodes of the parse tree in preorder.
The special case of recursive-descent parsing in which no backtracking is required is called predictive
parsing. The general form of top-down parsing, called recursive descent, may involve backtracking,
that is, making repeated scans of the input.
Recursive-descent or predictive parsing works only on grammars where the first terminal symbol
of each subexpression provides enough information to choose which production to use.
A general recursive-descent parser is a top-down parser involving backtracking; it makes repeated scans
of the input. Backtracking parsers are not seen frequently, as backtracking is rarely needed to parse
programming language constructs.

Example: consider the grammar

S → cAd
A → ab | a

and the input string w = cad. To construct a parse tree for this string top-down, we initially create a tree
consisting of a single node labeled S. An input pointer points to c, the first symbol of w. We then use the
first production for S to expand the tree and obtain the tree of Fig (a).

The leftmost leaf, labeled c, matches the first symbol of w, so we now advance the input pointer
to a, the second symbol of w, and consider the next leaf, labeled A. We can then expand A using the
first alternative for A to obtain the tree in Fig (b). We now have a match for the second input symbol,
so we advance the input pointer to d, the third input symbol, and compare d against the next leaf,
labeled b. Since b does not match d, we report failure and go back to A to see whether there is another
alternative for A that we have not tried but that might produce a match.

In going back to A, we must reset the input pointer to position 2. We now try the second alternative
for A to obtain the tree of Fig (c). The leaf a matches the second symbol of w and the leaf d matches the
third symbol.

A left-recursive grammar can cause a recursive-descent parser, even one with backtracking,
to go into an infinite loop. That is, when we try to expand A, we may eventually find ourselves again
trying to expand A without having consumed any input.
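The backtracking behaviour just described can be seen in a tiny recursive-descent recognizer for this grammar, sketched below (one Python function per non-terminal; the encoding is illustrative). Each function takes a position in the input and returns the position after the matched substring, or None on failure, which is what triggers trying the next alternative:

```python
# Grammar: S -> c A d ,  A -> a b | a .  The input pointer is an index into w.
def match_S(w, i):
    if i < len(w) and w[i] == 'c':
        j = match_A(w, i + 1)
        if j is not None and j < len(w) and w[j] == 'd':
            return j + 1
    return None

def match_A(w, i):
    # try A -> a b first; on failure the caller's position i is reused,
    # which is exactly the "reset the input pointer" step in the text
    if i + 1 < len(w) and w[i] == 'a' and w[i + 1] == 'b':
        return i + 2
    if i < len(w) and w[i] == 'a':
        return i + 1
    return None

print(match_S("cad", 0) == 3)     # True: A -> a b fails on d, A -> a succeeds
print(match_S("cabd", 0) == 4)    # True: A -> a b succeeds directly
```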

Predictive Parsing:
Predictive parsing is top-down parsing without backtracking. For many
languages, we can make perfect guesses (and so avoid backtracking) by using one symbol of look-ahead;
i.e., if
A → α1 | α2 | … | αn,
choose the correct αi by looking at the first symbol it can derive. If ε is an alternative, choose it last.
This approach is also called predictive parsing. There must be at most one applicable production for
each look-ahead symbol in order to avoid backtracking. If there is no such production, then no parse
tree exists and an error is returned.
The crucial property is that the grammar must not be left-recursive.
Predictive parsing works well on those fragments of programming languages in which keywords
occur frequently.
For example:
stmt → if expr then stmt else stmt | while expr do stmt
     | begin stmt-list end
Here the keywords if, while and begin tell us which alternative is the only one that could possibly
succeed if we are to find a statement.
The model of a predictive parser is as follows. A predictive parser has:

• Stack
• Input
• Parsing Table
• Output

The input buffer contains the string to be parsed, followed by $, a symbol used as a right end
marker to indicate the end of the input string.
The stack contains a sequence of grammar symbols with $ on the bottom, indicating the bottom of
the stack. Initially the stack contains the start symbol of the grammar on top of $.
Recursive-descent and LL parsers are often called predictive parsers, because they operate by
predicting the next step in a derivation.

The algorithm for the predictive parser program is as follows:
Input: A string w and a parsing table M for grammar G.
Output: If w is in L(G), a leftmost derivation of w; otherwise, an error indication.
Method: Initially, the parser has $S on the stack with S, the start symbol of G, on top, and w$ in the
input buffer. The program that utilizes the predictive parsing table M to produce a parse for the input
is:

set ip to point to the first symbol of w$;
repeat
    let X be the top stack symbol and a the symbol pointed to by ip;
    if X is a terminal or $ then
        if X = a then
            pop X from the stack and advance ip
        else error()
    else /* X is a non-terminal */
        if M[X, a] = X → Y1 Y2 … Yk then begin
            pop X from the stack;
            push Yk, Yk-1, …, Y1 onto the stack, with Y1 on top;
            output the production X → Y1 Y2 … Yk
        end
        else error()
until X = $ /* stack is empty */
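A direct transcription of this program into Python might look as follows (a sketch: the table encoding is assumed, and the table M shown is the one constructed for the expression grammar later in this unit):

```python
def predictive_parse(M, start, nonterminals, tokens):
    stack = ['$', start]                 # $ on the bottom, start symbol on top
    tokens = tokens + ['$']
    ip = 0
    while stack:
        X, a = stack[-1], tokens[ip]
        if X not in nonterminals:        # X is a terminal or $
            if X != a:
                raise SyntaxError(f"expected {X}, found {a}")
            stack.pop(); ip += 1         # pop X and advance ip
        else:
            body = M.get((X, a))
            if body is None:
                raise SyntaxError(f"error: M[{X}, {a}] is empty")
            print(X, '->', ' '.join(body) or 'eps')    # output the production
            stack.pop()
            stack.extend(reversed(body)) # push Yk ... Y1, with Y1 on top

# Table 3.1 of this unit, encoded as (nonterminal, input) -> RHS:
M = {('E', '('): ['T', "E'"], ('E', 'id'): ['T', "E'"],
     ("E'", '+'): ['+', 'T', "E'"], ("E'", ')'): [], ("E'", '$'): [],
     ('T', '('): ['F', "T'"], ('T', 'id'): ['F', "T'"],
     ("T'", '*'): ['*', 'F', "T'"], ("T'", '+'): [], ("T'", ')'): [],
     ("T'", '$'): [],
     ('F', '('): ['(', 'E', ')'], ('F', 'id'): ['id']}
predictive_parse(M, 'E', {'E', "E'", 'T', "T'", 'F'},
                 ['id', '+', 'id', '*', 'id'])
```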

FIRST and FOLLOW:

The construction of a predictive parser is aided by two functions associated with a grammar G. These
functions, FIRST and FOLLOW, allow us to fill in the entries of a predictive parsing table for G
whenever possible. Sets of tokens yielded by the FOLLOW function can also be used as synchronizing
tokens during panic-mode error recovery.
If α is any string of grammar symbols, let FIRST(α) be the set of terminals that begin the strings
derived from α. If α ⇒* ε, then ε is also in FIRST(α).

Define FOLLOW(A), for a non-terminal A, to be the set of terminals a that can appear
immediately to the right of A in some sentential form; that is, the set of terminals a such that there
exists a derivation of the form S ⇒* αAaβ for some α and β. If A can be the rightmost symbol in some
sentential form, then $ is in FOLLOW(A).

Computation of FIRST():

To compute FIRST(X) for all grammar symbols X, apply the following rules until no more
terminals or ε can be added to any FIRST set:
• If X is a terminal, then FIRST(X) is {X}.
• If X → ε is a production, then add ε to FIRST(X).
• If X is a non-terminal and X → Y1 Y2 … Yk is a production, then place a in FIRST(X)
  if for some i, a is in FIRST(Yi) and ε is in all of FIRST(Y1), …, FIRST(Yi-1); that is,
  Y1 … Yi-1 ⇒* ε. If ε is in FIRST(Yj) for all j = 1, 2, …, k, then add ε to FIRST(X).
  For example, everything in FIRST(Y1) is surely in FIRST(X). If Y1 does not derive ε,
  then we add nothing more to FIRST(X), but if Y1 ⇒* ε, then we add FIRST(Y2), and so on.

For a non-terminal A with productions A → α1 | α2 | … | αn:
FIRST(A) = FIRST(α1) ∪ FIRST(α2) ∪ … ∪ FIRST(αn)
and for a string Aα:
FIRST(Aα) = FIRST(A) if ε ∉ FIRST(A), else (FIRST(A) - {ε}) ∪ FIRST(α)

Computation of FOLLOW():

To compute FOLLOW(A) for all non-terminals A, apply the following rules until nothing can be
added to any FOLLOW set:
• Place $ in FOLLOW(S), where S is the start symbol and $ is the input right end marker.
• If there is a production A → αBβ, then everything in FIRST(β) except ε is placed in
  FOLLOW(B).
• If there is a production A → αB, or a production A → αBβ where FIRST(β) contains ε
  (i.e., β ⇒* ε), then everything in FOLLOW(A) is in FOLLOW(B).
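Both computations are fixed-point iterations, which the following sketch makes explicit (an illustration; the grammar encoding, a list of (head, body) pairs with ε as the empty tuple, is assumed):

```python
EPS = 'eps'

def first_of(seq, FIRST):
    # FIRST of a string of grammar symbols
    out = set()
    for X in seq:
        out |= FIRST[X] - {EPS}
        if EPS not in FIRST[X]:
            return out
    out.add(EPS)                      # every symbol of seq can derive epsilon
    return out

def compute_first_follow(productions, start, terminals):
    symbols = {s for h, b in productions for s in (h,) + b} | {start}
    FIRST = {s: ({s} if s in terminals else set()) for s in symbols}
    FOLLOW = {s: set() for s in symbols if s not in terminals}
    FOLLOW[start].add('$')            # rule 1 for FOLLOW
    changed = True
    while changed:                    # iterate until nothing can be added
        changed = False
        for head, body in productions:
            f = first_of(body, FIRST)
            if not f <= FIRST[head]:
                FIRST[head] |= f; changed = True
            for i, B in enumerate(body):
                if B in terminals:
                    continue
                rest = first_of(body[i + 1:], FIRST)
                add = rest - {EPS}                     # rule 2 for FOLLOW
                if EPS in rest:
                    add |= FOLLOW[head]                # rule 3 for FOLLOW
                if not add <= FOLLOW[B]:
                    FOLLOW[B] |= add; changed = True
    return FIRST, FOLLOW

prods = [('E', ('T', "E'")), ("E'", ('+', 'T', "E'")), ("E'", ()),
         ('T', ('F', "T'")), ("T'", ('*', 'F', "T'")), ("T'", ()),
         ('F', ('(', 'E', ')')), ('F', ('id',))]
FIRST, FOLLOW = compute_first_follow(prods, 'E', {'+', '*', '(', ')', 'id'})
# FIRST['E'] == {'(', 'id'},  FOLLOW['T'] == {'+', ')', '$'},  etc.
```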

Example:
Construct the FIRST and FOLLOW sets for the grammar:

A → BC | EFGH | H
B → b
C → c | ε
E → e | ε
F → CE
G → g
H → h | ε

Solution:
1. Finding the first() sets:
1. first(H) = first(h) ∪ first(ε) = {h, ε}
2. first(G) = first(g) = {g}
3. first(C) = first(c) ∪ first(ε) = {c, ε}
4. first(E) = first(e) ∪ first(ε) = {e, ε}
5. first(F) = first(CE) = (first(C) - {ε}) ∪ first(E)
            = {c} ∪ {e, ε} = {c, e, ε}
6. first(B) = first(b) = {b}
7. first(A) = first(BC) ∪ first(EFGH) ∪ first(H)
   first(BC) = first(B) = {b}
   first(EFGH) = (first(E) - {ε}) ∪ first(FGH)
               = {e} ∪ (first(F) - {ε}) ∪ first(GH)
               = {e} ∪ {c, e} ∪ {g} = {c, e, g}
   first(H) = {h, ε}
   so first(A) = {b, c, e, g, h, ε}

2. Finding the follow() sets:

1. follow(A) = {$}
2. follow(B) = (first(C) - {ε}) ∪ follow(A) = {c, $}
3. follow(G) = (first(H) - {ε}) ∪ follow(A)
             = ({h, ε} - {ε}) ∪ {$} = {h, $}
4. follow(H) = follow(A) = {$}
5. follow(F) = first(GH) - {ε} = {g}
6. follow(E) = (first(FGH) - {ε}) ∪ follow(F)
             = {c, e, g} ∪ {g} = {c, e, g}
7. follow(C) = follow(A) ∪ (first(E) - {ε}) ∪ follow(F)
             = {$} ∪ {e} ∪ {g} = {e, g, $}

Example 1:

Construct a predictive parsing table for the given grammar, or check whether the given grammar is
LL(1) or not:
E → E + T | T
T → T * F | F
F → (E) | id

Step 1:
The given grammar is left recursive, so first convert it into a non-left-recursive grammar (otherwise a
top-down parser would go into an infinite loop):
E → T E'
E' → + T E' | ε
T → F T'
T' → * F T' | ε
F → (E) | id
Step 2:
Find FIRST(X) and FOLLOW(X) for all the variables.

The variables are: {E, E', T, T', F}
Terminals are: {+, *, (, ), id} and $

Computation of FIRST() sets:

FIRST(F) = FIRST((E)) ∪ FIRST(id) = {(, id}
FIRST(T') = FIRST(*FT') ∪ FIRST(ε) = {*, ε}
FIRST(T) = FIRST(FT') = FIRST(F) = {(, id}
FIRST(E') = FIRST(+TE') ∪ FIRST(ε) = {+, ε}
FIRST(E) = FIRST(TE') = FIRST(T) = {(, id}

Computation of FOLLOW() sets:
                                                              Relevant production
FOLLOW(E) = {$} ∪ FIRST( ) ) = {$, )}                         F → (E)
FOLLOW(E') = FOLLOW(E) = {$, )}                               E → TE'
FOLLOW(T) = (FIRST(E') - {ε}) ∪ FOLLOW(E) ∪ FOLLOW(E')        E → TE', E' → +TE'
          = {+, ), $}
FOLLOW(T') = FOLLOW(T) = {+, ), $}                            T → FT'
FOLLOW(F) = (FIRST(T') - {ε}) ∪ FOLLOW(T) ∪ FOLLOW(T')        T' → *FT'
          = {*, +, ), $}

Step 3:
Construction of the parsing table:

Variables   +           *           (          )          id         $

E                                   E → TE'               E → TE'
E'          E' → +TE'                          E' → ε                E' → ε
T                                   T → FT'               T → FT'
T'          T' → ε      T' → *FT'              T' → ε                T' → ε
F                                   F → (E)               F → id

Table 3.1. Parsing Table

Fill the table with the production A → α on the basis of FIRST(α). If ε is in FIRST(α), then go to
FOLLOW(A) and fill A → ε at all those input symbols.

Let us start with the non-terminal E, FIRST(E) = {(, id}. So, place the production E → TE' at ( and id.
For the non-terminal E', FIRST(E') = {+, ε}.
So, place the production E' → +TE' at + and, as there is an ε in FIRST(E'), see
FOLLOW(E') = {$, )}. So write the production E' → ε at the places $ and ).

Similarly:

For the non-terminal T, FIRST(T) = {(, id}. So place the production T → FT' at ( and id.
For the non-terminal T', FIRST(T') = {*, ε}.
So place the production T' → *FT' at * and, as there is an ε in FIRST(T'), see
FOLLOW(T') = {+, $, )}, so write the production T' → ε at +, $ and ).

For the non-terminal F, FIRST(F) = {(, id}.
So place the production F → id at the id location and F → (E) at (, as it has two productions.

Finally, mark all undefined entries as error.
As there were no multiple entries in the table, the given grammar is LL(1).
Step 4:
Moves made by the predictive parser on the input id + id * id:

STACK        INPUT            REMARKS

$E           id + id * id $   E and id do not match; see E on id in the parse table: the
                              production is E → TE'; pop E, push E' and T, i.e., in
                              reverse order.
$E'T         id + id * id $   See T on id: the production is T → FT';
                              pop T, push T' and F; proceed until stack top and input match.
$E'T'F       id + id * id $   F → id
$E'T'id      id + id * id $   Identical; pop id and remove id from the input.
$E'T'        + id * id $      See T' on +; T' → ε, so pop T'.
$E'          + id * id $      See E' on +; E' → +TE'; push E', T and +.
$E'T+        + id * id $      Identical; pop + and remove + from the input.
$E'T         id * id $        T → FT'
$E'T'F       id * id $        F → id
$E'T'id      id * id $        Identical; pop id and remove id from the input.
$E'T'        * id $           T' → *FT'
$E'T'F*      * id $           Identical; pop * and remove * from the input.
$E'T'F       id $             F → id
$E'T'id      id $             Identical; pop id and remove id from the input.
$E'T'        $                T' → ε
$E'          $                E' → ε
$            $                Accept.

Table 3.2 Moves made by the parser on input id + id * id

The predictive parser accepts the given input string. We can notice that the $ in the input and the stack
match, i.e., both are empty, hence accepted.

LL(1) Grammar:

The first L stands for "Left-to-right scan of input". The second L stands for "Left-most derivation". The
'1' stands for "1 token of look-ahead".
No LL(1) grammar can be ambiguous or left recursive.

If there are no multiply defined entries in the predictive parsing table, the given grammar is LL(1).

If the grammar G is ambiguous or left recursive, then the parsing table will have at least one
multiply defined entry.

The weakness of LL(1) (top-down, predictive) parsing is that it must predict which production to use
from only one token of look-ahead.

Error Recovery in Predictive Parsing:

Error recovery is based on the idea of skipping symbols on the input until a token in a selected
set of synchronizing tokens appears. Its effectiveness depends on the choice of the synchronizing set.
Using the FOLLOW and FIRST symbols as synchronizing tokens works reasonably well when
expressions are parsed.

For the constructed table, fill 'synch' at the FOLLOW-set entries of each non-terminal and then fill
the rest of the empty cells with the error term.

Variables   +           *           (          )          id         $

E           error       error       E → TE'    synch      E → TE'    synch
E'          E' → +TE'   error       error      E' → ε     error      E' → ε
T           synch       error       T → FT'    synch      T → FT'    synch
T'          T' → ε      T' → *FT'   error      T' → ε     error      T' → ε
F           synch       synch       F → (E)    synch      F → id     synch

Table 3.3: Synchronizing tokens added to the parsing table of Table 3.1.

If the parser looks up an entry in the table and finds synch, then the non-terminal on top of the stack is
popped in an attempt to resume parsing. If the token on top of the stack does not match the input
symbol, then pop the token from the stack.

The moves of the parser and error recovery on the erroneous input ) id * + id are as follows:

STACK       INPUT          REMARKS
$E          ) id * + id $  Error, skip ).
$E          id * + id $    E → TE'
$E'T        id * + id $    T → FT'
$E'T'F      id * + id $    F → id
$E'T'id     id * + id $    Match id.
$E'T'       * + id $       T' → *FT'
$E'T'F*     * + id $       Match *.
$E'T'F      + id $         Error; F on + is synch; F has been popped.
$E'T'       + id $         T' → ε
$E'         + id $         E' → +TE'
$E'T+       + id $         Match +.
$E'T        id $           T → FT'
$E'T'F      id $           F → id
$E'T'id     id $           Match id.
$E'T'       $              T' → ε
$E'         $              E' → ε
$           $              Accept.

Table 3.4. Parsing and error recovery moves made by the predictive parser

Example 2:

Construct a predictive parsing table for the given grammar, or check whether the given grammar is
LL(1) or not:

S → iEtSS' | a
S' → eS | ε
E → b

Solution:
1. Computation of the first() sets:

1. first(E) = first(b) = {b}
2. first(S') = first(eS) ∪ first(ε) = {e, ε}
3. first(S) = first(iEtSS') ∪ first(a) = {i, a}

2. Computation of the follow() sets:

1. follow(S) = {$} ∪ (first(S') - {ε}) ∪ follow(S) ∪ follow(S')
            = {$} ∪ {e} = {e, $}
2. follow(S') = follow(S) = {e, $}
3. follow(E) = first(tSS') = {t}

3. The parsing table for this grammar is:

      a        b        e          i            t        $
S     S → a                        S → iEtSS'
S'                     S' → eS                           S' → ε
                       S' → ε
E              E → b

As the table has a multiply defined entry (at M[S', e]), the given grammar is not LL(1).

Example 3:

Construct the FIRST and FOLLOW sets and the predictive parse table for the grammar:

S → AC$
C → c | ε
A → aBCd | BQ | ε
B → bB | d
Q → q

Solution:
1. Finding the first() sets:
First(Q) = {q}
First(B) = {b, d}
First(C) = {c, ε}
First(A) = First(aBCd) ∪ First(BQ) ∪ First(ε)
        = {a} ∪ First(B) ∪ {ε}
        = {a} ∪ First(bB) ∪ First(d) ∪ {ε}
        = {a} ∪ {b} ∪ {d} ∪ {ε}
        = {a, b, d, ε}
First(S) = First(AC$)
        = (First(A) - {ε}) ∪ (First(C) - {ε}) ∪ First($)
        = ({a, b, d, ε} - {ε}) ∪ ({c, ε} - {ε}) ∪ {$}
        = {a, b, c, d, $}

2. Finding the follow() sets:
Follow(S) = {#}
Follow(A) = (First(C) - {ε}) ∪ First($) = ({c, ε} - {ε}) ∪ {$} = {c, $}
Follow(B) = (First(C) - {ε}) ∪ First(d) ∪ First(Q)
         = {c} ∪ {d} ∪ {q} = {c, d, q}
Follow(C) = First($) ∪ First(d) = {d, $}
Follow(Q) = Follow(A) = {c, $}

3. The parsing table for this grammar is:

      a           b          c          d          q        $
S     S → AC$     S → AC$    S → AC$    S → AC$             S → AC$
A     A → aBCd    A → BQ     A → ε      A → BQ              A → ε
B                 B → bB                B → d
C                            C → c      C → ε               C → ε
Q                                                  Q → q

4. Moves made by the predictive parser on the input abdcdc$:

Stack symbol   Input       Remarks

#S             abdcdc$#    S → AC$
#$CA           abdcdc$#    A → aBCd
#$CdCBa        abdcdc$#    Pop a.
#$CdCB         bdcdc$#     B → bB
#$CdCBb        bdcdc$#     Pop b.
#$CdCB         dcdc$#      B → d
#$CdCd         dcdc$#      Pop d.
#$CdC          cdc$#       C → c
#$Cdc          cdc$#       Pop c.
#$Cd           dc$#        Pop d.
#$C            c$#         C → c
#$c            c$#         Pop c.
#$             $#          Pop $.
#              #           Accepted.

BOTTOM UP PARSING

1. BOTTOM UP PARSING:

A bottom-up parser builds a derivation by working from the input sentence back towards the start
symbol S. The rightmost derivation, in reverse order, is produced in bottom-up parsing.

(The point of parsing is to construct a derivation. A derivation consists of a series of rewrite steps.)

S ⇒ r0 ⇒ r1 ⇒ r2 ⇒ … ⇒ rn-1 ⇒ rn = sentence

Reading the steps from rn back to S gives the bottom-up direction. Assuming the production A → β, to
reduce ri to ri-1, match some RHS β against a substring of ri, then replace β with its corresponding
LHS, A.

In terms of the parse tree, this is working from leaves to root.

Example 1:
S → if E then S else S | while E do S | print
E → true | false | id
Input: if id then while true do print else print.
Basic idea: given an input string, "reduce" it to the goal (start) symbol by looking for
substrings that match production RHSs.

Parse tree:
[Parse tree diagram omitted]

The top-down (leftmost) derivation is:

S ⇒lm if E then S else S
  ⇒lm if id then S else S
  ⇒lm if id then while E do S else S
  ⇒lm if id then while true do S else S
  ⇒lm if id then while true do print else S
  ⇒lm if id then while true do print else print

Bottom-up, the parser performs the corresponding reductions, a rightmost derivation in reverse:

if id then while true do print else print
⇒ if E then while true do print else print
⇒ if E then while E do print else print
⇒ if E then while E do S else print
⇒ if E then S else print
⇒ if E then S else S
⇒ S

Top-down vs Bottom-up parsing:

Top-down                                     Bottom-up
1. Construct tree from root to leaves        1. Construct tree from leaves to root
2. "Guess" which RHS to substitute for a     2. "Guess" which rule to "reduce"
   non-terminal                                 terminals by
3. Produces left-most derivation             3. Produces reverse right-most derivation
4. Recursive descent, LL parsers             4. Shift-reduce, LR, LALR, etc.
5. Easy for humans                           5. "Harder" for humans

• Bottom-up can parse a larger set of languages than top-down.
• Both work for most (but not all) features of most computer languages.
 Both work for most (but not all) features of most computer languages.
Example 2:
S → aAcBe            Input: abbcde
A → Ab | b
B → d

Rightmost derivation:
S ⇒ aAcBe
  ⇒ aAcde
  ⇒ aAbcde
  ⇒ abbcde

Bottom-up approach:
"Right sentential form"     Reduction
abbcde
aAbcde                      A → b
aAcde                       A → Ab
aAcBe                       B → d
S                           S → aAcBe

The steps correspond to a rightmost derivation in reverse.
(We must choose the RHS wisely.)


Example 3:
S → aABe
A → Abc | b
B → d
Input: abbcde

Rightmost derivation:
S ⇒ aABe
  ⇒ aAde          since B → d
  ⇒ aAbcde        since A → Abc
  ⇒ abbcde        since A → b

Parsing using the bottom-up approach:

Input           Production used
abbcde
aAbcde          A → b
aAde            A → Abc
aABe            B → d
S               S → aABe

Parsing is completed as we reach the start symbol; hence the input string is acceptable.


Example 4:
E → E + E
E → E * E
E → (E)
E → id
Input: id1 + id2 * id3

Rightmost derivation:
E ⇒ E + E
  ⇒ E + E * E
  ⇒ E + E * id3
  ⇒ E + id2 * id3
  ⇒ id1 + id2 * id3

Parsing using the bottom-up approach (go from left to right):

id1 + id2 * id3
E + id2 * id3       E → id
E + E * id3         E → id
E * id3             E → E + E
E * E               E → id
E                   E → E * E

E = start symbol; hence acceptable.

2. HANDLES:

Always making progress by replacing a substring with the LHS of a matching production will not
necessarily lead to the goal/start symbol.

For example:

abbcde
aAbcde          A → b
aAAcde          A → b

stuck

Informally, a handle of a string is a substring that matches the right side of a production, and whose
reduction to the non-terminal on the left side of the production represents one step along the reverse
of a rightmost derivation.

If the grammar is unambiguous, every right-sentential form has exactly one handle.

More formally, a handle is a production A → β and a position in the current right-sentential form
αβw such that:

S ⇒*rm αAw ⇒rm αβw

For the example grammar above, if the current right-sentential form is

aAbcde

then the handle is A → Ab at the marked position (the substring Ab after the leading a). Note that w,
the part to the right of the handle, never contains non-terminals.

HANDLE PRUNING:

Keep removing handles, replacing them with the corresponding LHS of the production, until we reach S.

Example:

E → E+E | E*E | (E) | id

Right-sentential form    Handle    Reducing production

a+b*c                    a         E → id
E+b*c                    b         E → id
E+E*c                    c         E → id
E+E*E                    E*E       E → E*E
E+E                      E+E       E → E+E

The grammar is ambiguous, so there are actually two handles at the next-to-last step. We can use
parser-generators that compute the handles for us.

3. SHIFT-REDUCE PARSING:

Shift-reduce parsing uses a stack to hold grammar symbols and an input buffer to hold the string to be
parsed. Handles always appear at the top of the stack, i.e., there is no need to look deeper into the stack.
A shift-reduce parser has just four actions:
1. Shift: the next input symbol is shifted onto the stack, until a handle is formed.
2. Reduce: the right end of the handle is at the top of the stack; locate the left end of the handle
within the stack, pop the handle off the stack and push the appropriate LHS.
3. Accept: stop parsing on successful completion of the parse and report success.
4. Error: call an error reporting/recovery routine.

Possible Conflicts:
Ambiguous grammars lead to parsing conflicts.

1. Shift-reduce: both a shift action and a reduce action are possible in the same state (should we
shift or reduce?).
Example: the dangling-else problem.

2. Reduce-reduce: two or more distinct reduce actions are possible in the same state (which
production should we reduce with?).

Example:
stmt → id (param)         (a(i) is a procedure call)
param → id
expr → id (expr) | id     (a(i) is an array subscript)

Stack        Input buffer     Action
$ … a        (i) … $          Reduce by ?

Should we reduce to param or to expr? We need to know the type of a: is it an array or a function? This
information must flow from the declaration of a to this use, typically via a symbol table.

Shift-reduce parsing example (stack implementation):

Grammar: E → E+E | E*E | (E) | id        Input: id1 + id2 * id3

One scheme to implement a handle-pruning, bottom-up parser is called a shift-reduce parser. Shift-
reduce parsers use a stack and an input buffer.
The sequence of steps is as follows:
1. Initialize the stack with $.
2. Repeat until the top of the stack is the goal symbol and the input token is the end marker:
   a. Find the handle.
      If we don't have a handle on top of the stack, shift an input symbol onto the stack.
   b. Prune the handle.
      If we have a handle A → β on the stack, reduce:
      (i) pop |β| symbols off the stack; (ii) push A onto the stack.

Stack        Input           Action

$            id1+id2*id3$    Shift
$id1         +id2*id3$       Reduce by E → id
$E           +id2*id3$       Shift
$E+          id2*id3$        Shift
$E+id2       *id3$           Reduce by E → id
$E+E         *id3$           Shift
$E+E*        id3$            Shift
$E+E*id3     $               Reduce by E → id
$E+E*E       $               Reduce by E → E*E
$E+E         $               Reduce by E → E+E
$E           $               Accept
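The loop behind this trace can be sketched as follows (an illustration only: a real shift-reduce parser decides between shifting and reducing from a table, whereas here the single decision this ambiguous grammar needs, postponing the E+E reduction while * is the next input, is hard-coded):

```python
RULES = [(('E', '+', 'E'), 'E'), (('E', '*', 'E'), 'E'),
         (('(', 'E', ')'), 'E'), (('id',), 'E')]

def shift_reduce(tokens):
    stack, buf = ['$'], tokens + ['$']
    while True:
        for body, head in RULES:                 # try to find a handle on top
            if tuple(stack[-len(body):]) == body:
                if body == ('E', '+', 'E') and buf[0] == '*':
                    continue                     # * binds tighter: shift first
                print('reduce by', head, '->', ' '.join(body))
                del stack[-len(body):]           # pop the handle ...
                stack.append(head)               # ... and push the LHS
                break
        else:                                    # no handle on top of stack
            if buf[0] == '$':
                break
            print('shift', buf[0])
            stack.append(buf.pop(0))
    return stack == ['$', 'E'] and buf == ['$']

print(shift_reduce(['id', '+', 'id', '*', 'id']))   # True: accepted
```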

Example 2:

Goal → Expr
Expr → Expr + Term | Expr - Term | Term
Term → Term * Factor | Term / Factor | Factor
Factor → number | id | (Expr)

The expression to parse: x - 2 * y (tokens: id - num * id)

Stack                    Input           Action

$                        id - num * id   Shift
$ id                     - num * id      Reduce Factor → id
$ Factor                 - num * id      Reduce Term → Factor
$ Term                   - num * id      Reduce Expr → Term
$ Expr                   - num * id      Shift
$ Expr -                 num * id        Shift
$ Expr - num             * id            Reduce Factor → num
$ Expr - Factor          * id            Reduce Term → Factor
$ Expr - Term            * id            Shift
$ Expr - Term *          id              Shift
$ Expr - Term * id                       Reduce Factor → id
$ Expr - Term * Factor                   Reduce Term → Term * Factor
$ Expr - Term                            Reduce Expr → Expr - Term
$ Expr                                   Reduce Goal → Expr
$ Goal                                   Accept

Procedure:

1. Shift until the top of the stack is the right end of a handle.
2. Find the left end of the handle and reduce.
* Dangling-else problem:

stmt → if expr then stmt | if expr then stmt else stmt | other

The example string: if E1 then if E2 then S1 else S2
has two parse trees (ambiguity), and so this grammar is not of LR(k) type:

1. [Parse tree with the else attached to the inner if: if E1 then (if E2 then S1 else S2)]
2. [Parse tree with the else attached to the outer if: if E1 then (if E2 then S1) else S2]
4. OPERATOR-PRECEDENCE PARSING:

Precedence / operator grammar: a grammar with the properties:
1. No production right side contains ε.
2. No production right side contains two adjacent non-terminals.
is called an operator grammar.

Operator-precedence parsing uses three disjoint precedence relations, <., = and .>, between certain pairs
of terminals. These precedence relations guide the selection of handles and have the following
meanings:

RELATION    MEANING
a <. b      'a' yields precedence to 'b'.
a = b       'a' has the same precedence as 'b'.
a .> b      'a' takes precedence over 'b'.

Operator-precedence parsing has a number of disadvantages:

1. It is hard to handle tokens like the minus sign, which has two different precedences.
2. Only a small class of grammars can be parsed.
3. The relationship between a grammar for the language being parsed and the operator-precedence
parser itself is tenuous; one cannot always be sure the parser accepts exactly the desired language.
In short:
1. L(G) ≠ L(parser)
2. error detection is weak
3. usage is limited
4. but they are easy to analyse manually

Example:
Grammar: E → EAE | (E) | -E | id
         A → + | - | * | /
Input string: id+id*id
The operator-precedence relations are:

        id      +       *       $
id              .>      .>      .>
+       <.      .>      <.      .>
*       <.      .>      .>      .>
$       <.      <.      <.
Solution: This is not an operator grammar, so first reduce it to operator grammar form by
eliminating the adjacent non-terminals.
The operator grammar is:
E → E+E | E-E | E*E | E/E | (E) | -E | id
The input string with the precedence relations inserted is:
$ <. id .> + <. id .> * <. id .> $
Scan the string from the left end until the first .> is encountered:
$ <. id .> + <. id .> * <. id .> $
This occurs between the first id and +.

Scan backwards (to the left) over any ='s until a <. is encountered. We scan backwards to $.

Everything to the left of the first .> and to the right of the <. is called the handle. Here, the handle is
the first id.
Then reduce id to E. At this point we have: E+id*id

By repeating the process and proceeding in the same way: $ + <. id .> * <. id .> $

substitute E → id.
After reducing the other id to E by the same process, we obtain the right-sentential form

E+E*E
Now, the input string after deleting the non-terminals is:
$ + * $

Inserting the precedence relations, we get: $ <. + <. * .> $

The left end of the handle lies between + and * and the right end between * and $. This indicates that,
in the right-sentential form E+E*E, the handle is E*E.
Reducing by E → E*E, we get:

E+E

Now the input string is: $ + $

Again inserting the precedence relations, we get:
$ <. + .> $

Reducing by E → E+E, we get:
$ $
and finally we are left with:

E
Hence accepted.

Input string    Precedence relations inserted        Action
id+id*id        $ <. id .> + <. id .> * <. id .> $
E+id*id         $ + <. id .> * <. id .> $            E → id
E+E*id          $ + * <. id .> $                     E → id
E+E*E           $ + * $
E+E*E           $ <. + <. * .> $                     E → E*E
E+E             $ <. + $
E+E             $ <. + .> $                          E → E+E
E               $ $                                  Accepted
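The whole procedure can be condensed into a small loop, sketched below (illustrative only; non-terminals are skipped when choosing the relation, exactly as in the worked example, and the relation table is the one given above for id, +, * and $):

```python
PREC = {('id', '+'): '>', ('id', '*'): '>', ('id', '$'): '>',
        ('+', 'id'): '<', ('+', '+'): '>', ('+', '*'): '<', ('+', '$'): '>',
        ('*', 'id'): '<', ('*', '+'): '>', ('*', '*'): '>', ('*', '$'): '>',
        ('$', 'id'): '<', ('$', '+'): '<', ('$', '*'): '<'}

def operator_precedence_parse(tokens):
    stack, buf = ['$'], tokens + ['$']
    while not (stack == ['$', 'E'] and buf == ['$']):
        top = next(s for s in reversed(stack) if s != 'E')  # topmost terminal
        rel = PREC.get((top, buf[0]))
        if rel in ('<', '='):
            stack.append(buf.pop(0))                        # shift
        elif rel == '>':                                    # reduce a handle
            popped = [stack.pop()]
            while popped[-1] == 'E' or PREC.get(
                    (next(s for s in reversed(stack) if s != 'E'),
                     popped[-1])) != '<':
                popped.append(stack.pop())
            if stack[-1] == 'E':         # left operand belongs to the handle
                popped.append(stack.pop())
            print('reduce', ' '.join(reversed(popped)), '-> E')
            stack.append('E')
        else:
            raise SyntaxError(f'no relation between {top} and {buf[0]}')
    return True

operator_precedence_parse(['id', '+', 'id', '*', 'id'])  # prints 5 reductions
```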

5. LR PARSING INTRODUCTION:
The "L" is for left-to-right scanning of the input and the "R" is for constructing a
rightmost derivation in reverse.

WHY LR PARSING:
1. LR parsers can be constructed to recognize virtually all programming-language
constructs for which context-free grammars can be written.

2. The LR parsing method is the most general non-backtracking shift-reduce parsing
method known, yet it can be implemented as efficiently as other shift-reduce methods.

3. The class of grammars that can be parsed using LR methods is a proper superset of the
class of grammars that can be parsed with predictive parsers.
4. An LR parser can detect a syntactic error as soon as it is possible to do so on a left-to-
right scan of the input.

The disadvantage is that it takes too much work to construct an LR parser by hand for a typical
programming-language grammar. But there are lots of LR parser generators available to make this
task easy.

LR PARSERS:

LR(k) parsers are the most general non-backtracking shift-reduce parsers. Two cases of interest are
k = 0 and k = 1; LR(1) is of practical relevance.

'L' stands for "Left-to-right" scan of the input.
'R' stands for "Rightmost derivation (in reverse)".
'k' stands for the number of input symbols of look-ahead that are used in making parsing decisions.
When (k) is omitted, k is assumed to be 1.
LR(1) parsers are table-driven, shift-reduce parsers that use a limited right context (1 token) for
handle recognition.
LR(1) parsers recognize languages that have an LR(1) grammar. A grammar is LR(1) if, given a
rightmost derivation

S ⇒ r0 ⇒ r1 ⇒ r2 ⇒ … ⇒ rn-1 ⇒ rn = sentence,

we can isolate the handle of each right-sentential form ri and determine the production by which to
reduce, by scanning ri from left to right, going at most 1 symbol beyond the right end of the handle of
ri.
The parser accepts the input when the stack contains only the start symbol and no remaining input
symbols are left.

LR(0) item: (no look-ahead)
A grammar rule combined with a dot that indicates a position in its RHS.

Ex 1: S' → .S$      S → .x      S → .(L)

Ex 2: A → XYZ generates four LR(0) items:

A → .XYZ
A → X.YZ
A → XY.Z
A → XYZ.

The '.' indicates how much of an item we have seen at a given state in the parse:
A → .XYZ indicates that the parser is looking for a string that can be derived from XYZ.
A → XY.Z indicates that the parser has seen a string derived from XY and is looking for one
derivable from Z.

• LR(0) items play a key role in the SLR(1) table construction algorithm.
• LR(1) items play a key role in the LR(1) and LALR(1) table construction algorithms.

LR parsers have more information available than LL parsers when choosing a production:
* LR knows everything derived from the RHS plus 'k' look-ahead symbols.
* LL just knows 'k' look-ahead symbols into what is derived from the RHS.

Deterministic context-free languages:
[Figure: nested language classes. The LR(1) languages contain both the precedence languages and the LL languages.]

LR PARSING ALGORITHM:
The schematic form of an LR parser is shown below:

[Figure: LR parser schematic. Input buffer a1 … ai … an; a stack of states; the LR parsing program; a parsing table with action and goto parts; the output.]

It consists of an input, an output, a stack, a driver program, and a parsing table that has two parts:
action and goto.
The LR parser program determines Sm, the current state on the top of the stack, and ai, the current
input symbol. It then consults action[Sm, ai], which can have one of four values:
1. shift S, where S is a state,
2. reduce by a grammar production A → β,
3. accept, and
4. error.
The function goto takes a state and a grammar symbol as arguments and produces a state. The
goto function of a parsing table constructed from a grammar G using the SLR, canonical LR or LALR
method is the transition function of a DFA that recognizes the viable prefixes of G. (Viable prefixes of
G are those prefixes of right-sentential forms that can appear on the stack of a shift-reduce parser,
because they do not extend past the rightmost handle.)

5.6 AUGMENTED GRAMMAR:

If G is a grammar with start symbol S, then G', the augmented grammar for G, is G with a new start
symbol S' and production S' → S.
The purpose of this new starting production is to indicate to the parser when it should stop
parsing and announce acceptance of the input; i.e., acceptance occurs when and only when the parser is
about to reduce by S' → S.

CONSTRUCTION OF SLR PARSING TABLE:

Example:
The given grammar is:
1. E → E + T
2. E → T
3. T → T * F
4. T → F
5. F → (E)
6. F → id

Step I: The augmented grammar is:

E' → E
E → E + T
E → T
T → T * F
T → F
F → (E)
F → id

Step II: The collection of LR(0) items is:

I0: E' → .E
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id

Start with the start symbol; since the dot precedes E, write all productions of E, then all productions
of T, then all productions of F.

States have successor states formed by advancing the dot over the symbol it precedes. For state I0
there are successor states reached by advancing the dot over the symbols E, T, F, ( or id.

goto(I0, E):
I1: E' → E.        reduced item (RI)
    E → E.+T

goto(I0, T):
I2: E → T.         reduced item (RI)
    T → T.*F

goto(I0, F):
I3: T → F.         reduced item (RI)

goto(I0, ( ):
I4: F → (.E)
    E → .E+T
    E → .T
    T → .T*F
    T → .F
    F → .(E)
    F → .id

If the dot precedes a non-terminal, start writing its corresponding productions: here first E, then T,
then F.

goto(I0, id):
I5: F → id.        reduced item

The E-successor I1 contains two items derived from I0, and the closure operation adds no more
(since no dot in I1 precedes a non-terminal).

goto(I1, +):
I6: E → E+.T       start writing T productions
    T → .T*F
    T → .F         start writing F productions
    F → .(E)
    F → .id

goto(I2, *):
I7: T → T*.F       start writing F productions
    F → .(E)
    F → .id

goto(I4, E):
I8: F → (E.)
    E → E.+T

goto(I4, T) = I2, goto(I4, F) = I3, goto(I4, ( ) = I4, goto(I4, id) = I5   (same as before)

goto(I6, T):
I9: E → E+T.       reduced item
    T → T.*F

goto(I6, F) = I3, goto(I6, ( ) = I4, goto(I6, id) = I5

goto(I7, F):
I10: T → T*F.      reduced item

goto(I7, ( ) = I4, goto(I7, id) = I5

goto(I8, )):
I11: F → (E).      reduced item

goto(I8, +) = I6

goto(I9, *) = I7
Step IV: Construction of the parse table:

The construction proceeds according to the table-construction algorithm:
s<n> means shift and go to state n,
r<k> means reduce by production number k.

Initially E' → E. is in I1, so i = 1.
Set action[i, $] to accept, i.e., action[1, $] = acc.

            Action                                      Goto
State   id      +       *       (       )       $       E   T   F
0       s5                      s4                      1   2   3
1               s6                              acc
2               r2      s7              r2      r2
3               r4      r4              r4      r4
4       s5                      s4                      8   2   3
5               r6      r6              r6      r6
6       s5                      s4                          9   3
7       s5                      s4                              10
8               s6                      s11
9               r1      s7              r1      r1
10              r3      r3              r3      r3
11              r5      r5              r5      r5

As there are no multiply defined entries, the grammar is SLR(1).

Step III: Finding the FOLLOW() sets for all non-terminals:

                                                    Relevant productions
FOLLOW(E) = {$} ∪ FIRST(+T) ∪ FIRST( ) )            E' → E, E → E+T,
          = {+, ), $}                               F → (E)
FOLLOW(T) = FOLLOW(E) ∪ FIRST(*F)                   E → T, T → T*F,
          = {+, *, ), $}                            E → E+T
FOLLOW(F) = FOLLOW(T)                               T → F, T → T*F
          = {+, *, ), $}
Step V:
1. Consider I0:
1. The item F → .(E) gives rise to goto(I0, () = I4, so action[0, (] = shift 4.
2. The item F → .id gives rise to goto(I0, id) = I5, so action[0, id] = shift 5.

The other items in I0 yield no actions.
goto(I0, E) = I1, so goto[0, E] = 1
goto(I0, T) = I2, so goto[0, T] = 2
goto(I0, F) = I3, so goto[0, F] = 3

2. Consider I1:
1. The item E' → E. is the reduced item with i = 1. This gives rise to action[1, $] = accept.
2. The item E → E.+T gives rise to goto(I1, +) = I6, so action[1, +] = shift 6.

3. Consider I2:
1. The item E → T. is the reduced item, so take FOLLOW(E) = {+, ), $}.
   The first symbol, +, makes action[2, +] = reduce E → T; E → T is production
   rule no. 2, so action[2, +] = reduce 2.
   The second symbol makes action[2, )] = reduce 2; the third symbol, $, makes
   action[2, $] = reduce 2.
2. The item T → T.*F gives rise to goto(I2, *) = I7, so action[2, *] = shift 7.

4. Consider I3:
1. T → F. is the reduced item, so take FOLLOW(T) = {+, *, ), $}.
   So make action[3, +] = reduce 4
   action[3, *] = reduce 4
   action[3, )] = reduce 4
   action[3, $] = reduce 4

In forming item sets, a closure operation must be performed to ensure that whenever the dot in
an item of a set precedes a non-terminal, say E, the initial items are included in the set for all
productions with E on the left-hand side.
The first item set is formed by taking the initial item for the start state and then performing the
closure operation, giving the item set.
We construct the action and goto tables as follows:

1. If there is a transition from state i to state j under the terminal symbol k, then set
action[i, k] to sj.
2. If there is a transition under a non-terminal symbol A, say from state i to state j,
set goto[i, A] to j.
3. If state i contains a transition under $, set action[i, $] to accept.

4. If there is a reduce transition #p from state i, set action[i, k] to reduce #p for all
terminals k belonging to FOLLOW(A), where A is the subject of production #p.
If any entry is multiply defined, then the grammar is not SLR(1). Blank entries are represented by
a dash (-).

5. Consider I4 items:
The item F → .id gives rise to goto(I4, id) = I5, so action[4, id] = shift 5.
The item F → .(E) gives action[4, (] = shift 4.
goto(I4, F) = I3, so goto[4, F] = 3
goto(I4, T) = I2, so goto[4, T] = 2
goto(I4, E) = I8, so goto[4, E] = 8

6. Consider I5 items:
F → id. is the reduced item, so take FOLLOW(F) = {+, *, ), $}.

F → id is rule no. 6, so reduce 6:
action[5, +] = reduce 6
action[5, *] = reduce 6
action[5, )] = reduce 6
action[5, $] = reduce 6

7. Consider I6 items:
goto(I6, T) = I9, so goto[6, T] = 9
goto(I6, F) = I3, so goto[6, F] = 3
goto(I6, () = I4, so action[6, (] = shift 4
goto(I6, id) = I5, so action[6, id] = shift 5

8. Consider I7 items:
1. goto(I7, F) = I10, so goto[7, F] = 10
2. goto(I7, () = I4, so action[7, (] = shift 4
3. goto(I7, id) = I5, so action[7, id] = shift 5

9. Consider I8 items:
1. goto(I8, )) = I11, so action[8, )] = shift 11
2. goto(I8, +) = I6, so action[8, +] = shift 6

10. Consider I9 items:

1. E → E+T. is the reduced item, so take FOLLOW(E) = {+, ), $}.
   E → E+T is production no. 1, so
   action[9, +] = reduce 1
   action[9, )] = reduce 1
   action[9, $] = reduce 1

2. goto(I9, *) = I7, so action[9, *] = shift 7.
11. Consider I10 items:
1. T → T*F. is the reduced item, so take FOLLOW(T) = {+, *, ), $}.
   T → T*F is production no. 3, so
   action[10, +] = reduce 3
   action[10, *] = reduce 3
   action[10, )] = reduce 3
   action[10, $] = reduce 3

12. Consider I11 items:

1. F → (E). is the reduced item, so take FOLLOW(F) = {+, *, ), $}.
   F → (E) is production no. 5, so
   action[11, +] = reduce 5
   action[11, *] = reduce 5
   action[11, )] = reduce 5
   action[11, $] = reduce 5

Step VI: Moves of the LR parser on id*id+id:

STACK                INPUT        ACTION
1. 0                 id*id+id$    shift S5
2. 0 id 5            *id+id$      see state 5 on *: reduce by F → id.
                                  For A → β, pop 2*|β| symbols: 2*1 = 2 symbols.
                                  Popping 2 symbols off the stack exposes state 0 under F.
                                  Since goto of state 0 on F is 3, F and 3 are pushed
                                  onto the stack.
3. 0 F 3             *id+id$      reduce by T → F: pop 2 symbols, push T. Since goto
                                  of state 0 on T is 2, T and 2 are pushed onto the stack.
4. 0 T 2             *id+id$      shift S7
5. 0 T 2 * 7         id+id$      shift S5
6. 0 T 2 * 7 id 5    +id$        reduce by r6, i.e. F → id: pop 2 symbols, push F;
                                  see state 7 on F, it is 10.
7. 0 T 2 * 7 F 10    +id$        reduce by r3, i.e. T → T*F: pop 6 symbols, push T;
                                  see state 0 on T, it is 2; push 2 on the stack.
8. 0 T 2             +id$        reduce by r2, i.e. E → T: pop 2 symbols, push E;
                                  see state 0 on E, it is 1; push 1 on the stack.
9. 0 E 1             +id$        shift S6
10. 0 E 1 + 6        id$         shift S5
11. 0 E 1 + 6 id 5   $           reduce by r6, i.e. F → id: pop 2 symbols, push F;
                                  see state 6 on F, it is 3; push 3.
12. 0 E 1 + 6 F 3    $           reduce by r4, i.e. T → F: pop 2 symbols, push T;
                                  see state 6 on T, it is 9; push 9.
13. 0 E 1 + 6 T 9    $           reduce by r1, i.e. E → E+T: pop 6 symbols, push E;
                                  see state 0 on E, it is 1; push 1.
14. 0 E 1            $           Accept

The LR parsing procedure:

The parsing algorithm used for all LR methods uses a stack that contains, alternately, state
numbers and symbols from the grammar, and a list of input terminal symbols terminated by $. For
example:

a A b B c C d D e E f / uvwxyz$

where a … f are state numbers, A … E are grammar symbols (either terminals or non-terminals), and
u … z are the terminal symbols of the text still to be parsed. The parsing algorithm starts in the initial
state with the configuration:

0 / whole program up to $

Repeatedly apply the following rules until either a syntactic error is found or the parse is complete.
(i) If action[f, u] = si, then transform
    a A b B c C d D e E f / uvwxyz$
    to
    a A b B c C d D e E f u i / vwxyz$
    This is called a SHIFT transition.
(ii) If action[f, u] = #p, and production #p is of length 3, say, then it will be of the form
    P → CDE, where CDE exactly matches the top three symbols on the stack and P is some
    non-terminal. Then, assuming goto[c, P] = g,
    a A b B c C d D e E f / uvwxyz$ will transform to
    a A b B c P g / uvwxyz$
    The symbols on the stack corresponding to the right-hand side of the production have been
    replaced by the subject of the production, and a new state has been chosen using the goto table.
    This is called a REDUCE transition.
(iii) If action[f, u] = accept, parsing is completed.
(iv) If action[f, u] = -, then the text being parsed is syntactically incorrect.

The canonical LR(0) collection for a grammar can be constructed from the augmented grammar and
two functions, closure and goto.
The closure operation:
If I is a set of items for a grammar G, then closure(I) is the set of items constructed from I by the
two rules:
i) Initially, every item in I is added to closure(I).
ii) If A → α.Bβ is in closure(I) and B → γ is a production, then add the item B → .γ to
   closure(I), if it is not already there. Apply this rule until no more new items can be added.
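A sketch of closure and of the goto function it supports, for LR(0) items (an illustrative encoding: an item is a (head, body, dot) triple, the grammar is the expression grammar from the SLR example above):

```python
GRAMMAR = [("E'", ('E',)), ('E', ('E', '+', 'T')), ('E', ('T',)),
           ('T', ('T', '*', 'F')), ('T', ('F',)),
           ('F', ('(', 'E', ')')), ('F', ('id',))]
NONTERMINALS = {h for h, _ in GRAMMAR}

def closure(items):
    out = set(items)                      # rule (i)
    changed = True
    while changed:
        changed = False
        for head, body, dot in list(out):
            if dot < len(body) and body[dot] in NONTERMINALS:
                for h, b in GRAMMAR:      # rule (ii): add B -> .gamma
                    if h == body[dot] and (h, b, 0) not in out:
                        out.add((h, b, 0)); changed = True
    return frozenset(out)

def goto(items, X):
    # advance the dot over X in every item of the form A -> alpha . X beta
    return closure({(h, b, d + 1) for h, b, d in items
                    if d < len(b) and b[d] == X})

I0 = closure({("E'", ('E',), 0)})         # the 7 items of state I0 above
I1 = goto(I0, 'E')                        # E' -> E.  and  E -> E.+T
```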

CANONICAL LR PARSING:
Example:

S → CC
C → cC | d

1. Number the grammar productions:
1. S → CC
2. C → cC
3. C → d

2. The augmented grammar is:

S' → S
S → CC
C → cC
C → d

Constructing the sets of LR(1) items:

We begin with:

S' → .S, $      (begin with look-a-head (LAH) $)

We match the item [S' → .S, $] with the term [A → α.Bβ, a] in the procedure closure, i.e.,

A = S', α = ε, B = S, β = ε, a = $.

Function closure tells us to add [B → .γ, b] for each production B → γ and terminal b in FIRST(βa).
Now γ must be CC (from S → CC), and since β is ε and a is $, b may only be $. Thus we add:

S → .CC, $

We continue to compute the closure by adding all items [C → .γ, b] for b in FIRST(C$), i.e., matching
[S → .CC, $] against [A → α.Bβ, a] we have A = S, α = ε, B = C, β = C and a = $.
FIRST(C$) = FIRST(C) = {c, d}. We add the items:

C → .cC, c
C → .cC, d
C → .d, c
C → .d, d

None of the new items has a non-terminal immediately to the right of the dot, so we have completed
our first set of LR(1) items. The initial I0 items are:

I0: S' → .S, $
    S → .CC, $
    C → .cC, c/d
    C → .d, c/d

Now we start computing goto(I0, X) for the various symbols X:

goto(I0, S):
I1: S' → S., $            reduced item.

goto(I0, C):
I2: S → C.C, $
    C → .cC, $
    C → .d, $

goto(I0, c):
I3: C → c.C, c/d
    C → .cC, c/d
    C → .d, c/d

goto(I0, d):
I4: C → d., c/d           reduced item.

goto(I2, C):
I5: S → CC., $            reduced item.

goto(I2, c):
I6: C → c.C, $
    C → .cC, $
    C → .d, $

goto(I2, d):
I7: C → d., $             reduced item.

goto(I3, C):
I8: C → cC., c/d          reduced item.

goto(I3, c) = I3, goto(I3, d) = I4   (same sets as before)

goto(I6, C):
I9: C → cC., $            reduced item.

goto(I6, c) = I6, goto(I6, d) = I7   (same sets as before)

All states are now complete, so we construct the canonical LR(1) parsing table.
Here there is no need to find the FOLLOW() sets, as we have already carried a look-a-head with each
item while constructing the states.

Constructing the LR(1) parsing table:

            Action              Goto
State   c       d       $       S   C
0       s3      s4              1   2
1                       acc
2       s6      s7              5
3       s3      s4                  8
4       r3      r3
5                       r1
6       s6      s7                  9
7                       r3
8       r2      r2
9                       r2

1. Consider I0 items:

The item S' → .S, $ gives rise to goto(I0, S) = I1, so goto[0, S] = 1.
The item S → .CC, $ gives rise to goto(I0, C) = I2, so goto[0, C] = 2.
The item C → .cC, c/d gives rise to goto(I0, c) = I3, so action[0, c] = shift 3.
The item C → .d, c/d gives rise to goto(I0, d) = I4, so action[0, d] = shift 4.

2. Consider I1 items:

The item S' → S., $ is in I1, so set action[1, $] = accept.

3. Consider I2 items:
The item S → C.C, $ gives rise to goto(I2, C) = I5, so goto[2, C] = 5.
The item C → .cC, $ gives rise to goto(I2, c) = I6, so action[2, c] = shift 6.
The item C → .d, $ gives rise to goto(I2, d) = I7, so action[2, d] = shift 7.

4. Consider I3 items:
The item C → c.C, c/d gives rise to goto(I3, C) = I8, so goto[3, C] = 8.
The item C → .cC, c/d gives rise to goto(I3, c) = I3, so action[3, c] = shift 3.
The item C → .d, c/d gives rise to goto(I3, d) = I4, so action[3, d] = shift 4.

5. Consider I4 items:

The item C → d., c/d is the reduced item; it is in I4, so set action[4, c/d] to reduce C → d
(production rule no. 3).

6. Consider I5 items:

The item S → CC., $ is the reduced item; it is in I5, so set action[5, $] to reduce S → CC
(production rule no. 1).

7. Consider I6 items:

The item C → c.C, $ gives rise to goto(I6, C) = I9, so goto[6, C] = 9.
The item C → .cC, $ gives rise to goto(I6, c) = I6, so action[6, c] = shift 6.
The item C → .d, $ gives rise to goto(I6, d) = I7, so action[6, d] = shift 7.

8. Consider I7 items:
The item C → d., $ is the reduced item; it is in I7,
so set action[7, $] to reduce C → d (production no. 3).

9. Consider I8 items:

The item C → cC., c/d is the reduced item; it is in I8, so set action[8, c/d] to reduce C → cC
(production rule no. 2).

10. Consider I9 items:

The item C → cC., $ is the reduced item; it is in I9, so set action[9, $] to reduce C → cC
(production rule no. 2).

If the parsing action table has no multiply defined entries, then the given grammar is called an
LR(1) grammar.

LALR PARSING:

Example:

1. Construct C = {I0, I1, …, In}, the collection of sets of LR(1) items.

2. For each core present among the sets of LR(1) items, find all sets having that core, and
replace these sets by their union (club them into a single item set).

I0: same as previous
I1: same as previous
I2: same as previous
I36: clubbing items I3 and I6 into one item set I36:

C → c.C, c/d/$
C → .cC, c/d/$
C → .d, c/d/$

I5: same as previous
I47: C → d., c/d/$
I89: C → cC., c/d/$

LALR parsing table construction:

            Action              Goto
State   c       d       $       S   C
0       s36     s47             1   2
1                       acc
2       s36     s47             5
36      s36     s47                 89
47      r3      r3      r3
5                       r1
89      r2      r2      r2

SEMANTIC ANALYSIS

Intermediate Code Generation

1. Intermediate code forms:

An intermediate code form of a source program is an internal form of the program created by the
compiler while translating the program from a high-level language to assembly code (or) object code
(machine code). An intermediate source form represents a more attractive form of target code than
does assembly. An optimizing compiler performs optimizations on the intermediate source form and
produces an object module.

Analysis + synthesis = translation

[Figure: the front end creates an intermediate code form, from which the back end generates target code.]

In the analysis-synthesis model of a compiler, the front end translates a source program
into an intermediate representation from which the back end generates target code. In many
compilers the source code is translated into a language which is intermediate in complexity between
a HLL and machine code. The usual intermediate code introduces symbols to stand for various
temporary quantities.

[Figure: position of the intermediate code generator. Parser → static checker → intermediate code generator → code generator.]

We assume that the source program has already been parsed and statically checked. The various
intermediate code forms are:

a) Polish notation
b) Abstract syntax trees (or) syntax trees
c) Quadruples
d) Triples            (c, d and e are representations of three-address code)
e) Indirect triples
f) Abstract machine code (or) pseudocode

a. Postfix notation:

75
The ordinary (infix) way of writing the sum of a and b is with the operator in the middle: a+b. the
postfix (or postfix polish)notation for the same expression places the operator at the right end, as ab+.

In general, if e1 and e2 are any postfix expressions, and Ø to the values denoted by e1 and e2 is
indicated in postfix notation nby e1e2Ø.no parentheses are needed in postfix notation because the
position and priority (number of arguments) of the operators permits only one way to decode a postfix
expression.

Example:

1. (a+b)*c in postfix notation is ab+c*,since ab+ represents the infix expression(a+b).

2. a*(b+c)is abc+* in postfix.

3. (a+b)*(c+d) is ab+cd+* in postfix.

Postfix notation can be generalized to k-ary operators for any k >= 1. If a k-ary operator Ø is applied to postfix expressions e1, e2, ..., ek, then the result is denoted by e1e2...ekØ. If we know the arity of each operator, then we can uniquely decipher any postfix expression by scanning it from either end.

Example:

Consider the postfix string ab+c*.

The rightmost * says that there are two arguments to its left. Since the next-to-rightmost symbol is c, a simple operand, we know c must be the second operand of *. Continuing to the left, we encounter the operator +. We know the subexpression ending in + makes up the first operand of *. Continuing in this way, we deduce that ab+c* is "parsed" as (((a, b)+), c)*.
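The same left-to-right decoding idea gives a one-pass evaluator: scan the string, push operand values, and apply each operator to the entries on top of a stack. A minimal Python sketch (the function name and the single-letter-operand convention are illustrative, not from the text):

# Minimal postfix evaluator: assumes single-letter operands whose values
# are given in `env`, and the binary operators + - * /.
def eval_postfix(expr, env):
    stack = []
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a / b}
    for sym in expr:
        if sym in ops:
            b = stack.pop()          # second operand is on top of the stack
            a = stack.pop()
            stack.append(ops[sym](a, b))
        else:
            stack.append(env[sym])   # operand: push its value
    return stack.pop()

# ab+c* with a=1, b=2, c=3 evaluates (a+b)*c = 9
print(eval_postfix("ab+c*", {'a': 1, 'b': 2, 'c': 3}))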

b. syntax tree:

The parse tree itself is a useful intermediate-language representation for a source program, especially in optimizing compilers, where the intermediate code needs to be extensively restructured.

A parse tree, however, often contains redundant information which can be eliminated, thus producing a more economical representation of the source program. One such variant of a parse tree is what is called an (abstract) syntax tree: a tree in which each leaf represents an operand and each interior node an operator.

Examples:

1) Syntax tree for the expression a*(b+c)/d:

          /
        /   \
       *     d
      / \
     a   +
        / \
       b   c

2) Syntax tree for if a=b then a:=c+d else b:=c-d:

           if-then-else
          /     |      \
         =      :=      :=
        / \    /  \    /  \
       a   b  a    +  b    -
                  / \     / \
                 c   d   c   d

Three-Address Code:

• In three-address code, there is at most one operator on the right side of an instruction; that is, no built-up arithmetic expressions are permitted. For example, x + y * z is translated as:

t1 = y * z
t2 = x + t1
Problems:
Write the 3-address code for the following expressions:
1. if (x + y * z > x * y + z) a = 0;
2. (2 + a * (b – c / d)) / e
3. a := b * -c + b * -c
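For instance, problem 3 can be translated as follows (the temporary names t1–t5 are illustrative, and "minus" denotes unary negation):

t1 = minus c
t2 = b * t1
t3 = minus c
t4 = b * t3
t5 = t2 + t4
a  = t5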

Addresses and Instructions

• Three-address code is built from two concepts: addresses and instructions.
• An address can be one of the following:
– A name: For convenience, we allow source-program names to appear as addresses in three-address code. In an implementation, a source name is replaced by a pointer to its symbol-table entry, where all information about the name is kept.
– A constant: In practice, a compiler must deal with many different types of constants and variables.
– A compiler-generated temporary: It is useful, especially in optimizing compilers, to create a distinct name each time a temporary is needed. These temporaries can be combined, if possible, when registers are allocated to variables.
A list of common three-address instruction forms:

Assignment statements
– x = y op z, where op is a binary operation
– x = op y, where op is a unary operation
– Copy statement: x = y
– Indexed assignments: x = y[i] and x[i] = y
– Pointer assignments: x = &y, *x = y and x = *y

Control flow statements
– Unconditional jump: goto L
– Conditional jump: if x relop y goto L; if x goto L; ifFalse x goto L
– Procedure call, for a call to procedure p with n parameters (the returned value y is optional):

param x1
param x2
...
param xn
call p, n

Example: do i = i + 1; while (a[i] < v);

The multiplication i * 8 is appropriate for an array of elements that each take 8 units of space.
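A plausible three-address translation of this loop, consistent with the i * 8 remark above (the label L and temporaries t1–t3 are illustrative):

L: t1 = i + 1
   i  = t1
   t2 = i * 8
   t3 = a[t2]
   if t3 < v goto L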

c. Quadruples:
• Three-address instructions can be implemented as objects or as records with fields for the operator and the operands.
• There are three such representations: quadruples, triples, and indirect triples.
• A quadruple (or quad) has four fields: op, arg1, arg2, and result.

d. Triples:
• A triple has only three fields: op, arg1, and arg2.
• Using triples, we refer to the result of an operation x op y by its position, rather than by an explicit temporary name.

Example:

Fig: Representations of a = b * - c + b * - c

Fig: Indirect triples representation of 3-address code


• The benefit of quadruples over triples can be seen in an optimizing compiler, where instructions are often moved around.
• With quadruples, if we move an instruction that computes a temporary t, then the instructions that use t require no change. With triples, the result of an operation is referred to by its position, so moving an instruction may require changing all references to that result. This problem does not occur with indirect triples.
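A minimal sketch of the two record layouts in Python, for a = b * -c + b * -c (the field order op/arg1/arg2/result follows the text; everything else, including using plain integers for positional references, is illustrative — a real compiler would tag references to distinguish them from ordinary operands):

# Quadruples: the result field names each temporary explicitly.
quads = [
    ('minus', 'c',  None, 't1'),
    ('*',     'b',  't1', 't2'),
    ('minus', 'c',  None, 't3'),
    ('*',     'b',  't3', 't4'),
    ('+',     't2', 't4', 't5'),
    ('=',     't5', None, 'a'),
]

# Triples: no result field; an integer argument refers to the value
# computed by the triple at that position.
triples = [
    ('minus', 'c', None),   # (0)
    ('*',     'b', 0),      # (1) uses the result of (0)
    ('minus', 'c', None),   # (2)
    ('*',     'b', 2),      # (3)
    ('+',     1,   3),      # (4)
    ('=',     'a', 4),      # (5)
]

Moving an entry in quads leaves the other entries valid; moving an entry in triples invalidates every positional reference to it.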

Static Single-Assignment Form

Static single-assignment form (SSA) is an intermediate representation that facilitates certain code optimizations.
• Two distinct aspects distinguish SSA from three-address code:
– All assignments in SSA are to variables with distinct names; hence the term static single-assignment.
– At points where two control-flow paths join, SSA uses a notational convention called a φ-function to combine the definitions of a variable that arrive along the different paths.
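For instance, renaming each assignment target with a fresh subscript turns the straight-line three-address code on the left into SSA form on the right (the subscripted names are illustrative):

p = a + b            p1 = a + b
q = p - c            q1 = p1 - c
p = q * d     =>     p2 = q1 * d
p = e - p            p3 = e - p2
q = p + q            q2 = p3 + q1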

2. Type Checking:
• A compiler has to do semantic checks in addition to syntactic checks.

• Semantic checks:

– Static – done during compilation

– Dynamic – done during run-time

• Type checking is one of these static checking operations.

– We may not do all type checking at compile-time.

– Some systems also use dynamic type checking.

•A type system is a collection of rules for assigning type expressions to the parts of a program.

•A type checker implements a type system.

•A sound type system eliminates run-time type checking for type errors.

• A programming language is strongly typed if every program its compiler accepts will execute without type errors.

In practice, some type checking operations are done at run-time (so most programming languages are not strongly typed).

Type Expression:
The type of a language construct is denoted by a type expression.

A type expression can be:

A basic type: a primitive data type such as integer, real, char, boolean, ...; in addition, type-error (to signal a type error) and void (no type) are basic types.

A type name: a name can be used to denote a type expression.

A type constructor applies to other type expressions.

• arrays: If T is a type expression, then array(I, T) is a type expression, where I denotes an index range. Ex: array(0..99, int)

• products: If T1 and T2 are type expressions, then their Cartesian product T1 x T2 is a type expression. Ex: int x int

•pointers: If T is a type expression, then pointer (T) is a type expression. Ex: pointer (int)

• functions: We may treat functions in a programming language as a mapping from a domain type D to a range type R. So, the type of a function can be denoted by the type expression D→R, where D and R are type expressions. Ex: int→int represents the type of a function which takes an int value as parameter, and its return type is also int.

Type Checking of Statements:

S → id = E        { if (id.type = E.type) then S.type = void
                    else S.type = type-error }

S → if E then S1  { if (E.type = boolean) then S.type = S1.type
                    else S.type = type-error }

S → while E do S1 { if (E.type = boolean) then S.type = S1.type
                    else S.type = type-error }
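These three rules reduce to simple comparisons on the types of the subtrees. A minimal sketch in Python, with types represented as plain strings (the function names are illustrative):

# Illustrative checkers for the three statement forms above.
# 'type-error' propagates upward through enclosing statements.
def check_assign(id_type, expr_type):
    return 'void' if id_type == expr_type else 'type-error'

def check_if(cond_type, body_type):
    return body_type if cond_type == 'boolean' else 'type-error'

def check_while(cond_type, body_type):
    return body_type if cond_type == 'boolean' else 'type-error'

print(check_assign('int', 'int'))    # void
print(check_if('boolean', 'void'))   # void
print(check_while('int', 'void'))    # type-error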

Type Checking of Functions:

E → E1(E2)   { if (E2.type = s and E1.type = s→t) then E.type = t
               else E.type = type-error }

Ex: int f(double x, char y) { ... }

f: double x char → int

   argument types    return type

Structural Equivalence of Type Expressions:

•How do we know that two type expressions are equal?

• As long as type expressions are built from basic types (no type names), we may use structural equivalence between two type expressions.

Structural Equivalence Algorithm (sequiv):

if (s and t are the same basic type) then return true
else if (s = array(s1, s2) and t = array(t1, t2)) then return (sequiv(s1, t1) and sequiv(s2, t2))
else if (s = s1 x s2 and t = t1 x t2) then return (sequiv(s1, t1) and sequiv(s2, t2))
else if (s = pointer(s1) and t = pointer(t1)) then return (sequiv(s1, t1))
else if (s = s1 → s2 and t = t1 → t2) then return (sequiv(s1, t1) and sequiv(s2, t2))
else return false
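The same algorithm in runnable form, as a sketch that encodes constructed types as nested tuples — ('array', I, T), ('x', T1, T2), ('pointer', T), ('->', D, R) — and basic types as strings (this encoding is an assumption for illustration):

def sequiv(s, t):
    # Leaves (basic types, index bounds) compare directly.
    if not isinstance(s, tuple) or not isinstance(t, tuple):
        return s == t
    # Constructed types: same constructor, same arity, equivalent parts.
    return (len(s) == len(t) and s[0] == t[0]
            and all(sequiv(a, b) for a, b in zip(s[1:], t[1:])))

print(sequiv(('array', (0, 99), 'int'), ('array', (0, 99), 'int')))  # True
print(sequiv(('pointer', 'int'), ('pointer', 'real')))               # False
print(sequiv(('->', 'int', 'int'), ('->', 'int', 'int')))            # True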

83
Names for Type Expressions:

•In some programming languages, we give a name to a type expression, and we use that name as a
type expression afterwards.

type link = ↑cell;
var p, q : link;
var r, s : ↑cell;

Do p, q, r, and s all have the same type?

•How do we treat type names?

–Get equivalent type expression for a type name (then use structural equivalence), or

–Treat a type name as a basic type

3. Syntax Directed Translation:

A formalism called syntax-directed definition is used for specifying translations for programming language constructs.
A syntax-directed definition is a generalization of a context-free grammar in which each grammar symbol has an associated set of attributes and each production is associated with a set of semantic rules.

Definition of (syntax Directed definition ) SDD :

SDD is a generalization of CFG in which each grammar production X → α has associated with it a set of semantic rules of the form

a := f(b1, b2, ..., bk)

where a is an attribute whose value is obtained by applying the function f to the attributes b1, ..., bk of the grammar symbols of the production.

• A syntax-directed definition is a generalization of a context-free grammar in which:

– Each grammar symbol is associated with a set of attributes.

– This set of attributes for a grammar symbol is partitioned into two subsets called synthesized and
inherited attributes of that grammar symbol.

– Each production rule is associated with a set of semantic rules.

• Semantic rules set up dependencies between attributes which can be represented by a dependency
graph.

• This dependency graph determines the evaluation order of these semantic rules.

• Evaluation of a semantic rule defines the value of an attribute. But a semantic rule may also have
some side effects such as printing a value.

The two attributes for non terminal are :

1) Synthesized attribute (S-attribute): (↑)

An attribute is said to be a synthesized attribute if its value at a parse-tree node is determined from attribute values at the children of the node.

2) Inherited attribute: (↑, →)

An inherited attribute is one whose value at a parse-tree node is determined in terms of attributes at the parent and/or siblings of that node.

• The attribute can be a string, a number, a type, a memory location, or anything else.
• The parse tree showing the values of attributes at each node is called an annotated parse tree.

The process of computing the attribute values at the nodes is called annotating or decorating the parse tree. Terminals can have synthesized attributes, but not inherited attributes.

Annotated Parse Tree

• A parse tree showing the values of attributes at each node is called an Annotated parse tree.

• The process of computing the attribute values at the nodes is called annotating (or decorating) the parse tree.

• Of course, the order of these computations depends on the dependency graph induced by the
semantic rules.
Ex1: 1) Synthesized attributes:

Ex: Consider the CFG:

S → E N
E → E + T
E → E - T
E → T
T → T * F
T → T / F
T → F
F → (E)
F → digit
N → ;

Solution: The syntax directed definition can be written for the above grammar by using semantic
actions for each production.

Production rule          Semantic actions

S → E N                  S.val = E.val
E → E1 + T               E.val = E1.val + T.val
E → E1 - T               E.val = E1.val - T.val
E → T                    E.val = T.val
T → T1 * F               T.val = T1.val * F.val
T → T1 / F               T.val = T1.val / F.val
F → (E)                  F.val = E.val
T → F                    T.val = F.val
F → digit                F.val = digit.lexval
N → ;                    no action; ';' is the terminating symbol

For the non-terminals E, T and F the values can be obtained using the attribute "val".

The token digit has the synthesized attribute "lexval".

In S→EN, symbol S is the start symbol. This rule is to print the final answer of the expression.

The following steps are followed to compute an S-attributed definition:

1. Write the SDD using the appropriate semantic actions for the corresponding production rules of the given grammar.

2. The annotated parse tree is generated and attribute values are computed. The computation is done in a bottom-up manner.

3. The value obtained at the root node is the final output.

PROBLEM 1:

Consider the string 5*6+7; construct the syntax tree, parse tree and annotated parse tree.

Solution:

The corresponding annotated parse tree is shown below for the string 5*6+7;

Syntax tree:

Annotated parse tree :
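As a cross-check on the annotated tree, the bottom-up computation of val for 5*6+7; proceeds as follows (5*6 groups first because it derives from T → T1 * F):

F.val = 5,  T.val = F.val = 5
F.val = 6,  T.val = 5 * 6 = 30
E.val = T.val = 30
F.val = 7,  T.val = 7
E.val = 30 + 7 = 37
S.val = 37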

Advantages: SDDs are more readable and hence useful for specifications

Disadvantages: not very efficient.

Ex2:

PROBLEM: Consider the grammar that is used for a simple desk calculator. Obtain the semantic actions and also the annotated parse tree for the string 3*5+4n.

L → E n
E → E1 + T
E → T
T → T1 * F
T → F
F → (E)
F → digit

Solution :

Production rule Semantic actions

L→En L.val=E.val

E→E1+T E.val=E1.val + T.val

E→T E.val=T.val

T→T1*F T.val=T1.val*F.val

T→F T.val=F.val

F→(E) F.val=E.val

F→digit F.val=digit.lexval

The corresponding annotated parse tree is shown below, for the string 3*5+4n.

Dependency Graphs:

Dependency graph and topological sort:


• For each parse-tree node, say a node labeled by grammar symbol X, the dependency graph has a node for each attribute associated with X.
• If a semantic rule associated with a production p defines the value of synthesized attribute A.b in terms of the value of X.c, then the dependency graph has an edge from X.c to A.b.
• If a semantic rule associated with a production p defines the value of inherited attribute B.c in terms of the value X.a, then the dependency graph has an edge from X.a to B.c.

Applications of Syntax-Directed Translation


• Construction of syntax trees
– The nodes of the syntax tree are represented by objects with a suitable number of fields.
– Each object has an op field that is the label of the node.
– The objects have additional fields as follows:
• If the node is a leaf, an additional field holds the lexical value for the leaf. A constructor function Leaf(op, val) creates a leaf object.
• If nodes are viewed as records, Leaf returns a pointer to a new record for a leaf.
• If the node is an interior node, there are as many additional fields as the node has children in the syntax tree. A constructor function Node takes two or more arguments: Node(op, c1, c2, ..., ck) creates an object with first field op and k additional fields for the k children c1, c2, ..., ck.
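A minimal object encoding of Leaf and Node in Python (the class layout and the example expression are illustrative; only the names Leaf, Node, op and val come from the text):

class Leaf:
    def __init__(self, op, val):
        self.op = op        # op labels the node, e.g. 'id' or 'num'
        self.val = val      # the lexical value of the leaf

class Node:
    def __init__(self, op, *children):
        self.op = op                # operator labeling the interior node
        self.children = children    # two or more child subtrees

# Syntax tree for a - 4 + c:
t = Node('+', Node('-', Leaf('id', 'a'), Leaf('num', 4)), Leaf('id', 'c'))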

Syntax-Directed Translation Schemes


An SDT scheme is a context-free grammar with program fragments embedded within production bodies. The program fragments are called semantic actions and can appear at any position within the production body.

Any SDT can be implemented by first building a parse tree and then performing the actions in a left-to-right depth-first order, i.e. during a preorder traversal.

SDT's are used to implement two important classes of SDD's:
1. If the grammar is LR-parsable, then the SDD is S-attributed.
2. If the grammar is LL-parsable, then the SDD is L-attributed.
Postfix Translation Schemes
The postfix SDT implements the desk calculator SDD with one change: the action for the first production prints the value. As the grammar is LR, and the SDD is S-attributed:

L → E n      { print(E.val); }
E → E1 + T   { E.val = E1.val + T.val }
E → E1 - T   { E.val = E1.val - T.val }
E → T        { E.val = T.val }
T → T1 * F   { T.val = T1.val * F.val }
T → F        { T.val = F.val }
F → ( E )    { F.val = E.val }
F → digit    { F.val = digit.lexval }
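Because the SDD is S-attributed, every val can be computed bottom-up, in the order the reductions would occur during LR parsing. A sketch in Python tracing that order for 3*5+4n (the variable names, one per tree node, are illustrative):

# Bottom-up evaluation of val for 3 * 5 + 4 n,
# following the semantic rules above.
F1 = 3            # F -> digit        F.val = digit.lexval
T1 = F1           # T -> F            T.val = F.val
F2 = 5            # F -> digit
T2 = T1 * F2      # T -> T1 * F       T.val = T1.val * F.val
E1 = T2           # E -> T            E.val = T.val
F3 = 4            # F -> digit
T3 = F3           # T -> F
E2 = E1 + T3      # E -> E1 + T       E.val = E1.val + T.val
print(E2)         # L -> E n          prints 19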

Symbol table:

A symbol table is a major data structure used in a compiler:

• It associates attributes with identifiers used in a program. For instance, a type attribute is usually associated with each identifier.
• A symbol table is a necessary component: the definition (declaration) of an identifier appears once in a program, while uses of the identifier may appear in many places of the program text.
• Identifiers and attributes are entered by the analysis phases, when processing a definition (declaration) of an identifier:
– In simple languages with only global variables and implicit declarations, the scanner can enter an identifier into the symbol table if it is not already there.
– In block-structured languages with scopes and explicit declarations, the parser and/or semantic analyzer enter identifiers and corresponding attributes.
• Symbol table information is used by the analysis and synthesis phases:
– to verify that used identifiers have been defined (declared),
– to verify that expressions and assignments are semantically correct (type checking),
– to generate intermediate or target code.

Symbol Table Interface:

The basic operations defined on a symbol table include:

• allocate – to allocate a new empty symbol table
• free – to remove all entries and free the storage of a symbol table
• insert – to insert a name in a symbol table and return a pointer to its entry
• lookup – to search for a name and return a pointer to its entry
• set_attribute – to associate an attribute with a given entry
• get_attribute – to get an attribute associated with a given entry

Other operations can be added depending on requirements. For example, a delete operation removes a name previously inserted; some identifiers become invisible (out of scope) after exiting a block.

This interface provides an abstract view of a symbol table:
• it supports the simultaneous existence of multiple tables;
• the implementation can vary without modifying the interface.

Basic Implementation Techniques:

The first consideration is how to insert and look up names. There is a variety of implementation techniques.

Unordered list
• Simplest to implement
• Implemented as an array or a linked list
• A linked list can grow dynamically – alleviates the problem of a fixed-size array
• Insertion is fast O(1), but lookup is slow for large tables – O(n) on average

Ordered list
• If an array is sorted, it can be searched using binary search – O(log2 n)
• Insertion into a sorted array is expensive – O(n) on average
• Useful when the set of names is known in advance – table of reserved words

Binary search tree
• Can grow dynamically
• Insertion and lookup are O(log2 n) on average

Hash Tables and Hash Functions:

• A hash table is an array with index range 0 to TableSize – 1.
• It is the most commonly used data structure to implement symbol tables.
• Insertion and lookup can be made very fast – O(1).
• A hash function h maps an identifier name into a table index:
– h(name) should depend solely on name;
– h(name) should be computed quickly;
– h should be uniform and randomizing in distributing names: all table indices should be mapped with equal probability, and similar names should not cluster to the same table index.
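A minimal chained hash table supporting the insert/lookup part of the interface above — a sketch only; the class name, the table size 211 and the multiply-by-31 hash are illustrative choices, not prescribed by the text:

class SymbolTable:
    def __init__(self, size=211):            # a prime size helps spread names
        self.size = size
        self.buckets = [[] for _ in range(size)]

    def _hash(self, name):                   # simple hash over the characters
        h = 0
        for ch in name:
            h = (h * 31 + ord(ch)) % self.size
        return h

    def insert(self, name, **attrs):         # returns the new entry
        entry = {'name': name, **attrs}
        self.buckets[self._hash(name)].append(entry)
        return entry

    def lookup(self, name):                  # returns the entry, or None
        for entry in self.buckets[self._hash(name)]:
            if entry['name'] == name:
                return entry
        return None

st = SymbolTable()
st.insert('count', type='int')
print(st.lookup('count'))    # {'name': 'count', 'type': 'int'}

Collisions are handled by chaining: names hashing to the same index simply share a bucket list, so lookup stays correct even when similar names cluster.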

Storage Allocation:

• The compiler must do the storage allocation and provide access to variables and data.
• Memory management comprises:
– stack allocation
– heap management
– garbage collection

Storage Organization:

• Assumes a logical address space.
• The operating system will later map it to physical addresses, decide how to use cache memory, etc.
• Memory is typically divided into areas for:
– program code
– other static data storage, including global constants and compiler-generated data
– a stack to support the call/return policy for procedures
– a heap to store data that can outlive a call to a procedure

Static vs. Dynamic Allocation:

• Static: compile-time allocation; dynamic: run-time allocation.
• Many compilers use some combination of the following:
– stack storage: for local variables, parameters and so on
– heap storage: for data that may outlive the call to the procedure that created it
• Stack allocation is a valid allocation scheme for procedures, since procedure calls are nested.

Example: Consider the quicksort program.
Activation for Quicksort:

Activation tree representing calls during an execution of quicksort:
