
CE442: DESIGN OF LANGUAGE PROCESSOR

Unit-4

Static & Dynamic Memory Allocation & Memory Management
Allocating memory
• There are two ways that memory gets allocated for data storage:
• Compile Time (or Static) Allocation
– Memory for named variables is allocated by the compiler
– The exact size and type of storage must be known at compile time
– For standard array declarations, this is why the size has to be constant
• Dynamic Memory Allocation
– Memory is allocated "on the fly" during run time
– Dynamically allocated space is usually placed in a program segment known as the heap or the free store
– The exact amount of space or number of items does not have to be known by the compiler in advance
– For dynamic memory allocation, pointers are crucial

Dynamic Memory Allocation
▪ Dynamically allocated space has no name, so dynamic allocation requires two steps:
1. Creating the dynamic space.
2. Storing its address in a pointer (so that the space can be accessed).

▪ To dynamically allocate memory in C++, we use the new operator.

▪ De-allocation:

▪ We can dynamically allocate storage space while the program is running, but we cannot create new variable names "on the fly"
▪ Deallocation is the "clean-up" of space being used for variables or other
data storage
▪ Compile time variables are automatically deallocated based on their
known extent (this is the same as scope for "automatic" variables)
▪ It is the programmer's job to deallocate dynamically created space
▪ To de-allocate dynamic memory, we use the delete operator

Allocating space with new
• To allocate space dynamically, use the unary
operator new, followed by the type being allocated.
– new int; // dynamically allocates an int
– new double; // dynamically allocates a double
• If creating an array dynamically, use the same form,
but put brackets with a size after the type:
– new int[40]; // dynamically allocates an array of 40
ints
– new double[size]; // dynamically allocates an array of
size doubles // note that the size can be a variable

Allocating space with new (Cont…)
These statements above are not very useful by themselves, because the allocated spaces have
no names! BUT, the new operator returns the starting address of the allocated space, and this
address can be stored in a pointer:

int * p;          // declare a pointer p
p = new int;      // dynamically allocate an int and load address into p

double * d;       // declare a pointer d
d = new double;   // dynamically allocate a double and load address into d

// we can also do these in single-line statements
int x = 40;
int * list = new int[x];
float * numbers = new float[x+10];

Accessing dynamically created space
▪ So once the space has been dynamically allocated, how do we
use it?

▪ For single items, we go through the pointer: dereference the pointer to reach the dynamically created target:

int * p = new int;   // dynamic integer, pointed to by p
*p = 10;             // assigns 10 to the dynamic integer
cout << *p;          // prints 10

Accessing dynamically created space (Cont…)
• For dynamically created arrays, you can use either pointer-offset
notation, or treat the pointer as the array name and use the
standard bracket notation:

double * numList = new double[size];   // dynamic array

for (int i = 0; i < size; i++)
    numList[i] = 0;                    // initialize array elements to 0

numList[5] = 20;                       // bracket notation
*(numList + 7) = 15;                   // pointer-offset notation
                                       // means same as numList[7]

Deallocation of dynamic memory

• To deallocate memory that was created with new, we use the unary operator delete. The one operand should be a pointer that stores the address of the space to be deallocated:

int * ptr = new int;   // dynamically created int
delete ptr;            // deletes the space that ptr points to

• Note that the pointer ptr still exists in this example. It is a named variable subject to scope and extent determined at compile time, and it can be reused:

ptr = new int[10];     // point ptr to a brand new array
Storage Allocation

The different ways to allocate memory are:

1. Static storage allocation
2. Stack storage allocation
3. Heap storage allocation

Static storage allocation

● In static allocation, names are bound to fixed storage locations at compile time.
● Memory is created in the static area, once, before the program runs.
● Static allocation does not support dynamic data structures: memory is laid out at compile time and deallocated only after program completion.
● The drawback of static storage allocation is that the size and position of data objects must be known at compile time.
● Another drawback is that it restricts recursive procedures.

Stack Storage Allocation
● In stack storage allocation, storage is organized as a stack.
● An activation record is pushed onto the stack when an activation begins and popped when the activation ends.
● The activation record contains the locals, so they are bound to fresh storage in each activation. The values of the locals are discarded when the activation ends.
● It works on a last-in-first-out (LIFO) basis, and this allocation supports recursion.

Heap Storage Allocation


● Heap allocation is the most flexible allocation scheme.
● Allocation and deallocation of memory can be done at any time and at any place, depending upon the user's requirement.
● Heap allocation is used to allocate memory to variables dynamically and to reclaim that memory when the variables are no longer used.
● Heap storage allocation supports recursion.

Access in block structured programming
languages
Organization for Block Structured Languages:
● A block structured language is a kind of language in which sections of source code sit within some matching pair of delimiters such as "{" and "}" or begin and end
● Such a section gets executed as one unit, as a procedure or a function, or it may be controlled by some conditional statement (if, while, do-while)
● Normally, block structured languages support the structured programming approach

Example: C, C++, JAVA and PASCAL

● Non-block structured languages are LISP, FORTRAN and SNOBOL

Regular Expression- Compilation of expressions

A regular expression is a pattern that can match a character or a string, including alternative characters or strings. The grammar defined by regular expressions is known as a regular grammar, and the language is known as a regular language. Any string matched by a regular expression is a string of symbols over an alphabet. Repetition and alternation are expressed using *, +, and |.
In a regular expression, a* means a can occur zero or more times. It generates {ε, a, aa, aaa, …}.
In a regular expression, a+ means a can occur one or more times. It generates {a, aa, aaa, …}.
Here are the rules that define regular expressions over some alphabet and the languages those expressions denote.
Let a and b be regular expressions denoting the languages L(a) and L(b).

1. (a)|(b) is a regular expression representing the language L(a) ∪ L(b).

2. (a)(b) is a regular expression representing the language L(a)L(b).

3. (a)* is a regular expression representing (L(a))*.

4. (a) is a regular expression representing L(a).

Operation on Regular Language
The various operations on regular languages are discussed below:
Union: If X and Y are regular languages, their union X ∪ Y is also a regular language.
X ∪ Y = {a | a is in X or a is in Y}
Concatenation: If X and Y are regular languages, their concatenation XY is also a regular language.
XY = {ap | a is in X and p is in Y}
Kleene closure: If X is a regular language, its Kleene closure X* will also be a regular language.
X* = zero or more repetitions of strings from X.
Precedence and Associativity
● The unary operator * is left-associative and has the highest precedence.
● Concatenation is left-associative and has the second-highest precedence.
● | (pipe sign) is also left-associative, with the lowest precedence of the three.

Example –
Let X = {a, b}
● The regular expression a|b denotes the language {a, b}.
● (a|b)(a|b) represents {aa, ab, ba, bb}, the language of all strings of length two over the alphabet X. Another regular expression that accepts the same language is aa|ab|ba|bb.
● a* represents the set of all strings of zero or more a's, i.e. {ε, a, aa, aaa, …}.
● (a|b)* represents the set of all strings of zero or more a's and b's, i.e. all strings over {a, b}: {ε, a, b, aa, ab, ba, bb, aaa, …}. Another regular expression that accepts the same language is (a*b*)*.
● a|a*b denotes the language {a, b, ab, aab, aaab, …}: the string a, plus all strings of zero or more a's ending with b.
Extensions of Regular Expressions
Kleene introduced regular expressions in the 1950s with the basic operations of union, concatenation, and Kleene closure.
Here are a few notational extensions currently in use:
1. One or more instances: The unary postfix operator + denotes the positive closure of a regular expression and its language: if a is a regular expression, then (a)+ denotes the language (L(a))+. The two algebraic laws r* = r+|ε and r+ = rr* = r*r relate the positive closure and the Kleene closure.
2. Zero or one instance: The unary postfix operator ? means zero or one occurrence: r? is equivalent to r|ε, i.e. L(r?) = L(r) ∪ {ε}. This operator has the same precedence and associativity as * and +.

Handling operator priorities
Operator precedence grammar is a kind of shift-reduce parsing method. It is applied to a small class of operator grammars.
A grammar is said to be an operator precedence grammar if it has two properties:

● No R.H.S. of any production contains ε.
● No two non-terminals are adjacent.

Operator precedence can only be established between the terminals of the grammar; it ignores the non-terminals.

There are three operator precedence relations:

a ⋗ b means that terminal "a" has higher precedence than terminal "b".
a ⋖ b means that terminal "a" has lower precedence than terminal "b".
a ≐ b means that terminals "a" and "b" have the same precedence.

Precedence table: (shown as a figure in the original slides)

Parsing Action

● Add the $ symbol at both ends of the given input string.
● Scan the input string from left to right until a ⋗ is encountered.
● Scan towards the left over all equal-precedence relations until the leftmost ⋖ is encountered.
● Everything between the leftmost ⋖ and the rightmost ⋗ is a handle.
● $ on $ means parsing is successful.

Example
Grammar:

1. E → E+T/T
2. T → T*F/F
3. F → id

Given string: w = id + id * id

Let us consider a parse tree for it; on the basis of that tree we can design the operator precedence table, and then process the string with its help. (The parse tree and precedence table appear as figures in the original slides.)

Intermediate code forms for expressions
An intermediate source form is an internal form of a program created by the
compiler while translating the program from a high-level language to assembly-level
or machine-level code. There are a number of advantages to using intermediate
source forms. An intermediate source form represents a more attractive form of
target code than does assembly or machine code. For example, machine
idiosyncrasies, such as requiring certain operands to be in even- or odd-numbered
registers, can be ignored. Also, bookkeeping tasks, such as keeping track of
operand stacks, can be avoided.

In this topic, we discuss five types of intermediate source forms:


Polish notation,
n-tuple notation,
abstract syntax trees,
threaded code,
pseudo or abstract machine code.

What Does Polish Notation (PN) Mean?
Polish notation is a notation form for expressing arithmetic, logic and algebraic equations. Its most basic distinguishing feature is that operators are placed to the left of their operands. If each operator has a fixed number of operands, the syntax requires no brackets or parentheses to avoid ambiguity.

Polish notation is also known as prefix notation, prefix Polish notation, normal Polish notation, Warsaw
notation and Lukasiewicz notation.

N-tuple Notation

In this scheme of intermediate code representation, we have a sequence of N-tuples, where N = 3 or 4. The first field of the N-tuple is an operator and the remaining N – 1 fields are operands. If N = 3 it is called triple notation, and if N = 4 it is called quadruple notation; these two are the most popular.

Triple notation

The triple notation is also called "two-address code" (TAC). An expression a + b is represented as the triple (+, a, b). An original expression a * b + c * d would be represented as a sequence of such triples:

(1) (*, a, b)
(2) (*, c, d)
(3) (+, (1), (2))

A parse tree is another popular intermediate form of source code. Because a tree structure can be easily restructured, it is a suitable intermediate form for optimizing compilers. A parse tree can be stripped of unnecessary information to produce a more efficient representation of the source program. Such a transformed parse tree is sometimes called an abstract syntax tree.

What Does Threaded Code Mean?
Threaded code is a compiler implementation technique used to implement virtual machine interpreters. The generated code consists mostly of calls to subroutines; it may be a simple sequence of machine call instructions, or code that needs to be processed by an interpreter. Threaded code is the implementation method used in programming languages like FORTH, most implementations of BASIC and some versions of COBOL. One of its prominent features is that, compared to other code generation methods, it has a higher code density; at the same time, execution is slightly slower than code generated by alternative methods.

Register Allocation
What Does Register Allocation Mean?
Register allocation refers to the practice of assigning variables to registers as well as handling transfer of
data into and out of registers. Register allocation may occur:
● On a basic block, known as local register allocation
● Over an entire function or procedure, known as global register allocation
● Over function boundaries traversed by means of a call graph, known as inter-procedural register
allocation

Parameter passing disciplines
1. Call By Value
2. Call By Reference

Actual parameter: the value or variable passed to a function at the call site (e.g. inside main()) is called an actual parameter.
Formal parameter: the parameter declared in the function's own definition, which receives the passed value, is called a formal parameter.
Example:
Call By Value: in this technique the actual parameter and the formal parameter have different memory locations. In the example, the variables i and j are defined in the main function, whereas a, b and c are defined in the swap function; we call the swap function to swap the values.

Call By Reference: in this technique the actual parameter and the formal parameter share the same memory location.

Position-Independent Code

The code within a dynamic executable is typically position-dependent, and is tied to a fixed address in memory. Shared objects, on the other hand, can be loaded at different addresses in different processes. Position-independent code is not tied to a specific address. This independence allows the code to execute efficiently at a different address in each process that uses the code. Position-independent code is recommended for the creation of shared objects.

In computing, position-independent code (PIC) or a position-independent executable (PIE) is a body of machine code that, being placed somewhere in primary memory, executes properly regardless of its absolute address. ... Position-independent code can be executed at any memory address without modification.

