09 Memory

Dynamic Memory Allocation

Alan L. Cox
[email protected]

Some slides adapted from CMU 15.213 slides


Objectives

Be able to analyze a dynamic memory allocator’s key characteristics
 Memory usage efficiency (fragmentation)
 Speed of allocation and deallocation operations
 Locality of allocations
 Robustness

Be able to implement your own efficient dynamic memory allocator (Malloc Project)

Be able to analyze the advantages and disadvantages of different garbage collector designs

Cox Dynamic Memory Allocation 2


Harsh Reality: Memory Matters

Memory is not unbounded
 It must be allocated and freed
 Many applications are memory dominated
• E.g., applications based on complex graph algorithms
• sample.c in this week’s lab spent >50% of its time allocating and freeing memory

Memory referencing bugs are especially pernicious
 Effects can be distant in both time and space

Memory performance is not uniform
 Cache and virtual memory effects can greatly affect program performance
 Adapting a program to the characteristics of the memory system can lead to major speed improvements

Cox Dynamic Memory Allocation 3


Memory Allocation

Static size, static allocation
 Global and static variables
 Linker allocates final addresses
 Executable stores these allocated addresses

Static size, dynamic allocation
 Local variables
 Compiler directs stack allocation
 Frame pointer (%rbp) offsets stored in the code

Dynamic size, dynamic allocation
 Programmer controlled
 Allocated in the heap – how?

Cox Dynamic Memory Allocation 4


Dynamic Memory Allocation

(Layers: Application → Dynamic Memory Allocator → Heap Memory)

Explicit vs. implicit memory allocator
 Explicit: application allocates and frees space
• e.g., malloc and free in C
 Implicit: application allocates, but does not free space
• e.g., garbage collection in Java or Python

Allocation
 In both cases the memory allocator provides an abstraction of memory as a set of blocks
 Doles out free memory blocks to the application

We will first discuss simple explicit memory allocation

Cox Dynamic Memory Allocation 5


Process Memory Image

void *sbrk(intptr_t incr)
 Easiest function for allocators to use to request additional heap memory from the OS
 brk is initially set to the end of the data section
 Calls to sbrk increment brk by incr bytes (new memory is zero filled)
 incr can be negative to reduce the heap size

(Figure: the process memory image from address 0x00000000 up – Unused, Read-only Code and Data, Read/Write Data, Heap growing up to brk, Shared Libraries, and the User Stack growing down from the top, with %rsp at its lower end)
Cox Dynamic Memory Allocation 6
Malloc Package
#include <stdlib.h>
void *malloc(size_t size)
 If successful:
• Returns a pointer to a memory block of at least size bytes,
aligned to at least an 8-byte boundary
• If size == 0, returns NULL (or a unique pointer)
 If unsuccessful: returns NULL (0)
void free(void *ptr)
 Returns the block pointed at by ptr to pool of available
memory
 ptr must come from a previous call to malloc or realloc
void *realloc(void *ptr, size_t size)
 Changes size of block pointed at by ptr and returns
pointer to new block
 Contents of new block unchanged up to the minimum of
the old and new sizes

Cox Dynamic Memory Allocation 7


Malloc/realloc/free Example

void
foo(int n, int m)
{
    int i, *p;

    /* Allocate a block of n ints. */
    if ((p = malloc(n * sizeof(int))) == NULL)
        unix_error("malloc");
    for (i = 0; i < n; i++)
        p[i] = i;

    /* Add room for m more ints at the end of the p block. */
    if ((p = realloc(p, (n + m) * sizeof(int))) == NULL)
        unix_error("realloc");
    for (i = n; i < n + m; i++)
        p[i] = i;

    /* Print the new array. */
    for (i = 0; i < n + m; i++)
        printf("%d\n", p[i]);

    /* Return p to the available memory pool. */
    free(p);
}

Cox Dynamic Memory Allocation 8


Assumptions

Conventions used in these lectures
 Each “box” in a figure represents a word
 Memory is word addressed
 Each word is 4 bytes in size and can hold an integer or a pointer

(Figure legend: an allocated block of 4 words next to a free block of 3 words; allocated and free words are shaded differently)

Cox Dynamic Memory Allocation 9


Allocation Examples

p1 = malloc(4*sizeof(int))

p2 = malloc(5*sizeof(int))

p3 = malloc(6*sizeof(int))

free(p2)

p4 = malloc(2*sizeof(int))

Cox Dynamic Memory Allocation 10


Governing Rules
Applications:
 Can issue arbitrary sequence of malloc and free requests
 Free requests must correspond to an allocated block

Allocators:
 Can’t control number or size of allocated blocks
 Must respond immediately to all allocation requests
• i.e., can’t reorder or buffer requests
 Must allocate blocks from free memory
• i.e., can only place allocated blocks in free memory
 Must align blocks so they satisfy all alignment
requirements
• e.g., 8-byte alignment for libc malloc on some systems
 Can only manipulate and modify free memory
 Can’t move the allocated blocks once they are allocated
• i.e., compaction is not allowed

Cox Dynamic Memory Allocation 11


Goals of Good malloc/free
Primary goals
 Good time performance for malloc and free
• Ideally should take constant time (not always possible)
• Should certainly not take linear time in the number of blocks
 Good space utilization
• User allocated structures should use most of the heap
• Want to minimize “fragmentation”

Some other goals


 Good locality properties
• Structures allocated close in time should be close in space
• “Similar” types of objects should be allocated close in space
 Robust
• Can check that free(p1) is on a valid allocated object p1
• Can check that memory references are to allocated space

Cox Dynamic Memory Allocation 12


Maximizing Throughput

Given some sequence of n malloc, realloc, and free requests:
 R0, R1, ..., Rk, ..., Rn-1

Want to maximize throughput and peak memory utilization
 These goals are often conflicting

Throughput:
 Number of completed requests per unit time
 Example:
• 5,000 malloc calls and 5,000 free calls in 10 seconds
• Throughput is 1,000 operations/second

Cox Dynamic Memory Allocation 13


Maximizing Memory Utilization

Given some sequence of malloc and free requests:
 R0, R1, ..., Rk, ..., Rn-1

Def: Aggregate payload Pk:
 malloc(p) results in a block with a payload of p bytes
 After request Rk has completed, the aggregate payload Pk is the sum of currently allocated payloads

Def: Current heap size is denoted by Hk
 Assume that Hk is monotonically increasing

Def: Peak memory utilization Uk:
 After k requests, peak memory utilization is:
• Uk = ( max_{i<k} Pi ) / Hk

Cox Dynamic Memory Allocation 14


Internal Fragmentation

Poor memory utilization is caused by fragmentation
 Comes in two forms: internal and external fragmentation

Internal fragmentation
 For some block, internal fragmentation is the difference between the block size and the payload size
 (Figure: a block with internal fragmentation on either side of its payload)
 Caused by the overhead of maintaining heap data structures, padding for alignment purposes, or explicit policy decisions (e.g., not to split the block)
 Depends only on the pattern of previous requests, and thus is easy to measure

Cox Dynamic Memory Allocation 15


External Fragmentation
Occurs when there is enough aggregate heap memory, but no
single free block is large enough
p1 = malloc(4*sizeof(int))

p2 = malloc(5*sizeof(int))

p3 = malloc(6*sizeof(int))

free(p2)

p4 = malloc(7*sizeof(int))
oops!
External fragmentation depends on the pattern of future
requests, and thus is difficult to measure
Cox Dynamic Memory Allocation 16
Dynamic Memory Allocation
(Video #1)

Alan L. Cox
[email protected]

Some slides adapted from CMU 15.213 slides




Implementation Issues
 How do we know how much memory to free just
given a pointer?
 How do we keep track of the free blocks?
 How do we pick a block to use for an allocation?
 What do we do with the extra space when
allocating memory for an object that is smaller
than the free block it is placed in?
 How do we reinsert a freed block?

p0

free(p0)
p1 = malloc(1)

Cox Dynamic Memory Allocation 19


Knowing How Much to Free

Standard method
 Keep the length of a block in the word preceding the block
• This word is often called the header field or header
 Requires an extra word for every allocated block

(Example: p0 = malloc(4 * sizeof(int)) stores the block size in a header word just before the data that p0 points to; free(p0) reads the size back from that header)

Cox Dynamic Memory Allocation 20


Keeping Track of Free Blocks

Method 1: Implicit list using lengths – links all blocks
 (blocks of sizes 5, 4, 6, 2 chained implicitly by their lengths)

Method 2: Explicit list among the free blocks, using pointers within the free blocks

Method 3: Segregated free list
 Different free lists for different size ranges

Method 4: Blocks sorted by size
 Can use a balanced binary search tree where a node contains the head of an explicit list of free blocks of a single size, and that size is used as the key

Cox Dynamic Memory Allocation 21


Method 1: Implicit List

Need to identify whether each block is free or allocated
 Can use a single bit
 The bit can be put in the same word as the size if block sizes are always multiples of two (mask out the low-order bit when reading the size)

Format of allocated and free blocks (1 word per row):
 size | a            a = 1: allocated block; a = 0: free block; size: block size
 payload             application data (allocated blocks only)
 optional padding

Cox Dynamic Memory Allocation 22
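The size-plus-flag packing described above can be sketched as follows (pack, get_size, and is_allocated are hypothetical helper names, assuming block sizes are always even):

```c
#include <stddef.h>

/* Hypothetical helpers for the header word: block sizes are always even,
   so the low-order bit is free to hold the allocated flag. */
size_t pack(size_t size, int allocated) {
    return size | (allocated ? 0x1 : 0x0);
}
size_t get_size(size_t header)  { return header & ~(size_t)0x1; }
int    is_allocated(size_t header) { return (int)(header & 0x1); }
```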


Implicit List: Finding a Free Block
First fit:
 Search list from beginning, choose first free block that fits
p = start;
while (p < end &&               // not past end
       ((*p & 0x1) != 0 ||     // already allocated
        *p <= len))            // too small
    p = p + (*p & ~0x1);       // advance to next block
 Can take linear time in total number of blocks (allocated/free)
 Can cause “splinters” (small free blocks) at beginning of list
Next fit:
 Like first-fit, but search list from end of previous search
 Research suggests that fragmentation is worse
Best fit:
 Choose the free block with the closest size that fits (requires
complete search of the list)
 Keeps fragments small – usually helps fragmentation
 Will typically run slower than first-fit

Cox Dynamic Memory Allocation 23


Implicit List: Allocating in Free Block

Allocating in a free block – splitting
 Since the allocated space might be smaller than the free space, we might want to split the block

4 4 6 2

void place_block(ptr p, int len) {
    int newsize = ((len + 1) / 2) * 2;      // add 1 and round up to even
    int oldsize = *p & ~0x1;                // mask out low (allocated) bit
    *p = newsize | 0x1;                     // set new length, mark allocated
    if (newsize < oldsize)
        *(p + newsize) = oldsize - newsize; // set length of remaining
}                                           // part of block

place_block(p, 4)

4 4 4 2 2
Cox Dynamic Memory Allocation 24
Implicit List: Freeing a Block

Simplest implementation:
 Only need to clear allocated flag
• void free_block(ptr p) { *p = *p & ~0x1; }
 But can lead to “false fragmentation”

4 4 4 2 2

free_block(p) p

4 4 4 2 2

malloc(5 * sizeof(int))
Oops!
 There is enough free space, but the allocator won’t
be able to find it!

Cox Dynamic Memory Allocation 25


Implicit List: Coalescing

Join (coalesce) with the next and/or previous block if they are free
 Coalescing with the next block:

void free_block(ptr p) {
    *p = *p & ~0x1;          // clear allocated flag
    next = p + *p;           // find next block
    if ((*next & 0x1) == 0)
        *p = *p + *next;     // add to this block if
}                            // not allocated

4 4 4 2 2

free_block(p) p

4 4 6 2
 But how do we coalesce with previous block?

Cox Dynamic Memory Allocation 26


Implicit List: Bidirectional Coalescing

Boundary tags [Knuth73]
 Replicate the header word at the end of the block (the boundary tag, or “footer”)
 Allows us to traverse the “list” backwards, but requires extra space
 Important and general technique!

Format of allocated and free blocks:
 Header: size | a               a = 1: allocated block; a = 0: free block; size: total block size
 Payload and padding            application data (allocated blocks only)
 Boundary tag (footer): size | a
4 4 4 4 6 6 4 4

Cox Dynamic Memory Allocation 27


Constant Time Coalescing

Four cases for the neighbors of the block being freed:
 Case 1: previous allocated, next allocated
 Case 2: previous allocated, next free
 Case 3: previous free, next allocated
 Case 4: previous free, next free

Cox Dynamic Memory Allocation 28


Constant Time Coalescing (Case 1)

Before (each block shown as header ... footer):
 [m1|1] ... [m1|1]   [n|1] ... [n|1]   [m2|1] ... [m2|1]
After (only the freed block’s flag changes):
 [m1|1] ... [m1|1]   [n|0] ... [n|0]   [m2|1] ... [m2|1]

Cox Dynamic Memory Allocation 29


Constant Time Coalescing (Case 2)

Before:
 [m1|1] ... [m1|1]   [n|1] ... [n|1]   [m2|0] ... [m2|0]
After (freed block merged with the next block):
 [m1|1] ... [m1|1]   [n+m2|0] ... [n+m2|0]

Cox Dynamic Memory Allocation 30


Constant Time Coalescing (Case 3)

Before:
 [m1|0] ... [m1|0]   [n|1] ... [n|1]   [m2|1] ... [m2|1]
After (freed block merged with the previous block):
 [n+m1|0] ... [n+m1|0]   [m2|1] ... [m2|1]

Cox Dynamic Memory Allocation 31


Constant Time Coalescing (Case 4)

Before:
 [m1|0] ... [m1|0]   [n|1] ... [n|1]   [m2|0] ... [m2|0]
After (all three blocks merged):
 [n+m1+m2|0] ... [n+m1+m2|0]

Cox Dynamic Memory Allocation 32
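All four cases reduce to two independent tests of the neighbors’ allocated flags, which is what makes coalescing constant time. A sketch, assuming the sizes and flags have already been read from the boundary tags (coalesced_size is a hypothetical helper):

```c
#include <stddef.h>

/* Hypothetical helper capturing the four coalescing cases: given the freed
   block's size n and its neighbors' sizes and allocated flags (read from
   their boundary tags), return the size of the resulting free block. */
size_t coalesced_size(size_t n, size_t m1, int prev_alloc,
                      size_t m2, int next_alloc) {
    size_t size = n;
    if (!prev_alloc)
        size += m1;   /* cases 3 and 4: absorb the previous free block */
    if (!next_alloc)
        size += m2;   /* cases 2 and 4: absorb the next free block */
    return size;
}
```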


Implicit Lists: +’s and –’s
Implementation: very simple (“+”)
Allocate: linear time worst case (“-”)
Free: constant time worst case – even with
coalescing (“+”)
Memory usage: will depend on placement
policy
 First fit, next fit, or best fit

Cox Dynamic Memory Allocation 33


Implicit Lists: Summary
Not used in practice for malloc/free because
of linear time allocate
 Used in some special-purpose applications
However, the concepts of placement,
splitting, and boundary tag coalescing are
general to all allocators

Cox Dynamic Memory Allocation 34


Summary of Key Allocator Policies
Placement policy:
 First fit, next fit, or best fit?
 Different throughput versus fragmentation trade offs
Splitting policy:
 When do we go ahead and split free blocks?
 How much internal fragmentation are we willing to
tolerate?
Coalescing policy:
 Immediate coalescing: coalesce adjacent blocks each
time free is called
 Deferred coalescing: try to improve performance of free
by deferring coalescing until needed, e.g.,
• Coalesce as you scan the free list for malloc
• Coalesce when the amount of external fragmentation
reaches some threshold

Cox Dynamic Memory Allocation 35


Dynamic Memory Allocation
(video #2)

Alan L. Cox
[email protected]

Some slides adapted from CMU 15.213 slides




Explicit Free Lists

Use the data space of free blocks for link pointers
 Typically doubly linked (forward and back links)
 Still need boundary tags for coalescing

(Figure: heap blocks of sizes 4, 4, 6, 4, 4; the free blocks A, B, C are threaded together by forward and back links stored in their payloads)

 It is important to realize that blocks are not necessarily in the same order in the free list as in the heap

Cox Dynamic Memory Allocation 38


Allocating From Explicit Free Lists

(Figure, before: pred and succ link to the free block. After, with splitting: the front of the block is allocated, and pred and succ now link to the remaining free portion)

Cox Dynamic Memory Allocation 39


Freeing With Explicit Free Lists

Insertion policy: where in the free list do you put a newly freed block?
 LIFO (last-in-first-out) policy
• Insert the freed block at the beginning of the free list
• Pro: simple and constant time
• Con: studies suggest fragmentation is worse than address-ordered
 Address-ordered policy
• Insert freed blocks so that free-list blocks are always in address order
– i.e., addr(pred) < addr(curr) < addr(succ)
• Con: requires search
• Pro: studies suggest fragmentation is better than LIFO

Cox Dynamic Memory Allocation 40


Freeing With a LIFO Policy

Case 1: a-a-a (both neighbors allocated)
 Insert self at the beginning of the free list

Case 2: a-a-f (next block free)
 Splice out next, coalesce self and next, and add the result to the beginning of the free list

Cox Dynamic Memory Allocation 41


Freeing With a LIFO Policy (cont)

Case 3: f-a-a (previous block free)
 Splice out prev, coalesce it with self, and add the result to the beginning of the free list

Case 4: f-a-f (both neighbors free)
 Splice out prev and next, coalesce both with self, and add the result to the beginning of the list
Cox Dynamic Memory Allocation 42
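The splice and insert steps in the cases above can be sketched as operations on the free blocks’ link pointers (struct free_block, list_insert, and list_remove are hypothetical names; in a real allocator these links live inside the free blocks’ payloads):

```c
#include <stddef.h>

/* Hypothetical LIFO explicit free list: each free block's payload begins
   with these two link pointers. */
struct free_block {
    struct free_block *prev;
    struct free_block *next;
};

struct free_block *free_list_head = NULL;

/* Insert a freed block at the head of the list (LIFO policy). */
void list_insert(struct free_block *b) {
    b->prev = NULL;
    b->next = free_list_head;
    if (free_list_head != NULL)
        free_list_head->prev = b;
    free_list_head = b;
}

/* Splice a block out of the list, e.g., before coalescing it. */
void list_remove(struct free_block *b) {
    if (b->prev != NULL)
        b->prev->next = b->next;
    else
        free_list_head = b->next;   /* b was the head */
    if (b->next != NULL)
        b->next->prev = b->prev;
}
```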
Explicit List: Summary

Comparison to implicit list:


 Allocate is linear time in number of free blocks
instead of total blocks – much faster allocates
when most of the memory is full
 Slightly more complicated allocate and free since
needs to splice blocks in and out of the list
 Requires space for the links (potentially increases
the minimum block size)
Main use of linked lists is in conjunction with
segregated free lists
 Keep multiple linked lists of different size classes,
or possibly for different types of objects

Cox Dynamic Memory Allocation 43




Segregated Storage

Each size range has its own free list of blocks
 e.g., one list for sizes 1-2, one for 5-8, one for 9-16, ...
 Often separate lists for small sizes (3, 4)
 Larger sizes typically grouped into powers of 2

Cox Dynamic Memory Allocation 45


Simple Segregated Storage
A free list and collection of pages for each size range
To allocate a block of size n:
 If free list for size n is not empty,
• Allocate first block on list (list can be implicit or explicit)
– Requirement: The size of every block in the free list is equal to the upper
end of the range
 If free list is empty,
• Get a new page
• Create new free list from all blocks in page
• Allocate first block on list
 No splitting
To free a block:
 Add to free list
 Instead of coalescing, if page is empty, return the page for use
by another size
Tradeoffs:
 Fast (constant time allocate/free), but can fragment badly
 Interesting observation: approximates a best-fit placement
policy without having to search the entire free list

Cox Dynamic Memory Allocation 46


Segregated Fits
Array of free lists, each one for some size range
To allocate a block of size n:
 Search appropriate free list for block of size m >= n
 If an appropriate block is found:
• Split block and place fragment on appropriate list (optional)
 If no block is found, try next larger class
 Repeat until block is found
To free a block:
 Coalesce (optional) and place on appropriate free list
Tradeoffs
 Faster search than all previous methods except simple
segregated storage
• Can guarantee log time for power of two size ranges
 Controls fragmentation of simple segregated storage
 Coalescing can increase search times
• Deferred coalescing can help

Cox Dynamic Memory Allocation 47
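Mapping a request size to its free list can be a simple loop or bit trick; a sketch assuming power-of-two size classes (size_class is a hypothetical name; class k holds blocks of up to 2^k words):

```c
#include <stddef.h>

/* Hypothetical size-class mapping for segregated free lists, assuming
   power-of-two classes: class k holds blocks of up to 2^k words. */
int size_class(size_t n) {
    int k = 0;
    size_t cap = 1;
    while (cap < n) {   /* grow the class capacity until it fits n */
        cap <<= 1;
        k++;
    }
    return k;
}
```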




Blocks Sorted by Size

Use a balanced binary search tree with the block size as the search key
• The data stored in each tree node is the head of a free list containing blocks of a single size

Allocate and free search for the tree node holding blocks of the required size
• If that node doesn’t exist ...
• Allocate searches for the node whose free list holds blocks of the next larger size
• Free creates a new tree node whose free list can hold the block

Cox Dynamic Memory Allocation 49


Blocks Sorted by Size: +’s and –’s

Inherently achieves best fit (“+”)

Relies on programs only allocating objects of


a few different sizes (“-”)
• Otherwise, even logarithmic time operations will
take longer than segregated fits

Implementation complexity (“-”)

Cox Dynamic Memory Allocation 50


Other Goals: Spatial Locality

Most techniques give little control over spatial locality
 Sequentially-allocated blocks are not necessarily adjacent
 Similarly-sized blocks (e.g., for the same data type) are not necessarily adjacent

Would like a series of similar-sized allocations and deallocations to reuse the same blocks
 Splitting & coalescing tend to reduce locality

? Of the techniques seen, which is best for spatial locality? ?

Simple segregated storage
 Each page only has similar-sized blocks

Cox Dynamic Memory Allocation 51


Spatial Locality: Regions

One technique to improve spatial locality

Dynamically divide the heap into mini-heaps (“regions”)
 Programmer-determined

Allocate data within the appropriate region
 Data that is logically used together
 Increases locality
 Can quickly deallocate an entire region at once

Changes the API
 malloc() and free() must take a region as an argument
Cox Dynamic Memory Allocation 52
Implementation Summary

Many options:
 Data structures for keeping track of free blocks
 Block choice policy
 Splitting & coalescing policies

No universal best option


 Many tradeoffs
 Depends on a program’s pattern of allocation and
deallocation

Cox Dynamic Memory Allocation 53


Dynamic Memory Allocation
(video #3)

Alan L. Cox
[email protected]

Some slides adapted from CMU 15.213 slides


Explicit Memory Allocation/Deallocation

+ Usually low time- and space-overhead

- Challenging for programmers to use correctly
- Can lead to crashes, memory leaks, etc.

Cox Dynamic Memory Allocation 55


Implicit Memory Deallocation

+ Programmers don’t need to free data explicitly; easy to use

+ Some implementations can achieve better spatial locality and less fragmentation in the hands of the average programmer

- Price to pay: depends on the implementation

But HOW can a memory manager know when to deallocate data without instruction from the programmer?
Cox Dynamic Memory Allocation 56
Implicit Memory Management:
Garbage Collection
Garbage collection: automatic reclamation of
heap-allocated storage – application never
has to free
void foo() {
int *p = malloc(128);
return; /* p block is now garbage */
}

Common in functional languages and modern


object oriented languages:
 C#, Go, Java, Lisp, Python, Scala, Swift
Variants (conservative garbage collectors)
exist for C and C++
 Cannot collect all garbage

Cox Dynamic Memory Allocation 57


Garbage Collection

How does the memory manager know when


memory can be freed?
 In general we cannot know what is going to be
used in the future since it depends on conditionals
 But we can tell that certain blocks cannot be used
if there are no pointers to them
Need to make certain assumptions about
pointers
 Memory manager can distinguish pointers from
non-pointers
 All pointers point to the start of a block
 Cannot hide pointers (e.g., by coercing them to an
int, and then back again)

Cox Dynamic Memory Allocation 58


Classical GC algorithms

Reference counting (Collins, 1960)
 Does not move blocks

Mark and sweep collection (McCarthy, 1960)
 Does not move blocks (unless you also “compact”)

Copying collection (Minsky, 1963)
 Moves blocks (compacts memory)

For more information, see Jones and Lins,
“Garbage Collection: Algorithms for Automatic
Dynamic Memory Management”, John Wiley & Sons, 1996.

Cox Dynamic Memory Allocation 59


Memory as a Graph

 Each data block is a node in the graph
 Each pointer is an edge in the graph
 Root nodes: locations not in the heap that contain pointers into the heap (e.g., registers, locations on the stack, global variables)

(Figure: root nodes point into the heap; heap nodes reachable from a root are live, the rest are unreachable garbage)

Cox Dynamic Memory Allocation 60


Reference Counting

Overall idea
 Maintain a free list of unallocated blocks
 Maintain a count of the number of references to
each allocated block
 To allocate, grab a sufficiently large block from the
free list
 When a count goes to zero, deallocate it

Cox Dynamic Memory Allocation 61


Reference Counting: More Details

Each allocated block keeps a count of references to the block
 Reachable ⇒ count is positive
 Compiler inserts counter increments and decrements as necessary
 Deallocate when the count goes to zero

Typically built on top of an explicit deallocation memory manager
 All the same implementation decisions as before
 E.g., splitting & coalescing

Cox Dynamic Memory Allocation 62
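Layered on malloc/free, reference counting can be sketched as follows (rc_alloc, rc_retain, and rc_release are hypothetical names; a real system would also worry about payload alignment and thread safety):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical reference-counted blocks layered on malloc/free: a hidden
   header in front of each payload stores the reference count. */
struct rc_header {
    int count;
    /* payload follows */
};

void *rc_alloc(size_t size) {
    struct rc_header *h = malloc(sizeof(struct rc_header) + size);
    if (h == NULL)
        return NULL;
    h->count = 1;                 /* one reference: the returned pointer */
    return h + 1;                 /* hand the caller the payload */
}

struct rc_header *hdr(void *p) { return (struct rc_header *)p - 1; }

void rc_retain(void *p)  { hdr(p)->count++; }

void rc_release(void *p) {
    if (--hdr(p)->count == 0)     /* last reference dropped */
        free(hdr(p));
}
```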


Reference Counting: Example

a = cons(10,empty)
b = cons(20,a)
a = b
b = …
a = …

Cox Dynamic Memory Allocation 63


Reference Counting: Example

a = cons(10,empty) a 1 10
b = cons(20,a)
a = b
b = …
a = …

Cox Dynamic Memory Allocation 64


Reference Counting: Example

a = cons(10,empty) a 2 10
b = cons(20,a)
a = b
b = …
a = … b 1 20

Cox Dynamic Memory Allocation 65


Reference Counting: Example

a = cons(10,empty) 1 10
b = cons(20,a)
a = b
b = … a
a = … b 2 20

Cox Dynamic Memory Allocation 66


Reference Counting: Example

a = cons(10,empty) 1 10
b = cons(20,a)
a = b
b = … a
a = … 1 20

Cox Dynamic Memory Allocation 67


Reference Counting: Example

a = cons(10,empty) 1 10
b = cons(20,a)
a = b
b = …
a = … 0 20

Cox Dynamic Memory Allocation 68


Reference Counting: Example

a = cons(10,empty) 0 10
b = cons(20,a)
a = b
b = …
a = …

Cox Dynamic Memory Allocation 69


Reference Counting: Example

a = cons(10,empty)
b = cons(20,a)
a = b
b = …
a = …

Cox Dynamic Memory Allocation 70


Reference Counting: Problem

? What’s the problem? ?

Consider a cycle of blocks with no outside pointer to it:
 No other pointer to this data, so the program can’t refer to it
 But no count is zero, so the blocks are never deallocated
 The following does NOT hold: count is positive ⇒ reachable
 Can occur with any cycle
Cox Dynamic Memory Allocation 71
Reference Counting: Summary

Disadvantages:
 Managing & testing counts is generally expensive
• Can optimize
 Doesn’t work with cycles!
• Approach can be modified to work, with difficulty

Advantage:
 Simple
• Easily adapted, e.g., for parallel or distributed GC

Useful when cycles can’t happen


 E.g., UNIX hard links

Cox Dynamic Memory Allocation 72


GC Without Reference Counts

If we don’t have counts, how do we deallocate?

Determine reachability by traversing the pointer graph directly
 Stop the user’s computation periodically to compute reachability
 Deallocate anything unreachable

Cox Dynamic Memory Allocation 73


Mark & Sweep

Overall idea
 Maintain a free list of unallocated blocks
 To allocate, grab a sufficiently large block from
free list
 When no such block exists, GC
• Should find blocks & put them on free list

Cox Dynamic Memory Allocation 74


Mark & Sweep: GC

Follow all pointers, marking all reachable data
 Use depth-first search
 Data must be tagged with info about its type, so the GC knows its size and can identify pointers
 Each piece of data must have a mark bit
• Can alternate the meaning of the mark bit on each GC to avoid erasing mark bits

Sweep over the whole heap, putting all unmarked data into a free list
 Again, the same implementation issues for the free list

Cox Dynamic Memory Allocation 75
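For the simplified heap used in the example that follows — fixed-size nodes with a single child pointer — the mark phase can be sketched as (struct node and mark are hypothetical names):

```c
#include <stddef.h>

/* Hypothetical fixed-size heap node with one child pointer, as in the
   lecture's simplified diagrams. */
struct node {
    int marked;
    struct node *child;
};

/* Mark phase: depth-first traversal from a root pointer. */
void mark(struct node *n) {
    if (n == NULL || n->marked)
        return;            /* off-heap, or already visited */
    n->marked = 1;
    mark(n->child);        /* recurse on what this node points to */
}
```

The `marked` test is what makes the traversal terminate even when the pointer graph contains cycles.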


Mark & Sweep: GC Example

Assume fixed-sized, single-pointer data blocks, for simplicity.

(Animation over several slides: starting from the root pointers, the mark phase traverses the heap depth-first, marking each reachable block; the sweep phase then scans the entire heap and places every unmarked block on the free list)

Cox Dynamic Memory Allocation 76-85


Mark & Sweep: Summary

Advantages:
 No space overhead for reference counts
 No time overhead for reference counts
 Handles cycles

Disadvantage:
 Noticeable pauses for GC

Cox Dynamic Memory Allocation 86


Stop & Copy

Overall idea:
 Maintain From and To spaces in heap
 To allocate, get sequentially next block in From
space
• No free list!
 When From space full, GC into To space
• Swap From & To names

Cox Dynamic Memory Allocation 87


Stop & Copy: GC

Follow all From-space pointers, copying all reachable data into To-space
 Use depth-first search
 Data must be tagged with info about its type, so the GC knows its size and can identify pointers

Swap the From-space and To-space names

Cox Dynamic Memory Allocation 88
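For the same simplified heap of fixed-size, single-pointer nodes, the copy phase can be sketched as follows (struct obj, copy, and the forward field are hypothetical; the forwarding pointer ensures each reachable object is copied exactly once, even when shared or in a cycle):

```c
#include <stddef.h>

/* Hypothetical From-space object: one child pointer, plus a forwarding
   pointer that records its new address once it has been copied. */
struct obj {
    struct obj *forward;   /* NULL until copied into To-space */
    struct obj *child;
};

struct obj to_space[16];   /* toy To-space */
size_t to_next = 0;        /* next free slot in To-space */

/* Copy one object, and recursively everything it reaches, into To-space.
   Returns the object's new address. */
struct obj *copy(struct obj *o) {
    if (o == NULL)
        return NULL;
    if (o->forward != NULL)
        return o->forward;          /* already copied: reuse new address */
    struct obj *n = &to_space[to_next++];
    o->forward = n;                 /* install forwarding pointer first */
    n->forward = NULL;
    n->child = copy(o->child);      /* then copy what it points to */
    return n;
}
```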


Stop & Copy: GC Example

Assume fixed-sized, single-pointer data blocks, for simplicity.

(Animation over several slides: starting from the root pointers, each reachable block is copied from From-space into To-space, depth-first; already-copied blocks are not copied again. When copying finishes, the spaces swap names, and the next block to allocate is the first unused block of the new allocation space)

Cox Dynamic Memory Allocation 89-99


Stop & Copy

Advantages:
 Only one pass over data
 Only touches reachable data
 Little space overhead per data item
 Very simple allocation
 “Compacts” data
 Handles cycles

Disadvantages:
 Noticeable pauses for GC
 Doubles the basic heap size

Cox Dynamic Memory Allocation 100


Compaction

Moving allocated data into contiguous memory
 Eliminates fragmentation
 Tends to increase spatial locality
 Must be able to reassociate data & locations
• Not possible with C-like pointers in the source language

Cox Dynamic Memory Allocation 101


GC Variations

Many variations on these three main themes
 Concurrent GC, which does not stop the computation during GC
 Generational GC, which exploits the observation that most objects have a short lifetime
 Conservative GC

Combinations of these three main themes are common
 Java uses both Copying and Mark-and-Sweep within a Generational GC

Cox Dynamic Memory Allocation 102


Conservative GC

Goal
 Allow GC in C-like languages

Usually a variation on Mark & Sweep

Must conservatively assume that integers and other data can be cast to pointers
 Compile-time analysis to see when this is definitely not the case
 Code style heavily influences effectiveness

Cox Dynamic Memory Allocation 103


GC vs. malloc/free Summary

Safety is not programmer-dependent

Compaction generally improves locality

Higher or lower time overhead
 Generally less predictable time overhead

Generally higher space overhead

Cox Dynamic Memory Allocation 104


Next Time

Virtual Memory

Cox Dynamic Memory Allocation 105
