Distributed Shared Memory Systems
by Ankit Gupta
What is a Distributed System?
What is DSM?
 Distributed shared memory (DSM) implements the shared memory model in distributed systems, which have no physical shared memory.
 The shared memory model provides a virtual address space shared among all nodes.
 This overcomes the high cost of communication in distributed systems: DSM systems move data to the location of access.
[Figure: NODE 1, NODE 2, and NODE 3, each with its own Memory Mapping Manager, mapping into a single shared memory address space]
Purpose of DSM Research
 Building less expensive parallel machines
 Building larger parallel machines
 Eliminating the programming difficulty of MPP and Cluster
architectures
 Generally break new ground:
 New network architectures and algorithms
 New compiler techniques
 Better understanding of performance in distributed systems
Distributed Shared Memory Models
 Object Based DSM
 Variable Based DSM
 Page Based DSM
 Structured DSM
 Hardware Supported DSM
Object Based
 Object based DSM
 Probably the simplest way to implement DSM
 Shared data must be encapsulated in an object
 Shared data may only be accessed via the methods in the object
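As a rough illustration (not any particular DSM system's API), the Python sketch below shows the object-based idea: the shared value lives inside an object, and other code can reach it only through the object's methods, which is the hook a DSM runtime would use to intercept and distribute accesses. The class and names are invented for the example.

```python
# Minimal sketch of object-based DSM: shared data is hidden inside an object
# and reachable only through its methods. The "remote" side is simulated
# locally; SharedCounter and its methods are illustrative only.

class SharedCounter:
    """Encapsulates shared data; no direct field access from outside."""
    def __init__(self):
        self._value = 0          # shared state, private to the object

    def increment(self, amount=1):
        self._value += amount    # a DSM runtime would serialize this method call
        return self._value

    def read(self):
        return self._value

# A node never touches _value directly; it only invokes methods, which is
# what lets the DSM runtime intercept and distribute accesses.
counter = SharedCounter()
counter.increment()
print(counter.read())            # -> 1
```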
Variable Based
 Delivers the lowest distribution granularity
 Closely integrated in the compiler
 May be hardware supported
Hardware Based DSM
 Uses hardware to eliminate software overhead
 May be hidden even from the operating system
 Usually provides sequential consistency
 May limit the size of the DSM system
Advantages of DSM
(Distributed Shared Memory)
 Data sharing is implicit, hiding data movement (as opposed to explicit ‘Send’/‘Receive’ in the message passing model)
 Passing data structures containing pointers is easier (in the message passing model, data moves between different address spaces)
 Moving an entire object to the user takes advantage of locality of reference
 Less expensive to build than a tightly coupled multiprocessor system: off-the-shelf hardware, no expensive interface to shared physical memory
 Very large total physical memory across all nodes: large programs can run more efficiently
 No serialized access to a common bus for shared physical memory, as in multiprocessor systems
 Programs written for shared-memory multiprocessors can be run on DSM systems with minimal changes
Issues faced in the development of DSM
 Granularity
 Structure of Shared memory
 Memory coherence and access synchronization
 Data location and access
 Replacement strategy
 Thrashing
 Heterogeneity
Granularity
 Granularity is the amount of data sent with each
update
 If granularity is too small and a large amount of
contiguous data is updated, the overhead of
sending many small messages leads to less
efficiency
 If granularity is too large, a whole page (or more)
would be sent for an update to a single byte, thus
reducing efficiency
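A back-of-the-envelope sketch of this tradeoff, assuming a fixed 40-byte per-message header and the block sizes shown (both numbers are illustrative, not taken from the slides):

```python
# Illustrative arithmetic only: estimate bytes on the wire for one logical
# update, assuming a fixed per-message header (40 bytes, an arbitrary choice).

HEADER = 40  # assumed per-message overhead in bytes

def bytes_sent(update_bytes, block_size):
    """Whole blocks are shipped, each with its own header."""
    blocks = -(-update_bytes // block_size)   # ceiling division
    return blocks * (block_size + HEADER)

# Updating 4 KiB of contiguous data: tiny blocks pay many headers.
print(bytes_sent(4096, 64))    # 64 messages -> 6656 bytes
print(bytes_sent(4096, 4096))  # 1 message   -> 4136 bytes

# Updating a single byte: a large block ships a whole page for one byte.
print(bytes_sent(1, 4096))     # 1 message   -> 4136 bytes
print(bytes_sent(1, 64))       # 1 message   -> 104 bytes
```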
Structure of Shared Memory
 Structure refers to the layout of the shared data in
memory.
 Dependent on the type of applications that the DSM
system is intended to support.
Replacement Strategy
 If the local memory of a node is full, a cache miss at that node implies not only fetching the accessed data block from a remote node but also a replacement: an existing data block must be evicted to make room for the new one.
- Example: LRU with access modes
• Private (local) pages are replaced before shared ones
• Private pages are swapped to disk
• Shared pages are sent over the network to their owner
• Read-only pages may simply be discarded (the owner has a copy)
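A minimal sketch of such an LRU-with-access-modes policy, assuming made-up page records with a mode and a last-used timestamp; the eviction actions are only printed rather than performed.

```python
# Sketch of "LRU with access modes": read-only pages are cheapest to drop,
# then private pages go before shared ones; within a class, evict the least
# recently used page. Page records and field names are assumptions.

def choose_victim(pages):
    """pages: list of dicts with 'mode' in {'read_only', 'private', 'shared'}
    and an integer 'last_used' timestamp. Returns the page to evict."""
    priority = {'read_only': 0, 'private': 1, 'shared': 2}
    return min(pages, key=lambda p: (priority[p['mode']], p['last_used']))

def evict(page):
    # The actions are only described; a real DSM would perform them.
    if page['mode'] == 'read_only':
        print(f"discard {page['id']} (the owner still has a copy)")
    elif page['mode'] == 'private':
        print(f"swap {page['id']} to local disk")
    else:
        print(f"send {page['id']} over the network to its owner")

pages = [
    {'id': 'A', 'mode': 'shared',    'last_used': 10},
    {'id': 'B', 'mode': 'private',   'last_used': 5},
    {'id': 'C', 'mode': 'read_only', 'last_used': 50},
]
evict(choose_victim(pages))   # discards C: read-only copies are cheapest to drop
```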
Thrashing
 Thrashing occurs when network resources are
exhausted, and more time is spent invalidating
data and sending updates than is used doing
actual work.
 Based on system specifics, one should choose
write-update or write-invalidate to avoid
thrashing.
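A toy message-count model of that choice (the cost assumptions are invented for illustration, not a real protocol): a single writer performs many consecutive writes and only a few nodes read afterwards, so pushing every update wastes the network.

```python
# Toy comparison of write-invalidate vs write-update for one data item
# replicated at `copies` nodes: one writer does `writes` consecutive writes,
# then `readers` other nodes each read once. Costs are illustrative.

def invalidate_msgs(copies, writes, readers):
    # First write invalidates the other copies; later writes by the same node
    # find the copy already exclusive, so `writes` beyond the first cost nothing.
    return (copies - 1) + readers          # readers re-fetch the data afterwards

def update_msgs(copies, writes, readers):
    # Every write pushes the new value to all other copies; reads stay local.
    return writes * (copies - 1)

print(invalidate_msgs(copies=8, writes=100, readers=2))  # 9
print(update_msgs(copies=8, writes=100, readers=2))      # 700
# Many writes, few reads: write-update floods the network with values that
# are never read -- the pattern the slide warns can lead to thrashing.
```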
Memory Coherence and Access
Synchronization
 In a DSM system that allows replication of shared data items, copies of a shared data item may simultaneously be present in the main memories of a number of nodes.
 The memory coherence problem deals with keeping a piece of shared data consistent when it lies in the main memories of two or more nodes.
 DSM systems are based on
- Replicated shared data objects
- Concurrent access to data objects at many nodes
 Coherent memory: the value returned by a read operation is the expected value (e.g., the value of the most recent write)
 A mechanism that controls/synchronizes accesses is needed to maintain memory coherence
 Sequential consistency: A system is sequentially consistent
if
- The result of any execution of operations of all processors is the
same as if they were executed in sequential order, and
- The operations of each processor appear in this sequence in the
order specified by its program
 General consistency:
- All copies of a memory location (replicas) eventually contain the same data once all writes issued by every processor have completed
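As a concrete illustration of sequential consistency (processors, variables, and values invented for the example), the sketch below enumerates every interleaving of two processors' programs that preserves program order and lists the read results a sequentially consistent memory may return; the outcome where both reads see the old value never appears.

```python
# P1: write x=1, then read y        P2: write y=1, then read x
P1 = [('w', 'x', 1), ('r', 'y', 'P1')]
P2 = [('w', 'y', 1), ('r', 'x', 'P2')]

def interleavings(a, b):
    """All merges of a and b that keep each processor's program order."""
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for seq in interleavings(P1, P2):
    mem, reads = {'x': 0, 'y': 0}, {}
    for op in seq:
        if op[0] == 'w':
            _, var, val = op
            mem[var] = val
        else:
            _, var, who = op
            reads[who] = mem[var]
    outcomes.add((reads['P1'], reads['P2']))

print(sorted(outcomes))  # [(0, 1), (1, 0), (1, 1)] -- (0, 0) is never allowed
```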
Algorithms for implementing DSM
 The Central Server Algorithm
 The Migration Algorithm
 The Read-Replication Algorithm
 The Full-Replication Algorithm
The Central Server Algorithm
- Central server maintains all shared data
 Read request: returns data item
 Write request: updates data and returns acknowledgement message
- Implementation
 A timeout is used to resend a request if acknowledgment fails
 Associated sequence numbers can be used to detect duplicate write
requests
 If an application’s request to access shared data fails repeatedly, a
failure condition is sent to the application
- Issues: performance and reliability
- Possible solutions
 Partition shared data between several servers
 Use a mapping function to distribute/locate data
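A minimal single-process sketch of the central-server idea, with the network replaced by direct calls; per-client sequence numbers let the server ignore duplicate writes caused by timeout-driven resends. All names and structures are assumptions for illustration.

```python
# Minimal single-process sketch of the central server algorithm: the server
# owns all shared data, clients issue read/write requests, and duplicate
# writes (e.g. from resends after a timeout) are detected via per-client
# sequence numbers. No real networking; names are illustrative.

class CentralServer:
    def __init__(self):
        self.data = {}            # all shared data lives at the server
        self.last_seq = {}        # client_id -> highest write sequence applied

    def read(self, key):
        return self.data.get(key)                  # reply with the data item

    def write(self, client_id, seq, key, value):
        if seq <= self.last_seq.get(client_id, -1):
            return 'ack (duplicate ignored)'       # retransmitted request
        self.data[key] = value
        self.last_seq[client_id] = seq
        return 'ack'

server = CentralServer()
print(server.write('node1', seq=0, key='x', value=42))   # ack
print(server.write('node1', seq=0, key='x', value=42))   # ack (duplicate ignored)
print(server.read('x'))                                   # 42
```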
The Migration Algorithm
- Operation
 Ship (migrate) the entire data object (page or block) containing the data item to the requesting location
 Allow only one node to access a shared data item at a time
- Advantages
 Takes advantage of the locality of reference
 DSM can be integrated with VM at each node
- Make the DSM page size a multiple of the VM page size
- A locally held shared memory page can be mapped into the node's VM address space
- If a page is not local, the fault handler migrates it and removes it from the address space at the remote node
- To locate a remote data object:
 Use a location server
 Maintain hints at each node
 Broadcast query
- Issues
 Only one node can access a data object at a time
 Thrashing can occur: to minimize it, set minimum time data object resides at
a node
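A sketch of the migration idea under simplifying assumptions: a plain dictionary stands in for the location server, and migrating a page is just moving an entry between per-node dictionaries.

```python
# Sketch of the migration algorithm: a page lives at exactly one node at a
# time; an access elsewhere faults, the page migrates to the requester, and
# the previous holder drops it. The directory dict plays the role of a
# location server; all names are illustrative.

directory = {'page0': 'node1'}                    # page -> current holder
memory = {'node1': {'page0': b'...data...'}, 'node2': {}}

def access(node, page):
    if page not in memory[node]:                  # page fault: page is remote
        owner = directory[page]                   # locate it via the "location server"
        memory[node][page] = memory[owner].pop(page)   # migrate the whole page
        directory[page] = node                    # update location information
    return memory[node][page]                     # only the holder can access it

access('node2', 'page0')
print(directory['page0'], list(memory['node1']))  # node2 []  (page moved, old copy gone)
```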
The Read-Replication Algorithm
 Replicates data objects to multiple nodes
 DSM keeps track of location of data objects
 Multiple nodes can have read access or one node write access
(multiple readers-one writer protocol)
 After a write, all copies are invalidated or updated
 DSM has to keep track of locations of all copies of data objects.
Examples of implementations:
 IVY: owner node of data object knows all nodes that have
copies
 PLUS: distributed linked-list tracks all nodes that have copies
 Advantage
 Read replication can lead to substantial performance improvements if the ratio of reads to writes is large
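A sketch of the multiple readers-one writer protocol with invalidation on write, where the object itself plays the role of an IVY-style owner tracking its copy set; data structures and node names are invented for the example.

```python
# Sketch of read-replication (multiple readers, one writer) with
# invalidation on write. The owner tracks which nodes hold copies.

class ReplicatedObject:
    def __init__(self, value):
        self.value = value
        self.copies = set()            # nodes currently holding a read copy

    def read(self, node):
        self.copies.add(node)          # grant the node a read copy
        return self.value

    def write(self, node, value):
        invalidated = self.copies - {node}
        self.copies = {node}           # every other copy is invalidated
        self.value = value
        return invalidated             # a real system would send invalidations here

obj = ReplicatedObject(0)
obj.read('node1'); obj.read('node2'); obj.read('node3')
print(sorted(obj.write('node1', 99)))  # ['node2', 'node3'] must drop their copies
```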
The Full-Replication Algorithm
- Extension of read-replication algorithm: multiple nodes can read and
multiple nodes can write (multiple-readers, multiple-writers protocol)
- Issue: consistency of data for multiple writers
- Solution: use of gap-free sequencer
• All writes sent to sequencer
• Sequencer assigns sequence number and sends write request to all
sites that have copies
• Each node performs writes according to sequence numbers
• A gap in sequence numbers indicates a missing write request: node
asks for retransmission of missing write requests
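A sketch of the gap-free sequencer, with broadcast and retransmission reduced to direct calls: replicas apply writes strictly in sequence-number order and pull any missing write from the sequencer's log when they detect a gap. Names and structures are assumptions.

```python
# Sketch of full replication with a gap-free sequencer. The sequencer stamps
# every write with the next sequence number; each replica applies writes in
# order and fetches missing ones (a "retransmission") when it sees a gap.

class Sequencer:
    def __init__(self):
        self.next_seq = 0
        self.log = []                        # kept so gaps can be retransmitted

    def submit(self, write):
        stamped = (self.next_seq, write)
        self.next_seq += 1
        self.log.append(stamped)
        return stamped                       # would be broadcast to all replicas

class Replica:
    def __init__(self):
        self.expected = 0
        self.data = {}

    def deliver(self, stamped, sequencer):
        seq, (key, value) = stamped
        while self.expected < seq:           # gap: an earlier write is missing
            self.deliver(sequencer.log[self.expected], sequencer)  # retransmission
        if seq == self.expected:             # duplicates (seq < expected) are ignored
            self.data[key] = value
            self.expected += 1

seq = Sequencer()
r = Replica()
w0 = seq.submit(('x', 1))
w1 = seq.submit(('x', 2))
r.deliver(w1, seq)                           # arrives first: replica pulls missing w0
print(r.data, r.expected)                    # {'x': 2} 2  -- writes applied in order
```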
Any Questions?
