Week 2 - Study of Memory Organization and Multiprocessor System
ISSN No. 0976-5697
Volume 8, No. 7, July – August 2017
International Journal of Advanced Research in Computer Science
RESEARCH PAPER
Available Online at www.ijarcs.info
STUDY OF MEMORY ORGANIZATION AND MULTIPROCESSOR SYSTEM -
USING THE CONCEPT OF DISTRIBUTED SHARED MEMORY, MEMORY
CONSISTENCY MODEL AND SOFTWARE BASED DSM.
Dhara Kumari
M.Phil Scholar
Himalayan University
Arunachal Pradesh (India)

Dr. Rajni Sharma
Assistant Professor (Computer Science)
PT.J.L.N. Govt P.G College
Faridabad (India)
Abstract: In the current trend, performance and efficiency are the central issues for memory organization and multiprocessor systems. A memory organization and multiprocessor system may use multiple modalities, capturing different types of DSM (software based, hardware based, or a combination of both software and hardware), because IT technology has greatly advanced and a lot of information is shared via the internet. To improve the performance and efficiency of such multiprocessor systems and memory organizations, we can use different techniques based on the concepts and implementations of hardware, software, and hybrid DSM. This paper provides an almost exhaustive survey of the existing problems and solutions in a uniform manner, presenting memory organization, shared memory, distributed memory, distributed shared memory, the memory consistency model, and software based DSM mechanisms, along with issues of importance for various DSM systems and approaches.

Keywords: Performance, Efficiency, Memory, DSM, Shared Memory, Software Based DSM, Multiprocessor System, Memory Consistency Model
Figure 3: Distributed shared memory system

In Figure 3, distributed memory is not symmetric. A scalable interconnect is located between the processing nodes, but each node has its own local portion of the global main memory to which it has faster access. Processes running on separate hosts can access a shared address space: the underlying DSM system provides its clients with a shared, coherent memory address space. Each client can access any memory location in the shared address space at any time and see the value last written by any client. So the main advantage of DSM is the simpler abstraction it provides to the application programmer.

IMPLEMENTATION OF DISTRIBUTED SHARED MEMORY

DSM can be implemented in hardware as well as in software.
- A hardware implementation requires the addition of special network interfaces and cache coherence circuits to the system to make remote memory access look like local memory access. Hardware DSM is therefore very expensive.
- In a software implementation, a software layer is added between the OS and the application layer, and the OS kernel may or may not be modified. Software DSM is more widely used, as it is cheaper and easier to implement than hardware DSM.

Design issues of DSM
The goal of distributed shared memory is to present a global view of the entire address space to a program executing on any machine [6]. A DSM manager on a particular machine captures all the remote data accesses made by any process running on that machine. An implementation of a DSM involves various design choices, some of which are listed below [7]:
- DSM algorithm
- Implementation level of the DSM mechanism
- Semantics for concurrent access
- Semantics (replication / partial / full / R/W)
- Naming scheme used to access remote data
- Locations for replication (for optimization)
- System consistency model and granularity of data
- Whether data is replicated or cached
- Remote access by hardware or software
- Caching/replication controlled by hardware or software

The value of distributed shared memory depends upon the performance of the memory consistency model. The consistency model is responsible for managing the state of shared data in distributed shared memory systems. Many consistency models have been defined by a wide variety of sources, including system architects and application programmers.

D. Memory Consistency Model
Shared-memory systems allow multiple processors to simultaneously read and write the same memory locations, and programmers require a conceptual model for the semantics of memory operations in order to use the shared memory correctly. This model is generally referred to as a memory consistency model, or simply a memory model. The memory consistency model for a shared-memory multiprocessor thus specifies the behavior of memory with respect to read and write operations from multiple processors. From the system designer's point of view, the model specifies acceptable memory behaviors for the system. The memory consistency model therefore influences many aspects of system design, including the design of programming languages, compilers, and the underlying hardware.

A memory model can be defined at any interface between the programmer and the system, where the system consists of the base hardware and programmers express their programs in machine-level instructions. There are two types of interface:
- At the machine code interface, the memory model specification affects the designer of the machine hardware and the programmer who writes or reasons about machine code.
- At the high level language interface, the specification affects the programmers who use the high level language and the designers of both the software that converts high-level language code into machine code and the hardware that executes this code.

Researchers have proposed different memory models to enhance distributed shared memory systems (such as the sequential consistency model, processor consistency model, weak consistency model, and release consistency model). These models trade off memory access latency, bandwidth requirements, and programming simplicity: relaxed models can provide better performance, at the expense of a higher involvement of the programmer in synchronizing accesses to shared data.

E. Software Based DSM
A distributed shared memory is a simple yet powerful paradigm for structuring multiprocessor systems. It can be designed using hardware and/or software methodologies based on various considerations of how data is shared in multiprocessor environments, but it is often better to design DSM in software, because data sharing is a problem that is more easily tackled in software than in hardware. The memory organization of a software DSM system determines the way shared virtual memory is organized on top of the isolated memory address spaces. There are various advantages of programming with distributed shared memory in a multiprocessor environment, as stated below:
- Sharing data becomes a problem which has to be tackled in software and not in hardware as in multiprocessor systems.
- Shared memory programs are usually shorter and easier to understand.
- Large or complex data structures may easily be communicated.
- Programming with shared memory is a well-understood problem.
- Shared memory gives transparent process-to-process communication.
- Compact design and easy implementation and expansion.

Software based DSM provides many advantages in the design of multiprocessor systems. A distributed shared memory mechanism allows multiple processors to access shared data efficiently. A DSM with no memory access bottleneck and a large virtual memory space can accommodate more processors. Its programs are portable, as they use a common DSM programming interface,
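As a rough illustration of how a software DSM layer traps remote accesses, the toy below simulates demand fetching and single-writer invalidation in the spirit of shared virtual memory [11]. All class and method names here are invented for this sketch; no surveyed system exposes this exact interface. Touching a page that is not cached locally plays the role of the page fault a real system would take via the virtual memory hardware.

```python
# Toy single-writer software DSM sketch (hypothetical names, not from
# any cited system). A missing local page simulates a page fault.

class Node:
    def __init__(self, name, dsm):
        self.name = name
        self.dsm = dsm
        self.pages = {}                       # locally cached pages

    def read(self, page_id):
        if page_id not in self.pages:         # simulated page fault
            self.pages[page_id] = self.dsm.fetch(page_id)
        return self.pages[page_id]

    def write(self, page_id, data):
        # Single-writer protocol: invalidate every other cached copy.
        self.dsm.invalidate(page_id, keep=self.name)
        self.pages[page_id] = data
        self.dsm.owner[page_id] = self.name


class ToyDSM:
    def __init__(self):
        self.nodes = {}
        self.owner = {}                       # page_id -> owning node name

    def add_node(self, name):
        node = Node(name, self)
        self.nodes[name] = node
        return node

    def fetch(self, page_id):
        # Copy the page from whichever node currently owns it.
        return self.nodes[self.owner[page_id]].pages[page_id]

    def invalidate(self, page_id, keep):
        for name, node in self.nodes.items():
            if name != keep:
                node.pages.pop(page_id, None)


dsm = ToyDSM()
a, b = dsm.add_node("A"), dsm.add_node("B")
a.write("p0", b"hello")        # A becomes owner of page p0
print(b.read("p0"))            # B faults, fetches p0 from A: b'hello'
a.write("p0", b"world")        # A's write invalidates B's cached copy
print(b.read("p0"))            # B faults again and sees b'world'
```

A real implementation would use the hardware page protection mechanism (e.g. protecting pages and catching access faults) instead of a dictionary lookup, but the control flow, fault, fetch from owner, cache locally, invalidate on write, is the same.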
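Systems such as Munin and TreadMarks [13][14][19] relax the single-writer restriction with a twin/diff technique: before a node first writes a page it saves a pristine "twin" copy, and at synchronization time it ships only the words that changed, so several writers can concurrently update disjoint parts of the same page. A minimal sketch of the idea follows; the helper names are ours, not the systems' actual code.

```python
# Twin/diff sketch of a multiple-writer protocol (illustrative only).

def make_twin(page):
    """Pristine copy of the page, taken at the first local write."""
    return list(page)

def diff(twin, page):
    """Map of word index -> new value for every word that changed."""
    return {i: v for i, (t, v) in enumerate(zip(twin, page)) if t != v}

def apply_diff(page, d):
    for i, v in d.items():
        page[i] = v
    return page

shared = [0, 0, 0, 0]                 # master copy of one 4-word page

# Two writers start from identical copies of the page.
p1, p2 = list(shared), list(shared)
t1, t2 = make_twin(p1), make_twin(p2)
p1[0] = 7                             # writer 1 touches word 0
p2[3] = 9                             # writer 2 touches word 3

# At release time each writer sends only its diff; merging both
# diffs brings the master page up to date.
apply_diff(shared, diff(t1, p1))
apply_diff(shared, diff(t2, p2))
print(shared)                         # [7, 0, 0, 9]
```

Computing and applying these diffs is exactly the "cost of making a stale page current" that grows when many writers repeatedly update and read the same page.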
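The difference between consistency models can be made concrete with the classic store-buffering litmus test. The sketch below is our illustration, not code from any surveyed system: it enumerates every sequentially consistent interleaving of two two-instruction threads and collects the observable outcomes, showing that one outcome is impossible under sequential consistency.

```python
# Store-buffering litmus test under sequential consistency (SC):
#   Thread T1: x = 1; r1 = y        Thread T2: y = 1; r2 = x
# SC permits any interleaving that preserves each thread's program order.

T1 = [("T1", "store", "x"), ("T1", "load", "y")]
T2 = [("T2", "store", "y"), ("T2", "load", "x")]

def interleavings(a, b):
    """Yield every merge of a and b that preserves each list's order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    """Execute one interleaving against a single shared memory."""
    mem = {"x": 0, "y": 0}
    loads = {}
    for thread, kind, var in schedule:
        if kind == "store":
            mem[var] = 1
        else:
            loads[thread] = mem[var]
    return loads["T1"], loads["T2"]          # (r1, r2)

sc_results = {run(s) for s in interleavings(T1, T2)}
print(sorted(sc_results))   # [(0, 1), (1, 0), (1, 1)]
```

The outcome (r1, r2) = (0, 0) never appears in any of the six interleavings. On hardware with store buffers, or under relaxed consistency models, (0, 0) can additionally be observed; ruling it out is precisely the guarantee, and the cost, of sequential consistency, and it is why a data-race-free program can be reasoned about as if memory were sequentially consistent.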
consistency. A program with a data race condition might get results which programmers do not expect. However, a program without a data race condition runs as if in a sequentially consistent memory model. Unlike Munin, TreadMarks does not have different types of shared variables: all of shared memory follows lazy release consistency. TreadMarks supports two synchronization primitives, locks and barriers.

IV CONCLUSIONS AND FUTURE WORK

According to the analysis in this paper, we found that modern software distributed shared memory systems have some weaknesses that limit performance and efficiency during the implementation of a software based distributed shared memory system. These weaknesses are:
- No high level synchronization primitives are provided. Programmers have to use basic synchronization primitives, for example barriers and locks, to solve synchronization problems.
- If many writers write to and then read the same page, current multiple-writer protocols suffer from the high cost of making a stale page current.

In future work, these two weaknesses can be addressed by using different methodologies and implementations that provide strong guarantees of performance, persistence, interoperability, security, resource management, scalability, and fault tolerance during read and write operations on the memory organization and multiprocessor system.

V REFERENCES

[1] Stanek, William R. (2009). Windows Server 2008 Inside Out. O'Reilly Media, Inc. p. 1520. ISBN 978-07356-3806-8. Retrieved 2012-08-20. [...] Windows Server Enterprise supports clustering with up to eight-node clusters and very large memory (VLM) configurations of up to 32 GB on 32-bit systems and 2 TB on 64-bit systems.
[2] H. Amano, Parallel Computer. Shoukoudou, June 1996.
[3] N. Suzuki, S. Shimizu, and N. Yamanouchi, An Implementation of a Shared Memory Multiprocessor. Koronasha, Mar. 1993.
[4] M. J. Flynn, Computer Architecture: Pipelined and Parallel Processor Design, Jones and Bartlett, Boston, 1995.
[5] Kai Li, "Shared Virtual Memory on Loosely Coupled Microprocessors," PhD Thesis, Yale University, September 1986.
[6] Song Li, Yu Lin, and Michael Walker, "Region-based Software Distributed Shared Memory," CS 656 Operating Systems, May 5, 2000.
[7] Ajay Kshemkalyani and Mukesh Singhal, Ch. 12: Distributed Computing: Principles, Algorithms, and Systems, Cambridge University Press, 2008.
[8] S. V. Adve and M. D. Hill. A Unified Formalization of Four Shared-Memory Models. IEEE Trans. on Parallel and Distributed Systems, 4(6):613–624, June 1993.
[9] P. S. Sindhu, J-M. Frailong, and M. Cekleov. Formal Specification of Memory Models. In M. Dubois and S. S. Thakkar, editors, Scalable Shared Memory Multiprocessors, pages 25–41. Kluwer Academic Publishers, 1992.
[10] M. Raynal and A. Schiper. A Suite of Formal Definitions for Consistency Criteria in Shared Memories. In Proc. of the 9th Int'l Conf. on Parallel and Distributed Computing Systems (PDCS'96), pages 125–131, September 1996.
[11] K. Li and P. Hudak. Memory coherence in shared virtual memory systems. ACM Transactions on Computer Systems, 7(4):321–359, November 1989.
[12] K. Li and R. Schaefer. Shiva: An Operating System Transforming a Hypercube into a Shared-Memory Machine. Technical Report CS-TR-217-89, Dept. of Computer Science, Princeton University, April 1989.
[13] J. B. Carter, J. K. Bennett, and W. Zwaenepoel. Techniques for reducing consistency-related communication in distributed shared memory systems. ACM Transactions on Computer Systems, 13(3):205–243, August 1995.
[14] J. B. Carter. Design of the Munin distributed shared memory system. Journal of Parallel and Distributed Computing on Distributed Shared Memory, 1995.
[15] P. Keleher, A. L. Cox, S. Dwarkadas, and W. Zwaenepoel. An Evaluation of Software-Based Release Consistent Protocols. Journal of Parallel and Distributed Computing, 29(2):126–141, September 1995.
[16] W. G. Levelt, M. F. Kaashoek, H. E. Bal, and A. S. Tanenbaum. A Comparison of Two Paradigms for Distributed Shared Memory. Software—Practice and Experience, 22(11):985–1010, November 1992. Also available as Free University of the Netherlands, Computer Science Department technical report IR-221.
[17] X-H. Sun and J. Zhu. Performance Considerations of Shared Virtual Memory Machines. IEEE Trans. on Parallel and Distributed Systems, 6(11):1185–1194, November 1995.
[18] R. G. Minnich and D. V. Pryor. A Radiative Heat Transfer Simulation on a SPARCStation Farm. In Proc. of the First IEEE Int'l Symp. on High Performance Distributed Computing (HPDC-1), pages 124–132, September 1992.
[19] P. Keleher, A. L. Cox, S. Dwarkadas, and W. Zwaenepoel. TreadMarks: Distributed shared memory on standard workstations and operating systems. In the 1994 Winter USENIX Conference, 1994.