
CS621 Week 03

The document discusses shared and distributed memory architectures in parallel and distributed computing, highlighting their characteristics and applications. It also covers Flynn's classification of computer architectures, detailing SISD, SIMD, MISD, and MIMD types. Additionally, it compares SIMD and MIMD architectures in terms of processing, memory access, and performance efficiency.


Dr. Muhammad Anwaar Saeed
Dr. Said Nabi
Ms. Hina Ishaq

CS621 Parallel and Distributed Computing
Shared Memory

Objectives:
• Introduction to shared memory.
• Architecture of shared memory.
Shared Memory:

“Shared memory is a type of memory architecture that allows multiple processors or threads to access the same memory space. In the context of distributed and parallel computing, shared memory can be used to facilitate communication and synchronization between different processes or threads.”
Shared Memory:

• Processors have direct access to global memory and I/O through a bus or fast switching network.
• A cache coherency protocol guarantees consistency of memory and I/O accesses.
• Each processor also has its own memory (cache).
Shared Memory Cont…

• Data structures are shared in a global address space.
• Concurrent access to shared memory must be coordinated.
• Programming models: multithreading (thread libraries), OpenMP.
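The shared-memory model and its coordination requirement can be sketched with Python threads (a stand-in for the thread libraries mentioned above; OpenMP itself is a C/C++/Fortran API): all threads update the same counter in a shared address space, and a lock coordinates the concurrent access.

```python
# Minimal shared-memory sketch: several threads read and write the SAME
# variable, so concurrent access must be coordinated (here, with a lock).
import threading

counter = 0                      # lives in the shared address space
lock = threading.Lock()          # coordinates concurrent access

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # without this, updates could be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 4000: no updates were lost
```

Removing the `with lock:` line makes the result nondeterministic, which is exactly the coordination problem the slide refers to.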
Architecture of Shared Memory:
Distributed Memory

Objectives:
• Introduction to distributed memory.
• Architecture of distributed memory.
Distributed Memory:

“Distributed memory refers to a type of parallel computing architecture where each processor has its own private memory, and communication between processors happens through message passing. In this architecture, the memory of one processor is not directly accessible by other processors, and communication between processors occurs explicitly through messages that are sent and received.”
Distributed Memory:

• Each processor has direct access only to its local memory.
• Processors are connected via a high-speed interconnect.
• Data structures must be distributed.
Distributed Memory Cont…

• Data exchange is done via explicit processor-to-processor communication: send/receive messages.
• Programming models – widely used standard: MPI; others: PVM, PARMACS, Express, P4, Chameleon, etc.
Architecture of Distributed Memory:
Flynn’s classification of computer architectures

Objectives:
• What is Flynn’s classification of computer architectures?
• Basis for Flynn’s classification.
• Types of Flynn’s classification.


Flynn’s classification of computer architectures (1966):

“Michael J. Flynn classified computers on the basis of the multiplicity of instruction streams and data streams in a computer system.”
Flynn’s classification of computer architectures Cont…

[Diagram: instruction stream vs. data stream, each either single or multiple]
Flynn’s classification of computer architectures Cont…

The four classifications defined by Flynn are based upon the number of concurrent instruction (or control) and data streams available in the architecture:
• Single Instruction Single Data (SISD)
• Single Instruction Multiple Data (SIMD)
• Multiple Instruction Multiple Data (MIMD)
• Multiple Instruction Single Data (MISD)
Flynn’s classification of computer architectures Cont…

                Single instruction    Multiple instruction
Single data     SISD                  MISD
Multiple data   SIMD                  MIMD


SISD (Single-Instruction Single-Data)

Objectives:
• Introduction to SISD.
• Architecture of SISD.
SISD (Single-Instruction Single-Data)

“Refers to the traditional von Neumann architecture where a single sequential processing element (PE) operates on a single stream of data.”
SISD (Single-Instruction Single-Data) Architecture
SISD (Single-Instruction Single-Data) Cont…

Conventional single-processor von Neumann computers are classified as SISD systems. A typical non-pipelined architecture has general-purpose registers as well as some dedicated special registers like:
• Program Counter (PC)
• Memory Data Register (MDR)
• Memory Address Register (MAR)
• Instruction Register (IR)
SISD (Single-Instruction Single-Data) Cont…

• Serial computer: it does not perform the same operation on multiple data operands concurrently.
• Any overlap (e.g., pipelining) is concurrency of processing rather than concurrency of execution.
• Example: a personal computer processing instructions and data on a single processor.
SIMD (Single-Instruction Multi-Data)

Objectives:
• Introduction to SIMD.
• SIMD architecture.
• SIMD schemes.
SIMD (Single-Instruction Multi-Data)

“SIMD is a multiple-processing system that performs one operation simultaneously on more than one piece of data.”
SIMD (Single-Instruction Multi-Data) Cont…

• Only one program can be run at a time.
• All processors in a parallel computer execute the same instructions but operate on different data at the same time.
• Processors run in synchronous, lockstep fashion.
• Memory may be shared or distributed.
• SIMD instructions give large speedups in tasks like linear algebra and image/video manipulation, encoding, and decoding.
• Examples: Wireless MMX unit, CM-1, CM-2, DAP, MasPar MP-1
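The lockstep idea can be emulated in plain Python (this is a conceptual sketch, not real hardware SIMD): one operation is applied across many "lanes", each lane holding a different data element.

```python
# SIMD emulation: a SINGLE instruction (the operation `op`) is applied
# to MULTIPLE data elements, one per lane, conceptually in lockstep.
def simd_apply(op, lanes):
    # Every lane executes the SAME operation on its OWN data element.
    return [op(x) for x in lanes]

lanes = [1, 2, 3, 4]                        # one data element per PE
doubled = simd_apply(lambda x: x * 2, lanes)
print(doubled)                              # [2, 4, 6, 8]
```

Real SIMD hardware (e.g., vector units) performs all lanes in a single instruction; the Python loop only models the programming abstraction.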
SIMD (Single-Instruction Multi-Data) Architecture

Consists of 2 parts:
• A front-end von Neumann computer.
• A processor array connected to the memory bus of the front end.
SIMD (Single-Instruction Multi-Data) Architecture Cont…
SIMD (Single-Instruction Multi-Data) Schemes

Classified into two configuration schemes:
• Scheme 1 – Each processor has its own local memory.
• Scheme 2 – Processors and memory modules communicate with each other via an interconnection network.
SIMD (Single-Instruction Multi-Data) Scheme Cont…

• SIMD Scheme 1
SIMD (Single-Instruction Multi-Data) Scheme Cont…

• SIMD Scheme 2
MISD (Multiple-Instruction Single-Data)

Objectives:
• Introduction to MISD.
• MISD architecture.
MISD (Multiple-Instruction Single-Data)

“A pipeline of multiple independently executing functional units operating on a single stream of data, forwarding results from one functional unit to the next.”
MISD (Multiple-Instruction Single-Data) Architecture
MISD (Multiple-Instruction Single-Data) Cont…

• Special purpose computer.
• Excellent for situations where fault tolerance is critical.
• Heterogeneous systems operate on the same data stream and must agree on the result.
• Rarely used; some specific-use systems (e.g., space flight control computers).
• Example: systolic array.
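The fault-tolerance use case above can be sketched as follows (a conceptual illustration, not any real flight-control code): several independently written routines, i.e. different instruction streams, process the same single data stream and must agree on the result by majority vote.

```python
# MISD-style fault tolerance sketch: multiple instruction streams
# (three differently written implementations) consume the SAME data
# stream (one input value) and must agree on the result.
from collections import Counter

def impl_a(x):
    return x * x                        # multiplication

def impl_b(x):
    return x ** 2                       # exponentiation

def impl_c(x):
    return sum(x for _ in range(x))     # repeated addition

def voted_result(x, impls):
    results = [f(x) for f in impls]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(impls) // 2:        # a faulty minority is outvoted
        raise RuntimeError("no majority agreement")
    return value

print(voted_result(5, [impl_a, impl_b, impl_c]))  # 25
```

If one implementation misbehaves, the other two still outvote it, which is why redundant heterogeneous computation suits safety-critical systems.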


MIMD (Multi-Instruction Multi-data)

Objectives:
• Introduction to MIMD.
• MIMD architecture.
• MIMD shared memory & message passing architectures.
MIMD (Multi-Instruction Multi-data)

“In MIMD all processors in a parallel computer can execute different instructions and operate on various data at the same time.”
MIMD (Multi-Instruction Multi-data) Architecture
MIMD (Multi-Instruction Multi-data) Cont…

• MIMD machines can execute different programs on different processors.
• Each processor has a separate program, and an instruction stream is generated from each program.
• Parallelism is achieved by connecting multiple processors together.
• Different programs can be run simultaneously.
• Each processor can perform any operation regardless of what other processors are doing.
• Examples: S-1, Cray-3, Cray T90, Cray T3E, multiprocessor PCs, and 370/168 MP
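The MIMD model can be sketched with Python threads: independent workers run different instruction streams (different functions) on different data at the same time.

```python
# MIMD sketch: two workers execute DIFFERENT programs on DIFFERENT data
# concurrently, each free to do any operation regardless of the other.
import threading

results = {}

def summer(data):                    # one instruction stream
    results["sum"] = sum(data)

def joiner(data):                    # a completely different one
    results["joined"] = "-".join(data)

t1 = threading.Thread(target=summer, args=([1, 2, 3],))
t2 = threading.Thread(target=joiner, args=(["a", "b"],))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)                       # {'sum': 6, 'joined': 'a-b'}
```

Contrast this with the SIMD sketch earlier: there every lane ran the same operation; here each worker runs its own program.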
MIMD (Multi-Instruction Multi-data) Architecture Cont…
• Made of multiple processors and multiple memory modules connected via some interconnection network.
• Special purpose computer.
• Shared or distributed memory.

Classified into two configuration schemes:
• Shared memory
• Message passing
MIMD (Multi-Instruction Multi-data) Cont…

Shared Memory Organization:
• Inter-processor coordination is accomplished by reading and writing in a global memory shared by all processors.
• Any processor can access the local memory of any other processor.
• Typically consists of servers that communicate through a bus and cache memory controller.
MIMD (Multi-Instruction Multi-data) Cont…
Shared Memory Architecture
MIMD (Multi-Instruction Multi-data) Cont…

Message Passing Organization:
• Point-to-point.
• Each processor has access to its own local memory.
• Communications are performed via send and receive operations.
• Message passing multiprocessors employ a variety of static networks in local communication.
• Must be synchronized among different processors.
MIMD (Multi-Instruction Multi-data) Cont…
Message Passing Architecture
SIMD-MIMD Comparison

Objectives:
• Comparison of SIMD and MIMD.
SIMD-MIMD Comparison

• SIMD computers require less hardware than MIMD computers (single control unit).
• SIMD follows synchronous processing, whereas MIMD incorporates asynchronous processing.
• SIMD is less expensive as compared to MIMD.
SIMD-MIMD Comparison Cont…

• In MIMD, each processing element stores its individual copy of the program, which increases the memory requirements. Conversely, SIMD requires less memory as it stores only one copy of the program.
• MIMD is more complex and efficient as compared to SIMD.
SIMD-MIMD Comparison Cont…

Architecture         Single Instruction Multiple Data (SIMD)      Multiple Instruction Multiple Data (MIMD)
Type of processing   Same instruction on multiple data sets       Different instructions on multiple data sets
Flexibility          Limited                                      High
Memory access        Shared                                       Separate
Best suited for      Applications with regular data parallelism   Applications with irregular data dependencies or complex computations
Performance          High efficiency for parallelizable tasks     Flexible, but may be less efficient for parallelizable tasks
