Underlying Principles of Parallel and Distributed Computing
CS8791/CC/IVCSE/VIISEM/KG-KiTE
Course Outcome
Syllabus
UNIT I INTRODUCTION
Introduction to Cloud Computing – Definition of Cloud – Evolution
of Cloud Computing – Underlying Principles of Parallel and
Distributed Computing – Cloud Characteristics – Elasticity in Cloud
– On-Demand Provisioning
UNDERLYING PRINCIPLES OF PARALLEL COMPUTING
Parallel Computer Memory Architectures
• Shared Memory
– Uniform Memory Access (UMA)
– Non-Uniform Memory Access (NUMA)
• Distributed Memory
• Hybrid Distributed-Shared Memory
Distributed Memory
Parallel Processing
• The term “parallel” implies a tightly coupled system, while
“distributed” refers to a wider class of systems, including those
that are tightly coupled.
• “Parallel computing” refers to a model where the computation is
divided among several processors sharing the same memory.
• The shared memory has a single address space, which is
accessible to all the processors.
• Parallel programs are then broken down into several units of
execution that can be allocated to different processors and can
communicate with one another by means of the shared memory.
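The shared-memory model described above can be sketched with Python threads, which all see a single address space and communicate through a shared structure guarded by a lock. (This is a minimal illustration of the model; note that CPython's global interpreter lock means threads model the programming style rather than true multi-CPU execution.)

```python
# Minimal sketch of the shared-memory model: several units of
# execution work on separate chunks and combine results in a
# single address space visible to all of them.
import threading

shared = {"total": 0}          # lives in the shared address space
lock = threading.Lock()

def unit_of_execution(chunk):
    partial = sum(chunk)       # work on this unit's share of the data
    with lock:                 # communicate via the shared memory
        shared["total"] += partial

data = list(range(1, 101))
threads = [threading.Thread(target=unit_of_execution, args=(data[i::4],))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["total"])  # 5050: all partial sums combined in shared memory
```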
Parallel Computing
Approaches to Parallel Programming
• A sequential program is one which runs on a single processor and has a
single line of control.
• To make many processors collectively work on a single program, the
program must be divided into smaller independent chunks so that each
processor can work on separate chunks of the problem.
• The program decomposed in this way is a parallel program.
• The most prominent parallel programming approaches are the following:
● Data Parallelism
● Process Parallelism
● Farmer-and-Worker Model
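The farmer-and-worker approach from the list above can be sketched with a task queue: the farmer splits the problem into chunks and enqueues them, and worker threads repeatedly pull a task, compute, and report the result back. (Threads are used here only for portability; in practice the workers would be separate processes or machines.)

```python
# Sketch of the farmer-and-worker model: a farmer hands out
# independent chunks of work; workers pull tasks from a queue,
# compute, and push results back.
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        chunk = tasks.get()
        if chunk is None:                        # "no more work" signal
            break
        results.put(sum(x * x for x in chunk))   # the worker's job

# Farmer: decompose the problem and distribute the chunks.
data = list(range(10))
for i in range(0, len(data), 5):
    tasks.put(data[i:i + 5])

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
for _ in workers:
    tasks.put(None)                              # one stop signal per worker
for w in workers:
    w.join()

total = sum(results.get() for _ in range(2))
print(total)  # sum of squares 0..9 = 285
```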
Approaches to Parallel Programming
Levels of Parallelism
Levels of parallelism are decided based on the size of the lumps of
code (the grain size) that can be executed in parallel.
Parallelism Levels
Parallelism within an application can be detected at several levels:
● Large-grain (or task-level)
● Medium-grain (or control-level)
● Fine-grain (data-level)
● Very-fine grain (multiple instruction issue)
Parallelism levels
Types of Parallelism
• Bit-level Parallelism
• Instruction-level Parallelism
• Data Parallelism
• Task Parallelism
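Two of the types above can be contrasted in a short sketch: data parallelism applies the same operation to different pieces of data, while task parallelism runs different operations side by side. (`concurrent.futures` is used here only to model concurrent execution.)

```python
# Data parallelism vs. task parallelism, side by side.
from concurrent.futures import ThreadPoolExecutor

data = [1, 2, 3, 4]

with ThreadPoolExecutor() as pool:
    # Data parallelism: ONE operation mapped over many data items.
    squares = list(pool.map(lambda x: x * x, data))

    # Task parallelism: DIFFERENT operations submitted concurrently.
    f_sum = pool.submit(sum, data)
    f_max = pool.submit(max, data)

print(squares, f_sum.result(), f_max.result())  # [1, 4, 9, 16] 10 4
```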
Speed vs. Number of Processors
Speed-up of a parallel computer increases as the logarithm of the
number of processors, i.e., y = k*log(N).
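A quick numeric reading of the y = k*log(N) relationship on this slide (k = 1 is assumed here purely for illustration) shows why returns diminish: each doubling of the processor count adds only a constant increment of speed-up.

```python
# Tabulate y = k*log2(N) for growing processor counts (k = 1 assumed).
import math

k = 1.0  # proportionality constant (illustrative assumption)
speedups = {n: k * math.log2(n) for n in (2, 4, 8, 16, 1024)}
for n, y in speedups.items():
    print(n, y)
# Even 1024 processors yield only ~10x speed-up under this model.
```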
Why use Parallel Computing?
• Save time and/or money
• Solve larger problems
• Provide concurrency
• Use of non-local resources
• Limits to serial computing
SIMD
(Single Instruction Multiple Data)
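A conceptual sketch of the SIMD model: a single instruction (here, add) is applied across all data lanes at once. Plain Python lists stand in for vector registers; actual SIMD execution happens in hardware vector units (e.g., SSE/AVX), which this only mimics.

```python
# One instruction, multiple data: element-wise add over all lanes.
lanes_a = [1, 2, 3, 4]      # "vector register" A
lanes_b = [10, 20, 30, 40]  # "vector register" B

result = [a + b for a, b in zip(lanes_a, lanes_b)]
print(result)  # [11, 22, 33, 44]
```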
MISD
(Multiple Instruction Single Data)
MIMD
(Multiple Instruction Multiple Data)
Time To Think (T2T)
A term for simultaneous access to a resource, physical or logical.
(BT Level- Remember)
a) Multiprogramming
b) Multitasking
c) Threads
d) Concurrency
Ans: d
More to Know
https://www.youtube.com/watch?v=1XGo8K1boH4