COMPUTING
Parallel Computing
Need
• In parallel computing, additional computational
resources are added to the system to improve its
processing capability
• Complex computations are divided into subtasks,
which can be handled individually by processing
units running in parallel
• Multiple computing systems run in parallel
• The idea is that processing capability increases
with the level of parallelism
Parallel processing uses two or more processors or
CPUs simultaneously to handle various
components of a single activity.
Example - Supercomputers for use in astronomy
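The idea of dividing one computation into subtasks handled by parallel processing units can be sketched in Python; the function names and chunking scheme below are illustrative, not from the text:

```python
# A minimal sketch of parallel processing: one large computation is
# split into subtasks that run on separate CPU cores.
from concurrent.futures import ProcessPoolExecutor

def subtask(chunk):
    # Each processing unit handles one piece of the overall computation.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the complex computation into independent subtasks.
    size = len(data) // workers or 1
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Run the subtasks in parallel and combine their partial results.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(subtask, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

Adding more workers increases the level of parallelism, which is the point the bullets above make about processing capability.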
Distributed Computing
Need
• In distributed computing, multiple computing
resources are connected in a network and
computing tasks are distributed across these
resources
• This increases the speed and efficiency of the system
• Faster and more efficient than traditional methods
of computing
The method of making multiple computers
work together to solve a common problem.
• More suitable to process huge amounts
of data in limited time
Client-Server Architecture
Examples and Use Cases -
Artificial Intelligence and Machine Learning
Features -
Scalable
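The client-server architecture named above can be sketched with a toy TCP service; the service (upper-casing whatever the client sends), host, and port handling are illustrative assumptions:

```python
# A minimal sketch of client-server architecture: a server offers a
# service over the network, and a client sends it a request.
import socket
import threading

def serve_once(host="127.0.0.1"):
    # Bind to an OS-assigned port and serve a single client request.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # the "service" this server provides
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

def client_request(port, message):
    # The client connects, sends its request, and reads the response.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message.encode())
        sock.shutdown(socket.SHUT_WR)
        return sock.recv(1024).decode()

if __name__ == "__main__":
    port = serve_once()
    print(client_request(port, "hello"))
```

Scalability in this pattern comes from adding more servers behind the same interface, so clients need not change as capacity grows.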
Comparing Parallel and Distributed Systems
Distributed System
• Independent, autonomous systems
connected in a network,
accomplishing specific tasks
• Coordination is possible between
connected computers, each with its
own memory and CPU, in a network
• Loose coupling of computers
connected in a network, providing
access to data and remotely
located resources
Parallel System
• Computer system with several
processing units attached to it
• A common shared memory can be
directly accessed by every
processing unit
• Tight coupling of processing
resources that are used for
solving a single, complex problem
Distributed Databases
• Deal with tables and relations
• Must have a schema for data
• Generate notions of a transaction
• Implement ACID transaction properties (atomicity,
consistency, isolation, and durability)
• Implement data partitioning
• Allow distributed transactions
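The point that a parallel system's common shared memory is directly accessed by every processing unit can be sketched with Python's multiprocessing shared values; the counter and worker names are illustrative:

```python
# A minimal sketch of tightly coupled processing units updating one
# shared memory location (a shared counter).
from multiprocessing import Process, Value, Lock

def worker(counter, lock, n):
    for _ in range(n):
        # Tight coupling: every processing unit touches the same
        # shared memory cell, so access must be synchronized.
        with lock:
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)  # an integer living in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 1000))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)
```

In a loosely coupled distributed system there is no such shared cell; the computers would instead exchange messages over the network.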
Hadoop
• Deals with flat files in any format
• Requires no schema for the data
• Divides files automatically into blocks
• Generates notions of a job divided
into tasks
• Implements the MapReduce computing
model
• Considers every task to be either a Map
or a Reduce
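The MapReduce model above can be sketched as a single-machine word count, where every task is either a Map or a Reduce; the function names are illustrative, not Hadoop APIs:

```python
# A minimal, single-machine sketch of the MapReduce computing model.
from collections import defaultdict

def map_task(line):
    # Map: emit (word, 1) pairs from one block of input.
    return [(word, 1) for word in line.split()]

def reduce_task(word, counts):
    # Reduce: combine all counts emitted for one key.
    return word, sum(counts)

def map_reduce(lines):
    grouped = defaultdict(list)  # shuffle: group emitted values by key
    for line in lines:
        for word, count in map_task(line):
            grouped[word].append(count)
    return dict(reduce_task(w, c) for w, c in grouped.items())

if __name__ == "__main__":
    print(map_reduce(["big data big jobs", "big tasks"]))
```

Hadoop runs the same two kinds of tasks, but distributes the Map tasks across the blocks it has split the input files into and runs them on many machines.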