Introduction to Parallel Computing
Last Updated: 04 Jun, 2021
Before diving into Parallel Computing, let's first take a look at how computer software conventionally performed its computations and why that approach fell short in the modern era.
Computer software was conventionally written for serial computing. This meant that to solve a problem, an algorithm divides the problem into smaller instructions. These discrete instructions are then executed one by one on the Central Processing Unit of a computer: the next instruction starts only after the previous one has finished.
A real-life example of this would be people standing in a queue waiting for a movie ticket with only one cashier. The cashier gives tickets to the people one by one. The complexity of this situation increases when there are 2 queues and still only one cashier.
So, in short, Serial Computing works as follows:
- In this, a problem statement is broken into discrete instructions.
- Then the instructions are executed one by one.
- Only one instruction is executed at any moment of time.
Look at the last point. It was causing a huge problem in the computing industry: with only one instruction executing at any moment, hardware resources were wasted, as only one part of the hardware was active for a particular instruction at a time. As problem statements grew heavier and bulkier, so did the time needed to execute them. Examples of such processors are the Pentium 3 and Pentium 4.
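To make the serial model concrete, here is a minimal sketch (the `serve_ticket` function and its timing are our own illustration, not from the article) of one cashier serving a queue one customer at a time:

```python
import time

def serve_ticket(customer: int) -> str:
    """Simulate one cashier serving one customer (about 0.1 s of work)."""
    time.sleep(0.1)  # stand-in for the real work
    return f"ticket for customer {customer}"

if __name__ == "__main__":
    start = time.perf_counter()
    tickets = [serve_ticket(c) for c in range(8)]  # strictly one by one, in order
    print(f"served {len(tickets)} customers in {time.perf_counter() - start:.2f} s")
    # With one cashier, 8 customers take roughly 8 * 0.1 = 0.8 s.
```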
Now let’s come back to our real-life problem. We can definitely say that the complexity decreases when there are 2 queues and 2 cashiers giving tickets to 2 persons simultaneously. This is an example of Parallel Computing.
Parallel Computing:
It is the use of multiple processing elements simultaneously to solve a problem. The problem is broken down into instructions that are solved concurrently, as each resource applied to the work is active at the same time.
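Continuing the hypothetical ticket-counter sketch from above, a process pool acts like several cashiers working at once; the function name, pool size, and timings are illustrative assumptions, not from the article:

```python
import time
from multiprocessing import Pool

def serve_ticket(customer: int) -> str:
    """Simulate one cashier serving one customer (about 0.1 s of work)."""
    time.sleep(0.1)
    return f"ticket for customer {customer}"

if __name__ == "__main__":  # guard required by multiprocessing on some platforms
    start = time.perf_counter()
    with Pool(processes=4) as pool:  # four "cashiers" working simultaneously
        tickets = pool.map(serve_ticket, range(8))
    print(f"served {len(tickets)} customers in {time.perf_counter() - start:.2f} s")
    # With four cashiers, 8 customers take roughly 2 * 0.1 = 0.2 s.
```

The wall-clock time drops because the same total work is spread across several processing elements, which is exactly the point of parallel computing.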
Advantages of Parallel Computing over Serial Computing are as follows:
- It saves time and money as many resources working together will reduce the time and cut potential costs.
- Larger problems can be impractical to solve with Serial Computing.
- It can take advantage of non-local resources when the local resources are finite.
- Serial Computing ‘wastes’ potential computing power; Parallel Computing makes better use of the hardware.
Types of Parallelism:
- Bit-level parallelism –
It is the form of parallel computing based on increasing the processor’s word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first add the 8 lower-order bits, then add the 8 higher-order bits along with the carry, thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with just one instruction. (A concrete sketch of this two-step addition appears after this list.)
- Instruction-level parallelism –
In the simplest case, a processor can issue at most one instruction per clock cycle. Instruction-level parallelism re-orders and groups instructions that do not depend on one another so that they can be executed concurrently without affecting the result of the program. (A dependency sketch appears after this list.)
- Task Parallelism –
Task parallelism employs the decomposition of a task into subtasks and then allocates each subtask for execution. The processors perform the execution of subtasks concurrently. (A sketch appears after this list.)
- Data-level parallelism (DLP) –
Instructions from a single stream operate concurrently on several data elements. This form is limited by irregular data-manipulation patterns and by memory bandwidth. (A sketch appears after this list.)
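As a concrete sketch of the bit-level example above (our own illustration), the function below adds two 16-bit integers using only 8-bit additions, the way an 8-bit processor would, propagating the carry from the low byte to the high byte:

```python
def add16_on_8bit(a: int, b: int) -> int:
    """Add two 16-bit integers using only 8-bit additions,
    as an 8-bit processor would: low bytes first, then high bytes plus carry."""
    lo = (a & 0xFF) + (b & 0xFF)                        # step 1: low-order bytes
    carry = lo >> 8                                     # carry out of the low byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry  # step 2: high-order bytes
    return ((hi & 0xFF) << 8) | (lo & 0xFF)             # reassemble, truncate to 16 bits

assert add16_on_8bit(0x12F0, 0x0345) == (0x12F0 + 0x0345) & 0xFFFF
# A 16-bit processor performs the same sum with a single add instruction.
```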
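For instruction-level parallelism, the snippet below is purely illustrative: Python itself does not re-order these statements, but it shows the kind of data (in)dependence a superscalar processor exploits:

```python
a, b, c, d = 1, 2, 3, 4
x = a + b   # independent of the next line ...
y = c * d   # ... so hardware could, in principle, issue both in the same cycle
z = x - y   # depends on both x and y, so it must wait for them
print(z)    # -9
```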
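For task parallelism, here is a minimal sketch (the subtasks `summarize` and `find_extremes` are hypothetical examples of ours) in which two different subtasks of a larger job run in separate processes at the same time:

```python
from concurrent.futures import ProcessPoolExecutor

def summarize(numbers):       # subtask 1: compute the sum
    return sum(numbers)

def find_extremes(numbers):   # subtask 2: compute the min and max
    return min(numbers), max(numbers)

if __name__ == "__main__":
    data = list(range(1_000_000))
    with ProcessPoolExecutor() as pool:
        total = pool.submit(summarize, data)         # two different subtasks ...
        extremes = pool.submit(find_extremes, data)  # ... executing concurrently
        print(total.result(), extremes.result())
```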
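And for data-level parallelism, one and the same operation is applied to many data elements concurrently; again a minimal sketch with made-up data:

```python
from multiprocessing import Pool

def square(n: int) -> int:  # a single instruction stream ...
    return n * n

if __name__ == "__main__":
    with Pool() as pool:
        # ... operating on many data elements concurrently
        print(pool.map(square, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```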
Why parallel computing?
- The real world is dynamic in nature: many things happen at the same time, at different places, concurrently. This data is extremely large and difficult to manage.
- Real-world data needs more dynamic simulation and modeling, and for achieving the same, parallel computing is the key.
- Parallel computing provides concurrency and saves time and money.
- Complex, large datasets and their management can only be organized using parallel computing’s approach.
- It ensures the effective utilization of resources. The hardware is guaranteed to be used effectively, whereas in serial computation only part of the hardware was used and the rest sat idle.
- Also, it is impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
- Databases and Data mining.
- Real-time simulation of systems.
- Science and Engineering.
- Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
- It introduces challenges such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve. (A sketch of one such pitfall appears after this list.)
- The algorithms must be structured in such a way that they can be handled by a parallel mechanism.
- The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
- Writing parallelism-based programs well requires more technically skilled and expert programmers.
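To illustrate the synchronization pitfall from the first point above (a sketch of our own, not from the article), several threads incrementing a shared counter without coordination may lose updates, because `counter += 1` is a non-atomic read-modify-write; a lock restores correctness at the cost of making the threads take turns:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1        # non-atomic read-modify-write: updates can be lost

def safe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:          # synchronization makes the update safe ...
            counter += 1    # ... but threads must now take turns here

def run(worker) -> int:
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

if __name__ == "__main__":
    print("without lock:", run(unsafe_increment), "(may be less than 400000)")
    print("with lock:   ", run(safe_increment), "(always 400000)")
```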
Future of Parallel Computing: The computational landscape has undergone a great transition from serial computing to parallel computing. Tech giants such as Intel have already taken a step towards parallel computing by employing multicore processors. Parallel computation will revolutionize the way computers work in the future, for the better. With the whole world connecting to one another even more than before, parallel computing plays a bigger role in helping us stay connected. With faster networks, distributed systems, and multi-processor computers, it becomes even more necessary.