
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

EXPERIMENT NO 12

Lab Title: Parallel and Distributed Computing: MPI Programs

Student Name: Muhammad Burhan Ahmed Reg. No: 210287


Objective: Implement and analyze various MPI programs.
LAB ASSESSMENT:
Attributes                          Excellent (5)  Good (4)  Average (3)  Satisfactory (2)  Unsatisfactory (1)

Ability to conduct experiment

Ability to assimilate the results

Effective use of lab equipment and follows the lab safety rules

Total Marks: Obtained Marks:

LAB REPORT ASSESSMENT:


Attributes                          Excellent (5)  Good (4)  Average (3)  Satisfactory (2)  Unsatisfactory (1)

Data presentation

Experimental results

Conclusion

Total Marks: Obtained Marks:

Date: 12/23/2024 Signature:


Air University

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

LAB REPORT 12

SUBMITTED TO: Miss Sidrish Ehsan

SUBMITTED BY: Muhammad Burhan Ahmed

Date: 12/23/2024
LAB TASKS
Lab Task 1: Write an MPI program that divides an array of integers
among several processes, and each process computes the sum of its
subarray. Finally, process 0 should collect and print the total sum.

Setup:

Code:
Output:
Lab Task 2:
Implement a parallel matrix addition program in C using MPI, where each
process adds a submatrix and then gathers the result on process 0.

Code:
Output:
Lab Task 3: Implement an MPI program in C that parallelizes the
computation of the sum of an array by distributing the elements across
multiple processes and combining the results.

Code:
Output:
Lab Task 4: Write an MPI program in C to compute the sum of an array
by distributing the array elements across multiple processes and then
combining the partial sums to get the total sum.

Code:
Output:
Conclusion:

In this lab, we focused on the fundamentals of parallel and distributed computing using
MPI and gained practical insight into the benefits and challenges of parallel programming.
We used MPI communication primitives such as MPI_Scatter and MPI_Reduce to distribute
data and aggregate results efficiently, ensuring effective collaboration between processes
and avoiding redundant work. These skills are critical for solving real-world problems in
high-performance computing and distributed systems.
