
AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

EXPERIMENT NO 11

Lab Title: Parallel and Distributed Computing: MPI Programs

Student Name: Muhammad Burhan Ahmed        Reg. No: 210287


Objective: Implement and analyze various MPI programs.
LAB ASSESSMENT:

Attributes                           Excellent (5)  Good (4)  Average (3)  Satisfactory (2)  Unsatisfactory (1)

Ability to conduct experiment

Ability to assimilate the results

Effective use of lab equipment and
following the lab safety rules

Total Marks:            Obtained Marks:

LAB REPORT ASSESSMENT:


Attributes                           Excellent (5)  Good (4)  Average (3)  Satisfactory (2)  Unsatisfactory (1)

Data presentation

Experimental results

Conclusion

Total Marks: Obtained Marks:

Date: 12/18/2024            Signature:


AIR UNIVERSITY

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

LAB REPORT 11

SUBMITTED TO: Miss Sidrish Ehsan

SUBMITTED BY: Muhammad Burhan Ahmed

Date: 12/18/2024
LAB TASKS

Lab Task 01:

Write a C program to demonstrate the use of MPI_Bcast()


Case 1: broadcast = 500

Setup:

Code & Output:
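The original code and output screenshots are not reproduced here. The following is a minimal sketch of a program consistent with the task description, in which the root process (rank 0) broadcasts the value 500 to all other processes; the variable names are illustrative assumptions.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, data = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Only the root process initializes the value to broadcast. */
        if (rank == 0)
            data = 500;

        /* MPI_Bcast copies `data` from root (rank 0) to every
           process in MPI_COMM_WORLD. */
        MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("Process %d received data = %d\n", rank, data);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, e.g., mpirun -np 4, every process prints data = 500 after the broadcast, since MPI_Bcast overwrites the buffer on all non-root ranks.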

Lab Task 02:

Assume the variable rank contains the process rank and root is 3. What will be
stored in array b [ ] on each of four processes if each executes the following
code fragment?
    int b[4] = {0, 0, 0, 0};
    MPI_Gather(&rank, 1, MPI_INT, b, 1, MPI_INT, root, MPI_COMM_WORLD);

Hint. The function prototype is as follows:

    int MPI_Gather(
        void *sendbuf,           // pointer to send buffer
        int sendcount,           // number of items to send
        MPI_Datatype sendtype,   // type of send buffer data
        void *recvbuf,           // pointer to receive buffer
        int recvcount,           // items to receive per process
        MPI_Datatype recvtype,   // type of receive buffer data
        int root,                // rank of receiving process
        MPI_Comm comm)           // MPI communicator to use
Code:
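The submitted code screenshot is not included here; a minimal sketch answering the question might look like the following, assuming four processes and root = 3 as stated in the task.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, root = 3;
        int b[4] = {0, 0, 0, 0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process sends its own rank; the root (rank 3)
           gathers one int per process into b[], in rank order. */
        MPI_Gather(&rank, 1, MPI_INT, b, 1, MPI_INT, root, MPI_COMM_WORLD);

        printf("Process %d: b = {%d, %d, %d, %d}\n",
               rank, b[0], b[1], b[2], b[3]);

        MPI_Finalize();
        return 0;
    }

With four processes, only the root receives data: on rank 3, b = {0, 1, 2, 3}, since MPI_Gather stores each process's contribution at the offset given by its rank. On ranks 0, 1, and 2 the receive buffer argument is ignored, so b remains {0, 0, 0, 0}.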

Output:

Conclusion:

In this lab, we focused on the fundamentals of parallel and distributed computing using
MPI and gained practical insight into the benefits and challenges of parallel computing. We
used MPI collective communication primitives such as MPI_Bcast and MPI_Gather to efficiently
distribute data and aggregate results, ensuring effective collaboration between processes
and reducing redundancy. These skills are critical for solving real-world problems in
high-performance computing and distributed systems.

