Lab 11: Parallel and Distributed Computing
EXPERIMENT NO 11
Assessment criteria: Ability to Conduct Experiment, Data Presentation, Experimental Results, Conclusion
LAB REPORT 11
Date: 12/18/2024
LAB TASKS
Setup:
Lab Task 02:
Assume the variable rank contains the process rank and root is 3. What will be
stored in array b[] on each of the four processes if each executes the following
code fragment?
int b[4] = {0, 0, 0, 0};
MPI_Gather(&rank, 1, MPI_INT, b, 1, MPI_INT, root, MPI_COMM_WORLD);
Hint. The function prototype is as follows:

int MPI_Gather(
    void *sendbuf,          // pointer to send buffer
    int sendcount,          // number of items to send
    MPI_Datatype sendtype,  // type of send buffer data
    void *recvbuf,          // pointer to receive buffer
    int recvcount,          // items to receive per process
    MPI_Datatype recvtype,  // type of receive buffer data
    int root,               // rank of receiving process
    MPI_Comm comm);         // MPI communicator to use
Code:
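A minimal sketch of a complete program built around the fragment above, assuming four processes and root = 3; the surrounding wrapper (includes, prints, file name) is illustrative and not necessarily the exact code used in the lab:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    const int root = 3;  // assumed from the task statement

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int b[4] = {0, 0, 0, 0};

    // Every process contributes its rank; the gathered values are
    // written into b only on the root process.
    MPI_Gather(&rank, 1, MPI_INT, b, 1, MPI_INT, root, MPI_COMM_WORLD);

    printf("Process %d: b = {%d, %d, %d, %d}\n",
           rank, b[0], b[1], b[2], b[3]);

    MPI_Finalize();
    return 0;
}

With a typical MPI installation, this can be compiled with mpicc (e.g., mpicc gather.c -o gather) and launched across four processes with mpirun -np 4 ./gather.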
Output:
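Per the semantics of MPI_Gather, the receive buffer is significant only at the root, so the expected result is that process 3 ends up with b = {0, 1, 2, 3}, while processes 0, 1, and 2 leave b unchanged at {0, 0, 0, 0}. The ordering of the printed lines may vary between runs.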
Conclusion:
In this lab, we focused on the fundamentals of parallel and distributed computing using
MPI and gained practical insight into the benefits and challenges of parallel programming.
We used MPI communication primitives such as MPI_Bcast and MPI_Gather to distribute
data and aggregate results efficiently, ensuring effective collaboration between processes
and reducing redundancy. These skills are essential for solving real-world problems in
high-performance computing and distributed systems.