
PARALLEL & DISTRIBUTED COMPUTING CS469

LECTURE # 17 & 18

Faizan ul Mustafa
Lecturer | Dept. of Computer Science
GIFT University Gujranwala, Pakistan
[email protected]



Communication between processes

Data must be exchanged with other workers.

 Cooperative (blocking)
All parties agree to transfer the data.
 One-sided
One worker performs the transfer of the data.


Cooperative operations

 Message-passing is an approach that makes the exchange of data cooperative.
 Data must both be explicitly sent and received.
 An advantage is that any change in the receiver's memory is made with the receiver's participation.



One-sided operations

 One-sided operations between parallel processes include remote memory reads and writes.
 An advantage is that data can be accessed without waiting for another process.
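
To make this concrete, here is a minimal sketch of a one-sided write using MPI's remote memory access (RMA) calls; the window setup and fence synchronization shown are one common pattern, not something the slides prescribe (run with at least two processes):

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Every process exposes its integer buf as a window of memory
       that other processes may access remotely. */
    MPI_Win_create( &buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                    MPI_COMM_WORLD, &win );

    MPI_Win_fence( 0, win );
    if ( rank == 0 ) {
        int value = 42;
        /* Process 0 writes into process 1's window; process 1 makes
           no matching receive call. */
        MPI_Put( &value, 1, MPI_INT, 1, 0, 1, MPI_INT, win );
    }
    MPI_Win_fence( 0, win );

    if ( rank == 1 )
        printf( "Process 1 now holds %d\n", buf );

    MPI_Win_free( &win );
    MPI_Finalize();
    return 0;
}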



Writing MPI programs

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    MPI_Init( &argc, &argv );
    printf( "Hello world\n" );
    MPI_Finalize();
    return 0;
}

Commentary
• #include "mpi.h" provides basic MPI definitions and types
• MPI_Init starts MPI
• MPI_Finalize exits MPI

These two calls are always the first and last MPI calls in a program. MPI_Init always takes references to the command-line arguments, while MPI_Finalize takes none. In C, their signatures are as follows:

int MPI_Init(int *argc, char ***argv);
int MPI_Finalize();

Finding out about the environment

Two of the first questions asked in a parallel program are: how many processes are there, and who am I?
"How many" is answered with MPI_Comm_size, and "who am I" is answered with MPI_Comm_rank.
The rank is a number between zero and size - 1.



MPI_COMM_WORLD, size and ranks

Before starting any coding, we need a bit of context. When a program is run with MPI, all of its processes are grouped in what we call a communicator. You can see a communicator as a box grouping processes together and allowing them to communicate. Every communication is linked to a communicator, which allows the communication to reach different processes. Communications can be of two types:

Point-to-point
Two processes in the same communicator communicate.

Collective
All the processes in a communicator communicate together.


MPI_COMM_WORLD is not the only communicator in MPI. We will see later how to create custom communicators, but for the moment let's stick with MPI_COMM_WORLD. In the following lessons, every time communicators are mentioned, just replace them in your head with MPI_COMM_WORLD.
The number of processes in a communicator does not change once the communicator is created. That number is called the size of the communicator. At the same time, each process inside a communicator has a unique number to identify it. This number is called the rank of the process. In the previous example (a communicator of five processes), the size of MPI_COMM_WORLD is 5, and the rank of each process is the number inside each circle. The rank of a process always ranges from 0 to size - 1.



Now that we know about the size and rank of processes within a communicator, the way to obtain them is with the following calls:

int size, rank;

MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);



A simple program
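
The program itself appears on the slide as an image; a minimal sketch consistent with the calls above (the exact program on the slide may differ):

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int size, rank;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Every process prints its own rank and the communicator size. */
    printf( "Hello from process %d of %d\n", rank, size );

    MPI_Finalize();
    return 0;
}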



Point-to-point communication

 There are two types of communications: point-to-point (which we are going to call P2P from now on) and collective. P2P communications are divided into two operations, Send and Receive.
 The most basic forms of P2P communication are called blocking communications: the process sending a message waits until the receiving process has finished receiving all the information.

SENDING MESSAGES
A send operation sends a buffer of data of a certain type to another process. A P2P message has the following properties, each described below:
 A reference to a buffer
 A data type
 A number of elements
 A tag
 A destination id
 A communicator



• A reference to a buffer
The reference will always be a pointer to a buffer. This buffer holds the data that you wish to send from the current process to another.
• A data type
The data type must correspond precisely to the data stored in the buffer. For this, MPI provides predefined types; the most common ones and their C counterparts include MPI_CHAR (char), MPI_INT (int), MPI_FLOAT (float) and MPI_DOUBLE (double).
• A number of elements
The number of elements in the buffer that you want to send to the destination.


 A tag
The tag is a simple integer that identifies the "type" of communication. It is a completely informal value that you choose yourself.
 A destination id
The rank of the process you want to send the data to.
 A communicator
The communicator in which to send the data. Remember that the rank of a process might change depending on the communicator you choose.

RECEIVING MESSAGES
Receiving a message works in exactly the same way as the send operation. However, instead of a destination id, the call requires a source id: the identification of the process from which you are expecting a message. On top of that, depending on whether you are using blocking or non-blocking communications, you will need additional arguments.
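
For a blocking receive, the additional argument is a status object (or MPI_STATUS_IGNORE if you do not need it); the standard signature of MPI_Recv is:

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status);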



Point-to-point communications, exercise 1

Let's have an actual communication between two processes. The objective of the exercise is as follows: the program will be run with two processes. It will be given two random integers on the command line, and each process reads its own into a variable local_value. Then, depending on the rank of the process, the program will behave differently:

Process #0
 Send your integer to process #1
 Receive the integer of process #1
 Write the sum of the two values on stdout

Process #1
 Receive the integer of process #0
 Send your integer to process #0
 Write the product of the two values on stdout



Hints
You can send information to a process using the command MPI_Send:

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm);
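
A minimal sketch of one possible solution, assuming the two integers arrive as argv[1] and argv[2] (an assumption; the exercise does not fix the exact input format). The call ordering follows the exercise statement, so the blocking sends and receives pair up without deadlock:

#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>

int main( int argc, char **argv )
{
    int rank, local_value, other_value;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    /* Each process reads "its" integer from the command line:
       process #0 takes argv[1], process #1 takes argv[2]. */
    local_value = atoi( argv[rank + 1] );

    if ( rank == 0 ) {
        /* Process #0: send first, then receive, then print the sum. */
        MPI_Send( &local_value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
        MPI_Recv( &other_value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                  MPI_STATUS_IGNORE );
        printf( "Sum: %d\n", local_value + other_value );
    } else {
        /* Process #1: receive first, then send, then print the product. */
        MPI_Recv( &other_value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                  MPI_STATUS_IGNORE );
        MPI_Send( &local_value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD );
        printf( "Product: %d\n", local_value * other_value );
    }

    MPI_Finalize();
    return 0;
}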



Thank You

