Distributed Memory Programming with MPI: Collective vs. Point-to-Point Communications

This document discusses the differences between collective and point-to-point communications in MPI (Message Passing Interface). Collective communications require all processes to call the same function, with compatible arguments, while point-to-point uses tags and communicators to match communications between specific processes. The document provides examples showing that the order of collective communication calls determines how they are matched, rather than the specific memory locations used in the calls. MPI_Allreduce is introduced as a collective function useful when all processes need the result of a global operation like a sum.


3/20/2017

DISTRIBUTED MEMORY
PROGRAMMING WITH MPI
Point-to-Point Communications

Collective vs. Point-to-Point Communications

• All the processes in the communicator must call the same collective function.

• For example, a program that attempts to match a call to MPI_Reduce on one process with a call to MPI_Recv on another process is erroneous and, in all likelihood, the program will hang or crash.
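A minimal sketch of the rule above (a hypothetical program, assuming a working MPI installation; compile with mpicc and launch with mpirun): every rank in the communicator makes the same MPI_Reduce call, with the same root, so the collective matches.

```c
/* Sketch: every process in MPI_COMM_WORLD calls the SAME collective.
 * Build with `mpicc`, run with e.g. `mpirun -np 4 ./a.out`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, local, total;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank + 1;   /* each process contributes its own value */

    /* Every rank calls MPI_Reduce with the same root (0) -- never
     * MPI_Recv on one rank and MPI_Reduce on the others, and never
     * different roots on different ranks. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", total);  /* result is defined only on the root */

    MPI_Finalize();
    return 0;
}
```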


Collective vs. Point-to-Point Communications

• The arguments passed by each process to an MPI collective communication must be “compatible.”

• For example, if one process passes in 0 as the dest_process and another passes in 1, then the outcome of a call to MPI_Reduce is erroneous and, once again, the program is likely to hang or crash.

Collective vs. Point-to-Point Communications

• The output_data_p argument is only used on dest_process.

• However, all of the processes still need to pass in an actual argument corresponding to output_data_p, even if it’s just NULL.
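A sketch of this convention (hypothetical variable names, assuming a working MPI installation): the receive buffer is read only on dest_process, so the other ranks may pass NULL there, but they must still supply the argument.

```c
/* Sketch: output_data_p matters only on the destination process. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, local, total = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    local = rank + 1;

    if (rank == 0)   /* dest_process: its output buffer receives the sum */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    else             /* output_data_p is ignored here, so NULL is fine   */
        MPI_Reduce(&local, NULL, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %d\n", total);
    MPI_Finalize();
    return 0;
}
```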


Collective vs. Point-to-Point Communications

• Point-to-point communications are matched on the basis of tags and communicators.

• Collective communications don’t use tags.

• They’re matched solely on the basis of the communicator and the order in which they’re called.

Example (1)

Multiple calls to MPI_Reduce


Example (2)
• Suppose that each process calls MPI_Reduce with operator MPI_SUM and destination process 0.

• At first glance, it might seem that after the two calls to MPI_Reduce, the value of b will be 3, and the value of d will be 6.

Example (3)
• However, the names of the memory locations are irrelevant to the matching of the calls to MPI_Reduce.

• The order of the calls will determine the matching, so the value stored in b will be 1 + 2 + 1 = 4, and the value stored in d will be 2 + 1 + 2 = 5.


MPI_Allreduce

• Useful in a situation in which all of the processes need the result of a global sum in order to complete some larger computation.

A global sum followed by distribution of the result.
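A sketch of MPI_Allreduce (a hypothetical program, assuming a working MPI installation): there is no dest_process argument, every rank passes a valid output buffer, and every rank receives the global sum, as if MPI_Reduce to a root were followed by a broadcast.

```c
/* Sketch: MPI_Allreduce delivers the global sum to every rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, local, total;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    local = rank + 1;

    /* No root argument: conceptually a reduce followed by a
     * broadcast of the result, available on all ranks. */
    MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d sees sum = %d\n", rank, total);
    MPI_Finalize();
    return 0;
}
```

In practice, implementations typically avoid the literal reduce-then-broadcast sequence and instead exchange partial sums in a butterfly (recursive-doubling) pattern like the one in the next figure.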


A butterfly-structured global sum.
