Parallel Computing (BCS702)

Question Bank Module 2 & 3


Sl.No. Questions

1 Define dynamic and static threads in shared-memory programming. Discuss the
various issues related to shared memory.
2 What is a distributed-memory system? Discuss the message-passing API in
distributed memory with an example.
3 Distributed-memory systems can also be programmed using one-sided
communication and partitioned global address space languages. Justify this
statement.
4 Discuss the different techniques used to ensure mutual exclusion when
accessing a shared variable in a shared-memory system.
5 Differentiate between Amdahl’s Law and Gustafson’s Law (the two laws are
stated for reference after this table). Why is wall-clock time preferred over
CPU time in measuring the performance of parallel programs?
6 Discuss the advantages and limitations of GPU programming. Explain
how GPU performance is evaluated.
7 Discuss the use of the MPI_Init and MPI_Finalize functions in MPI
programs, and write an MPI program that prints messages using process 0 as
the designated process: the other processes send it messages, which it
prints (a sketch follows this table).
8 Discuss the role of tree-structured communication in improving the
efficiency of collective operations.
9 Discuss the challenges of input/output in MPI programs and explain how
MPI handles nondeterministic output.
10 Explain how the trapezoidal rule can be parallelized using MPI. Write
the pseudocode for the parallel trapezoidal rule and describe how the
work is divided among processes (a sketch follows this table).
11 Explain the role of the following functions in MPI programming:
MPI_Send, MPI_Recv, MPI_Comm_size, MPI_Comm_rank, MPI_Reduce,
MPI_Bcast, MPI_Scatter, MPI_Gather.
12 What is collective communication in MPI? Explain with an example
how a global sum operation can be performed using collective
communication (a sketch follows this table).
13 Write the function for a serial implementation of vector addition.
Discuss how to implement this using MPI and write the function for the
parallel implementation (a sketch follows this table).
14 In MPI, discuss the functions for reading and distributing a vector,
and for printing a distributed vector, with examples.
15 What are MPI derived datatypes? Discuss how to build a derived
datatype in MPI and apply it to write the Get_input function (a sketch
follows this table).
16 The matrix-vector multiplication program is apparently weakly scalable.
Justify this statement.
17 Analyse a scenario where deadlock occurs due to mismatched blocking
MPI_Send and MPI_Recv calls. Suggest a solution and explain how to tell
whether a program is safe (a sketch follows this table).

18 Discuss how to parallelize the serial odd-even transposition sorting
algorithm (a sketch follows this table).

19 Differentiate between collective communication and point-to-point
communication.

20 Discuss how to evaluate the performance of MPI programs.
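
Reference Sketches

For question 5, the two laws can be stated compactly. In LaTeX notation, with $S$ the speedup, $p$ the number of processes, and $f$ the inherently serial fraction of the work:

$$S_{\text{Amdahl}}(p) = \frac{1}{f + \frac{1-f}{p}} \quad \text{(fixed problem size)}, \qquad S_{\text{Gustafson}}(p) = f + (1-f)\,p \quad \text{(problem size scales with } p\text{)}.$$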
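For question 7, a minimal sketch of the "greetings" pattern, assuming the standard MPI C interface and messages of at most 100 characters:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(void) {
    char greeting[100];          /* buffer for one message */
    int comm_sz, my_rank;

    MPI_Init(NULL, NULL);                     /* start up MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);  /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  /* this process's rank */

    if (my_rank != 0) {
        /* every nonzero rank sends one greeting to the designated process */
        sprintf(greeting, "Greetings from process %d of %d!", my_rank, comm_sz);
        MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        /* rank 0 prints its own line, then the others' in rank order */
        printf("Greetings from process %d of %d!\n", my_rank, comm_sz);
        for (int q = 1; q < comm_sz; q++) {
            MPI_Recv(greeting, 100, MPI_CHAR, q, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("%s\n", greeting);
        }
    }

    MPI_Finalize();                           /* shut down MPI */
    return 0;
}

Compiled with mpicc and run with, e.g., mpiexec -n 4, rank 0 prints its own line followed by the messages from ranks 1 through 3 in rank order.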
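For question 10, a sketch of the parallel trapezoidal rule. Here f(x) = x*x and the hard-coded a, b, and n are illustrative choices, and comm_sz is assumed to divide n evenly:

#include <stdio.h>
#include <mpi.h>

double f(double x) { return x * x; }   /* stand-in integrand */

/* Serial trapezoidal rule over [left, right] with count trapezoids */
double Trap(double left, double right, int count, double h) {
    double estimate = (f(left) + f(right)) / 2.0;
    for (int i = 1; i < count; i++)
        estimate += f(left + i * h);
    return estimate * h;
}

int main(void) {
    int my_rank, comm_sz;
    double a = 0.0, b = 1.0;   /* interval of integration */
    int n = 1024;              /* total trapezoids; assume comm_sz divides n */

    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    double h = (b - a) / n;        /* all trapezoids have the same width */
    int local_n = n / comm_sz;     /* each process gets a block of n/comm_sz */
    double local_a = a + my_rank * local_n * h;
    double local_b = local_a + local_n * h;
    double local_int = Trap(local_a, local_b, local_n, h);

    /* combine the partial integrals on rank 0 */
    double total_int;
    MPI_Reduce(&local_int, &total_int, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (my_rank == 0)
        printf("Integral of f on [%f, %f] ~= %.15e\n", a, b, total_int);

    MPI_Finalize();
    return 0;
}

The work is divided by subinterval: process q integrates trapezoids q*local_n through (q+1)*local_n - 1, so each does an equal share of the function evaluations.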
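For question 12, a sketch of a global sum using the collective MPI_Reduce; local_val is an illustrative stand-in for each process's partial result:

#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank, comm_sz;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    int local_val = my_rank + 1;   /* stand-in local contribution */
    int global_sum = 0;

    /* every process calls MPI_Reduce; only rank 0 receives the result */
    MPI_Reduce(&local_val, &global_sum, 1, MPI_INT, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (my_rank == 0)              /* prints the sum 1 + 2 + ... + comm_sz */
        printf("Global sum = %d\n", global_sum);

    MPI_Finalize();
    return 0;
}

A single collective call replaces a hand-coded loop of sends and receives; if every process needs the sum, MPI_Allreduce can be used instead with the same argument list minus the root.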
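For question 13, a sketch showing the serial vector-addition function and a parallel version that scatters blocks of x and y, adds locally, and gathers the result on rank 0; n is an illustrative size assumed divisible by comm_sz:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Serial version: z[i] = x[i] + y[i] */
void Vector_sum(const double x[], const double y[], double z[], int n) {
    for (int i = 0; i < n; i++)
        z[i] = x[i] + y[i];
}

int main(void) {
    int my_rank, comm_sz, n = 8;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    int local_n = n / comm_sz;     /* block size per process */
    double *x = NULL, *y = NULL, *z = NULL;
    double *local_x = malloc(local_n * sizeof(double));
    double *local_y = malloc(local_n * sizeof(double));
    double *local_z = malloc(local_n * sizeof(double));

    if (my_rank == 0) {            /* rank 0 owns the full vectors */
        x = malloc(n * sizeof(double));
        y = malloc(n * sizeof(double));
        z = malloc(n * sizeof(double));
        for (int i = 0; i < n; i++) { x[i] = i; y[i] = 2.0 * i; }
    }

    /* distribute block q of x and y to process q */
    MPI_Scatter(x, local_n, MPI_DOUBLE, local_x, local_n, MPI_DOUBLE, 0,
                MPI_COMM_WORLD);
    MPI_Scatter(y, local_n, MPI_DOUBLE, local_y, local_n, MPI_DOUBLE, 0,
                MPI_COMM_WORLD);

    Vector_sum(local_x, local_y, local_z, local_n);   /* local work */

    /* reassemble the full result on rank 0 */
    MPI_Gather(local_z, local_n, MPI_DOUBLE, z, local_n, MPI_DOUBLE, 0,
               MPI_COMM_WORLD);

    if (my_rank == 0) {
        for (int i = 0; i < n; i++) printf("%g ", z[i]);
        printf("\n");
        free(x); free(y); free(z);
    }
    free(local_x); free(local_y); free(local_z);
    MPI_Finalize();
    return 0;
}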
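For question 15, a sketch of building a derived datatype for the trapezoidal-rule inputs a, b (doubles) and n (an int) so that Get_input can broadcast all three with one MPI_Bcast; Build_mpi_type is a helper name chosen here for illustration:

#include <stdio.h>
#include <mpi.h>

void Build_mpi_type(double* a_p, double* b_p, int* n_p,
                    MPI_Datatype* input_mpi_t_p) {
    int blocklengths[3] = {1, 1, 1};
    MPI_Datatype types[3] = {MPI_DOUBLE, MPI_DOUBLE, MPI_INT};
    MPI_Aint displacements[3];
    MPI_Aint a_addr, b_addr, n_addr;

    /* displacements are measured relative to the address of a */
    MPI_Get_address(a_p, &a_addr);
    MPI_Get_address(b_p, &b_addr);
    MPI_Get_address(n_p, &n_addr);
    displacements[0] = 0;
    displacements[1] = b_addr - a_addr;
    displacements[2] = n_addr - a_addr;

    MPI_Type_create_struct(3, blocklengths, displacements, types,
                           input_mpi_t_p);
    MPI_Type_commit(input_mpi_t_p);   /* must commit before use */
}

void Get_input(int my_rank, double* a_p, double* b_p, int* n_p) {
    MPI_Datatype input_mpi_t;
    Build_mpi_type(a_p, b_p, n_p, &input_mpi_t);

    if (my_rank == 0) {
        printf("Enter a, b, and n\n");
        scanf("%lf %lf %d", a_p, b_p, n_p);
    }
    /* one broadcast moves all three values */
    MPI_Bcast(a_p, 1, input_mpi_t, 0, MPI_COMM_WORLD);

    MPI_Type_free(&input_mpi_t);      /* release the derived type */
}

int main(void) {   /* tiny driver; run with at least 2 processes */
    int my_rank, n;
    double a, b;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    Get_input(my_rank, &a, &b, &n);
    if (my_rank == 1)
        printf("rank 1 received a=%f b=%f n=%d\n", a, b, n);
    MPI_Finalize();
    return 0;
}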
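For question 17, a sketch of the classic exchange deadlock and one fix, assuming exactly two processes. If both ranks call blocking MPI_Send first, each may wait for a matching receive that never starts; the pattern is unsafe because it only works while MPI buffers the message. MPI_Sendrecv performs the exchange safely in one call:

#include <stdio.h>
#include <mpi.h>

int main(void) {
    int my_rank, partner, sendbuf, recvbuf;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    partner = 1 - my_rank;       /* run with exactly 2 processes */
    sendbuf = my_rank;

    /* UNSAFE pattern (may deadlock for large messages):
     *   MPI_Send(&sendbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
     *   MPI_Recv(&recvbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD,
     *            MPI_STATUS_IGNORE);
     */

    /* safe exchange: the library pairs the send and the receive */
    MPI_Sendrecv(&sendbuf, 1, MPI_INT, partner, 0,
                 &recvbuf, 1, MPI_INT, partner, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Process %d received %d\n", my_rank, recvbuf);
    MPI_Finalize();
    return 0;
}

A program is safe only if it does not rely on MPI buffering; one way to check is to replace each MPI_Send with the synchronous MPI_Ssend and see whether the program still completes.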
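For question 18, a sketch of parallel odd-even transposition sort on blocks: each process sorts its local block, then in each of comm_sz phases exchanges its whole block with a phase-dependent partner and keeps the lower or upper half of the merge. The block size local_n and the random keys are illustrative choices:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

static int Compare(const void* a, const void* b) {
    int x = *(const int*)a, y = *(const int*)b;
    return (x > y) - (x < y);
}

/* Merge two sorted blocks of n keys; keep the smallest or largest n */
static void Merge(int mine[], const int theirs[], int n, int keep_low) {
    int* tmp = malloc(2 * n * sizeof(int));
    int i = 0, j = 0, k;
    for (k = 0; k < 2 * n; k++)
        tmp[k] = (j >= n || (i < n && mine[i] <= theirs[j]))
                     ? mine[i++] : theirs[j++];
    for (k = 0; k < n; k++)
        mine[k] = keep_low ? tmp[k] : tmp[k + n];
    free(tmp);
}

int main(void) {
    int my_rank, comm_sz, local_n = 4;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);

    int* mine = malloc(local_n * sizeof(int));
    int* theirs = malloc(local_n * sizeof(int));
    srand(my_rank + 1);
    for (int i = 0; i < local_n; i++) mine[i] = rand() % 100;
    qsort(mine, local_n, sizeof(int), Compare);   /* sort local block first */

    for (int phase = 0; phase < comm_sz; phase++) {
        /* even phases pair (0,1),(2,3),...; odd phases pair (1,2),(3,4),... */
        int partner = (phase % 2 == my_rank % 2) ? my_rank + 1 : my_rank - 1;
        if (partner < 0 || partner >= comm_sz) continue;  /* idle this phase */

        MPI_Sendrecv(mine, local_n, MPI_INT, partner, phase,
                     theirs, local_n, MPI_INT, partner, phase,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* lower-ranked partner keeps the small half, higher keeps the large */
        Merge(mine, theirs, local_n, my_rank < partner);
    }

    printf("rank %d:", my_rank);
    for (int i = 0; i < local_n; i++) printf(" %d", mine[i]);
    printf("\n");

    free(mine); free(theirs);
    MPI_Finalize();
    return 0;
}

After comm_sz phases the keys are globally sorted across ranks, which mirrors how the serial algorithm's compare-exchange of adjacent elements becomes a merge-split of adjacent blocks.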
