Final Revision MCQs Parallel Processing CSW325 - 16 - 1 - 2023

This document contains a series of questions and answers on parallel and distributed computing concepts. It covers topics such as types of computing systems, programming models, communication mechanisms, and characteristics of various architectures, and closes with essay questions that delve deeper into the principles of, and differences between, parallel and distributed systems.

1: The computer system of a parallel computer is capable of

A. Decentralized computing
B. Parallel computing
C. Centralized computing
D. Distributed computing
E. All of these
F. None of these
Answer: A
2: Writing parallel programs is referred to as
A. Parallel computation
B. Parallel processes
C. Parallel development
D. Parallel programming
E. All of these
F. None of these
Answer: D
4: A dynamic network of networks, i.e., a dynamic connection that grows, is called
A. Multithreading
B. Cyber cycle
C. Internet of things
D. Cyber-physical system
E. All of these
F. None of these
Answer: C
5: In which application systems can distributed systems run well?
A. HPC
B. HTC
C. HRC
D. Both A and B
E. All of these
F. None of these
Answer: D
6: Both HPC and HTC systems demand
A. Adaptivity
B. Transparency
C. Dependency
D. Secretive
E. All of these
F. None of these
Answer: B

7: An architecture where no special machines manage the network resources is known as
A. Peer-to-Peer
B. Space based
C. Tightly coupled
D. Loosely coupled
E. All of these
F. None of these
Answer: A
8: The significant characteristics of distributed systems are of
A. 5 types
B. 2 types
C. 3 types
D. 4 types
E. All of these
F. None of these
Answer: C
9: Peer machines are built over
A. Many server machines
B. 1 server machine
C. 1 client machine
D. Many client machines
E. All of these
F. None of these
Answer: D
10: A typical type of HTC application is
A. Business
B. Engineering
C. Science
D. Mass media
E. All of these
F. None of these
Answer: A
11: The virtualization architecture that creates a single address space is called
A. Loosely coupled
B. Peer-to-Peer
C. Space-based
D. Tightly coupled
E. All of these
F. None of these
Answer: C
12: In cloud computing, an Internet cloud of resources can form
A. Centralized computing
B. Decentralized computing
C. Parallel computing
D. Both A and B
E. All of these
F. None of these
Answer: E
13: Job throughput, data access, and storage are elements of __________.
A. Flexibility
B. Adaptation
C. Efficiency
D. Dependability
E. All of these
F. None of these
Answer: C
14: The ability to support billions of job requests over massive data sets is known as
A. Efficiency
B. Dependability
C. Adaptation
D. Flexibility
E. All of these
F. None of these
Answer: C
15: Cloud computing offers a broader concept than
A. Parallel computing
B. Centralized computing
C. Utility computing
D. Decentralized computing
E. All of these
F. None of these
Answer: C
16: Transparency that allows resources and clients to move within a system is called
A. Mobility transparency
B. Concurrency transparency
C. Performance transparency
D. Replication transparency
E. All of these
F. None of these
Answer: A
17: A program running in a distributed computer is known as a
A. Distributed process
B. Distributed program
C. Distributed application
D. Distributed computing
E. All of these
F. None of these
Answer: B
18: Computing with uniprocessor devices is called __________.
A. Grid computing
B. Centralized computing
C. Parallel computing
D. Distributed computing
E. All of these
F. None of these
Answer: B
19: Utility computing focuses on a __________ model.
A. Data
B. Cloud
C. Scalable
D. Business
E. All of these
F. None of these
Answer: D
21: HPC is the abbreviation of
A. High-peak computing
B. High-peripheral computing
C. High-performance computing
D. Highly-parallel computing
E. All of these
F. None of these
Answer: C

22: Peer-to-Peer leads to the development of technologies like
A. Norming grids
B. Data grids
C. Computational grids
D. Both B and C
E. All of these
F. None of these
Answer: D
23: A typical type of HPC application is
A. Management
B. Mass media
C. Business
D. Science
E. All of these
F. None of these
Answer: D
24: Computer technology has gone through how many development generations?
A. 6
B. 3
C. 4
D. 5
E. All of these
F. None of these
Answer: D

25: The utilization rate of resources in an execution model is known as its
A. Adaptation
B. Efficiency
C. Dependability
D. Flexibility
E. All of these
F. None of these
Answer: B
26: Providing Quality of Service (QoS) assurance, even under failure conditions, is the responsibility of
A. Dependability
B. Adaptation
C. Flexibility
D. Efficiency
E. All of these
F. None of these
Answer: A
27: Interprocessor communication takes place via
A. Centralized memory
B. Shared memory
C. Message passing
D. Both B and C
E. All of these
F. None of these
Answer: D

28: Data centers and centralized computing cover many
A. Microcomputers
B. Minicomputers
C. Mainframe computers
D. Supercomputers
E. All of these
F. None of these
Answer: D
29: Which of the following is a primary goal of the HTC paradigm?
A. High-ratio identification
B. Low-flux computing
C. High-flux computing
D. Computer utilities
E. All of these
F. None of these
Answer: C
30: The high-throughput service provided is measured by
A. Flexibility
B. Efficiency
C. Adaptation
D. Dependability
E. All of these
F. None of these
Answer: D

31- Parallel computing can include

a) Single computer with multiple processors

b) Arbitrary number of computers connected by a network

c) Combination of both A and B

d) None of these

Answer: (c)

32- In shared memory

a) Changes in a memory location made by one processor do not affect other processors.

b) Changes in a memory location made by one processor are visible to all other processors.

c) Changes in a memory location made by one processor are randomly visible to other processors.

d) None of these

Answer: (b)

34- Uniform Memory Access (UMA) refers to

a) Here all processors have equal access and access times to memory

b) Here if one processor updates a location in shared memory, all the other processors know about the update.

c) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories

d) None of these

Answer: (a)

36- In the threads model of parallel programming

a) A single process can have multiple, concurrent execution paths

b) A single process can have only a single execution path

c) Multiple processes share a single execution path

d) None of these

Answer: (a)
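
To make option (a) concrete, here is a minimal sketch assuming POSIX threads (an illustrative addition, not part of the original question): one process spawns two concurrent execution paths.

#include <pthread.h>
#include <stdio.h>

/* Each thread is an independent execution path inside the same process. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("execution path %d running in one process\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);  /* first concurrent path */
    pthread_create(&t2, NULL, worker, &id2);  /* second concurrent path */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}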

37- Point-to-point communication refers to

a) It involves data sharing between more than two tasks, which are often specified as being
members in a common group, or collective.

b) It involves two tasks with one task acting as the sender/producer of data, and the other acting
as the receiver/consumer.

c) It allows tasks to transfer data independently from one another.

d) None of these

Answer: (b)

38- Here a single program is executed by all tasks simultaneously. At any moment in time, tasks
can be executing the same or different instructions within the same program. These programs
usually have the necessary logic programmed into them to allow different tasks to branch or
conditionally execute only those parts of the program they are designed to execute.

a) Single Program Multiple Data (SPMD)

b) Multiple Program Multiple Data (MPMD)


c) Von Neumann Architecture

d) None of these

Answer: (a)
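
As a hedged illustration of SPMD (assuming an MPI installation; the rank-based branch is a common idiom, not taken from the original question), every task runs the same program but conditionally executes different parts of it:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        printf("rank 0: coordinating\n");      /* one branch of the program */
    } else {
        printf("rank %d: computing\n", rank);  /* another branch, same program */
    }
    MPI_Finalize();
    return 0;
}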

39- It is the simultaneous use of multiple compute resources to solve a computational problem

a) Parallel computing

b) Single processing

c) Sequential computing

d) None of these

Answer: (a)

40- Synchronous communication operations refer to

a) Involves only those tasks executing a communication operation

b) It exists between program statements when the order of statement execution affects the results
of the program.

c) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of
the time. It can be considered as minimization of task idle time.

d) None of these

Answer: (a)

41- These computers use the stored-program concept. Memory is used to store both program and data instructions, and the central processing unit (CPU) gets instructions and/or data from memory. The CPU decodes the instructions and then sequentially performs them.

a) Single Program Multiple Data (SPMD)

b) Flynn’s taxonomy

c) Von Neumann Architecture

d) None of these
Answer: (c)

42- Synchronous communications

a) It requires some type of "handshaking" between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level unknown to the programmer.

b) It involves data sharing between more than two tasks, which are often specified as being
members in a common group, or collective.

c) It involves two tasks with one task acting as the sender/producer of data, and the other acting
as the receiver/consumer.

d) It allows tasks to transfer data independently from one another.

Answer: (a)

43- Asynchronous communications

a) It involves data sharing between more than two tasks, which are often specified as being
members in a common group, or collective.

b) It involves two tasks with one task acting as the sender/producer of data, and the other acting
as the receiver/consumer.

c) It allows tasks to transfer data independently from one another.

d) None of these

Answer: (c)
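
A sketch of the contrast between questions 42 and 43, assuming MPI (illustrative only): MPI_Ssend completes only after the matching receive has started (the "handshake"), while MPI_Isend returns immediately and the sender may work independently until MPI_Wait.

#include <mpi.h>

int main(int argc, char **argv) {
    int rank, data = 42, recv_buf;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* Synchronous: blocks until the receiver starts receiving. */
        MPI_Ssend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* Asynchronous: returns at once; complete the transfer later. */
        MPI_Isend(&data, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
        /* ... independent computation could overlap the transfer here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(&recv_buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&recv_buf, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}

Run with at least two ranks, e.g. mpirun -np 2.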

45- In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as

a) Decomposition

b) Partitioning

c) Compounding

d) Both A and B

Answer: (d)

46- These applications typically have multiple executable object files (programs). While the application is being run in parallel, each task can be executing the same or a different program than the other tasks. All tasks may use different data.

a) Single Program Multiple Data (SPMD)

b) Multiple Program Multiple Data (MPMD)

c) Von Neumann Architecture

d) None of these

Answer: (b)

47- Fine-grain Parallelism is

a) In parallel computing, it is a qualitative measure of the ratio of computation to communication

b) Here relatively small amounts of computational work are done between communication events

c) Relatively large amounts of computational work are done between communication / synchronization events

d) None of these

Answer: (b)

Essay Questions

1) What is parallel processing/computing?

Answer
Parallel processing/computing is the use of multiple processors/cores in parallel to solve problems more quickly than with a single processor/core.

2) What is the main disadvantage of shared memory multiprocessors?

Answer
Developing parallel programs for shared memory multiprocessors is not too difficult, since memory read operations need no special handling and can be coded the same as in a serial program. Write instructions are relatively more difficult, since a write requires locking the data access until a certain thread has finished processing the data. The programmer has to identify the critical sections in the program and introduce interprocess and interthread synchronization to ensure data integrity.
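
A minimal sketch of such a critical section, assuming POSIX threads (an illustrative addition, not part of the original answer): the read-modify-write on the shared counter must be locked, or concurrent increments can be lost.

#include <pthread.h>

static long counter = 0;                      /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* enter critical section */
        counter++;                            /* protected read-modify-write */
        pthread_mutex_unlock(&lock);          /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return (counter == 200000) ? 0 : 1;       /* with the lock, always 200000 */
}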

3) What is the main communication mechanism between processors in a distributed memory multiprocessor system?

Answer
In a distributed-memory multiprocessor, each memory module is associated with a processor. Any processor can directly access its own memory, and a message passing (MP) mechanism is used to allow a processor to access the memory modules associated with other processors.
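
A hedged sketch of the mechanism, assuming MPI as the message-passing library (illustrative, not prescribed by the course): rank 1 cannot read rank 0's local memory directly, so the value travels in an explicit message.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 123;  /* lives in rank 0's local memory module */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d via message passing\n", value);
    }
    MPI_Finalize();
    return 0;
}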

4) In realistic workloads, the parallel fraction (1 − α) of a program is not perfectly parallelizable. What are the three fundamental reasons for this imperfection?

Answer
• Synchronization
• Load imbalance
• Resource contention
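
For context (a supplementary note, not part of the original answer): with serial fraction α, Amdahl's law bounds the speedup on p processors by S(p) = 1 / (α + (1 − α)/p); for example, α = 0.1 and p = 8 give S ≤ 1 / (0.1 + 0.9/8) ≈ 4.7. Synchronization, load imbalance, and resource contention push the achieved speedup below even this bound.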
5) Give four levels of parallelism in parallel computers.

Answer
The following levels of parallelism can be distinguished:
• Bit-level parallelism, e.g. all bits in a data word can be operated on simultaneously.
• Instruction-level parallelism: a number of instructions are executed simultaneously.
• Multiple functional units: a number of functional units can operate in parallel.
• Multiple processors.
6) Identify one advantage and one disadvantage of the distributed memory architecture
(compared to the shared memory architecture).
Answer
Advantage: More scalable
Disadvantage: Harder to program.
7) What are the main differences between a parallel system and a distributed system?

Answer
Parallel computing: In parallel computing, multiple processors perform multiple tasks assigned to them simultaneously. Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and saves time and money.
Distributed computing: In distributed computing, we have multiple autonomous computers that appear to the user as a single system. In distributed systems there is no shared memory, and the computers communicate with each other through message passing. In distributed computing, a single task is divided among different computers.

8) What is the difference between fine-grained, coarse-grained and simultaneous multithreading?

Answer
Fine-grained multithreading: switching among threads happens at each instruction, independently of whether the thread's instruction has caused a cache miss. The threads issue instructions in round-robin manner.
Coarse-grained multithreading: a switch only happens when the thread in execution causes a stall, thus wasting a clock cycle. At this point, a switch is made to another thread. When this thread in turn causes a stall, a third thread is scheduled (or possibly the first one is re-scheduled), and so on.
Simultaneous multithreading: multiple instructions are issued at each clock cycle, possibly belonging to different threads; this increases the utilization of the various CPU resources.
9) Describe four characteristics of MIMD multiprocessors that distinguish them from multicomputer systems or computer networks.

Answer
MIMD: Multiple Instruction, Multiple Data. The MIMD class of parallel architecture is the most familiar and possibly most basic form of parallel processor. A MIMD architecture consists of a collection of P independent, tightly coupled processors, each with memory that may be common to all processors and/or local and not directly accessible by the other processors. The following characteristics of MIMD multiprocessors distinguish them from multicomputer systems:
• Tightly coupled set of processors.
• Simultaneous execution of different instruction sequences.
• Operation on different sets of data.
• Control by a single OS.

11) Compare and contrast cloud computing with more traditional cluster computing. What is novel about cloud computing as a concept?

Answer
The term cloud computing is used to capture the vision of computing as a utility. A cloud is defined as a set of Internet-based application, storage and computing services sufficient to support most users' needs, thus enabling them to largely or totally dispense with local data storage and application software. Clouds are generally implemented on cluster computers to provide the necessary scale and performance required by such services.
A cluster computer is a set of interconnected computers that cooperate closely to provide a single, integrated high-performance computing capability.

12) You are given n binary values X1, . . . , Xn. Find the logical OR of these n values in constant time on a CRCW PRAM with p processors. What is the parallel running time, speedup and efficiency of your proposed algorithm?

Answer
The result stored in cell 0 is 0 (False) unless some processor writes a 1 into it; in that case at least one of the xi is 1 (True) and the result X should be True, as it is. With p = n processors, each processor inspects its own input bit and, if that bit is 1, writes 1 into the result cell; since all writers store the same value, this is valid on a common CRCW PRAM. The algorithm is the following:

OR(n, p, x[1..n])
    mypid = pid();                     // pid = 0 ... p-1
    if mypid == 0 then X = 0;          // step 1: initialize cell 0
    if x[mypid + 1] == 1 then X = 1;   // step 2: concurrent common writes
    return X;

The Boolean OR of n bits can thus be computed in O(1) time on an n-processor common CRCW PRAM. The run time of the best known sequential algorithm is S(n) = O(n). The total cost of the parallel algorithm is p · T(n, p) = n · T(n, n) = n · O(1) = O(n). The speedup of the parallel algorithm is S(n)/T(n, p) = p, and the efficiency is speedup/p = 1.
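
As a rough shared-memory analogue (an illustrative sketch, not part of the original answer), the same one-bit-per-worker idea can be expressed with an OpenMP logical-OR reduction, which plays the role of the PRAM's concurrent common write while avoiding a data race in C:

#include <omp.h>

/* Logical OR of n bits: each thread scans its share of the input and
 * the ||-reduction combines the per-thread results. */
int parallel_or(const int *x, int n) {
    int X = 0;
    #pragma omp parallel for reduction(||:X)
    for (int i = 0; i < n; i++)
        X = X || x[i];
    return X;
}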

13) What is an algorithm for Parallel Sum if we have p processors, where p < n? (You may assume that n is a multiple of p.)

Answer
Split the input into p groups of n/p keys each. Processor i is assigned the i-th group and adds its numbers sequentially, in n/p − 1 additions, as in the algorithm below. The p partial sums are then combined with the parallel sum algorithm in log p time. The total is n/p + log p − 1. If n is not a multiple of p and log p is not an integer, two more steps are required.

Sum(n, p, A[0..n-1])
    mypid = pid();                         // pid = 0 ... p-1
    sum[mypid] = A[mypid * (n/p)];         // start from the group's first key
    for (i = 1; i < n/p; i++)              // n/p - 1 further additions
        sum[mypid] = sum[mypid] + A[mypid * (n/p) + i];
    ParallelSum(+, sum, S);                // parallel sum on p inputs
    return S;
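
A minimal shared-memory rendering of the same scheme, assuming OpenMP (an illustration; the PRAM pseudocode above remains the course's formulation): each thread forms a private partial sum over its chunk, and the +-reduction plays the role of ParallelSum.

#include <omp.h>

/* Sums n numbers: each thread accumulates a private partial sum over
 * its chunk of A, and the reduction combines the partial sums. */
long parallel_sum(const long *A, int n) {
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += A[i];
    return sum;
}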

14) Provide a brief description of the following terms.

Answer
1. SIMD: Single Instruction Multiple Data; with a single instruction, multiple data are concurrently processed.
2. NUMA: Non-Uniform Memory Access; a computer architecture where each CPU socket has its own memory banks. In a NUMA machine, accessing data from a remote memory bank is slower than accessing data from a local memory bank.
