
Parallel Programming WS15 HOMEWORK #2 (Solutions)

1 Basic concepts

1. Performance. Suppose we have two computers A and B. Computer A has a clock cycle of
1 ns and performs 2 instructions per cycle. Computer B, instead, has a clock cycle of 600 ps
and performs 1.25 instructions per cycle. Assuming a program requires the execution of the
same number of instructions in both computers:

• Which computer is faster for this program?


• What if Computer B required 10% more instructions than Computer A?

Solution.
Computer A performs (2 instructions / 1 cycle) × (1 cycle / 10^-9 seconds) = 2 × 10^9 instructions per second.
Computer B performs (1.25 instructions / 1 cycle) × (1 cycle / 600 × 10^-12 seconds) ≈ 2.08 × 10^9 instructions per second.

Computer B performs more instructions per second; thus, it is the faster computer for this program.

Now, let n be the number of instructions required by Computer A, and 1.1 × n the number of instructions required by Computer B. The program will take n / (2 × 10^9) seconds on Computer A and 1.1 × n / (2.08 × 10^9) ≈ n / (1.89 × 10^9) seconds on Computer B. Therefore, in this scenario, Computer A executes the program faster.
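Both comparisons can be checked with a short Python sketch (the function name instr_per_second is an assumption, not part of the exercise):

```python
# Sketch: instruction throughput and runtime comparison for Problem 1.

def instr_per_second(instr_per_cycle, cycle_time_s):
    """Instructions executed per second given IPC and cycle time."""
    return instr_per_cycle / cycle_time_s

rate_a = instr_per_second(2.0, 1e-9)      # Computer A: 2 instr/cycle, 1 ns cycle
rate_b = instr_per_second(1.25, 600e-12)  # Computer B: 1.25 instr/cycle, 600 ps cycle

print(rate_a)  # 2.0e9 instructions/second
print(rate_b)  # ~2.08e9 instructions/second -> B is faster for equal instruction counts

# With B needing 10% more instructions, compare runtimes for n instructions:
n = 1e9
time_a = n / rate_a
time_b = 1.1 * n / rate_b
print(time_a < time_b)  # True: A is now faster
```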

2. Speedup.
Assume the runtime of an application for a problem is 100 seconds for problem size 1. It consists of an initialization phase, which lasts for 10 seconds and cannot be parallelized, and a problem-solving phase, which can be perfectly parallelized and grows quadratically with the problem size.

• What is the speedup for the given application as a function of the number of processors
p and the problem size n?
• What is the execution time and speedup of the application with problem size 1, if it is
parallelized and run on 4 processors?
• What is the execution time of the application if the problem size is increased to 4 and it is
run on 4 processors? And on 16 processors? What is the speedup of both measurements?

Solution.
The application has an inherently sequential part (c_s) that takes 10 seconds, and a parallelizable part (c_p) that takes 90 seconds for problem size 1. Since the parallelizable part grows quadratically with the problem size, we can model T(1, n) (the execution time on 1 processor) as:

T(1, n) = c_s + c_p × n².

The speedup, as a function of p and n, is thus

S(p, n) := T(1, n) / T(p, n) = (c_s + c_p × n²) / (c_s + (c_p × n²)/p) = (10 + 90 × n²) / (10 + (90 × n²)/p).
For problem size 1 (n = 1) and 4 processors (p = 4), the execution time is 32.5 seconds. The
achieved speedup is 3.08.
Finally, if problem size is increased to 4, the execution time and speedup using 4 and 16
processors is:

• 4 processors: 370 seconds, speedup of 3.92.


• 16 processors: 100 seconds, speedup of 14.5.
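The timing model above can be sketched in Python to verify these numbers (the names T and speedup are assumptions):

```python
# Sketch of the execution-time and speedup model from Problem 2.
C_S, C_P = 10.0, 90.0  # sequential and parallelizable seconds at problem size 1

def T(p, n):
    """Execution time: sequential part plus parallel part divided by p."""
    return C_S + (C_P * n**2) / p

def speedup(p, n):
    return T(1, n) / T(p, n)

print(T(4, 1), speedup(4, 1))    # 32.5 s, speedup ~3.08
print(T(4, 4), speedup(4, 4))    # 370.0 s, speedup ~3.92
print(T(16, 4), speedup(16, 4))  # 100.0 s, speedup 14.5
```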

3. Amdahl’s law. Assume an application where the execution of floating-point instructions on a certain processor P consumes 60% of the total runtime. Moreover, let’s assume that 25% of the floating-point time is spent in square root calculations.
• Based on some initial research, the design team of the next-generation processor P 2
believes that it could either improve the performance of all floating point instructions
by a factor of 1.5 or alternatively speed up the square root operation by a factor of 8.
From which design alternative would the aforementioned application benefit the most?
• Instead of waiting for the next processor generation the developers of the application
decide to parallelize the code. What speedup can be achieved on a 16-CPU system,
if 90% of the code can be perfectly parallelized? What fraction of the code has to be
parallelized to get a speedup of 10?
Solution.
Amdahl’s law: S_p(n) := 1 / (β + (1 − β)/p).

• Improvement 1 (all fp instructions sped up by a factor of 1.5). Sequential part (β): 0.4.
p = 1.5. The application would observe a total speedup of:

S_p(n) := 1 / (β + (1 − β)/p) = 1 / (0.4 + (1 − 0.4)/1.5) = 1.25.
• Improvement 2 (square root instructions sped up by a factor of 8). Sequential part (β):
0.4 + 0.45 = 0.85, since the non-square-root floating-point time (0.6 × 0.75 = 0.45) is now also unimproved. p = 8. The application would observe a total speedup of:

S_p(n) := 1 / (β + (1 − β)/p) = 1 / (0.85 + (1 − 0.85)/8) ≈ 1.15.
Thus, the application would benefit the most from the first alternative.
Parallelization of code. The speedup achieved on a 16-CPU system is:

S_p(n) := 1 / (β + (1 − β)/p) = 1 / (0.1 + (1 − 0.1)/16) = 6.4.
To attain a speedup of 10, 96% of the code would need to be perfectly parallelizable. This value is obtained by solving for β in

10 = 1 / (β + (1 − β)/16),

which yields β = 0.04.

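All four Amdahl calculations can be reproduced with a small sketch (the names amdahl and beta_for_speedup are assumptions):

```python
# Sketch of Amdahl's law from Problem 3, including inverting it
# to find the sequential fraction needed for a target speedup.

def amdahl(beta, p):
    """Speedup with sequential fraction beta and improvement factor p."""
    return 1.0 / (beta + (1.0 - beta) / p)

print(amdahl(0.4, 1.5))   # ~1.25  (all fp instructions sped up by 1.5)
print(amdahl(0.85, 8))    # ~1.15  (square roots sped up by 8)
print(amdahl(0.1, 16))    # 6.4    (90% parallel code on 16 CPUs)

def beta_for_speedup(s, p):
    """Solve S = 1/(beta + (1-beta)/p) for beta."""
    return (1.0 / s - 1.0 / p) / (1.0 - 1.0 / p)

print(beta_for_speedup(10, 16))  # 0.04 -> 96% must be parallelizable
```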
4. Efficiency. Consider a computer that has a peak performance of 8 GFlops/s. An application running on this computer executes 15 TFlop and takes 1 hour to complete.

• How many GFlops/s did the application attain?


• Which efficiency did it achieve?

Solution.
The application attained: 15 TFlop / 3600 s ≈ 4.26 GFlops/s (this uses 1 TFlop = 1024 GFlop; with 1 TFlop = 1000 GFlop it would be ≈ 4.17 GFlops/s).
The achieved efficiency is: 4.26 GFlops/s / 8 GFlops/s ≈ 53%.
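A minimal sketch of the computation, assuming the binary convention 1 TFlop = 1024 GFlop that the stated answer implies:

```python
# Sketch of the attained-performance and efficiency computation in Problem 4.
# The 1024 factor is an assumption: the stated 4.26 GFlops/s matches it.

total_gflop = 15 * 1024   # 15 TFlop expressed in GFlop
runtime_s = 3600          # 1 hour
peak_gflops = 8.0         # peak performance in GFlops/s

attained = total_gflop / runtime_s
efficiency = attained / peak_gflops
print(attained)    # ~4.27 GFlops/s
print(efficiency)  # ~0.53
```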

5. Parallel efficiency. Given the data in Tab. 1, use your favorite plotting tool to plot

a) The scalability of the program (speedup vs number of processors)


b) The parallel efficiency attained (parallel efficiency vs number of processors)

In both cases plot also the ideal case, that is, scalability equal to the number of processors
and parallel efficiency equal to 1, respectively.

# Processors   Best seq. (1)   2     4      8      16
GFlops/s       4.0             7.6   14.9   23.1   35.6

Table 1: Performance attained vs number of processors.

Solution.
Table 2 includes the speedup and parallel efficiency attained. Figure 1 gives an example of
the requested plots.

# Processors   Best seq. (1)   2      4       8       16
Speedup        1               1.9    3.725   5.775   8.9
Par. Eff.      1               0.95   0.93    0.72    0.56

Table 2: Speedup and parallel efficiency vs number of processors.

[Figure 1: Scalability (speedup vs number of processors) and parallel efficiency (parallel efficiency vs number of processors), each plotted alongside the ideal case (speedup = p and efficiency = 1).]
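The values in Table 2 follow directly from Table 1; a sketch like the one below (variable names are assumptions) produces the lists to feed into your plotting tool of choice, alongside the ideal speedup (= p) and ideal efficiency (= 1):

```python
# Sketch deriving Table 2 (speedup and parallel efficiency) from Table 1.

procs  = [1, 2, 4, 8, 16]
gflops = [4.0, 7.6, 14.9, 23.1, 35.6]  # attained performance from Table 1

# Speedup relative to the best sequential run; efficiency is speedup per processor.
speedup = [g / gflops[0] for g in gflops]
par_eff = [s / p for s, p in zip(speedup, procs)]

print(speedup)  # [1.0, 1.9, 3.725, 5.775, 8.9]
print(par_eff)  # ~[1.0, 0.95, 0.93, 0.72, 0.56]
```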
