
St. PETER'S COLLEGE OF ENGINEERING & TECHNOLOGY
Affiliated to Anna University | Approved by AICTE
Avadi, Chennai, Tamilnadu – 600 054
Phone: 7358110159/56 website: [Link] email: spcet2008@[Link]

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CP4292 - MULTICORE ARCHITECTURE AND PROGRAMMING LABORATORY

NAME:

REG NO.:

YEAR:

SEM:
St. PETER'S COLLEGE OF ENGINEERING & TECHNOLOGY
Affiliated to Anna University | Approved by AICTE
Avadi, Chennai, Tamilnadu – 600 054
Phone: 7358110159/56 website: [Link] email: spcet2008@[Link]

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Bonafide Certificate

NAME: …………………………………………………………………………………………..

YEAR: I    SEMESTER: II    BRANCH: COMPUTER SCIENCE AND ENGINEERING

REGISTER NO: ………………………………………………..

Certified that this is a bonafide record of work done by the above student in the CP4292 – Multicore
Architecture and Programming Laboratory during the year 2022-2023.

Faculty-in-charge Head of the Department

Submitted for the practical Examination held on …………………………………………….


at St. PETER'S COLLEGE OF ENGINEERING AND TECHNOLOGY.

Internal Examiner External Examiner


INDEX

S.NO        DATE        TITLE        PG NO        SIGN


St. PETER'S COLLEGE OF ENGINEERING & TECHNOLOGY

VISION

To emerge as an Institution of Excellence by providing High Quality Education in Engineering, Technology and Management to contribute to the economic as well as societal growth of our Nation.

MISSION

• To impart strong fundamental and Value-Based Academic knowledge in various Engineering, Technology and Management disciplines to nurture creativity.

• To promote innovative Research and Development activities by collaborating with Industries, R&D organizations and other statutory bodies.

• To provide a conducive learning environment and training so as to empower the students with dynamic skill development for employability.

• To foster an Entrepreneurial spirit amongst the students for making a positive impact on community development.



DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

VISION

To empower our students to take part in the progressive socio-economic nation-building process by creating diligent, adept and responsible Computer Science Engineers.

MISSION

1. To provide Quality Engineering Education through cutting-edge technologies in Computer Science and Engineering.

2. To provide a conducive learning milieu to comprehend the nitty-gritty of the computing process and to edify the students to become successful, lifelong-learning professionals.

3. To establish Industry-Institution liaison to enable the students to cater to the needs of Industry.

4. To promote projects and activities in the promising areas of Engineering and Technology.
1. PROGRAM EDUCATIONAL OBJECTIVES
Graduates can
PEO1: Apply their technical competence in computer science to solve real-world problems, with technical and people leadership.
PEO2: Conduct cutting-edge research and develop solutions for problems of social relevance.
PEO3: Work in a business environment, exhibiting team skills, work ethics, adaptability and lifelong learning.
2. PROGRAM SPECIFIC OUTCOMES
Engineering Graduates will be able to:
PSO1: Exhibit design and programming skills to build and automate business solutions using cutting-edge technologies.
PSO2: Demonstrate a strong theoretical foundation leading to excellence and excitement towards research, to provide elegant solutions to complex problems.
PSO3: Work effectively with various engineering fields as a team to design, build and develop system applications.
3. PROGRAM OUTCOMES

Engineering Graduates will be able to:


PO1 - Engineering Knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex
engineering problems.

PO2 - Problem Analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.

PO3 - Design/Development of Solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
PO4 - Conduct Investigations of Complex Problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis
of the information to provide valid conclusions.

PO5 - Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools, including prediction and modeling, to complex engineering activities
with an understanding of the limitations.

PO6 - The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant
to the professional engineering practice.

PO7 - Environment and Sustainability: Understand the impact of professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of,
and need for, sustainable development.

PO8 - Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.

PO9 - Individual and Team Work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.

PO10 - Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and
receive clear instructions.

PO11 - Project Management and Finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one's own work, as a member
and leader in a team, to manage projects in multidisciplinary environments.
PO12 - Life-long Learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

COURSE OUTCOMES

At the end of this course, the students will be able to:


CO1: Describe Multicore architectures and identify their characteristics and challenges.
CO2: Identify the issues in programming Parallel Processors.

CO3: Write programs using OpenMP and MPI.

CO4: Design parallel programming solutions to common problems.

CO5: Compare and contrast programming for serial processors and programming for parallel
processors.

Ex. No. 1 – Write a simple Program to demonstrate an OpenMP Fork-Join Parallelism

Aim:

To write a simple program to demonstrate OpenMP fork-join parallelism.

Program:

/*
  Create a program that computes a simple matrix-vector multiplication
  b = Ax, either in Fortran or C/C++. Use OpenMP directives to make
  it run in parallel.

  This is the parallel version.
*/

#include <stdio.h>
#include <omp.h>

int main() {
    float A[2][2] = { { 1, 2 }, { 3, 4 } };
    float b[] = { 8, 10 };
    float c[2];
    int i, j;

    /* computes c = A*b; j is declared outside the loop, so it must be
       made private to avoid a data race between threads */
    #pragma omp parallel for private(j)
    for (i = 0; i < 2; i++) {
        c[i] = 0;
        for (j = 0; j < 2; j++) {
            c[i] = c[i] + A[i][j] * b[j];
        }
    }

// prints result
for (i=0; i<2; i++) {
printf("c[%i]=%f \n",i,c[i]);
}

return 0;
}
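
The #pragma omp parallel for directive above forks a team of threads at the loop and joins them at its end. As a supplementary sketch (not part of the prescribed listing), the fork-join structure can be made visible by printing thread numbers before, inside, and after an explicit parallel region:

#include <stdio.h>
#include <omp.h>

int main(void) {
    printf("Before the parallel region: one thread (the master)\n");

    #pragma omp parallel          /* fork: a team of threads starts here */
    {
        printf("Inside: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }                             /* join: implicit barrier; the team ends */

    printf("After the parallel region: back to one thread\n");
    return 0;
}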

Output:

RESULT:
Thus the program was executed successfully.

Ex. No. 2 – Create a program that computes a simple matrix-vector multiplication b = Ax in C/C++.
Use OpenMP directives to make it run in parallel.

Aim:

To create a program that computes a simple matrix-vector multiplication b = Ax in C/C++, using
OpenMP directives to make it run in parallel.

Steps (to enable OpenMP support in Visual Studio; a GCC alternative is noted after these steps):
1. Open your C++ project in Visual Studio.
2. Right-click on your project in the Solution Explorer window and select "Properties".
3. In the Properties window, navigate to Configuration Properties -> C/C++ -> Language.
4. Set the "OpenMP Support" option to "Yes (/openmp)".
5. Click "Apply" and "OK" to save the changes.
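
Note: if GCC/g++ is used instead of Visual Studio (an alternative toolchain, not part of the prescribed steps), OpenMP support is enabled with the -fopenmp compiler flag, for example: g++ -fopenmp program.cpp -o program.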

Program:

// Note: this listing multiplies two N x N matrices (C = A * B), a
// generalization of the matrix-vector case, and compares the serial
// and the OpenMP-parallel timings.
#include <iostream>
#include <vector>
#include <chrono>
#include <cstdlib>   // rand()
#include <omp.h>

using namespace std;

const int N = 1000;

int main()
{
    vector<vector<int>> A(N, vector<int>(N));
    vector<vector<int>> B(N, vector<int>(N));
    vector<vector<int>> C(N, vector<int>(N));

    // Initialize matrices A and B with random values
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            A[i][j] = rand() % 100;
            B[i][j] = rand() % 100;
        }
    }

    // Perform matrix multiplication serially
    auto start_serial = chrono::high_resolution_clock::now();
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            int sum = 0;
            for (int k = 0; k < N; k++) {
                sum += A[i][k] * B[k][j];
            }
            C[i][j] = sum;
        }
    }
    auto end_serial = chrono::high_resolution_clock::now();
    auto duration_serial = chrono::duration_cast<chrono::milliseconds>(end_serial - start_serial);

    // Perform matrix multiplication in parallel using OpenMP;
    // the loop indices are declared inside the loops, so each
    // thread automatically gets private copies
    auto start_parallel = chrono::high_resolution_clock::now();
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            int sum = 0;
            for (int k = 0; k < N; k++) {
                sum += A[i][k] * B[k][j];
            }
            C[i][j] = sum;
        }
    }
    auto end_parallel = chrono::high_resolution_clock::now();
    auto duration_parallel = chrono::duration_cast<chrono::milliseconds>(end_parallel - start_parallel);

    // Display the time taken for each approach
    cout << "Time taken for serial matrix multiplication: "
         << duration_serial.count() << " milliseconds" << endl;
    cout << "Time taken for parallel matrix multiplication: "
         << duration_parallel.count() << " milliseconds" << endl;

    return 0;
}

Output:

RESULT:
Thus the program was executed successfully.

Ex. No. 3 – Create a program that computes the sum of all the elements in an array A (C/C++)
or a program that finds the largest number in an array A. Use OpenMP directives to
make it run in parallel.

Aim:

To create a program that computes the sum of all the elements in an array A (C/C++), or a
program that finds the largest number in an array A, using OpenMP directives to make it run in parallel.

Program:

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

/* Main Program */

int main()
{
    float *Array, *Check, serial_sum, sum;
    int array_size, i;

    printf("Enter the size of the array\n");
    scanf("%d", &array_size);

    if (array_size <= 0) {
        printf("Array Size Should Be Of Positive Value\n");
        exit(1);
    }

    /* Dynamic Memory Allocation */
    Array = (float *) malloc(sizeof(float) * array_size);
    Check = (float *) malloc(sizeof(float) * array_size);

    /* Array Elements Initialization */
    for (i = 0; i < array_size; i++) {
        Array[i] = i * 5;
        Check[i] = Array[i];
    }

    printf("The Array Elements Are\n");
    for (i = 0; i < array_size; i++)
        printf("Array[%d]=%f\n", i, Array[i]);

    sum = 0.0;

    /* OpenMP Parallel For Directive And Critical Section:
       the critical section serializes the updates of sum so that
       the concurrent additions do not race */
    #pragma omp parallel for shared(sum)
    for (i = 0; i < array_size; i++) {
        #pragma omp critical
        sum = sum + Array[i];
    }

    /* Serial Calculation */
    serial_sum = 0.0;
    for (i = 0; i < array_size; i++)
        serial_sum = serial_sum + Check[i];

    if (serial_sum == sum)
        printf("\nThe Serial And Parallel Sums Are Equal\n");
    else {
        printf("\nThe Serial And Parallel Sums Are UnEqual\n");
        exit(1);
    }

    printf("\nThe Sum Of Elements Of The Array Using OpenMP Directives Is %f\n", sum);
    printf("\nThe Sum Of Elements Of The Array By Serial Calculation Is %f\n", serial_sum);

    /* Freeing Memory */
    free(Check);
    free(Array);

    return 0;
}
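
The critical section above serializes every update of sum, which is correct but slow. A lighter-weight idiom for the same computation is an OpenMP reduction; a minimal sketch of the alternative loop, using the same variables as the listing above:

/* Minimal alternative sketch: the same sum using an OpenMP reduction.
   Each thread accumulates its own private copy of sum, and OpenMP
   combines the copies at the join, so no critical section is needed. */
#pragma omp parallel for reduction(+ : sum)
for (i = 0; i < array_size; i++)
    sum = sum + Array[i];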

Output:

RESULT:
Thus the program was executed successfully.

Ex. No. 4 – Write an OpenMP program in C that prints "Hello World" from every thread

Aim:

To write an OpenMP program in C that creates a team of threads and prints "Hello World" from
each thread.

Program:

// OpenMP program to print Hello World
// using C language

// OpenMP header
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    // Beginning of parallel region
    #pragma omp parallel
    {
        printf("Hello World... from thread = %d\n",
               omp_get_thread_num());
    }
    // Ending of parallel region

    return 0;
}

Output:

RESULT:
Thus the program was executed successfully.

Ex. No. 5 – Implement the All-Pairs Shortest-Path Problem (Floyd's Algorithm) Using OpenMP

Aim:

To implement the All-Pairs Shortest-Path Problem (Floyd's Algorithm) using OpenMP.

Program:

// C++ Program for Floyd Warshall Algorithm
#include <bits/stdc++.h>
using namespace std;

// Number of vertices in the graph
#define V 4

/* Define Infinite as a large enough value. This value will be
   used for vertices not connected to each other */
#define INF 99999

// A function to print the solution matrix
void printSolution(int dist[][V]);

// Solves the all-pairs shortest path
// problem using the Floyd-Warshall algorithm
void floydWarshall(int dist[][V])
{
    int i, j, k;

    /* Add all vertices one by one to the set of intermediate
       vertices.
       ---> Before the start of an iteration, we have the shortest
       distances between all pairs of vertices such that the shortest
       distances consider only the vertices in the set {0, 1, 2, .., k-1}
       as intermediate vertices.
       ----> After the end of an iteration, vertex no. k is added to the
       set of intermediate vertices and the set becomes {0, 1, 2, .., k} */
    for (k = 0; k < V; k++) {
        // Pick all vertices as source one by one
        for (i = 0; i < V; i++) {
            // Pick all vertices as destination for the
            // above picked source
            for (j = 0; j < V; j++) {
                // If vertex k is on the shortest path from
                // i to j, then update the value of dist[i][j]
                if (dist[i][j] > (dist[i][k] + dist[k][j])
                    && (dist[k][j] != INF && dist[i][k] != INF))
                    dist[i][j] = dist[i][k] + dist[k][j];
            }
        }
    }

    // Print the shortest distance matrix
    printSolution(dist);
}

/* A utility function to print solution */
void printSolution(int dist[][V])
{
    cout << "The following matrix shows the shortest "
            "distances between every pair of vertices \n";
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) {
            if (dist[i][j] == INF)
                cout << "INF" << " ";
            else
                cout << dist[i][j] << " ";
        }
        cout << endl;
    }
}

// Driver's code
int main()
{
    /* Let us create the following weighted graph
              10
       (0)------->(3)
        |         /|\
       5|          |
        |          |1
       \|/         |
       (1)------->(2)
              3          */
    int graph[V][V] = { { 0, 5, INF, 10 },
                        { INF, 0, 3, INF },
                        { INF, INF, 0, 1 },
                        { INF, INF, INF, 0 } };

    // Function call
    floydWarshall(graph);
    return 0;
}
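
The listing above is the sequential Floyd-Warshall algorithm. To match the aim of using OpenMP, the row loop inside each k-iteration can be parallelized, while the k loop must stay sequential because iteration k reads the distances produced by iteration k-1. A minimal sketch of the modified loop nest (assuming <omp.h> is included):

/* Minimal OpenMP sketch: parallelize the rows inside each k-iteration.
   Row k and column k do not change during iteration k, so the threads
   never race on the data they read. j must be private because it is
   declared at function scope. */
for (k = 0; k < V; k++) {
    #pragma omp parallel for private(j)
    for (i = 0; i < V; i++) {
        for (j = 0; j < V; j++) {
            if (dist[i][j] > (dist[i][k] + dist[k][j])
                && (dist[k][j] != INF && dist[i][k] != INF))
                dist[i][j] = dist[i][k] + dist[k][j];
        }
    }
}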

Output:

The following matrix shows the shortest distances between every pair of vertices
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0

Time Complexity: O(V^3)
Auxiliary Space: O(V^2)

RESULT:
Thus the program was executed successfully.

Ex. No. 6 – Implement a program for Parallel Random Number Generators using Monte Carlo
Methods in OpenMP

Aim:

To implement a program for parallel random number generation using Monte Carlo methods in OpenMP.

Program:

// C++ program to estimate the value of PI using the
// Monte Carlo method in parallel with OpenMP
#include <iostream>
#include <cstdlib>   // srand48(), drand48()
#include <ctime>     // time()
#include <omp.h>

using namespace std;

// Function to find the estimated value of PI using the
// Monte Carlo algorithm with N points per thread and K threads
void monteCarlo(int N, int K)
{
    // Stores X and Y coordinates of a random point
    double x, y;

    // Stores the squared distance of a random point from the origin
    double d;

    // Stores the number of points lying inside the circle
    int pCircle = 0;

    // Stores the number of points lying inside the square
    int pSquare = 0;
    int i = 0;

    // Parallel counting of the random points lying inside the circle
    #pragma omp parallel firstprivate(x, y, d, i) reduction(+ : pCircle, pSquare) num_threads(K)
    {
        // Seed the generator; the thread number is added so that the
        // threads do not all start from the same seed. Note that
        // drand48() keeps one global state shared by all threads;
        // see the sketch below for fully independent streams.
        srand48((long) time(NULL) + omp_get_thread_num());

        for (i = 0; i < N; i++) {
            // Finds a random X co-ordinate
            x = (double) drand48();

            // Finds a random Y co-ordinate
            y = (double) drand48();

            // Finds the square of the distance of point (x, y) from the origin
            d = ((x * x) + (y * y));

            // If d is less than or equal to 1, the point lies inside the circle
            if (d <= 1) {
                pCircle++;
            }

            // Every generated point lies inside the square
            pSquare++;
        }
    }

    // Stores the estimated value of PI
    double pi = 4.0 * ((double) pCircle / (double) pSquare);

    // Prints the value of pi
    cout << "Final Estimation of Pi = " << pi;
}

// Driver Code
int main()
{
    // Input
    int N = 100000;
    int K = 8;

    // Function call
    monteCarlo(N, K);
    return 0;
}

// This code is contributed by shivanisinghss2110
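
A caveat on the listing above: drand48() keeps a single global state, so concurrent calls from several threads interleave one stream rather than giving each thread an independent generator. A minimal sketch of fully independent per-thread streams using the re-entrant erand48(), assuming the same N, K, pCircle, and pSquare as the listing:

/* Supplementary sketch: independent per-thread streams with the
   re-entrant erand48(). Each thread owns its 48-bit state buffer,
   so no generator state is shared between threads. */
#pragma omp parallel reduction(+ : pCircle, pSquare) num_threads(K)
{
    unsigned short state[3] = { 0x330E,
                                (unsigned short) time(NULL),
                                (unsigned short) omp_get_thread_num() };
    for (int i = 0; i < N; i++) {
        double x = erand48(state);
        double y = erand48(state);
        if (x * x + y * y <= 1.0)
            pCircle++;
        pSquare++;
    }
}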
Output:

RESULT:
Thus the program was executed successfully.

Ex. No. 7 – Write a Program to demonstrate MPI-broadcast-and-collective-communication in C

Aim:

To write a program to demonstrate MPI broadcast and collective communication in C.

Program:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* MPI_Barrier is a collective operation: every process waits
       here until all processes in the communicator have arrived */
    MPI_Barrier(MPI_COMM_WORLD);
    printf("Hello, world. I am %d of %d\n", rank, nprocs);
    fflush(stdout);

    MPI_Finalize();
    return 0;
}

Output
Standard Output
Compiling
Compilation is OK
Execution ...
Hello, world. I am 0 of 8
Hello, world. I am 1 of 8
Hello, world. I am 2 of 8
Hello, world. I am 3 of 8
Hello, world. I am 4 of 8
Hello, world. I am 5 of 8
Hello, world. I am 6 of 8
Hello, world. I am 7 of 8
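
The MPI_Barrier call above is one collective operation; the other named in the title is broadcast. A minimal sketch of MPI_Bcast (supplementary, assuming the same rank variable and MPI setup as the listing above), in which rank 0 sets a value and every process receives the same copy:

/* Supplementary sketch: broadcasting a value from rank 0 to all ranks.
   Assumes MPI_Init has been called and rank holds the process rank. */
int value = 0;
if (rank == 0)
    value = 100;                  /* only the root sets the data */
MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
printf("Rank %d now has value %d\n", rank, value);   /* every rank prints 100 */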

RESULT:

Thus the program was executed successfully.

Ex. No. 8 – Write a Program to demonstrate MPI scatter and gather (collective communication) in C

Aim:

To write a program to demonstrate the MPI scatter and gather collective operations in C.

Program:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

/* Sample helper (body assumed; the record leaves it unspecified):
   fills a buffer with test data */
void Generate_data(int *buf, int size) {
    for (int i = 0; i < size; i++)
        buf[i] = i;
}

/* Sample helper (body assumed): prints a summary of a buffer */
void Print_data(int *buf, int size) {
    printf("Received %d values: first = %d, last = %d\n",
           size, buf[0], buf[size - 1]);
}

int main(int argc, char *argv[])
{
    int proc_id, nb_proc;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &proc_id);
    MPI_Comm_size(MPI_COMM_WORLD, &nb_proc);

    /* Example using Scatter: the root distributes one equal
       chunk of sendbuf to every process */
    const int recvsize = 50;
    int recvbuf[50];
    int sendsize = nb_proc * recvsize;
    int *sendbuf = NULL;
    if (proc_id == 0) {
        sendbuf = (int *) malloc(sizeof(int) * sendsize);
        Generate_data(sendbuf, sendsize);
    }
    MPI_Scatter(sendbuf, recvsize, MPI_INT, recvbuf, recvsize, MPI_INT,
                0, MPI_COMM_WORLD);
    Print_data(recvbuf, recvsize);

    /* Example using Gather: every process contributes one chunk
       and the root collects them all */
    int *gatherbuf = NULL;
    if (proc_id == 0)
        gatherbuf = (int *) malloc(sizeof(int) * sendsize);
    Generate_data(recvbuf, recvsize);
    MPI_Gather(recvbuf, recvsize, MPI_INT, gatherbuf, recvsize, MPI_INT,
               0, MPI_COMM_WORLD);
    if (proc_id == 0)
        Print_data(gatherbuf, sendsize);

    MPI_Finalize();
    return 0;
}
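
Such a program is typically compiled and launched with an MPI toolchain such as MPICH or Open MPI (command names assumed, not taken from this record), for example: mpicc program.c -o program followed by mpirun -np 4 ./program, where -np sets the number of processes (nb_proc above).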

Output:

RESULT:

Thus the program was executed successfully.

Ex. No. 9 – Write a Program to demonstrate MPI-send-and-receive in C

Aim:

To write a program to demonstrate MPI-send-and-receive in C.

Program:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define max_rows 100000

#define send_data_tag 2001
#define return_data_tag 2002

int array[max_rows];
int array2[max_rows];

int main(int argc, char **argv)
{
    long int sum, partial_sum;
    MPI_Status status;
    int my_id, root_process, ierr, i, num_rows, num_procs,
        an_id, num_rows_to_receive, avg_rows_per_process,
        sender, num_rows_received, start_row, end_row, num_rows_to_send;

    /* Now replicate this process to create parallel processes.
     * From this point on, every process executes a separate copy
     * of this program */

ierr = MPI_Init(&argc, &argv);

root_process = 0;

    /* find out MY process ID, and how many processes were started. */
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    if (my_id == root_process) {

        /* I must be the root process, so I will query the user
         * to determine how many numbers to sum. */
        printf("please enter the number of numbers to sum: ");
        scanf("%i", &num_rows);

        if (num_rows > max_rows) {
            printf("Too many numbers.\n");
            exit(1);
        }

        avg_rows_per_process = num_rows / num_procs;

        /* initialize an array */
        for (i = 0; i < num_rows; i++) {
            array[i] = i + 1;
        }

        /* distribute a portion of the vector to each child process */

        for (an_id = 1; an_id < num_procs; an_id++) {

            start_row = an_id * avg_rows_per_process + 1;
            end_row = (an_id + 1) * avg_rows_per_process;

            if ((num_rows - end_row) < avg_rows_per_process)
                end_row = num_rows - 1;

            num_rows_to_send = end_row - start_row + 1;

            ierr = MPI_Send(&num_rows_to_send, 1, MPI_INT,
                            an_id, send_data_tag, MPI_COMM_WORLD);

            ierr = MPI_Send(&array[start_row], num_rows_to_send, MPI_INT,
                            an_id, send_data_tag, MPI_COMM_WORLD);
        }

        /* and calculate the sum of the values in the segment assigned
         * to the root process */
        sum = 0;
        for (i = 0; i < avg_rows_per_process + 1; i++) {
            sum += array[i];
        }

        printf("sum %li calculated by root process\n", sum);

        /* and, finally, I collect the partial sums from the slave processes,
         * print them, add them to the grand sum, and print it */
        for (an_id = 1; an_id < num_procs; an_id++) {

            ierr = MPI_Recv(&partial_sum, 1, MPI_LONG, MPI_ANY_SOURCE,
                            return_data_tag, MPI_COMM_WORLD, &status);

            sender = status.MPI_SOURCE;

            printf("Partial sum %li returned from process %i\n", partial_sum, sender);

            sum += partial_sum;
        }

        printf("The grand total is: %li\n", sum);
    }

    else {
        /* I must be a slave process, so I must receive my array segment,
         * storing it in a "local" array, array2. */

        ierr = MPI_Recv(&num_rows_to_receive, 1, MPI_INT,
                        root_process, send_data_tag, MPI_COMM_WORLD, &status);

        ierr = MPI_Recv(&array2, num_rows_to_receive, MPI_INT,
                        root_process, send_data_tag, MPI_COMM_WORLD, &status);

        num_rows_received = num_rows_to_receive;

        /* Calculate the sum of my portion of the array */
        partial_sum = 0;
        for (i = 0; i < num_rows_received; i++) {
            partial_sum += array2[i];
        }

        /* and finally, send my partial sum to the root process */
        ierr = MPI_Send(&partial_sum, 1, MPI_LONG, root_process,
                        return_data_tag, MPI_COMM_WORLD);
    }

    ierr = MPI_Finalize();
    return 0;
}

Output:

RESULT:
Thus the program was executed successfully.

Ex. No. 10 – Write a Program to demonstrate MPI process identification (rank, size and processor name) in C

Aim:

To write a program in C in which every MPI process reports its rank, the total number of
processes, and the name of the processor it runs on.

Program:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the rank of the process
    int PID;
    MPI_Comm_rank(MPI_COMM_WORLD, &PID);

    // Get the number of processes
    int number_of_processes;
    MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_length;
    MPI_Get_processor_name(processor_name, &name_length);

    // Print off a hello world message
    printf("Hello MPI user: from process PID %d out of %d processes on machine %s\n",
           PID, number_of_processes, processor_name);

    // Finalize the MPI environment
    MPI_Finalize();

    return 0;
}

Output:

RESULT:
Thus the program was executed successfully.
