CC LAB MANUAL

The document outlines procedures for installing a C compiler, migrating virtual machines, installing Google App Engine, and simulating cloud scenarios using CloudSim. Each section provides step-by-step instructions for executing tasks such as creating applications, transferring files, and launching virtual machines. The results indicate successful execution of all procedures described.


MODULE - I
Ex.No: 1a
INSTALLING A C COMPILER IN THE VIRTUAL MACHINE AND EXECUTING A SAMPLE PROGRAM
Date:

AIM:

To find a procedure to install a C compiler in the virtual machine and execute a sample
program.

PROCEDURE:

Step 1: Open the terminal



Step 2: To check whether GCC is already installed, type the command gcc --version

If the command prints version information, GCC is already installed.

Step 3: If GCC is not present, install it by executing:

sudo apt install build-essential

Step 4: Now let us create a C file in which to write our hello world program. If you want to create the C file on the desktop, set the path accordingly. To create a file in Ubuntu, use the touch command:

touch hello.c

Step 5: Now double-click on the C file.

Type the program in the editor window that appears, for example:

#include <stdio.h>

int main(void)
{
    printf("Hello, World!\n");
    return 0;
}

Save the file by pressing Ctrl+S and close the file.

Step 7: Now, to compile the file, type the command in the form gcc <file name> -o <output file name>:

gcc hello.c -o test

Now an output file named test will be created on the desktop.

Step 8: Since it is a binary executable, we can run it. To execute it, enter the command ./test

RESULT:

Thus, the procedure to install a C compiler in the virtual machine and execute a
sample program was executed successfully.

Ex.No: 1b
VIRTUAL MACHINE MIGRATION
Date:

AIM:
To show the migration of a virtual machine from one node to another based on a certain condition.

PROCEDURE:
Step 1: Select the VM and click File->Export Appliance.

Step 2: Select the VM to be exported and click Next.

Step 3: Note the file path and click “Next”

Step 4: Click “Export”



Step 5: The Virtual machine is being exported



Step 6: Install “ssh” to access the neighbour's VM.

Step 7: Go to File->Computer:/home/sam/Documents/

Step 8: Type the neighbour's URL: sftp://[email protected]._/

Step 9: Give the password (sam123) and get connected.

Step 10: Select the VM and copy it to the desktop.

Step 11: Open VirtualBox and select File->Import Appliance->Browse



Step 12: Select the VM to be imported and click “Open”.

Step 13: Click the “Next” button.

Step 14: Click “Import”.


Step 15: The virtual machine starts to get imported.

Step 16: Thus the VM is imported.

RESULT:

Thus, virtual machine migration has been implemented successfully.



MODULE - II
Ex. No: 2a
INSTALLING GOOGLE APP ENGINE & CREATING HELLO WORLD APP
Date:

AIM:

To find a procedure to install Google App Engine and create Hello World and other simple web applications using Python.

PROCEDURE:

Step 1: Now go to https://cloud.google.com/

In the window that appears next, click on the drop-down arrow near the project name.

In the next window that appears, select the New Project option.

In the New Project tab that appears, give a project name and click the Create option.

Step 2: Now open Cloud Shell in the console page.

Step 3: To view the list of projects available in the selected console, type the command gcloud projects list.

Once this command is entered, it will ask us to authorize Cloud Shell.

This command will list all the projects available in that particular console.

Step 4: Select APIs & Services from the menu to enable Google App Engine.

In the Welcome to API Library page, search for Google App Engine in the search tab.

From the list of options, select App Engine Admin API.


Step 5: In the App Engine Admin API page click on the Enable button.

Step 6: Now go to Cloud Shell and type the command gcloud config set project <project_id>

Step 7: Now select App Engine → Dashboard



Now either click on Create Application, or go to the shell and type the command gcloud app create.

After entering the gcloud app create command, it will display a list of locations and ask us to enter the numeric value of a location.

After selecting a location, the App Engine application will be created.

After this, the Dashboard will change accordingly.

Step 8: Now clone the Hello World sample app repository to our local machine.

Step 9: Go to Cloud Shell and type the command:

git clone https://github.com/GoogleCloudPlatform/python-docs-samples

Step 10: Now type the command as


cd python-docs-samples/appengine/standard_python3/hello_world

Step 11: Now type the command cat main.py to display the sample application's source (a small Flask app whose root handler returns a hello message).

Step 12: Now execute the following commands:

virtualenv env && source env/bin/activate
pip install -r requirements.txt
python main.py

Step 13: Now select the Billing menu and click Link Billing Account.

Step 14: Now give the billing details

Step 15: Once the billing is done, copy the link that appears in the next window and paste it in a new tab.

RESULT:

Thus, the procedure to install Google App Engine and create Hello World and other simple web applications using Python was executed successfully.

Ex. No: 2b
LAUNCHING WEB APPLICATION USING GAE
Date:

AIM:

To find a procedure to install Google App Engine and create Hello World and other simple web applications using Python.

PROCEDURE:

Step 1: Download python from https://www.python.org/



Step 2: Now double-click on the setup file and proceed with the installation process.

Step 3: Now install Google App Engine SDK from https://cloud.google.com/sdk/docs/install



In the guide page, click on the Google Cloud CLI installer and proceed with the installation process.

In the Google Cloud SDK setup page that appears next, simply click the Next button.

In the next window that appears, select the Single user option and click the Next button.

Step 4: Now create a folder named app on the desktop and then create another subfolder named ae-01-trivial inside app.

Step 5: Open notepad and type the following code

application: ae-01-trivial
version: 1
runtime: python
api_version: 1
handlers:
- url: /.*
script: index.py

and then save the file. While saving, change the file type to All Files and give the filename as app.yaml.

Step 6: Open the notepad and type the following code in it.

print ('Content-Type: text/plain')


print (' ')
print ('Hello there Chuck')

and then save the file. While saving, change the file type to All Files and give the filename as index.py.

Step 7: Open the cloud SDK Shell



Step 8: In the Cloud SDK Shell that opens, type the command google-cloud-sdk\bin\dev_appserver.py "<app folder path>"

Step 9: Copy the URL shown (the development server listens at http://localhost:8080 by default) into any browser and press Enter.

RESULT:

Thus, the procedure to install Google App Engine and create Hello World and other simple web applications using Python was executed successfully.

MODULE - III
Ex. No: 3a
SIMULATE A CLOUD SCENARIO USING CLOUD SIM
Date:

AIM:

To simulate a cloud scenario using CloudSim and run a scheduling algorithm using CloudSim.

PROCEDURE:

Step 1: Within the Eclipse window, navigate the menu File -> New -> Project to open the new project wizard.

Step 2: Select the 'Java Project' option; once done, click 'Next'.

Step 3: Provide the project name and the path of the CloudSim project source code:

Project Name: CloudSim

Step 4: Unselect the 'Use default location' option, then click 'Browse' to open the path where you have unzipped the CloudSim project, and finally click Next to set the project settings.

Step 5: Navigate the path until you can see the bin, docs, examples, etc. folders in the navigation pane.

Step 6: Once done, click 'Next' to go to the next step, i.e. setting up the project settings. Now open the 'Libraries' tab, then simply click on 'Add External JARs' (commons-math3-3.x.jar will be included in the project in this step).

Step 7: Open the path where you have unzipped the commons-math binaries, select 'commons-math3-3.x.jar' and click Open.

Step 8: Ensure the external JAR that you opened in the previous step is displayed in the list and then click 'Finish'.

Step 9: Once the project is configured, you can open the 'Project Explorer' and start exploring the CloudSim project.

The following is the final screen which you will see after CloudSim is configured.

Step 10: Within the 'Project Explorer', navigate to the 'examples' folder, then expand the package 'org.cloudbus.cloudsim.examples' and double-click to open 'CloudsimExample1.java'.

Step 11: Now navigate to the Eclipse menu 'Run -> Run' or directly use the keyboard shortcut 'Ctrl + F11' to execute 'CloudsimExample1.java'.

Step 12: The following displays the output in the console window of the Eclipse IDE.

CloudSimExample1.java

The first step is to initialize the CloudSim package by initializing the CloudSim library, as follows:

CloudSim.init(num_user, calendar, trace_flag)


Data centers are the resource providers in CloudSim; hence, creation of data centers is the second step. To create a Datacenter, you need a DatacenterCharacteristics object that stores the properties of a data center, such as architecture, OS, list of machines, the allocation policy (time-shared or space-shared), the time zone and its price:

Datacenter datacenter9883 = new Datacenter(name, characteristics, new VmAllocationPolicySimple(hostList), storageList, 0)

The third step is to create a broker: DatacenterBroker broker = createBroker();

The fourth step is to create one virtual machine, specifying the unique ID of the VM, the ID of the VM's owner (userId), MIPS, the number of PEs (CPUs), the amount of RAM, the amount of bandwidth, the amount of storage, the virtual machine monitor, and the cloudlet scheduler policy for cloudlets:

Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared())

Submit the VM list to the broker: broker.submitVmList(vmlist)

Create a cloudlet with length, file size, output size, and utilization model:

Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize, utilizationModel, utilizationModel, utilizationModel)

Submit the cloudlet list to the broker:


broker.submitCloudletList(cloudletList)

Start the Simulation:


CloudSim.startSimulation()
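
Taken together, the steps above form a complete program. The following is a minimal consolidated sketch modelled on CloudsimExample1 and assuming the CloudSim 3.x API; the class name MinimalCloudSim and the concrete host, VM and cloudlet parameters are illustrative assumptions, not values prescribed by this manual:

import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.*;

public class MinimalCloudSim
{
    public static void main(String[] args) throws Exception
    {
        // Step 1: initialise the CloudSim library (1 user, no trace events)
        CloudSim.init(1, Calendar.getInstance(), false);

        // Step 2: create a datacenter with one host holding one 1000-MIPS PE
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));
        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // Step 3: create the broker that mediates between user and datacenter
        DatacenterBroker broker = new DatacenterBroker("Broker");
        int brokerId = broker.getId();

        // Step 4: create one VM with a time-shared cloudlet scheduler and submit it
        List<Vm> vmList = new ArrayList<Vm>();
        vmList.add(new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared()));
        broker.submitVmList(vmList);

        // Step 5: create one 400000-MI cloudlet (400 s on a 1000-MIPS VM) and submit it
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);
        List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
        cloudletList.add(cloudlet);
        broker.submitCloudletList(cloudletList);

        // Step 6: run the simulation and print each cloudlet's outcome
        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        for (Cloudlet cl : broker.getCloudletReceivedList())
        {
            System.out.println("Cloudlet " + cl.getCloudletId() + ": "
                    + cl.getCloudletStatusString() + ", time = " + cl.getActualCPUTime());
        }
    }
}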

Sample Output from the Existing Example:


Starting CloudSimExample1...
Initialising...
Starting CloudSim version 3.0
Datacenter_0 is starting...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>null
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 1 resource(s)
0.0: Broker: Trying to Create VM #0 in Datacenter_0
0.1: Broker: VM #0 has been created in Datacenter #2, Host #0
0.1: Broker: Sending cloudlet 0 to VM #0
400.1: Broker: Cloudlet 0 received
400.1: Broker: All Cloudlets executed. Finishing...
400.1: Broker: Destroying VM #0
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
Datacenter_0 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.

========== OUTPUT ==========

Cloudlet ID    STATUS     Data center ID    VM ID    Time    Start Time    Finish Time
0              SUCCESS    2                 0        400     0.1           400.1

*****Datacenter: Datacenter_0*****
User id    Debt
3          35.6

CloudSimExample1 finished!

RESULT:

Thus the procedure to simulate a cloud scenario using CloudSim is done successfully.

Ex.No: 3b
FILE TRANSFER FROM ONE VM TO ANOTHER VM
Date:

AIM:

To find the procedure to transfer the files from one virtual machine to another virtual
machine.

PROCEDURE:
Step 1: Select the VM and click File->Export Appliance.

Step 2: Select the VM to be exported and click Next.

Step 3: Note the file path and click “Next”


Step 4: Click “Export”

Step 5: The Virtual machine is being exported



Step 6: Install “ssh” to access the neighbour's VM.

Step 7: Go to File->Computer:/home/sam/Documents/

Step 8: Type the neighbour's URL: sftp://[email protected]._/

Step 9: Give the password (sam123) and get connected.

Step 10: Select the VM and copy it to the desktop.

Step 11: Open VirtualBox and select File->Import Appliance->Browse



Step 12: Select the VM to be imported and click “Open”.

Step 13: Click the “Next” button.

Step 14: Click “Import”.


Step 15: The virtual machine starts to get imported.

Step 16: Thus the VM is imported.

RESULT:

Thus, the procedure to transfer the files from one virtual machine to another virtual machine
was executed successfully.

MODULE - IV
Ex. No: 4a
LAUNCHING VIRTUAL MACHINE USING TRY STACK
Date:

AIM:

To find a procedure to launch a virtual machine using TryStack (online OpenStack).

PROCEDURE:

OpenStack Compute Dashboard :

Step 1: Create a Network


1. Go to Network > Networks and then click Create Network.
2. In Network tab, fill Network Name for example internal and then click Next.
3. In Subnet tab,
a. Fill Network Address with an appropriate CIDR, for example 192.168.1.0/24. Use a private network CIDR block as the best practice.
b. Select IP Version with appropriate IP version, in this case IPv4.
c. Click Next.
4. In Subnet Details tab, fill DNS Name Servers with 8.8.8.8 (Google DNS) and then click Create.

Create a subnet detail with DNS Server

Network Creation:

Step 2: Create an Instance


1. Go to Compute > Instances and then click Launch Instance.
2. In Details tab,
a. Fill Instance Name, for example Ubuntu 1.
b. Select Flavor, for example m1.medium.
c. Fill Instance Count with 1.
d. Select Instance Boot Source with Boot from Image.
e. Select Image Name with Ubuntu 14.04 amd64 (243.7 MB) if you want to install Ubuntu 14.04 in your virtual machine.
3. In Access & Security tab,
a. Click the [+] button of Key Pair to import a key pair. This key pair is a public and private key that we will use to connect to the instance from our machine.

b. In Import Key Pair dialog,


b.1 Fill Key Pair Name with your machine name (for example Edward-Key).
b.2 Fill Public Key with your SSH public key (usually in ~/.ssh/id_rsa.pub). See the description in the Import Key Pair dialog box for more information. If you are using Windows, you can use PuTTYgen to generate a key pair.
b.3 Click Import key pair.
b.4 In Security Groups, mark/check default.
c. In Networking tab,
c.1 In Selected Networks, select the network that was created in Step 1, for example internal.
d. Click Launch.
e. If you want to create multiple instances, you can repeat steps 1-5. I created one more instance with the instance name Ubuntu 2.

Launch the Instances:



Launch the instances – Access Security:

Import the Key Pair:



Launch Instance – Key Pair:

Instances Launch:

Network Topology:

Step 3: Create Router


In Step 1 we created our network, but it is isolated; it doesn't connect to the internet. To give our network an internet connection, we need a router running as the gateway to the internet.

1. Go to Network > Routers and then click Create Router.


2. Fill Router Name for example router1 and then click Create router.
3. Click on your router name link, for example router1, Router Details page.
4. Click Set Gateway button in upper right:
5. Select External networks with external.
6. Then OK.
7. Click Add Interface button.
8. Select Subnet with the network that you created in Step 1.
9. Click Add interface.
10. Go to Network > Network Topology. You will see the network topology. In the example there are two networks, external and internal, which are bridged by a router. There are instances joined to the internal network.

Create a Router:

Set Gateway and External Gateway:

External Gateway:

Internal Interface:

Add Interface:

Network Topology after creation of Router:

Step 4: Configure Floating IP Address


A floating IP address is a public IP address. It makes your instance accessible from the internet. When you launch your instance, the instance will have a private network IP, but no public IP. In OpenStack, the public IPs are collected in a pool and managed by the admin (in our case, TryStack).
You need to request a public (floating) IP address to be assigned to your instance.

1. Go to Compute > Instance.


2. In one of your instances, click More > Associate Floating IP.
3. In IP Address, click Plus [+].
4. Select Pool to external and then click Allocate IP.
5. Click Associate.
6. Now you will get a public IP, e.g. 8.21.28.120, for your instance.
Step 5: Configure Access & Security
OpenStack has a feature like a firewall that can whitelist/blacklist your in/out connections. It is called a Security Group.
1. Go to Compute > Access & Security and then open Security Groups tab.
2. In default row, click Manage Rules.
3. Click Add Rule, choose ALL ICMP rule to enable ping into your instance, and then
click Add.
4. Click Add Rule, choose HTTP rule to open HTTP port (port 80), and then click
Add.
5. Click Add Rule, choose SSH rule to open SSH port (port 22), and then click Add.
6. You can open other ports by creating new rules.

Access & Security:



Allocate Floating IP to OpenStack :

Step 6: SSH to Your Instance


Now you can SSH to your instances at the floating IP address that you got in Step 4. If you are using the Ubuntu image, the SSH user will be ubuntu.
Ping the Instance:

Associate IP Address to Openstack:

RESULT:
Thus, the procedure to launch a virtual machine using TryStack (online OpenStack) was completed successfully.

Ex. No: 4b
DEVELOPING WEB APPLICATIONS IN CLOUD.
Date:

AIM:

To develop a new web application in the cloud.

PROCEDURE:
When you start the Globus Toolkit container, a number of services start up. The service for this task will be a simple Math service that can perform basic arithmetic for a client.

The Math service will access a resource with two properties:


1. An integer value that can be operated upon by the service
2. A string value that holds a string describing the last operation
The service itself will have three remotely accessible operations that operate upon
value:
(a) add, that adds a to the resource property value.
(b) subtract that subtracts a from the resource property value.
(c) getValueRP that returns the current value of value.
Usually, the best way to start any programming task is to begin with an overall description of what you want the code to do, which in this case is the service interface. The service interface describes what the service provides in terms of the names of operations, their arguments and return values. A Java interface for our service is:

public interface Math
{
    public void add(int a);
    public void subtract(int a);
    public int getValueRP();
}
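
To see the intended behaviour of these three operations in isolation, here is a plain-Java sketch that implements the interface above. It is only an illustration, not the GT4 service code: in the actual MathService the integer value and the string describing the last operation are held as WSRF resource properties rather than instance fields, and the class name MathImpl is assumed here.

public class MathImpl implements Math
{
    // In the GT4 service these two fields are the resource properties
    private int value = 0;          // the integer value operated upon by the service
    private String lastOp = "NONE"; // describes the last operation performed

    public void add(int a)
    {
        value += a;
        lastOp = "ADDITION";
    }

    public void subtract(int a)
    {
        value -= a;
        lastOp = "SUBTRACTION";
    }

    public int getValueRP()
    {
        return value;
    }
}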

It is possible to start with this interface and create the necessary WSDL file using the standard Web service tool called Java2WSDL. However, the WSDL file for GT4 has to include details of resource properties that are not given explicitly in the interface above. Hence, we will provide the WSDL file.

Step 1 Getting the Files

All the required files are provided and come directly from [1]. The MathService source code files can be found at http://www.gt4book.com (http://www.gt4book.com/downloads/gt4book-examples.tar.gz). A Windows zip-compressed version can be found at http://www.cs.uncc.edu/~abw/ITCS4146S07/gt4book-examples.zip. Download and uncompress the file into a directory called GT4services. Everything is included (the Java source, WSDL and deployment files, etc.):

WSDL service interface description file -- The WSDL service interface description file is
provided within the GT4services folder at:
GT4Services\schema\examples\MathService_instance\Math.wsdl
This file, and discussion of its contents, can be found in Appendix A. Later on we will need to
modify this file, but first we will use the existing contents that describe the Math service above.
Service code in Java -- For this assignment, both the code for service operations and for the
resource properties are put in the same class for convenience. More complex services and
resources would be defined in separate classes. The Java code for the service and its resource
properties is located within the GT4services folder at:

GT4services\org\globus\examples\services\core\first\impl\MathService.java.
Deployment Descriptor -- The deployment descriptor gives several different important sets of
information about the service once it is deployed. It is located within the GT4services folder
at:

GT4services\org\globus\examples\services\core\first\deploy-server.wsdd.

Step 2 – Building the Math Service


It is now necessary to package all the required files into a GAR (Grid Archive) file. The build tool Ant from the Apache Software Foundation is used to achieve this, as shown below:

Generating a GAR file with Ant (from http://gdp.globus.org/gt4-tutorial/multiplehtml/ch03s04.html)

Ant is similar in concept to the Unix make tool, but is a Java tool and XML-based.
Build scripts are provided by Globus 4 to use the Ant build file. The Windows version of the build script for MathService is the Python file called globus-build-service.py, which is held in the GT4services directory. The build script takes one argument, the name of the service that you want to deploy. To keep with the naming convention in [1], this service will be called first. In the Client Window, run the build script from the GT4services directory with:
globus-build-service.py first
The output should look similar to the following:
Buildfile: build.xml
.
.
.
.
.
BUILD SUCCESSFUL
Total time: 8 seconds
During the build process, a new directory is created in your GT4Services directory that is
named build. All of your stubs and class files that were generated will be in that directory and
its subdirectories. More importantly, there is a GAR (Grid Archive) file called
org_globus_examples_services_core_first.gar. The GAR file is the package that contains
every file that is needed to successfully deploy your Math Service into the Globus container.
The files contained in the GAR file are the Java class files, WSDL, compiled stubs, and the
deployment descriptor.

Step 3 – Deploying the Math Service


If the container is still running in the Container Window, then stop it using Control-C. To deploy the Math Service, you will use a tool provided by the Globus Toolkit called globus-deploy-gar. In the Container Window, issue the command:

globus-deploy-gar org_globus_examples_services_core_first.gar
Successful output of the command is:

The service has now been deployed.


Check that the service is deployed by starting the container from the Container Window. You should see the service called MathService.

Step 4 – Compiling the Client

A client has already been provided to test the Math Service. It is located in the GT4Services directory at GT4Services\org\globus\examples\clients\MathService_instance\Client.java and contains the following code:


package org.globus.examples.clients.MathService_instance;

import org.apache.axis.message.addressing.Address;
import org.apache.axis.message.addressing.EndpointReferenceType;
import org.globus.examples.stubs.MathService_instance.MathPortType;
import org.globus.examples.stubs.MathService_instance.GetValueRP;
import org.globus.examples.stubs.MathService_instance.service.MathServiceAddressingLocator;

public class Client
{
    public static void main(String[] args)
    {
        MathServiceAddressingLocator locator = new MathServiceAddressingLocator();
        try
        {
            String serviceURI = args[0];
            // Create endpoint reference to service
            EndpointReferenceType endpoint = new EndpointReferenceType();
            endpoint.setAddress(new Address(serviceURI));
            MathPortType math;
            // Get PortType
            math = locator.getMathPortTypePort(endpoint);
            // Perform an addition
            math.add(10);
            // Perform another addition
            math.add(5);
            // Access value
            System.out.println("Current value: " + math.getValueRP(new GetValueRP()));
            // Perform a subtraction
            math.subtract(5);
            // Access value
            System.out.println("Current value: " + math.getValueRP(new GetValueRP()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
When the client is run from the command line, you pass it one argument. The argument is the URL that specifies where the service resides. The client will create the endpoint reference and incorporate this URL as the address. The endpoint reference is then used with the getMathPortTypePort method of a MathServiceAddressingLocator object to obtain a reference to the Math interface (portType). Then we can apply the methods available in the service as though they were local methods. Notice that the calls to the service (the add and subtract method calls) must be in a “try {} catch(){}” block because a “RemoteException” may be thrown. The code for the “MathServiceAddressingLocator” is created during the build process. (Thus you don't have to write it!)

(a) Setting the Classpath


To compile the new client, you will need the JAR files from the Globus toolkit in your
CLASSPATH. Do this by executing the following command in the Client Window:
%GLOBUS_LOCATION%\etc\globus-devel-env.bat
You can verify that this sets your CLASSPATH, by executing the command:
echo %CLASSPATH%
You should see a long list of JAR files.
Running \gt4\etc\globus-devel-env.bat only needs to be done once for each Client Window
that you open. It does not need to be done each time you compile.
(b) Compiling Client
Once your CLASSPATH has been set, you can compile the Client code by typing in the following command:
javac -classpath
build\classes\org\globus\examples\services\core\first\impl\:%CLASSPATH%
org\globus\examples\clients\MathService_instance\Client.java

Step 5 – Start the Container for your Service


Restart the Globus container from the Container Window with:
globus-start-container -nosec
if the container is not running.

Step 6 – Run the Client


To start the client from your GT4Services directory, do the following in the Client Window, which passes the GSH of the service as an argument:
java -classpath
build\classes\org\globus\examples\services\core\first\impl\:%CLASSPATH%

org.globus.examples.clients.MathService_instance.Client
http://localhost:8080/wsrf/services/examples/core/first/MathService
which should give the output:
Current value: 15
Current value: 10

Step 7 – Undeploy the Math Service and Kill a Container


Before we can add functionality to the Math Service (Section 5), we must undeploy the
service. In the Container Window, kill the container with a Control-C. Then to undeploy the
service, type in the following command:

globus-undeploy-gar org_globus_examples_services_core_first
which should result with the following output:
Undeploying gar...Deleting /.
.
.
Undeploy successful
6 Adding Functionality to the Math Service

In this final task, you are asked to modify the Math service and associated files so the service supports the multiplication operation. To do this task, you will need to modify the service code and the WSDL file. The exact changes that are necessary are not given; you are to work them out yourself. You will need to fully understand the contents of the service code and WSDL files and then modify them accordingly. Appendix A gives an explanation of the important parts of these files. Keep all file names the same and simply redeploy the service afterwards. You will also need to add code to the client (Client.java) to test the modified service, including multiplication.

RESULT:
Thus, a new web application in the cloud was developed successfully.

MODULE - V
Ex. No: 5a
USING APIs OF HADOOP TO INTERACT WITH IT
Date:

AIM:

To write a program that uses the APIs of Hadoop to display the file content of a file existing in HDFS.
PROCEDURE:
/home/hduser/HadoopFScat.java:

import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HadoopFScat
{
    public static void main(String[] args) throws Exception
    {
        // HDFS URI of the file to display, e.g. /user/input/file.txt
        String uri = args[0];
        Configuration conf = new Configuration();
        // Obtain a handle to the file system that holds the URI
        FileSystem fileSystem = FileSystem.get(URI.create(uri), conf);
        InputStream inputStream = null;
        try
        {
            // Open the file and copy its bytes to standard output in 4 KB buffers
            inputStream = fileSystem.open(new Path(uri));
            IOUtils.copyBytes(inputStream, System.out, 4096, false);
        }
        finally
        {
            IOUtils.closeStream(inputStream);
        }
    }
}

Download the jar file:

Download hadoop-core-1.2.1.jar, which is used to compile and execute the MapReduce program. Visit the following link to download the jar:
http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-core/1.2.1
Let us assume the downloaded folder is /home/hduser/.

Creating a directory to collect class files:

hduser@nspublin:/usr/local/hadoop/sbin$ mkdir /home/hduser/fscat

Compiling the Java file - HadoopFScat.java:

hduser@nspublin:/usr/local/hadoop/sbin$ sudo /usr/lib/jvm/java-8-oracle/bin/javac -classpath /home/hduser/hadoop-core-1.2.1.jar -d /home/hduser/fscat /home/hduser/HadoopFScat.java
hduser@nspublin:/usr/local/hadoop/sbin$ ls /home/hduser/fscat
HadoopFScat.class

Creating jar file for HadoopFScat.java:

hduser@nspublin:/usr/local/hadoop/sbin$ jar -cvf /home/hduser/fscat.jar -C /home/hduser/fscat/ .
added manifest
adding: HadoopFScat.class(in = 1224) (out= 667)(deflated 45%)

OUTPUT:

Executing jar file for HadoopFScat.java:

hduser@nspublin:/usr/local/hadoop/sbin$ hadoop jar /home/hduser/fscat.jar HadoopFScat /user/input/file.txt
16/06/08 15:29:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for
your platform... using builtin-java classes where applicable
Alzheimer's virtual reality app simulates dementia
2 June 2016, last updated at 19:13 BST
A virtual reality app has been launched to provide a sense of what it is like to live with
different forms of dementia.
A Walk Through Dementia was created by the charity Alzheimer's Research UK. It has been
welcomed by other experts in the field.
We will increasingly be asked for help by people with dementia, and having had some insight
into what may be happening for them will improve how we can help, said Tula Brannelly
from the University of Southampton.
A woman living with the condition and her husband told the Today programme why they
supported the Android app's creation.
Visitors to St Pancras International station in London can try out the app until 1700 on
Saturday 4 June.

RESULT:

Thus, a program that uses the APIs of Hadoop to display the file content of a file existing in HDFS was created and executed successfully.

Ex. No: 5b
WORD COUNT PROGRAM
Date:

AIM:

To write a word count program to demonstrate the use of Map and Reduce tasks.

PROCEDURE:

Step 1:
cs1-17@cs117-HP-Pro-3330-MT:~$ sudo su user
[sudo] password for cs1-17:
user@cs117-HP-Pro-3330-MT:/home/cs1-17$ cd\
user@cs117-HP-Pro-3330-MT:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/09/20 10:09:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop1/logs/hadoop-user-
namenode-cs117-HP-Pro-3330-MT.out
localhost: starting datanode, logging to /usr/local/hadoop1/logs/hadoop-user-datanode-
cs117-HP-Pro-3330-MT.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop1/logs/hadoop-user-
secondarynamenode-cs117-HP-Pro-3330-MT.out
16/09/20 10:10:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
Starting resourcemanager, logging to /usr/local/hadoop1/logs/yarn-user-
resourcemanager-cs117-HP-Pro-3330-MT.out
localhost: starting nodemanager, logging to /usr/local/hadoop1/logs/yarn-user-
nodemanager-cs117-HP-Pro-3330-MT.out
user@cs117-HP-Pro-3330-MT:~$ jps
9551 NodeManager
8924 NameNode
9857 Jps

9076 DataNode
9265 SecondaryNameNode
9420 ResourceManager

Step 2:

Create a directory named ip1 on the desktop. In the ip1 directory, create a two.txt file for word count purposes. Create a directory named op1 on the desktop.
user@cs117-HP-Pro-3330-MT:~$ cd /usr/local/hadoop1
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hdfs dfs -mkdir /user2
16/09/20 10:46:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hdfs dfs -put '/home/cs1-17/Desktop/ip1' /user2
16/09/20 10:48:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hdfs dfs -put '/home/cs1-17/Desktop/op1' /user2
16/09/20 11:02:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Step 3:

user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /user2/ip1 op1
16/09/20 11:02:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/20 11:02:12 INFO Configuration.deprecation: session.id is deprecated. Instead,use
dfs.metrics.session-id
16/09/20 11:02:12 INFO jvm.JvmMetrics: Initializing JVM Metrics
withprocessName=JobTracker, sessionId=
16/09/20 11:02:12 INFO input.FileInputFormat: Total input paths to process : 2
16/09/20 11:02:12 INFO mapreduce.JobSubmitter: number of splits:2

16/09/20 11:02:13 INFO mapreduce.JobSubmitter: Submitting tokens for job:


job_local1146489696_0001
16/09/20 11:02:13 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/09/20 11:02:13 INFO mapreduce.Job: Running job: job_local1146489696_0001 16/09/20
11:02:13 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/09/20 11:02:13 INFO mapred.LocalJobRunner: OutputCommitter is
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Waiting for map tasks
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Starting task:
attempt_local1146489696_0001_m_000000_0
16/09/20 11:02:13 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
16/09/20 11:02:13 INFO mapred.MapTask: Processing split:
hdfs://localhost:54310/user2/ip1/two.txt:0+42
16/09/20 11:02:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/09/20 11:02:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/09/20 11:02:13 INFO mapred.MapTask: soft limit at 83886080
16/09/20 11:02:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/09/20 11:02:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/09/20 11:02:13 INFO mapred.MapTask: Map output collector class =
org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/09/20 11:02:13 INFO mapred.LocalJobRunner:
16/09/20 11:02:13 INFO mapred.MapTask: Starting flush of map output
16/09/20 11:02:13 INFO mapred.MapTask: Spilling map output
16/09/20 11:02:13 INFO mapred.MapTask: bufstart = 0; bufend = 69; bufvoid =104857600
16/09/20 11:02:13 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend =
26214372(104857488); length = 25/6553600
16/09/20 11:02:13 INFO mapred.MapTask: Finished spill 0
16/09/20 11:02:13 INFO mapred.Task:
Task:attempt_local1146489696_0001_m_000000_0 is done. And is in the process of
committing
16/09/20 11:02:13 INFO mapred.LocalJobRunner: map
16/09/20 11:02:13 INFO mapred.Task: Task 'attempt_local1146489696_0001_m_000000_0'
done.
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Finishing task:

attempt_local1146489696_0001_m_000000_0
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Starting task:
attempt_local1146489696_0001_m_000001_0
16/09/20 11:02:13 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
16/09/20 11:02:13 INFO mapred.MapTask: Processing split:
hdfs://localhost:54310/user2/ip1/one.txt~:0+0
16/09/20 11:02:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
16/09/20 11:02:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
16/09/20 11:02:13 INFO mapred.MapTask: soft limit at 83886080
16/09/20 11:02:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
16/09/20 11:02:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
16/09/20 11:02:13 INFO mapred.MapTask: Map output collector class =
org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/09/20 11:02:13 INFO mapred.LocalJobRunner:
16/09/20 11:02:13 INFO mapred.MapTask: Starting flush of map output
16/09/20 11:02:13 INFO mapred.Task:
Task:attempt_local1146489696_0001_m_000001_0 is done. And is in the process of
committing
16/09/20 11:02:13 INFO mapred.LocalJobRunner: map
16/09/20 11:02:13 INFO mapred.Task: Task 'attempt_local1146489696_0001_m_000001_0'
done.
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1146489696_0001_m_000001_0
16/09/20 11:02:13 INFO mapred.LocalJobRunner: map task executor complete.
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Waiting for reduce tasks
16/09/20 11:02:13 INFO mapred.LocalJobRunner: Starting task:
attempt_local1146489696_0001_r_000000_0
16/09/20 11:02:13 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
16/09/20 11:02:13 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin:
org.apache.hadoop.mapreduce.task.reduce.Shuffle@b0a9ac0
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: MergerManager:
memoryLimit=333971456, maxSingleShuffleLimit=83492864,
mergeThreshold=220421168, ioSortFactor=10,
memToMemMergeOutputsThreshold=10

16/09/20 11:02:13 INFO reduce.EventFetcher:


attempt_local1146489696_0001_r_000000_0 Thread started: EventFetcher for fetching
Map Completion Events
16/09/20 11:02:13 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output ofmap
attempt_local1146489696_0001_m_000000_0 decomp: 37 len: 41 to MEMORY
16/09/20 11:02:13 INFO reduce.InMemoryMapOutput: Read 37 bytes from map-outputfor
attempt_local1146489696_0001_m_000000_0
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-outputof
size: 37, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->37
16/09/20 11:02:13 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output ofmap
attempt_local1146489696_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
16/09/20 11:02:13 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-outputfor
attempt_local1146489696_0001_m_000001_0
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-outputof
size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 37, usedMemory ->39
16/09/20 11:02:13 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
16/09/20 11:02:13 INFO mapred.LocalJobRunner: 2 / 2 copied.
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-
memory map-outputs and 0 on-disk map-outputs
16/09/20 11:02:13 INFO mapred.Merger: Merging 2 sorted segments
16/09/20 11:02:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segmentsleft
of total size: 29 bytes
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: Merged 2 segments, 39 bytes todisk
to satisfy reduce memory limit
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: Merging 1 files, 41 bytes from disk
16/09/20 11:02:13 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from
memory into reduce
16/09/20 11:02:13 INFO mapred.Merger: Merging 1 sorted segments
16/09/20 11:02:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segmentsleft
of total size: 29 bytes
16/09/20 11:02:13 INFO mapred.LocalJobRunner: 2 / 2 copied.
16/09/20 11:02:14 INFO Configuration.deprecation: mapred.skip.on is deprecated.Instead,
use mapreduce.job.skiprecords
16/09/20 11:02:14 INFO mapred.Task: Task:attempt_local1146489696_0001_r_000000_0

is done. And is in the process of committing


16/09/20 11:02:14 INFO mapreduce.Job: Job job_local1146489696_0001 running in uber mode : false
16/09/20 11:02:14 INFO mapred.LocalJobRunner: 2 / 2 copied.
16/09/20 11:02:14 INFO mapred.Task: Task attempt_local1146489696_0001_r_000000_0 is
allowed to commit now
16/09/20 11:02:14 INFO mapreduce.Job: map 100% reduce 0%
16/09/20 11:02:14 INFO output.FileOutputCommitter: Saved output of task
'attempt_local1146489696_0001_r_000000_0' to
hdfs://localhost:54310/user/user/op1/_temporary/0/task_local1146489696_0001_r_000000
16/09/20 11:02:14 INFO mapred.LocalJobRunner: reduce > reduce
16/09/20 11:02:14 INFO mapred.Task: Task 'attempt_local1146489696_0001_r_000000_0'
done.
16/09/20 11:02:14 INFO mapred.LocalJobRunner: Finishing task:
attempt_local1146489696_0001_r_000000_0
16/09/20 11:02:14 INFO mapred.LocalJobRunner: reduce task executor complete.
16/09/20 11:02:15 INFO mapreduce.Job: map 100% reduce 100%
16/09/20 11:02:15 INFO mapreduce.Job: Job job_local1146489696_0001 completed
successfully
16/09/20 11:02:15 INFO mapreduce.Job: Counters: 38
File System Counters
    FILE: Number of bytes read=812415
    FILE: Number of bytes written=1575498
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=126
    HDFS: Number of bytes written=23
    HDFS: Number of read operations=25
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=5
Map-Reduce Framework
    Map input records=5
    Map output records=7
    Map output bytes=69
    Map output materialized bytes=47
    Input split bytes=211
    Combine input records=7
    Combine output records=3
    Reduce input groups=3
    Reduce shuffle bytes=47
    Reduce input records=3
    Reduce output records=3
    Spilled Records=6
    Shuffled Maps =2
    Failed Shuffles=0
    Merged Map outputs=2
    GC time elapsed (ms)=14
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=925368320
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=42
File Output Format Counters
    Bytes Written=23

Step 4:

user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -ls op1


16/09/20 11:03:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r-- 1 user supergroup 0 2016-09-20 11:02 op1/_SUCCESS
-rw-r--r-- 1 user supergroup 23 2016-09-20 11:02 op1/part-r-00000

Step 5:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ usr/local/hadoop1/bin/hadoop fs -cat
op1/result.txt
bash: usr/local/hadoop1/bin/hadoop: No such file or directory

Step 6:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ usr/local/hadoop1/bin/hadoop fs -cat
op1/*
bash: usr/local/hadoop1/bin/hadoop: No such file or directory

Step 7:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -cat
op1/*
16/09/20 11:05:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
hello 3
helo 1
world 3

Step 8:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -cat
op1/result.txt
16/09/20 11:06:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
cat: `op1/result.txt': No such file or directory

Step 9:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs
op1/result.txt
op1/result.txt: Unknown command

Step 10:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -cat
op1/>>result.txt
16/09/20 11:06:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
cat: `op1': Is a directory

Step 11:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -cat >> op1/result.txt
bash: op1/result.txt: No such file or directory

Step 12:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
16/09/20 11:11:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
16/09/20 11:12:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop

Step 13:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ cd\
>
user@cs117-HP-Pro-3330-MT:~$

Wordcount.java

//package org.myorg;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every whitespace-separated token in each input line
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts emitted for each word to produce its total
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    // Driver: configures the job; args[0] is the input path, args[1] the output path
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
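
Note that the job transcript in Step 3 reports combiner activity (Combine input records=7, Combine output records=3) because the bundled hadoop-mapreduce-examples wordcount registers its reducer as a combiner. To get the same map-side pre-aggregation with the WordCount class above, the driver in main() would additionally need one line; a sketch, reusing the Reduce class already defined:

job.setCombinerClass(Reduce.class); // run the reducer on each mapper's output to pre-aggregate counts before the shuffle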

RESULT:

Thus, a word count program to demonstrate the use of Map and Reduce tasks was created and executed successfully.

CONTENT BEYOND SYLLABUS


Ex. No: 6
INSTALLING STORAGE CONTROLLER
Date:

AIM:

To find procedure to install storage controller and interact with it.

PROCEDURE:

Optionally, Set Quotas


The cloud administrator can set usage quotas for the vDC. In this case, we will put a limit of 10 VMs.
$ onegroup show web-dev
GROUP 100 INFORMATION
ID      : 100
NAME    : web-dev

GROUP TEMPLATE
GROUP_ADMINS="web-dev-admin"
GROUP_ADMIN_VIEWS="vdcadmin"
SUNSTONE_VIEWS="cloud"

USERS
ID
2

RESOURCE PROVIDERS
ZONE    CLUSTER
0       100

RESOURCE USAGE & QUOTAS
NUMBER OF VMS    MEMORY     CPU            VOLATILE_SIZE
0 / 10           0M / 0M    0.00 / 0.00    0M / 0M
Prepare Virtual Resources for the Users
At this point, the cloud administrator can also prepare working Templates and Images for the vDC users.
$ onetemplate chgrp ubuntu web-dev

RESULT:

Thus, the procedure to install a storage controller was executed and an interaction was done with it successfully.
