
DEPARTMENT OF INFORMATION TECHNOLOGY

Cloud & DevOps Lab


(Common to C.S.E & I.T.)

III B.TECH.- I SEMESTER


ACADEMIC YEAR : 2024-2025

Document No: MLRIT/CSE&IT/LABMANUAL/Cloud & DevOps Lab
Date of Issue: 19-02-2024
Date of Revision: 19-02-2024
Compiled by: D. Sandeep, Asst. Prof., Dept. of I.T.
Authorized by: Dr. N. V. Rajashekar Reddy
Verified by: Dr. N. V. Rajashekar Reddy, HOD (IT)
Institute Vision

Promote academic excellence, research, innovation, and entrepreneurial skills to produce graduates with human values and leadership qualities to serve the nation.

Institute Mission

Provide student-centric education and training on cutting-edge technologies to make the students globally competitive and socially responsible citizens. Create an environment to strengthen research, innovation, and entrepreneurship to solve societal problems.

Department of I.T Vision

To build an IT Department with commitment towards continuous improvement that adapts swiftly to 21st-century challenges by developing professionals with robust technical and research backgrounds.

Department of I.T Mission

M1: To provide quality Teaching Learning environment and make students proficient in
both theoretical and applied foundations of Information Technology.

M2: Create highly skilled IT engineers, capable of doing research and also develop
solutions for the betterment of the nation.

M3: Instill professional and ethical values among students.

M4: To develop entrepreneurial skills in students and also motivate them towards pursuing higher studies.
Program Educational Objectives (PEOs)

PEO 1: Be successfully employed as a Software Engineer in the field of Information Technology.

PEO 2: Be a successful entrepreneur and assume leadership positions and responsibility within an organization.

PEO 3: Progress through advanced degree or certificate programs in engineering, business, and other professionally related fields.

PROGRAM OUTCOMES (POs):
Engineering Graduates will be able to:

PO1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals,
and an engineering specialization to the solution of complex engineering problems.

PO2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.

PO3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the public
health and safety, and the cultural, societal, and environmental considerations.

PO4. Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.

PO5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations.

PO6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.

PO7. Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.

PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of
the engineering practice.

PO9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.

PO10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and design
documentation, make effective presentations, and give and receive clear instructions.

PO11. Project management and finance: Demonstrate knowledge and understanding of the engineering
and management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.

PO12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

PROGRAM SPECIFIC OUTCOMES (PSOs):

PSO1: Apply current technical concepts and practices in the core Information Technology areas of Cloud Computing, Big Data, Mobile Application Development, and Internet of Things.

PSO2: Use appropriate techniques, modern programming languages, and tools for quality software development.

Course Overview:
The course is designed to give an introduction to Cloud & DevOps and their real-time usage in practical applications.

Course Objectives:
The course should enable the students to:
1. Apply the concepts of AWS and its cloud services.
2. Deploy and use virtual instances
3. Implement continuous integration using AWS CodeBuild.
4. Implement end-to-end continuous integration and continuous deployment (CI/CD) using
AWS CodePipeline.

Course Outcomes:
At the end of the course, the student will be able to:
1. Deploy secured virtual instances in Amazon AWS.
2. Deploy various cloud services like S3 and databases.
3. Demonstrate proficiency in basic Git and GitHub commands.
4. Deploy automated applications.

Course Articulation Matrix:

COs/POs   PO3   PO5   PO11   PSO1   PSO2
CO1        3     3     2      3
CO2        3     3     3      3
CO3        3     3     2      3      2
CO4        3     3     2      3      2
AVG        3     3     2      3      2
Justifications for CO-PO mapping

Mapping                 Level   Justification
CO1 - PO3, PO5            3     By mapping CO1 to PO3 and PO5, which are highly related to the course, the student can deploy cloud services.
CO2 - PO3, PO5            3     By mapping CO2 to PO3 and PO5, which are highly related to the course (PSO1 is also highly related), the student can analyze and use appropriate tools for cloud applications.
CO4 - PO3, PO5, PO11      2     By mapping CO4 to PO3, PO5, and PO11, which are supportively related to the course, the student learns how to store projects in the cloud.

PREFACE

Cloud & DevOps Lab is one of the important subjects included in the third year. While doing the Cloud & DevOps Lab, students must grasp aspects of Amazon Web Services and DevOps tools to become familiar with cloud computing services. By understanding the concepts of the Cloud & DevOps Lab, students will be able to make use of the available services for emerging technologies.

Students will be in a position to grasp the above aspects while doing the lab practicals defined in this manual, through four steps:

a. Inculcate the concepts of cloud computing.
b. Learn the concepts of AWS and its cloud services.
c. Deploy and use virtual instances.
d. Implement DevOps tools in the AWS cloud.

This manual is a collective effort of the faculty teaching the third-year Cloud & DevOps Lab.

This manual will need constant upgradation based on student feedback and changes in the syllabus.

HOD (IT) PRINCIPAL

Introduction To Cloud & DevOps Lab

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. A Cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resource(s) based on service-level agreements established through negotiation between the service provider and consumers. Cloud computing provides a shared pool of resources, including data storage space, networks, computer processing power, and specialized corporate and user applications.

Architecture

Cloud Service Models

 Cloud Software as a Service (SaaS)
 Cloud Platform as a Service (PaaS)
 Cloud Infrastructure as a Service (IaaS)

Cloud Deployment Models:

 Public
 Private
 Community Cloud
 Hybrid Cloud

DevOps is the combination of cultural philosophies, practices, and tools that increases an
organization’s ability to deliver applications and services at high velocity: evolving and
improving products at a faster pace than organizations using traditional software
development and infrastructure management processes. This speed enables organizations
to better serve their customers and compete more effectively in the market.

Under a DevOps model, development and operations teams are no longer “siloed.”
Sometimes, these two teams are merged into a single team where the engineers work across
the entire application lifecycle, from development and test to deployment to operations,
and develop a range of skills not limited to a single function. In some DevOps models,
quality assurance and security teams may also become more tightly integrated with
development and operations and throughout the application lifecycle. When security is the
focus of everyone on a DevOps team, this is sometimes referred to as DevSecOps.
These teams use practices to automate processes that historically have been manual and
slow. They use a technology stack and tooling which help them operate and evolve
applications quickly and reliably. These tools also help engineers independently accomplish tasks (for example, deploying code or provisioning infrastructure) that normally would have required help from other teams, and this further increases a team's velocity.

Department of Information Technology


LAB CODE
1. Students should report to the concerned lab as per the timetable.

2. Students who turn up late to the labs will in no case be permitted to do the program scheduled for the day.

3. Students are required to prepare thoroughly to perform the experiment before coming to the laboratory.

4. Students should bring A4 sheets along with the lab evaluation sheet to the lab; the day's experiment should be written and verified by the faculty.

5. In addition to the scheduled experiment, every student needs to write a problem statement specified by the faculty, related to the scheduled lab concepts discussed.

6. Faculty will prepare 4-5 problem statements, so that every student writes one of the problem statements as specified by the faculty.

7. The student will be permitted to use the system only after both the scheduled experiment and the given problem statement have been verified by the faculty.

8. Students need to execute the lab programs and get the outputs verified by the faculty with the concerned test cases.

9. Once the execution is done, students should upload all their programs into the student corner of the MLRIT website.

10. Finally, the student needs to get his/her day-to-day evaluation sheet verified by the concerned faculty.

11. Students should maintain all the A4 sheets related to the lab as an observation file for the concerned lab.

12. For each lab, they need to maintain the observation file, printouts, and lab evaluation sheet to get the completion signature from the faculty.
INDEX

Exp. No.  Name of the Experiment
1.  Create Amazon AWS EC2 Linux instance with conceptual understanding of SSH client software protocol and keys.
2.  Create Amazon AWS EC2 Windows server instance with conceptual understanding of RDP (Remote Desktop Protocol).
3.  Create cloud storage Bucket using Amazon Simple Storage Service (S3). Perform the following operations: (a) create a folder within an S3 bucket; (b) upload content to S3; (c) change permissions to allow public access of contents; (d) set metadata on an S3 bucket; (e) delete an S3 bucket and its content.
4.  Launch and connect to an Amazon Relational Database Service (RDS) instance, using any one of the MySQL, Oracle, SQL Server, and PostgreSQL database engines.
5.  Study and practice Git and GitHub commands.
6.  Install and configure Jenkins to build and deploy Java or web applications.
7.  Install and configure the Docker tool and perform Docker commands for content management.
8.  Create a Docker image and push the Docker image into AWS ECR.
9.  Create repositories, clone, and push code changes using AWS CodeCommit.
10. Implement continuous integration using AWS CodeBuild.
11. Automate application deployment into EC2 using AWS CodeDeploy.
12. Implement CI/CD using AWS CodePipeline to automate the build, test, and deployment phases.
Experiment No. 1

Objective: Create Amazon AWS EC2 Linux instance with conceptual understanding of
SSH client software protocol and keys.

Requirement: Internet

Theory: We will create a developer account on Amazon (aws.amazon.com). We will work on several exercises to explore the features of the Amazon cloud and understand how it can support the existing IT infrastructure you have. We will also work on a complete example of collecting live Twitter data and performing a simple analysis on the data.

Preliminaries:
1. Create an AWS developer account at http://aws.amazon.com
2. Update your credits.
3. Navigate the AWS console. Browse through the wide range of infrastructure services offered by AWS.
4. Create an Amazon key pair. This is the key pair we will use for accessing applications/services on the cloud. Call it Richs2014 or something you will remember. This is like the key to your safety box; don't lose it. Store Richs2014.pem and the equivalent private key for PuTTY, Richs2014.ppk, in a safe location.
5. Identify the credentials of your Amazon account; just be knowledgeable about where to locate them when you need them to authenticate yourself/your application using these credentials. https://console.aws.amazon.com/iam/home?#security_credential
6. Identity and Access Management (IAM) is an important service. Read http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_Introduction.html

Exercise 1: Launch an EC2 instance:

1. Click on Services and choose EC2 from the services dashboard. Study the various items and click on the Launch button.

Step 1: Choose an AMI (Amazon Machine Image) for the instance you want: this can be a single-CPU machine or a sophisticated cluster of powerful processors. The instances can be from the Amazon Marketplace, Community (contributed) AMIs, or My AMIs (that I may have created, e.g., RichsAMI). Choose a "free-tier" eligible Linux AMI.
Step 2: Choose an instance type: small, large, micro, medium, etc.
Step 3: Review and launch. We are done; we have a Linux machine.
Step 4: Create a new key pair to access the instance that will be created. We will be accessing the instance we create using a public-private key pair. Download the key pair and store it.
Launch the instance. Once it is ready, you will use its public IP, the key pair we saved, and the SSH protocol to access the instance on the cloud.
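Connecting over SSH once the instance is running looks like the sketch below (a minimal example; the key file name, the default user ec2-user, and the public IP are assumptions: Amazon Linux AMIs use ec2-user, while other AMIs may use a different default user).

# Restrict permissions on the private key, or the SSH client will refuse to use it
chmod 400 Richs2014.pem

# Connect to the instance's public IP as the default Amazon Linux user (assumed values)
ssh -i Richs2014.pem ec2-user@<public-ip>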

Hosting a static web site on Amazon aws.

Overview: Simply load the web site components into the appropriate S3 folders/directories created. Configure a few parameters and a policy file, and the web site is all set to go!
Step 1:
1. When you host a website on Amazon S3, AWS assigns your website a URL based on
the name of the storage location you create in Amazon S3 to hold the website files
(called an S3 bucket) and the geographical region where you created the bucket.
2. For example, if you create a bucket called richs on the east coast of the United States
and use it to host your website, the default URL will be http://richs.s3-website-us-east-
1.amazonaws.com/.
3. We will not use Route 53 and CloudFront for this proof-of-concept implementation.

Step 2: Enable logging and redirection (note: for some reason this collides with richs.com)
1. In the logging window, enter logs.richs.com and /root in the next box; right-click on www.richs.com properties and redirect it to richs.com.
2. In the richs.com bucket, enable web hosting and enter index.html as the index document. If you have an error document, you can add that in the next box.
3. Click on the endpoint address that shows up in the properties window of richs.com.
4. You should be able to see the static application.
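For the site's objects to be publicly readable, the bucket also needs a policy along these lines (a typical public-read policy sketch, assuming the bucket is named richs.com):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::richs.com/*"
    }
  ]
}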

Experiment No. 2

Objective: Create Amazon AWS EC2 Windows server instance with conceptual
understanding of RDP (Remote Desktop Protocol).

Requirement: Internet

Theory: An EC2 instance is a virtual server in Amazon Web Services terminology. EC2 stands for Elastic Compute Cloud. It is a web service where an AWS subscriber can request and provision a compute server in the AWS cloud.

An on-demand EC2 instance is an offering from AWS where the subscriber/user can
rent the virtual server per hour and use it to deploy his/her own applications.

The instance will be charged per hour with different rates based on the type of the
instance chosen. AWS provides multiple instance types for the respective business needs
of the user.

Thus, you can rent an instance based on your own CPU and memory requirements and use it as long as you want. You can terminate the instance when it's no longer used and save on costs. This is the most striking advantage of an on-demand instance: you can drastically save on your CAPEX.

Step 1:

 Login to your AWS account and go to the AWS Services tab at the top left corner.
 Here, you will see all of the AWS Services categorized as per their area, viz. Compute, Storage, Database, etc. For creating an EC2 instance, we have to choose Compute → EC2 as in the next step.

 Open all the services and click on EC2 under Compute services. This will launch
the dashboard of EC2.

Here is the EC2 dashboard. Here you will get all the information in gist about the AWS
EC2 resources running.

Step 2: On the top right corner of the EC2 dashboard, choose the AWS Region in which
you want to provision the EC2 server.

Here we are selecting N. Virginia. AWS provides 10 Regions all over the globe.

Step 3: In this step

 Once your desired Region is selected, come back to the EC2 Dashboard.
 Click on 'Launch Instance' button in the section of Create Instance (as shown
below).

Choose AMI

Step 4: In this step we will do,

1. You will be asked to choose an AMI of your choice. (An AMI is an Amazon
Machine Image. It is a template basically of an Operating System platform which
you can use as a base to create your instance). Once you launch an EC2 instance
from your preferred AMI, the instance will automatically be booted with the
desired OS. (We will see more about AMIs in the coming part of the tutorial).
2. Here we are choosing the default Amazon Linux (64 bit) AMI.

Choose EC2 Instance Types

Step 5: In the next step, you have to choose the type of instance you require based on
your business needs.

1. We will choose t2.micro instance type, which is a 1vCPU and 1GB memory
server offered by AWS.
2. Click on "Configure Instance Details" for further configurations

 In the next step of the wizard, enter details like no. of instances you want to
launch at a time.
 Here we are launching one instance.

Step 6: In the next step you will be asked to create a key pair to log in to your instance. A key pair is a set of public-private keys.

AWS stores the public key in the instance, and you are asked to download the private key. Make sure you download the key and keep it safe and secured; if it is lost you cannot download it again.

1. Create a new key pair
2. Give a name to your key
3. Download and save it in your secured folder

 When you download your key, you can open and have a look at your RSA private
key.

Step 7: Once you are done downloading and saving your key, launch your instance.

 Click on the 'Instances' option on the left pane where you can see the status of the instance as 'Pending' for a brief while.
 Once your instance is up and running, you can see its status as 'Running' now.
 Note that the instance has received a Private IP from the pool of AWS.

To launch, configure, and connect to a Windows Virtual Machine using Amazon


Elastic Compute Cloud (EC2). Amazon EC2 is the Amazon Web Service you use
to create and run virtual machines in the cloud (we call these virtual machines
'instances'). Everything done in this tutorial is free tier eligible.

Step 8: Create and Configure Your Virtual Machine
a. You are now in the Amazon EC2 console. Click Launch Instance

With Amazon EC2, you can specify the software and specifications of the instance you
want to use. In this screen, you are shown options to choose an Amazon Machine Image
(AMI), which is a template that contains the software configuration required to launch
your instance.

You will now choose an instance type. Instance types comprise varying combinations
of CPU, memory, storage, and networking capacity so you can choose the appropriate mix
for your applications. Select the default option of t2.micro - this instance type is covered
within the free tier. Then click Review and Launch at the bottom of the page.

You can review the options that are selected for your instance which include AMI Details,
Instance Type, Security Groups, Instance Details, Storage, and Tags. You can leave these
at the defaults and click Launch from the bottom of the page.
Note: For detailed information on your options, see Launching an Instance

Step 9: Create a Key Pair and Launch Your Instance


To connect to your virtual machine, you need a key pair. A key pair is used to log into
your instance (just like your house key is used to enter your home).

a. In the popover, select Create a new key pair and name it MyFirstKey. Then
click Download Key Pair. MyFirstKey.pem will be downloaded to your computer --
make sure to save this key pair in a safe location on your computer.

 Windows users: We recommend saving your key pair in your user directory in a sub-
directory called .ssh (ex.C:\user\{yourusername}\.ssh\MyFirstKey.pem).
 Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory from
your home directory (ex.~/.ssh/MyFirstKey.pem).

Note: If you don't remember where you store your SSH private key (the file you are
downloading), you won't be able to connect to your virtual machine.

b. After you have downloaded and saved your key pair, click Launch Instance to start
your Windows Server instance.
Note: It can take a few minutes to launch your instance.

On the next screen, click View Instances to view the instance you have just created
and see its status.

Step 10: Connect to Your Instance
After launching your instance, it's time to retrieve the administrator password and
connect to it using a Remote Desktop Protocol (RDP) client.

a. Select the Windows Server instance you just created and click Connect

b. In order to connect to your Windows virtual machine instance, you will need a user
name and password:

 The User name defaults to Administrator
 To receive your password, click Get Password

In order to retrieve the password, you will need to locate the key pair you created in Step 9. Click Choose File and browse to the directory where you stored MyFirstKey.pem. Your key pair will surface in the text box. Click Decrypt Password.

You now have a decrypted password for your Windows Server instance. Make sure
to save this information in a secure location. It is your Windows Server admin login
credentials.
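The same password retrieval can also be done from the AWS CLI; a minimal sketch (the instance ID and key path are placeholders; the command returns the encrypted password data and decrypts it with the key supplied via --priv-launch-key):

# Retrieve and decrypt the Windows administrator password (assumed instance ID and key path)
aws ec2 get-password-data --instance-id <instance-id> --priv-launch-key .ssh/MyFirstKey.pem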

Click Download Remote Desktop File and open the file.

When prompted to log in to the instance, use the user name and password you generated to connect to your virtual machine.
Note: When you complete this step, you might get a warning that the security
certificate could not be authenticated. Simply choose yes and proceed to complete
the connection to your Windows Server instance

Step 11: Terminate Your Windows VM
You can easily terminate the Windows Server VM from the Amazon EC2 console. In
fact, it is a best practice to terminate instances you are no longer using so you don’t keep
getting charged for them.

Back on the EC2 Console, select the box next to the instance you created. Then click
the Actions button, navigate to Instance State, and click Terminate.

You will be asked to confirm your termination; select Yes, Terminate.


Note: This process can take several seconds to complete. Once your instance has been terminated, the Instance State will change to terminated on your EC2 Console.
Experiment No. 3
Objective: Create cloud storage Bucket using Amazon Simple Storage Service (S3).
Perform the following operations:
a. Create a folder within an S3 Bucket
b. Upload content to S3
c. Change permissions to allow public access of contents.
d. Set Meta Data on an S3 Bucket
e. Delete an S3 Bucket and its content.

Requirement: Internet

Theory: Before you can upload data to Amazon S3, you must create a bucket in one of
the AWS Regions to store your data in. After you create a bucket, you can upload an
unlimited number of data objects to the bucket.

A bucket is owned by the AWS account that created it. By default, you can create up to
100 buckets in each of your AWS accounts. If you need additional buckets, you can
increase your account bucket limit to a maximum of 1,000 buckets by submitting a
service limit increase. For information about how to increase your bucket limit, see AWS
Service Limits in the AWS General Reference.

Buckets have configuration properties, including their geographical region, who has
access to the objects in the bucket, and other metadata.

a. To create an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.

3. On the Name and region page, type a name for your bucket and choose the AWS Region where you want the bucket to reside. Complete the fields on this page as follows. For Bucket name, type a unique DNS-compliant name for your new bucket. Follow these naming guidelines:
a. The name must be unique across all existing bucket names in Amazon S3.
b. The name must not contain uppercase characters.
c. The name must start with a lowercase letter or number.
d. The name must be between 3 and 63 characters long.
e. After you create the bucket you cannot change the name, so choose wisely.
f. Choose a bucket name that reflects the objects in the bucket, because the bucket name is visible in the URL that points to the objects that you're going to put in your bucket.

For information about naming buckets, see Rules for Bucket Naming in
the Amazon Simple Storage Service Developer Guide.

4. For Region, choose the AWS Region where you want the bucket to reside. Choose a Region close to you to minimize latency and costs, or to address regulatory requirements. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see Regions and Endpoints in the Amazon Web Services General Reference.
5. (Optional) If you have already set up a bucket that has the same settings that you want to use for the new bucket that you want to create, you can set it up quickly by choosing Copy settings from an existing bucket, and then choosing the bucket whose settings you want to copy.

The settings for the following bucket properties are copied: versioning, tags, and
logging.

6. Do one of the following:


a. If you copied settings from another bucket, choose Create. You're done,
so skip the following steps.
b. If not, choose Next.

On the Configure options page, you can configure the following properties and Amazon
CloudWatch metrics for the bucket. Or, you can configure these properties and
CloudWatch metrics later, after you create the bucket.

a. Versioning

Select "Keep all versions of an object in the same bucket" to enable object versioning for the bucket.

b. Server access logging

Select "Log requests for access to your bucket" to enable server access logging on the bucket. Server access logging provides detailed records for the requests that are made to your bucket.

B: Upload content to S3

Uploading Files and Folders by Using Drag and Drop

If you are using the Chrome or Firefox browsers, you can choose the folders and files to
upload, and then drag and drop them into the destination bucket. Dragging and dropping
is the only way that you can upload folders.

To upload folders and files to an S3 bucket by using drag and drop

1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to upload
your folders or files to.

3. In a window other than the console window, select the files and folders that you
want to upload. Then drag and drop your selections into the console window that
lists the objects in the destination bucket.

In the Upload dialog box, do one of the following:

a. Drag and drop more files and folders to the console window that displays the
Upload dialog box. To add more files, you can also choose Add more files. This
option works only for files, not folders.
b. To immediately upload the listed files and folders without granting or removing permissions for specific users or setting public permissions for all of the files that you're uploading, choose Upload.

C: Change permissions to allow public access of contents.

To set permissions or properties for the files that you are uploading, choose Next.

On the Set Permissions page, under Manage users you can change the permissions for the AWS account owner. The owner refers to the AWS account root user, and not an AWS Identity and Access Management (IAM) user.

Choose Add account to grant access to another AWS account.

Under Manage public permissions you can grant read access to your objects to the general public (everyone in the world), for all of the files that you're uploading. Granting public read access is applicable to a small subset of use cases, such as when buckets are used for websites. We recommend that you do not change the default setting of Do not grant public read access to this object(s). You can always make changes to object permissions after you upload the object.

D: Set Meta Data on an S3 Bucket

Adding System-Defined Metadata to an S3 Object

You can configure some system metadata for an S3 object; some system-defined metadata values can be modified, while others cannot.

To add system metadata to an object

1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.

3. In the Name list, choose the name of the object that you want to add metadata to.

4. Choose Properties, and then choose Metadata.

5. Choose Add Metadata, and then choose a key from the Select a key menu.

6. Depending on which key you chose, choose a value from the Select a value menu
or type a value.

7. Choose Save.

E: To delete an S3 bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the bucket icon next to the name of the bucket
that you want to delete and then choose Delete bucket.

In the Delete bucket dialog box, type the name of the bucket that you want to delete for
confirmation, and then choose Confirm.

Note

The text in the dialog box changes depending on whether the bucket is empty, is used for
a static website, or is used for ELB access logs.
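The same bucket lifecycle can also be exercised from the AWS CLI; a minimal sketch, assuming a bucket name of my-lab-bucket-2024 and a local file index.html (both are illustrative choices):

# Create the bucket (bucket names must be globally unique)
aws s3 mb s3://my-lab-bucket-2024

# "Create a folder" by uploading an object under a key prefix
aws s3 cp index.html s3://my-lab-bucket-2024/site/index.html

# List the bucket contents
aws s3 ls s3://my-lab-bucket-2024 --recursive

# Delete the bucket together with its contents
aws s3 rb s3://my-lab-bucket-2024 --force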

Experiment No. 4

Objective: Launch and connect to an Amazon Relational Database Service (RDS) instance; use any one of the MySQL, Oracle, SQL Server, and PostgreSQL database engines.

Requirement: Internet

Theory: Launch and connect to an Amazon Relational Database Service (RDS) instance, using any one of the MySQL, Oracle, SQL Server, and PostgreSQL database engines.

After you create an Amazon RDS DB instance, you can use any standard SQL client
application to connect to the DB instance. To connect to an Amazon RDS DB instance,
follow the instructions for your specific database engine.

 Connecting to a DB Instance Running the MariaDB Database Engine
 Connecting to a DB Instance Running the Microsoft SQL Server Database Engine
 Connecting to a DB Instance Running the MySQL Database Engine
 Connecting to a DB Instance Running the Oracle Database Engine
 Connecting to a DB Instance Running the PostgreSQL Database Engine

Before you can connect to a DB instance running the MySQL database engine, you must
create a DB instance. Once Amazon RDS provisions your DB instance, you can use any
standard MySQL client application or utility to connect to the instance. In the connection
string, you specify the DNS address from the DB instance endpoint as the host parameter,
and specify the port number from the DB instance endpoint as the port parameter.

To authenticate to your RDS DB instance, you can use one of the authentication methods for MySQL, or IAM database authentication.

You can use the AWS Management Console, the AWS CLI describe-db-instances command, or the Amazon RDS API DescribeDBInstances action to list the details of an Amazon RDS DB instance, including its endpoint.

To find the endpoint for a MySQL DB instance in the AWS Management Console:

1. Open the RDS console and then choose Databases to display a list of your DB
instances.

2. Choose the MySQL DB instance name to display its details.
3. On the Connectivity tab, copy the endpoint. Also, note the port number. You
need both the endpoint and the port number to connect to the DB instance.

If an endpoint value is mysql-instance1.123456789012.us-east-1.rds.amazonaws.com and the port value is 3306, then you would specify the following values in a MySQL connection string:

 For host or host name, specify mysql-instance1.123456789012.us-east-1.rds.amazonaws.com
 For port, specify 3306

You can connect to an Amazon RDS MySQL DB instance by using tools like the
MySQL command line utility. For more information on using the MySQL client, go
to mysql - The MySQL Command Line Tool in the MySQL documentation. One GUI-
based application you can use to connect is MySQL Workbench. For more information,
go to the Download MySQL Workbench page. For information about installing MySQL
(including the MySQL client), see Installing and Upgrading MySQL.

Two common causes of connection failures to a new DB instance are:

 The DB instance was created using a security group that does not authorize connections from the device or Amazon EC2 instance where the MySQL application or utility is running. If the DB instance was created in a VPC, it must have a VPC security group that authorizes the connections. If the DB instance was created outside of a VPC, it must have a DB security group that authorizes the connections.
 The DB instance was created using the default port of 3306, and your company
has firewall rules blocking connections to that port from devices in your company
network. To fix this failure, recreate the instance with a different port.

You can use SSL encryption on connections to an Amazon RDS MySQL DB instance.

Connecting from the MySQL Client

To connect to a DB instance using the MySQL client, type the following command at a command prompt. For the -h parameter, substitute the DNS name (endpoint) of your DB instance. For the -P parameter, substitute the port of your DB instance. Enter the master user password when prompted.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com -P 3306 -u mymasteruser -p

After you enter the password for the user, you will see output similar to the following.

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 350
Server version: 5.6.40-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>
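From this prompt, a short session can verify that the instance accepts reads and writes end to end (a minimal sketch; the database and table names are our own illustrative choices):

-- Create a scratch database and table, insert a row, and read it back
CREATE DATABASE labdb;
USE labdb;
CREATE TABLE students (id INT PRIMARY KEY, name VARCHAR(50));
INSERT INTO students VALUES (1, 'test');
SELECT * FROM students;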

Connecting with SSL

Amazon RDS creates an SSL certificate for your DB instance when the instance is
created. If you enable SSL certificate verification, then the SSL certificate includes the
DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against
spoofing attacks. To connect to your DB instance using SSL, you can use native
password authentication or IAM database authentication. To connect to your DB instance
using native password authentication, you can follow these steps:

To connect to a DB instance with SSL using the MySQL client

1. Download a root certificate that works for all regions from the AWS documentation.
2. Enter the following command at a command prompt to connect to a DB instance
with SSL using the MySQL client. For the -h parameter, substitute the DNS name
for your DB instance. For the --ssl-ca parameter, substitute the SSL certificate file
name as appropriate.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=rds-ca-2015-root.pem -p

3. You can require that the SSL connection verifies the DB instance endpoint against the endpoint in the SSL certificate.

For MySQL 5.7 and later:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=rds-ca-2015-root.pem --ssl-mode=VERIFY_IDENTITY -p

For MySQL 5.6 and earlier:

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=rds-ca-2015-root.pem --ssl-verify-server-cert -p

4. Enter the master user password when prompted.

You will see output similar to the following

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 350
Server version: 5.6.40-log MySQL Community Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

Connecting from MySQL Workbench

To connect from MySQL Workbench

1. Download and install MySQL Workbench at Download MySQL Workbench.
2. Open MySQL Workbench.
3. From Database, choose Manage Connections.
4. In the Manage Server Connections window, choose New.
5. In the Connect to Database window, enter the following information:
 Stored Connection – Enter a name for the connection, such as MyDB.
 Hostname – Enter the DB instance endpoint.
 Port – Enter the port used by the DB instance.
 Username – Enter the username of a valid database user, such as the master user.
 Password – Optionally, choose Store in Vault and then enter and save the password for the user.

The window looks similar to the following:

6. Optionally, choose Test Connection to confirm that the connection to the DB instance is successful.
7. Choose Close.
8. From Database, choose Connect to Database.
9. From Stored Connection, choose your connection.
10. Choose OK.

Maximum MySQL connections

The maximum number of connections allowed to an Amazon RDS MySQL DB instance is based on the amount of memory available for the DB instance class of the DB instance. A DB instance class with more memory available will allow a larger number of connections.

The connection limit for a DB instance is set by default to the maximum for the DB instance
class for the DB instance. You can limit the number of concurrent connections to any value
up to the maximum number of connections allowed using the max_connections parameter
in the parameter group for the DB instance.

You can retrieve the maximum number of connections allowed for an Amazon RDS
MySQL DB instance by executing the following query on your DB instance:

SELECT @@max_connections;

You can retrieve the number of active connections to an Amazon RDS MySQL DB
instance by executing the following query on your DB instance:

SHOW STATUS WHERE `variable_name` = 'Threads_connected';

Experiment No. 5

Objective: Study and practice Git and GitHub commands

Requirement: GitHub account, Internet

Theory: Git is a distributed version control system that helps you track changes in your
code and collaborate with others. GitHub is a web-based platform that provides hosting
for Git repositories and additional collaboration features.

Step 1: Set Up Your GitHub.com Account


For this class, we will use a public account on GitHub.com. We do this for a few
reasons:
• We don’t want you to "practice" in repositories that contain real code.
• We are going to break some things so we can teach you how to fix them. (therefore,
refer to #1 above)
If you already have a github.com account you can skip this step. Otherwise, you can set
up your free account by following these steps:
1. Access GitHub.com and click Sign up.
2. Choose the free account.
3. You will receive a verification email at the address provided.
4. Click the link to complete the verification process.

Step 2: Install Git


Git is an open source version control application. You will need Git installed for this
class.
You may already have Git installed so let’s check! Open Terminal if you are on a Mac, or
Powershell if you are on a Windows machine, and type:
$ git --version
You should see something like this:
$ git --version
git version 2.6.3
Anything over 1.9.5 will work for this class!

Downloading and Installing Git

If you don’t already have Git installed, you have two options:
1. Download and install Git at www.git-scm.com.
2. Install it as part of the GitHub Desktop package found at desktop.github.com.

If you need additional assistance installing Git, you can find more information in the ProGit chapter on installing Git: http://git-scm.com/book/en/v2/Getting-Started-Installing-Git.
Git Commands:
1. git init: Initialize a new Git repository in the current directory.
2. git clone [repository URL]: Clone a remote repository into a new directory.
3. git add [file]: Add file(s) to the staging area.
4. git commit -m "[commit message]": Record changes to the repository with
a message describing the changes.
5. git status: Show the status of changes in the working directory.
6. git diff: Show the differences between the working directory, staging area, and the last
commit.
7. git log: Show commit logs.
8. git branch: List, create, or delete branches.
9. git checkout [branch name]: Switch to a different branch.
10. git merge [branch name]: Merge changes from one branch into another.
11. git pull: Fetch from and integrate with another repository or a local branch.
12. git push: Update remote references along with associated objects.
13. git remote add [name] [url]: Add a new remote repository.
14. git remote -v: List all remote repositories.
15. git rm [file]: Remove files from the working tree and from the index.

GitHub Commands:
1. git remote add origin [repository URL]: Add a remote repository on GitHub.
2. git push -u origin [branch name]: Push a branch to GitHub.
3. git pull origin [branch name]: Pull changes from GitHub to your local repository.
4. git branch -r: List all remote branches.
5. git checkout -b [branch name]: Create a new branch and switch to it.
6. git clone -b [branch name] [repository URL]: Clone a specific branch from GitHub.
7. git fetch: Download objects and refs from another repository.
8. git merge origin/[branch name]: Merge changes from a remote branch into your
current branch.
9. git push origin --delete [branch name]: Delete a branch on GitHub.
10. git config --global user.name "[name]": Set your username globally for
GitHub.
11. git config --global user.email "[email]": Set your email globally for
GitHub.
12. git tag [tag name]: Create a lightweight tag.
13. git push --tags: Push all tags to GitHub.
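Putting several of these commands together, a typical first-publish workflow looks like the sketch below (the repository URL, username, and branch name are placeholders):

# Initialize a repository, make the first commit, and publish it to GitHub
git init
git add .
git commit -m "initial commit"
git remote add origin https://github.com/<username>/<repo>.git
# Assumes the default branch is named main
git push -u origin main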

Experiment No. 6

Objective: Install and configure Jenkins to build and deploy Java or Web Applications

Requirement: Jenkins Tool, Internet

Theory:
1. Install Java Development Kit (JDK)
 Download JDK 8 and choose Windows 32-bit or 64-bit according to your system configuration. Click on "Accept the license agreement."

2. Set the Path for the Environment Variable for JDK


 Go to System Properties. Under the "Advanced" tab, select "Environment Variables."
 Under system variables, select “new.” Then copy the path of the JDK folder and
paste it in the corresponding value field. Similarly, do this for JRE.
 Under system variables, set up a bin folder for JDK in PATH variables.
 Go to the command prompt and type the following to check if Java has been
successfully installed:

C:\Users\Simplilearn>java -version

3. Download and Install Jenkins

 Download Jenkins. Under LTS, click on windows.


 After the file is downloaded, unzip it. Click on the folder and install it. Select "Finish" once done.

4. Run Jenkins on Localhost 8080

 Once Jenkins is installed, explore it. Open the web browser and type
“localhost:8080”.
 Enter the credentials and log in. If you install Jenkins for the first time, the
dashboard will ask you to install the recommended plugins. Install all the
recommended plugins.

5. Jenkins Server Interface

 New Item allows you to create a new project.
 Build History shows the status of your builds.
 Manage System deals with the various configurations of the system.

6. Build and Run a Job on Jenkins
 Select a new item (Name - Jenkins_demo). Choose a freestyle project and click
Ok.
 Under the General tab, give a description like "This is my first Jenkins job." Under the "Build" section, select "Add build step" and then click on "Execute Windows batch command."
 In the command box, type the following: echo “Hello... This is my first Jenkins
Demo: %date%: %time%”. Click on apply and then save.

 Select Build Now. You can see a build history entry has been created. Click on that. In the console output, you can see the output of the first Jenkins job with time and date.

Congratulations, you’ve just installed Jenkins on your Windows system!
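Beyond the freestyle echo job, building an actual Java application is usually expressed as a pipeline. Below is a minimal declarative Jenkinsfile sketch; it assumes a Maven project on a Windows agent, and the stage names and mvn command are illustrative additions rather than part of the original exercise:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Use 'sh' instead of 'bat' on Linux agents
                bat 'mvn -B clean package'
            }
        }
        stage('Archive') {
            steps {
                // Keep the built artifact with the job record
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }
    }
}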

Experiment No. 7

Objective: Install and configure Docker tool and perform Docker commands for content
management.

Requirement: Docker tool ,Internet

Theory:
Docker is a platform for developing, shipping, and running applications in containers.
Containers allow developers to package an application and its dependencies together into
a single unit, ensuring consistency across different environments, from development to
production.

Step 1: Downloading Docker

The first place to start is the https://www.docker.com/get-started/ from where we can


download Docker Desktop.

Please note that Docker Desktop is intended only for Windows 10/11 and not for
Windows Server.

So let’s open the downloaded installer and begin the installation.

Step 2: Configuration

To run Linux on Windows, Docker requires a virtualization engine. Docker recommends


using WSL 2

Step 3: Running the installation

Click Ok, and wait a bit…

Step 4: Restart

For Docker to be able to properly register with Windows, a restart is required at this
point.

Step 5: License agreement

After the restart, Docker will start automatically and you should see the window below:

Essentially, if you are a small business or use Docker for personal use, Docker continues to remain free. However, if you are in a large organization, please get in touch with your IT department to clarify the license agreement.

Step 6: WSL 2 installation

After you accept the license terms, the Docker Desktop window will open. However, we
are not done yet. Since we have selected WSL 2 as our virtualization engine, we also
need to install it. Don’t click Restart just yet!

Follow the link in the dialog window and download WSL 2.

Open the installer.

Click on Next to begin installing the Windows Subsystem for Linux (WSL).

After a few seconds, the installation should complete. So you may click on Finish.

If you still have the Docker Desktop dialog window in the background, click on Restart.
Otherwise, just restart your computer as you normally do.

Step 7: Starting Docker Desktop

If Docker Desktop did not start on its own, simply open it from the shortcut on your
Desktop.

If you wish, you can do the initial orientation by clicking Start.

After this, your Docker Desktop screen should look like this.

Step 8: Testing Docker

Open your favorite command line tool and type in the following command:

docker run hello-world

This will download the hello-world Docker image and run it. This is just a quick test to
ensure everything is working fine.

GENERAL COMMANDS:

Start the Docker daemon
dockerd

Get help with Docker. You can also use --help on all subcommands.
docker --help

Display system-wide information
docker info

IMAGES:
Docker images are a lightweight, standalone, executable package of software that includes
everything needed to run an application: code, runtime, system tools, system libraries and
settings.

Build an Image from a Dockerfile
docker build -t <image_name> .

Build an Image from a Dockerfile without the cache
docker build -t <image_name> . --no-cache

List local images


docker images

Delete an Image
docker rmi <image_name>

Remove all unused images


docker image prune

CONTAINERS:
A container is a runtime instance of a docker image. A container will always run the same,
regardless of the infrastructure. Containers isolate software from its environment and
ensure that it works uniformly despite differences for instance between development and
staging.

Create and run a container from an image, with a custom name:
docker run --name <container_name> <image_name>

Run a container and publish its port(s) to the host:
docker run -p <host_port>:<container_port> <image_name>

Run a container in the background


docker run -d <image_name>

Start or stop an existing container:


docker start|stop <container_name> (or <container-id>)

Remove a stopped container:


docker rm <container_name>

Open a shell inside a running container:


docker exec -it <container_name> sh

Fetch and follow the logs of a container:


docker logs -f <container_name>

To inspect a running container:


docker inspect <container_name> (or <container-id>)

To list currently running containers:


docker ps

List all docker containers (running and stopped):

docker ps --all

View resource usage stats


docker container stats
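A short end-to-end session ties these commands together; a minimal sketch using the public nginx image (the container name and host port are arbitrary choices):

# Run nginx detached, mapping host port 8080 to the container's port 80
docker run -d --name web -p 8080:80 nginx

# Confirm it is running, then inspect its logs
docker ps
docker logs web

# Stop and remove the container when done
docker stop web
docker rm web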

Experiment No. 8

Objective: Create a Docker image and push Docker image into AWS ECR

Requirement: Docker tool ,Internet

Theory:
Step 1: Docker Login
Go to hub.docker.com/signup and create your account. To connect your system with your
Docker account, execute docker login in the terminal.
You will see Login succeeded prompted in the terminal.

Checking if the Docker login is successful


Once Docker is installed and configured in your system, let's move to the next section.

How to Dockerize Your Project


By Dockerize, I mean setting up your existing project with Docker and containerizing it.
Create a file named Dockerfile without any extension in the root of your project
directory. It contains the code required to build a Docker image and run the
dockerized app as a container.
If you are using VS Code, the Docker extension will come in handy.
How to Configure the Dockerfile
As a bare minimum configuration, paste the following code in the Dockerfile.

FROM node:12.17.0

WORKDIR /app

COPY package*.json ./

RUN npm install

COPY . .

ENV PORT=3000

EXPOSE 3000

CMD [ "npm", "start" ]

Before understanding these instructions, create a .dockerignore file and


add node_modules in it. It works the same as .gitignore.
Now let's go through that code:

 FROM: Sets the base image for further instructions. For the sake of simplicity, we will use an officially supported Node.js image. I am using the exact version mentioned in my package.json, so change it accordingly if you're using a different Node version.
 WORKDIR: Sets the working directory inside the image for the instructions that follow.
 COPY: Copies files and folders from source to destination in the image filesystem. We are copying package.json and package-lock.json. This command ensures that we have a list of dependencies to install in our Docker container.
 RUN: Executes the given command. As we have package.json from the previous step, we can install dependencies in our container.
 COPY: Now, we are copying everything from the project directory to our container. Since both are in the same directory, we are using . which indicates the current working directory. node_modules doesn't get copied since we have added it in .dockerignore.
 ENV: Sets an environment variable for the Docker container.
 EXPOSE: Documents the port on which we want the containerized application to listen, so it can be reached when the container runs. It doesn't need to be the same as ENV, but keeping them equal reduces complexity :)
 CMD: There can be only one CMD command in one image, which tells the container how to start the application. Notice we have passed an array with the necessary command as its elements. This is called the exec form, and it allows us to run the command without starting a shell session.

Step 2: Create a Docker Image


You use the docker build command to create a build of a Docker image. There are a bunch of parameters we can pass with the command, but the one we are going to use here is -t. This gives your image a name tag, which makes it easy to remember as well as access. There is no standardized way to name your image, but normally you would see this: your Docker username, followed by a slash (/) and the image name, with a version number separated by a colon (:).

docker build -t <name-tag> .

The second argument is the location of the Dockerfile. Since ours is in the same directory, we can put a period (.).

When you run the command, you will see that steps are being executed in the same order
as they are written in the Dockerfile. Once done, it will prompt Successfully built
<baseID> in the terminal.
You can use baseID to access the particular Docker image instead of using its name tag.
You can verify this by looking at the Images section in the Docker app. Also you can see
the local container in Containers/ Apps section.
For the time being, let's run our Docker image in our local system

docker run -p 3000:3000 <name-tag>

Step 3: Log in to your AWS account.

Step 4: Required IAM permissions for pushing a Docker image:

Step 5: Create the AWS ECR repository
In the AWS console go to the AWS ECR page. Click the “Create repository” button.

AWS ECR list all repositories page

Then choose the visibility of your repository. I leave it as "private", so it will be managed by IAM and repository policy permissions and won't be accessible to the public. Then fill in the name and click Create repository at the bottom of the page.

Create AWS ECR repository form

An empty repository has been created!

The newly created AWS ECR repository
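The same repository can also be created from the AWS CLI; a minimal sketch (the repository name and region here just mirror this tutorial's console example):

# Create a private ECR repository
aws ecr create-repository --repository-name ecr-tutorial-image --region us-east-2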

Step 6: Prepare the image to be pushed.

In this example, I will push the image of a simple Node.js app that listens on port 8080 and

displays its host/container name. The source code you can check here. The root directory

has a Dockerfile. We will use it to build an image. Before pushing the image to the

repository we need to tag it with the repository URL. In the root directory in the terminal run

commands:

# Command to build an image


$docker image build -t <IMAGE_NAME>:<IMAGE_TAG> .

# Command example with my image


$docker image build -t simple-app-image .

# Command to tag an image


$docker image tag <IMAGE_NAME>:<IMAGE_TAG>
<REPOSITORY_URI>:<IMAGE_TAG>

# Command example with my image and repository


$docker image tag simple-app-image:latest 708995052028.dkr.ecr.us-east-
2.amazonaws.com/ecr-tutorial-image:latest

The command to tag an image

The result will be like this:

Result of commands execution

Step 7: Authenticate the Docker CLI to your AWS ECR

Now we need to authenticate the Docker CLI to AWS ECR.

# Command
$aws ecr get-login-password --region <REPOSITORY_REGION> | docker login --username AWS --password-stdin <REPOSITORY_URI>

# Command example with my region and repository
$aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 708995052028.dkr.ecr.us-east-2.amazonaws.com

[Screenshot: the login command]

For this command to execute successfully, you must have your AWS credentials stored in the credentials file, and your IAM principal must have the necessary permissions. Make sure you use the right region, as using the wrong one is a common mistake.

[Screenshot: the result of the executed login command]

This command retrieves an authentication token using the GetAuthorizationToken API and then pipes (|) it to the login command of the container client, the Docker CLI in my case. The authorization token is valid for 12 hours.

Step 8: Push the image to AWS ECR

To push the image to the AWS ECR we will execute the following command:

# Command
$docker image push <REPOSITORY_URI>:<IMAGE_TAG>

# Command example with my image and repository
$docker image push 708995052028.dkr.ecr.us-east-2.amazonaws.com/ecr-tutorial-image:latest
[Screenshot: the push image command]

As you can see, to push the image I've used the tag created in Step 6 of this tutorial.

[Screenshot: the result of the executed push command]

Now the image is in the repository created in Step 5.

[Screenshot: the image in the repository]

Experiment No. 9

Objective: Create Repositories, Cloning, and Pushing Code Changes using AWS
CodeCommit

Requirement: Internet

Theory:
Steps to be followed:

1. Go to the AWS portal and create an IAM user with administrative permissions.

2. Go to the AWS portal and search for CodeCommit.

3. Create a repository named my-webapp and hit Create.

4. After creating the repository, open it and clone it to your local system by clicking Clone URL > Clone HTTPS.

5. Now, go to the command line on your local system and type

git clone <URL_copied_from_repo>

and it will ask for a username and password. Here you need to give the HTTPS Git credentials of the IAM user that you created.

6. It will clone an empty repository. Just run the "ls -l" command and you will find an empty directory named "my-webapp".

7. Now, we will push some code in the repository.

8. Copy all the files that you have got from the link "https://github.com/chxtan/my-webpage" and paste them inside the "my-webapp" folder that was created by cloning the empty repository.

9. Now, from the command line, browse into the my-webapp folder and run "ls"; it will list all the files inside.

10. Run the "git status" command and it will show that all the files are untracked.

11. We will use the "git add ." command to stage all the untracked files.

12. Now, we will make the first commit: git commit -m "first commit"

13. The commit was made locally; we still need to push our changes to the remote repository.

14. For that we will use the command: "git push"

15. And the code will be uploaded to the AWS repository.

16. Now, we will open the index.html file and modify the content inside.

Suppose we change "Congratulations V1" to "Congratulations V2".

17. In the command line, run "git status" again; the index.html file will now show as modified (not staged for commit).

18. To stage the change, run "git add .\index.html", and the file will be staged again.

19. Commit the change (e.g., git commit -m "second commit") and push it again with the "git push" command.

20. We can view the latest commits on the AWS portal itself.

21. We can create a new branch by running "git checkout -b my-feature".

22. Now we will make some changes to the "index.html" file again, changing "Congratulations V2" to "Congratulations V3".

23. Run the "git status" command and "index.html" will show as modified. Keep in mind that we are on the "my-feature" branch.

24. Stage and commit the change, then run the command "git push --set-upstream origin my-feature".

25. Now, in the AWS portal, both branches will be visible.

26. Next, we need to merge the new feature that is on the "my-feature" branch; for that we will create a pull request.

27. Compare both branches.

28. And create a pull request.

29. A reviewer will merge this pull request.

30. And your new feature will be merged into the master branch. The full command sequence is sketched below.
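For reference, here is a condensed sketch of the whole command sequence from steps 5-30 (the commit messages are illustrative):

git clone <URL_copied_from_repo>            # clone the empty CodeCommit repo
cd my-webapp
# ...copy the web page files into this folder...
git status                                  # files show as untracked
git add .                                   # stage everything
git commit -m "first commit"
git push                                    # upload to CodeCommit

# edit index.html (V1 -> V2), then:
git add index.html
git commit -m "second commit"
git push

# feature-branch flow
git checkout -b my-feature
# edit index.html (V2 -> V3), then:
git add index.html
git commit -m "feature change"
git push --set-upstream origin my-feature   # publish the new branch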

Experiment No. 10

Objective: Implement continuous integration using AWS Codebuild

Requirement: Internet

Theory:
Step 1: First, log into the AWS console.

Step 2: Navigate to CodeBuild > Build projects.

Step 3: Add a build name and description, and configure the source for GitHub.

Step 4: Select your repository and specify a branch (in our case, develop).

Step 5: Select which webhook events you want GitHub to use to trigger builds.

Step 6: Environment configuration

Step 7: You may need to add a specific environment variable (e.g., CI=true) for running the tests in CI mode.

Step 8: Add the build commands manually inline.

Step 9: You can modify the build spec after the project has been created; a minimal sketch is shown below.
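A minimal buildspec.yml sketch for a Node.js project, assuming npm-based install and test commands (the runtime version and commands are illustrative, not this manual's exact spec):

version: 0.2

env:
  variables:
    CI: "true"              # run the test suite in CI mode

phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm install
  build:
    commands:
      - npm test            # fail the build if tests fail
      - npm run build

artifacts:
  files:
    - '**/*'
  base-directory: build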

Step 10: You require admin privileges on the GitHub repository to reach its settings.

Step 11: Then, in the options, select "Require status checks to pass before merging". The new AWS CodeBuild project that we created should be there as an option (the name we assigned it will be in brackets, so we can be sure we have the correct one). Tick the option.

Step 12: The build starts successfully:

Experiment No. 11

Objective: Automate application deployment into EC2 using AWS Code Deploy

Requirement: Internet

Theory:

Step 1: First, log into the AWS console.

Step 2: Launch an EC2 instance, as in Experiment 1.

Step 3: Install the required packages by running the commands below.

1. Update all installed packages to the latest versions:

sudo yum update -y

2. Install git and python 3
sudo yum install git -y
sudo yum install python3-pip python3-devel python3-setuptools -y

3. Install CodeDeploy Agent


sudo yum update
sudo yum install -y ruby
sudo yum install wget
wget https://aws-codedeploy-ap-southeast-1.s3.ap-southeast-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent start

4. Install authbind and configure port 80 (this enables the production server to run the app on port 80 without superuser privileges).


wget https://s3.amazonaws.com/aaronsilber/public/authbind-2.1.1-0.1.x86_64.rpm
sudo rpm -Uvh https://s3.amazonaws.com/aaronsilber/public/authbind-2.1.1-0.1.x86_64.rpm
sudo touch /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80
sudo chown ec2-user /etc/authbind/byport/80

5. Clone the GitHub repository. (You may change this to your own working GitHub repository.)
git clone https://github.com/azzan-amin-97/FlaskAppCodeDeploy.git

Step 4: Configure AWS CodeDeploy Service


• Create an application

• Create the deployment group:

Step 5: Configure the CodeDeploy AppSpec file:

We use the EC2/On-Premises compute platform for our application. Therefore, for the deployment automation to succeed, the AppSpec file must be a YAML-formatted file named appspec.yml, and it must be placed in the root of our application project's source code directory.
In your root project directory, add appspec.yml. This is what our AppSpec file looks like:
version: 0.0
os: linux
files:
  - source: .
    destination: /home/ec2-user/app
hooks:
  AfterInstall:
    - location: scripts/install_app_dependencies
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server
      timeout: 300
      runas: root

Add a scripts folder in the root directory and add the two files below.

install_app_dependencies (shell script)


#!/bin/bash
# Install virtualenv, create an isolated environment in the app
# directory, and install the app's Python dependencies
sudo pip3 install virtualenv
cd /home/ec2-user/app
virtualenv environment
source environment/bin/activate
sudo pip3 install -r requirements.txt

start_server (shell script)


#!/bin/bash
# Activate the environment and start the app under supervisord
cd /home/ec2-user/app/
source environment/bin/activate
supervisord -c supervisord.conf

Step 6: Setting up the GitHub Actions workflow pipeline

To create the CI/CD workflow in GitHub Actions, create a folder named .github/workflows in the application root; it will contain the GitHub Actions workflows. You can use the commands below in the terminal as a guide:

cd path/to/root_repo
mkdir -p .github/workflows
touch .github/workflows/workflow.yml
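A sketch of what workflow.yml might contain, assuming the deployment is triggered through the AWS CLI's create-deployment call; the application name, deployment group, region, and secret names are illustrative:

name: Deploy to EC2 via CodeDeploy

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Credentials are stored as GitHub repository secrets (assumed names)
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1

      # Ask CodeDeploy to deploy the current commit from GitHub
      - name: Create CodeDeploy deployment
        run: |
          aws deploy create-deployment \
            --application-name FlaskAppCodeDeploy \
            --deployment-group-name FlaskAppDeploymentGroup \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}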

Step 7: Implement continuous integration and continuous deployment by configuring the AWS credentials needed to complete the deployment (stored as repository secrets, as in the sketch above).

Step 8: Running the Complete Pipeline:

Step 9: You have successfully created a working CI/CD pipeline for Amazon EC2 deployment using GitHub Actions and AWS CodeDeploy.

Experiment No. 12

Objective: Implement CI/CD using AWS Code pipeline that automates the build, test, and
deployment phases

Requirement: Internet

Theory:

Step 1: Create an S3 bucket and allow public access to it
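A sketch of the kind of bucket policy that allows public read access for static hosting (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<BUCKET_NAME>/*"
    }
  ]
}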

Step 2: Go to the AWS console and create a new CodePipeline as below:

Enter the pipeline name and the role name. Click Next to move to the next screen.

Step 3: Link your GitHub repository with the pipeline. Select GitHub version 2 and click the "Connect to GitHub" button. A new popup will open where you enter your GitHub credentials. After successfully logging into GitHub, select the repository and branch name. Then click Next to move to the next stage.

Step 4: On the build stage, select "AWS CodeBuild" as your build provider (you can select another build provider as well). Then click "Create project" to create a new build project. Specify the OS, runtime, service role, environment image, and image version on this screen. After specifying these details, click "Continue to pipeline".

Step 5: The next stage is the deploy stage. Here you will select Amazon S3 as the deploy provider, because the build will be deployed to the S3 bucket. Select the S3 bucket for build deployment. The bucket allows static web hosting, so the build will be deployed to S3, and the project will be accessible from the S3 URL.

Step 6: Click "Next", and then click "Create pipeline".

Step 7: Now your pipeline is created. Next, select CodeCommit as the source provider (or GitHub if you are using it; for this, you have to log in to GitHub) and select the repository and branch to configure for CI/CD.

Step 8: Next, for the build provider, select CodeBuild and create a new build project; give it a name and configure it as follows.

Step 9: Search for ECR and select the policy as below, then click "Attach policy".

For the deploy provider, select Amazon ECS and choose the cluster and service name. Also, set the name of the image definitions file to 'images.json'; we will create this file during the build process (a sketch follows below).
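A sketch of the buildspec excerpt that could produce images.json, assuming the container in the ECS task definition is named my-container and the image URI is held in the REPOSITORY_URI variable (both names are illustrative):

phases:
  post_build:
    commands:
      # Push the freshly built image to ECR
      - docker push $REPOSITORY_URI:latest
      # Write the image definitions file that the ECS deploy action consumes
      - printf '[{"name":"my-container","imageUri":"%s"}]' "$REPOSITORY_URI:latest" > images.json

artifacts:
  files:
    - images.json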

• Click Next, review the configuration, and click "Create pipeline".

• Once the pipeline is created successfully, it will launch a new container as a service.

Step 10: Test the complete pipeline by changing the source code and pushing the changes to the CodeCommit repository.

Once the changes are pushed, CodePipeline will trigger the CI/CD process and create a new deployment of the AWS Fargate service with the new image build.

To verify that our changes are made and deployed successfully, visit the DNS name in the browser.

