Cloud & DevOps
Institute Vision
Institute Mission
M1: To provide a quality teaching-learning environment and make students proficient in both the theoretical and applied foundations of Information Technology.
M2: To create highly skilled IT engineers capable of doing research and developing solutions for the betterment of the nation.
M4: To develop entrepreneurial skills in students and also motivate them towards pursuing higher studies.
Program Educational Objectives (PEOs)
PROGRAM OUTCOMES (POs):
Engineering Graduates will be able to:
PO1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals,
and an engineering specialization to the solution of complex engineering problems.
PO2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.
PO3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the public
health and safety, and the cultural, societal, and environmental considerations.
PO4. Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.
PO5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations.
PO6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.
PO7. Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of
the engineering practice.
PO9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.
PO10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and design
documentation, make effective presentations, and give and receive clear instructions.
PO11. Project management and finance: Demonstrate knowledge and understanding of the engineering
and management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.
PO12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
PSO1: Apply current technical concepts and practices in the core Information Technology areas of Cloud Computing, Big Data, Mobile Application Development, and the Internet of Things.
PSO2: Use appropriate techniques, modern programming languages, and tools for quality software development.
Course Overview:
The course is designed to introduce Cloud & DevOps and their real-time usage in practical applications.
Course Objectives:
The course should enable the students to:
1. Apply the concepts of AWS and its cloud services.
2. Deploy and use virtual instances.
3. Implement continuous integration using AWS CodeBuild.
4. Implement end-to-end continuous integration and continuous deployment (CI/CD) using
AWS CodePipeline.
Course Outcomes:
At the end of the course, student will be able to:
1. Deploy secure virtual instances on Amazon AWS.
2. Deploy various cloud services such as S3 and databases.
3. Demonstrate proficiency in basic Git and GitHub commands.
4. Deploy automated applications.
COs/POs PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1 3 3 2 3
CO2 3 3 3 3
CO3 3 3 2 3 2 2
CO4 3 3 2 3 2
AVG 3 3 2 3
Justifications for CO - PO mapping
CO1 - PO3, PO5 (3): By mapping CO1 to PO3 and PO5, which are highly related to the course, the student can design and deploy cloud services.
PREFACE
Cloud & DevOps Lab is one of the important subjects included in the third year. While doing the Cloud & DevOps Lab, students must grasp aspects of Amazon Web Services and DevOps tools to become familiar with the concepts of cloud computing services. By understanding the concepts of the Cloud & DevOps Lab, students will be able to make use of the available services for emerging technologies.
Students will be in a position to grasp the above aspects while doing the lab practicals as defined in the manual, through four steps.
This manual will need constant upgradation based on student feedback and changes in the syllabus.
Introduction To Cloud & DevOps Lab
Cloud architecture is deployed in four models: Public cloud, Private cloud, Community cloud, and Hybrid cloud.
DevOps is the combination of cultural philosophies, practices, and tools that increases an
organization’s ability to deliver applications and services at high velocity: evolving and
improving products at a faster pace than organizations using traditional software
development and infrastructure management processes. This speed enables organizations
to better serve their customers and compete more effectively in the market.
Under a DevOps model, development and operations teams are no longer “siloed.”
Sometimes, these two teams are merged into a single team where the engineers work across
the entire application lifecycle, from development and test to deployment to operations,
and develop a range of skills not limited to a single function. In some DevOps models,
quality assurance and security teams may also become more tightly integrated with
development and operations and throughout the application lifecycle. When security is the
focus of everyone on a DevOps team, this is sometimes referred to as DevSecOps.
These teams use practices to automate processes that historically have been manual and
slow. They use a technology stack and tooling which help them operate and evolve
applications quickly and reliably. These tools also help engineers independently
accomplish tasks (for example, deploying code or provisioning infrastructure) that normally would have required help from other teams, and this further increases a team's velocity.
General Lab Instructions:
2. Students who turn up late to the labs will in no case be permitted to perform the scheduled program for the day.
3. Students are required to prepare thoroughly before coming to the laboratory to perform the experiment.
4. Students should bring A4 sheets along with the lab evaluation sheet to the lab, write up the day's experiment, and get it verified by the faculty.
5. In addition to the scheduled experiment, every student needs to write a problem statement, specified by the faculty, related to the scheduled lab concepts discussed.
6. Faculty will prepare 4-5 problem statements, and every student must write one of the problem statements as assigned by the faculty.
7. The student will be permitted to use the system only if both the scheduled experiment and the given problem statement have been verified by the faculty.
8. Students need to execute the lab programs and get the outputs verified by the faculty against the relevant test cases.
9. Once execution is done, students should upload all their programs to the Student Corner of the MLRIT website.
10. Finally, the student needs to get his/her day-to-day evaluation sheet verified by the concerned faculty.
11. Students should maintain all the A4 sheets related to the lab as an observation file for the concerned lab.
12. For each lab, they need to maintain the observation file, printouts, and lab evaluation sheet in order to get the completion signature from the faculty.
INDEX
Exp. No.  Name of the Experiment  Page No.
1  Create Amazon AWS EC2 Linux instance with conceptual understanding of SSH client software protocol and keys.  10-11
2  Create Amazon AWS EC2 Windows server instance with conceptual understanding of RDP (Remote Desktop Protocol).  12-27
8  Create a Docker image and push Docker image into AWS ECR  62-69
Experiment No. 1
Objective: Create Amazon AWS EC2 Linux instance with conceptual understanding of
SSH client software protocol and keys.
Requirement: Internet
Preliminaries:
1. Create an AWS developer account at http://aws.amazon.com.
2. Update your credits.
3. Navigate the AWS console. Browse through the wide range of infrastructure services offered by AWS.
4. Create an Amazon key pair. This is the key pair we will use for accessing applications/services on the cloud. Call it Richs2014 or something you will remember. This is like the key to your safety box; don't lose it. Store Richs2014.pem and the equivalent private key for PuTTY, Richs2014.ppk, in a safe location.
5. Identify your Amazon credentials; just be knowledgeable about where to locate them when you need them to authenticate yourself/your application:
https://console.aws.amazon.com/iam/home?#security_credential
6. Identity and Access Management (IAM) is an important service. Read
http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_Introduction.html
Step 1: Choose an AMI (Amazon Machine Image) for the instance you want: this can be a single-CPU machine or a sophisticated cluster of powerful processors. The instances can be from the Amazon Marketplace, Community (contributed) AMIs, or My AMIs (ones I may have created, e.g., RichsAMI). Choose a "free-tier" eligible Linux AMI.
Step 2: Choose an instance type: small, large, micro, medium, etc.
Step 3: Review and launch. We are done; we have a Linux machine.
Step 4: Create a new key pair to access the instance that will be created. We will be accessing the instance we create using a public-private key pair. Download the key pair and store it.
Launch the instance. Once it is ready, you will use its public IP, the key pair we saved, and the SSH protocol to access the instance on the cloud.
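Once the instance is running, you can connect with an SSH client. A minimal sketch, assuming Amazon Linux's default ec2-user login, the key saved earlier, and a hypothetical public IP:

# restrict the key's permissions, then connect over SSH
chmod 400 Richs2014.pem
ssh -i Richs2014.pem [email protected]

PuTTY users would load the .ppk key (Richs2014.ppk) instead and connect to the same user and IP.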
Overview: Simply load the website components into the appropriate S3 folders/directories you created. Configure a few parameters and a policy file, and the website is all set to go!
Step 1:
1. When you host a website on Amazon S3, AWS assigns your website a URL based on the name of the storage location you create in Amazon S3 to hold the website files (called an S3 bucket) and the geographical region where you created the bucket.
2. For example, if you create a bucket called richs on the east coast of the United States and use it to host your website, the default URL will be http://richs.s3-website-us-east-1.amazonaws.com/.
3. We will not use Route 53 and CloudFront for this proof-of-concept implementation.
Step 2: Enable logging and redirection (note: for some reason this collides with richs.com)
1. In the logging window, enter logs.richs.com and /root in the next box; right-click on www.richs.com properties and redirect it to richs.com.
2. In the richs.com bucket, enable web hosting and enter index.html as the index document. If you have an error document, you can add that in the next box.
3. Click on the endpoint address that shows up in the properties window of richs.com.
4. You should be able to see the static application.
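For reference, the same flow can be sketched with the AWS CLI, assuming a configured CLI, the hypothetical bucket name richs, and a local site/ folder holding the website files:

# create the bucket, upload the site, and enable static website hosting
aws s3 mb s3://richs
aws s3 sync ./site s3://richs
aws s3 website s3://richs --index-document index.html --error-document error.html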
Experiment No. 2
Objective: Create Amazon AWS EC2 Windows server instance with conceptual
understanding of RDP (Remote Desktop Protocol).
Requirement: Internet
An on-demand EC2 instance is an offering from AWS where the subscriber/user can
rent the virtual server per hour and use it to deploy his/her own applications.
The instance will be charged per hour with different rates based on the type of the
instance chosen. AWS provides multiple instance types for the respective business needs
of the user.
Thus, you can rent an instance based on your own CPU and memory requirements and use it as long as you want. You can terminate the instance when it is no longer used and save on costs. This is the most striking advantage of an on-demand instance: you can drastically save on your CAPEX.
Step 1:
Login to your AWS account and go to the AWS Services tab at the top left corner. Here, you will see all of the AWS Services categorized as per their area, viz. Compute, Storage, Database, etc. For creating an EC2 instance, we have to choose Compute -> EC2, as in the next step.
Open All Services and click on EC2 under Compute services. This will launch the EC2 dashboard.
Here is the EC2 dashboard. Here you will get, in a gist, all the information about the running AWS EC2 resources.
Step 2: On the top right corner of the EC2 dashboard, choose the AWS Region in which
you want to provision the EC2 server.
Here we are selecting N. Virginia. AWS provides 10 Regions all over the globe.
Once your desired Region is selected, come back to the EC2 Dashboard.
Click on 'Launch Instance' button in the section of Create Instance (as shown
below).
Choose AMI
1. You will be asked to choose an AMI of your choice. (An AMI is an Amazon
Machine Image. It is a template basically of an Operating System platform which
you can use as a base to create your instance). Once you launch an EC2 instance
from your preferred AMI, the instance will automatically be booted with the
desired OS. (We will see more about AMIs in the coming part of the tutorial).
2. Here we are choosing the default Amazon Linux (64 bit) AMI.
Step 5: In the next step, you have to choose the type of instance you require based on
your business needs.
1. We will choose t2.micro instance type, which is a 1vCPU and 1GB memory
server offered by AWS.
2. Click on "Configure Instance Details" for further configuration.
In the next step of the wizard, enter details like no. of instances you want to
launch at a time.
Here we are launching one instance.
Step 6: In the next step, you will be asked to create a key pair to log in to your instance. A key pair is a set of public and private keys.
AWS stores the public key on the instance, and you are asked to download the private key. Make sure you download the key and keep it safe and secure; if it is lost, you cannot download it again.
When you download your key, you can open and have a look at your RSA private
key.
Step 7: Once you are done downloading and saving your key, launch your instance.
Click on the 'Instances' option on the left pane, where you can see the status of the instance as 'Pending' for a brief while.
Once your instance is up and running, you can see its status as 'Running' now.
Note that the instance has received a Private IP from the pool of AWS.
Step 8: Create and Configure Your Virtual Machine
a. You are now in the Amazon EC2 console. Click Launch Instance
With Amazon EC2, you can specify the software and specifications of the instance you
want to use. In this screen, you are shown options to choose an Amazon Machine Image
(AMI), which is a template that contains the software configuration required to launch
your instance.
You will now choose an instance type. Instance types comprise varying combinations
of CPU, memory, storage, and networking capacity so you can choose the appropriate mix
for your applications. Select the default option of t2.micro - this instance type is covered
within the free tier. Then click Review and Launch at the bottom of the page.
You can review the options that are selected for your instance which include AMI Details,
Instance Type, Security Groups, Instance Details, Storage, and Tags. You can leave these
at the defaults and click Launch from the bottom of the page.
Note: For detailed information on your options, see Launching an Instance
a. In the popover, select Create a new key pair and name it MyFirstKey. Then
click Download Key Pair. MyFirstKey.pem will be downloaded to your computer --
make sure to save this key pair in a safe location on your computer.
Windows users: We recommend saving your key pair in your user directory in a sub-directory called .ssh (e.g., C:\Users\{yourusername}\.ssh\MyFirstKey.pem).
Mac/Linux users: We recommend saving your key pair in the .ssh sub-directory of your home directory (e.g., ~/.ssh/MyFirstKey.pem).
Note: If you don't remember where you store your SSH private key (the file you are
downloading), you won't be able to connect to your virtual machine.
b. After you have downloaded and saved your key pair, click Launch Instance to start
your Windows Server instance.
Note: It can take a few minutes to launch your instance.
On the next screen, click View Instances to view the instance you have just created
and see its status.
Step 10: Connect to Your Instance
After launching your instance, it's time to retrieve the administrator password and
connect to it using a Remote Desktop Protocol (RDP) client.
a. Select the Windows Server instance you just created and click Connect
b. In order to connect to your Windows virtual machine instance, you will need a user
name and password:
In order to retrieve the password, you will need to locate the key pair you created earlier. Click Choose File and browse to the directory where you stored MyFirstKey.pem. Your key pair will surface in the text box. Click Decrypt Password.
You now have a decrypted password for your Windows Server instance. Make sure
to save this information in a secure location. It is your Windows Server admin login
credentials.
When prompted to log in to the instance, use the User Name and Password you generated to connect to your virtual machine.
Note: When you complete this step, you might get a warning that the security certificate could not be authenticated. Simply choose Yes and proceed to complete the connection to your Windows Server instance.
Step 11: Terminate Your Windows VM
You can easily terminate the Windows Server VM from the Amazon EC2 console. In
fact, it is a best practice to terminate instances you are no longer using so you don’t keep
getting charged for them.
Back on the EC2 Console, select the box next to the instance you created. Then click
the Actions button, navigate to Instance State, and click Terminate.
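The same action can be sketched from the AWS CLI, assuming a configured CLI and a hypothetical instance ID:

# terminate the instance from the command line
aws ec2 terminate-instances --instance-ids i-0abcd1234example5678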
Experiment No. 3
Objective: Create an Amazon S3 bucket, upload content to it, manage object permissions and metadata, and delete the bucket.
Requirement: Internet
Theory: Before you can upload data to Amazon S3, you must create a bucket in one of
the AWS Regions to store your data in. After you create a bucket, you can upload an
unlimited number of data objects to the bucket.
A bucket is owned by the AWS account that created it. By default, you can create up to
100 buckets in each of your AWS accounts. If you need additional buckets, you can
increase your account bucket limit to a maximum of 1,000 buckets by submitting a
service limit increase. For information about how to increase your bucket limit, see AWS
Service Limits in the AWS General Reference.
Buckets have configuration properties, including their geographical region, who has
access to the objects in the bucket, and other metadata.
a. To create an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. On the Name and region page, type a name for your bucket and choose the AWS
Region where you want the bucket to reside. Complete the fields on this page as
follows:
a. For Bucket name, type a unique DNS-compliant name for your new
bucket. Follow these naming guidelines:
a. The name must be unique across all existing bucket names in Amazon S3.
b. The name must not contain uppercase characters.
c. The name must start with a lowercase letter or number.
d. The name must be between 3 and 63 characters long.
e. After you create the bucket you cannot change the name, so choose
wisely.
f. Choose a bucket name that reflects the objects in the bucket because the
bucket name is visible in the URL that points to the objects that you're
going to put in your bucket.
For information about naming buckets, see Rules for Bucket Naming in
the Amazon Simple Storage Service Developer Guide.
5. For Region, choose the AWS Region where you want the bucket to reside.
Choose a Region close to you to minimize latency and costs, or to address
regulatory requirements. Objects stored in a Region never leave that Region
unless you explicitly transfer them to another Region. For a list of Amazon S3
AWS Regions, see Regions and Endpoints in the Amazon Web Services General
Reference.
6. (Optional) If you have already set up a bucket that has the same settings that you want to use for the new bucket that you want to create, you can set it up quickly by choosing Copy settings from an existing bucket, and then choosing the bucket whose settings you want to copy.
The settings for the following bucket properties are copied: versioning, tags, and logging.
On the Configure options page, you can configure the following properties and Amazon
CloudWatch metrics for the bucket. Or, you can configure these properties and
CloudWatch metrics later, after you create the bucket.
a. Versioning
Select "Keep all versions of an object in the same bucket" to enable object versioning for the bucket. For more information, see the Amazon S3 documentation on enabling versioning.
b. Server access logging
Select "Log requests for access to your bucket" to enable server access logging on the bucket. Server access logging provides detailed records for the requests that are made to your bucket. For more information, see the documentation on enabling server access logging.
B: Upload content to S3
If you are using the Chrome or Firefox browsers, you can choose the folders and files to
upload, and then drag and drop them into the destination bucket. Dragging and dropping
is the only way that you can upload folders.
1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to upload
your folders or files to.
3. In a window other than the console window, select the files and folders that you
want to upload. Then drag and drop your selections into the console window that
lists the objects in the destination bucket.
In the Upload dialog box, do one of the following:
a. Drag and drop more files and folders to the console window that displays the
Upload dialog box. To add more files, you can also choose Add more files. This
option works only for files, not folders.
b. To immediately upload the listed files and folders without granting or removing permissions for specific users or setting public permissions for all of the files that you're uploading, choose Upload. For information about object access permissions, see the Amazon S3 documentation.
To set permissions or properties for the files that you are uploading, choose Next.
On the Set Permissions page, under Manage users you can change the permissions for
the AWS account owner. The owner refers to the AWS account root user, and not an
AWS Identity and Access Management (IAM) user. For more information about the root user, see the AWS documentation.
Under Manage public permissions you can grant read access to your objects to the
general public (everyone in the world), for all of the files that you're uploading. Granting
public read access is applicable to a small subset of use cases such as when buckets are
used for websites. We recommend that you do not change the default setting of Do not
grant public read access to this object(s). You can always make changes to object
permissions after you upload the object. For information about object access permissions, see the Amazon S3 documentation.
You can configure some system metadata for an S3 object. For a list of system-defined metadata and whether you can modify their values, see the Amazon S3 documentation. To add metadata to an object:
1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object that you want to add metadata to.
5. Choose Add Metadata, and then choose a key from the Select a key menu.
6. Depending on which key you chose, choose a value from the Select a value menu
or type a value.
7. Choose Save.
E: To delete an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the bucket icon next to the name of the bucket
that you want to delete and then choose Delete bucket.
In the Delete bucket dialog box, type the name of the bucket that you want to delete for
confirmation, and then choose Confirm.
Note: The text in the dialog box changes depending on whether the bucket is empty, is used for a static website, or is used for ELB access logs.
Experiment No. 4
Objective: Launch and connect to an Amazon Relational Database Service (RDS) instance. Use any one of the MySQL, Oracle, SQL Server, or PostgreSQL database engines.
Requirement: Internet
Theory: Launch and connect to an Amazon Relational Database Service (RDS) instance using any one of the MySQL, Oracle, SQL Server, or PostgreSQL database engines.
After you create an Amazon RDS DB instance, you can use any standard SQL client
application to connect to the DB instance. To connect to an Amazon RDS DB instance,
follow the instructions for your specific database engine.
Before you can connect to a DB instance running the MySQL database engine, you must
create a DB instance. Once Amazon RDS provisions your DB instance, you can use any
standard MySQL client application or utility to connect to the instance. In the connection
string, you specify the DNS address from the DB instance endpoint as the host parameter,
and specify the port number from the DB instance endpoint as the port parameter.
To authenticate to your RDS DB instance, you can use one of the authentication methods
for MySQL and IAM database authentication.
To find the endpoint for a MySQL DB instance in the AWS Management Console:
1. Open the RDS console and then choose Databases to display a list of your DB
instances.
2. Choose the MySQL DB instance name to display its details.
3. On the Connectivity tab, copy the endpoint. Also, note the port number. You
need both the endpoint and the port number to connect to the DB instance.
You can connect to an Amazon RDS MySQL DB instance by using tools like the
MySQL command line utility. For more information on using the MySQL client, go
to mysql - The MySQL Command Line Tool in the MySQL documentation. One GUI-
based application you can use to connect is MySQL Workbench. For more information,
go to the Download MySQL Workbench page. For information about installing MySQL
(including the MySQL client), see Installing and Upgrading MySQL.
If you cannot connect to the DB instance, common causes include the following:
The DB instance was created using a security group that does not authorize connections from the device or Amazon EC2 instance where the MySQL application or utility is running. If the DB instance was created in a VPC, it must have a VPC security group that authorizes the connections. If the DB instance was created outside of a VPC, it must have a DB security group that authorizes the connections.
The DB instance was created using the default port of 3306, and your company has firewall rules blocking connections to that port from devices in your company network. To fix this failure, recreate the instance with a different port.
You can use SSL encryption on connections to an Amazon RDS MySQL DB instance.
To connect to a DB instance using the MySQL client, type the following command at a command prompt. For the -h parameter, substitute the DNS name (endpoint) for your DB instance. For the -P parameter, substitute the port for your DB instance. Enter the master user password when prompted.
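As a sketch, with a hypothetical endpoint, port, and master user name, the command looks like this:

mysql -h mydbinstance.abcdefg12345.us-east-1.rds.amazonaws.com -P 3306 -u masteruser -p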
After you enter the password for the user, you will see output similar to the following.
mysql>
Amazon RDS creates an SSL certificate for your DB instance when the instance is
created. If you enable SSL certificate verification, then the SSL certificate includes the
DB instance endpoint as the Common Name (CN) for the SSL certificate to guard against
spoofing attacks. To connect to your DB instance using SSL, you can use native
password authentication or IAM database authentication. To connect to your DB instance
using native password authentication, you can follow these steps:
1. Download a root certificate that works for all regions from the AWS documentation.
2. Enter the following command at a command prompt to connect to a DB instance with SSL using the MySQL client. For the -h parameter, substitute the DNS name for your DB instance. For the --ssl-ca parameter, substitute the SSL certificate file name as appropriate.
3. You can require that the SSL connection verifies the DB instance endpoint against the endpoint in the SSL certificate.
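A sketch of such a command, assuming a hypothetical endpoint and a downloaded certificate bundle saved as global-bundle.pem (the --ssl-mode=VERIFY_IDENTITY option requires a reasonably recent MySQL client):

mysql -h mydbinstance.abcdefg12345.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_IDENTITY -u masteruser -p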
mysql>
Connecting from MySQL Workbench
1. Optionally, choose Test Connection to confirm that the connection to the DB
instance is successful.
2. Choose Close.
3. From Database, choose Connect to Database.
4. From Stored Connection, choose your connection.
5. Choose OK.
The connection limit for a DB instance is set by default to the maximum for the DB instance
class for the DB instance. You can limit the number of concurrent connections to any value
up to the maximum number of connections allowed using the max_connections parameter
in the parameter group for the DB instance.
You can retrieve the maximum number of connections allowed for an Amazon RDS
MySQL DB instance by executing the following query on your DB instance:
SELECT @@max_connections;
You can retrieve the number of active connections to an Amazon RDS MySQL DB
instance by executing the following query on your DB instance:
SHOW STATUS WHERE `variable_name` = 'Threads_connected';
Experiment No. 5
Objective: Install Git and work with basic Git and GitHub commands.
Theory: Git is a distributed version control system that helps you track changes in your
code and collaborate with others. GitHub is a web-based platform that provides hosting
for Git repositories and additional collaboration features.
If you don’t already have Git installed, you have two options:
1. Download and install Git at www.git-scm.com.
2. Install it as part of the GitHub Desktop package found at desktop.github.com.
If you need additional assistance installing Git, you can find more information in the ProGit chapter on installing Git: http://gitscm.com/book/en/v2/Getting-Started-Installing-Git.
Git Commands:
1. git init: Initialize a new Git repository in the current directory.
2. git clone [repository URL]: Clone a remote repository into a new directory.
3. git add [file]: Add file(s) to the staging area.
4. git commit -m "[commit message]": Record changes to the repository with
a message describing the changes.
5. git status: Show the status of changes in the working directory.
6. git diff: Show the differences between the working directory, staging area, and the last
commit.
7. git log: Show commit logs.
8. git branch: List, create, or delete branches.
9. git checkout [branch name]: Switch to a different branch.
10. git merge [branch name]: Merge changes from one branch into another.
11. git pull: Fetch from and integrate with another repository or a local branch.
12. git push: Update remote references along with associated objects.
13. git remote add [name] [url]: Add a new remote repository.
14. git remote -v: List all remote repositories.
15. git rm [file]: Remove files from the working tree and from the index.
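Taken together, a typical first-time workflow might look like the following sketch (the remote URL and branch name are placeholders):

git init
git add README.md
git commit -m "Initial commit"
git remote add origin https://github.com/your-username/your-repo.git
git push -u origin main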
GitHub Commands:
1. git remote add origin [repository URL]: Add a remote repository on GitHub.
2. git push -u origin [branch name]: Push a branch to GitHub.
3. git pull origin [branch name]: Pull changes from GitHub to your local repository.
4. git branch -r: List all remote branches.
5. git checkout -b [branch name]: Create a new branch and switch to it.
6. git clone -b [branch name] [repository URL]: Clone a specific branch from GitHub.
7. git fetch: Download objects and refs from another repository.
8. git merge origin/[branch name]: Merge changes from a remote branch into your
current branch.
9. git push origin --delete [branch name]: Delete a branch on GitHub.
10. git config --global user.name "[name]": Set your username globally for
GitHub.
11. git config --global user.email "[email]": Set your email globally for
GitHub.
12. git tag [tag name]: Create a lightweight tag.
13. git push --tags: Push all tags to GitHub.
Experiment No. 6
Objective: Install and configure Jenkins to build and deploy Java or Web Applications
Theory:
1. Install Java Development Kit (JDK)
Download JDK 8 and choose the Windows 32-bit or 64-bit version according to your system configuration. Click on "Accept the license agreement." Verify the installation from the command prompt:
C:\Users\Simplilearn>java -version
4. Run Jenkins on Localhost 8080
Once Jenkins is installed, explore it. Open the web browser and type
“localhost:8080”.
Enter the credentials and log in. If you install Jenkins for the first time, the
dashboard will ask you to install the recommended plugins. Install all the
recommended plugins.
6. Build and Run a Job on Jenkins
Select a new item (Name - Jenkins_demo). Choose a freestyle project and click
Ok.
Under the General tab, give a description like "This is my first Jenkins job." Under the "Build" section, select "Add build step" and then click on "Execute Windows batch command."
In the command box, type the following:
echo "Hello... This is my first Jenkins Demo: %date%: %time%"
Click on Apply and then Save.
Select Build Now. You can see that a build history entry has been created. Click on it. In the console output, you can see the output of the first Jenkins job with the time and date.
Experiment No. 7
Objective: Install and configure Docker tool and perform Docker commands for content
management.
Theory:
Docker is a platform for developing, shipping, and running applications in containers.
Containers allow developers to package an application and its dependencies together into
a single unit, ensuring consistency across different environments, from development to
production.
Please note that Docker Desktop is intended only for Windows 10/11 and not for
Windows Server.
Step 2: Configuration
Step 3: Running the installation
Step 4: Restart
For Docker to be able to properly register with Windows, a restart is required at this
point.
Step 5: License agreement
After the restart, Docker will start automatically and you should see the window below:
Essentially, if you are a small business or use Docker for personal use, Docker continues to remain free. However, if you are in a large organization, please get in touch with your IT department to clarify the license agreement.
Step 6: WSL 2 installation
After you accept the license terms, the Docker Desktop window will open. However, we
are not done yet. Since we have selected WSL 2 as our virtualization engine, we also
need to install it. Don’t click Restart just yet!
Click on Next to begin installing the Windows Subsystem for Linux (WSL).
After a few seconds, the installation should complete. So you may click on Finish.
If you still have the Docker Desktop dialog window in the background, click on Restart.
Otherwise, just restart your computer as you normally do.
Step 7: Starting Docker Desktop
If Docker Desktop did not start on its own, simply open it from the shortcut on your
Desktop.
After this, your Docker Desktop screen should look like this.
Step 8: Testing Docker
Open your favorite command line tool and type in the following command:
docker run hello-world
This will download the hello-world Docker image and run it. This is just a quick test to ensure everything is working fine.
GENERAL COMMANDS:
Get help with Docker. You can also use --help on all subcommands.
docker --help
IMAGES:
Docker images are a lightweight, standalone, executable package of software that includes
everything needed to run an application: code, runtime, system tools, system libraries and
settings.
Delete an Image
docker rmi <image_name>
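A few other commonly used image commands, sketched with placeholder names:

docker images                 # list local images
docker pull nginx:latest      # download an image from a registry
docker build -t myapp:1.0 .   # build an image from the Dockerfile in the current directory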
CONTAINERS:
A container is a runtime instance of a docker image. A container will always run the same,
regardless of the infrastructure. Containers isolate software from its environment and
ensure that it works uniformly despite differences for instance between development and
staging.
docker ps --all
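A short sketch of the day-to-day container commands (the image, name, and ports are placeholders):

docker run -d -p 8080:80 --name web nginx   # run detached, mapping host port 8080 to container port 80
docker stop web                             # stop the running container
docker rm web                               # remove the stopped container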
Experiment No. 8
Objective: Create a Docker image and push Docker image into AWS ECR
Theory:
Step 1: Docker Login
Go to hub.docker.com/signup and create your account. To connect your system with your
Docker account, execute docker login in the terminal.
You will see "Login succeeded" printed in the terminal.
FROM node:12.17.0
WORKDIR /app
COPY package*.json ./
# install dependencies (the RUN step described below)
RUN npm install
COPY . .
ENV PORT=3000
EXPOSE 3000
# start the app in exec form (adjust the entry file to match your project)
CMD ["node", "index.js"]
FROM: Sets the base image for further instructions. For the sake of simplicity, we
will use an officially supported Node.js image. I am using the exact version
mentioned in my package.json, so change it accordingly if you're using a different
node version.
WORKDIR: Sets the working directory inside the image; the instructions that follow run relative to it.
COPY: Copy files and folders from source to destination in the image filesystem.
We are copying package.json and package-lock.json. This command ensures that
we have a list of dependencies to install in our docker container.
RUN: Executes the given command. As we have package.json from the previous
step, we can install dependencies in our container.
COPY: Now we are copying everything from the project directory to our container. Since both are in the same directory, we use "." which indicates the current working directory. node_modules doesn't get copied since we have added it in .dockerignore.
ENV: Sets an environment variable for the docker container.
EXPOSE: When we run this container, we want to listen to our app on a particular port. EXPOSE allows us to access the containerized application publicly. It doesn't need to be the same as the ENV port, but it reduces complexity :)
CMD: There can be only one CMD command in an image; it tells the container how to start the application. Notice we have passed it as an array with the necessary command as elements. This is called the exec form, and it allows us to run the command without starting a shell session.
The second argument is the location of the Dockerfile. Since ours is in the same directory, we can put a period (.).
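A sketch of the build command being described, with a hypothetical image name as the first argument:

docker build -t my-node-app .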
When you run the command, you will see that steps are being executed in the same order
as they are written in the Dockerfile. Once done, it will prompt Successfully built
<baseID> in the terminal.
You can use baseID to access the particular Docker image instead of using its name tag.
You can verify this by looking at the Images section in the Docker app. Also you can see
the local container in Containers/ Apps section.
For the time being, let's run our Docker image in our local system
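For example, with the hypothetical image name from the build step, mapping the container's port 3000 to the host:

docker run -p 3000:3000 my-node-app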
Step 5: Create the AWS ECR repository
In the AWS console go to the AWS ECR page. Click the “Create repository” button.
AWS ECR list all repositories page
Keep the repository private so that access is managed by IAM and repository policy permissions and it won't be accessible to the public. Then fill in the name and click Create repository at the bottom of the page.
The newly created AWS ECR repository
In this example, I will push the image of a simple Node.js app that listens on port 8080 and displays its host/container name. You can check the source code here. The root directory has a Dockerfile; we will use it to build an image. Before pushing the image to the repository, we need to tag it with the repository URL. In the root directory, run these commands in the terminal:
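A sketch of those commands, with a placeholder account ID, region, and image name:

# build the image, then tag it with the ECR repository URI
docker build -t my-node-app .
docker tag my-node-app:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/my-node-app:latest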
Result of commands execution
# Command
$ aws ecr get-login-password --region <REPOSITORY_REGION> | docker login --username AWS --password-stdin <REPOSITORY_URI>
For this command to execute successfully, you have to have your AWS credentials stored in the credentials file, and your IAM principal has to have the necessary permissions. This command retrieves an authentication token using the GetAuthorizationToken API and then redirects it using the pipe (|) to the login command of the container client, the Docker CLI.
To push the image to the AWS ECR we will execute the following command:
# Command
$ docker image push <IMAGE_NAME[:TAG]>
As you can see, to push the image I've used the tag created in step 2 of this tutorial.
Experiment No. 9
Objective: Create Repositories, Cloning, and Pushing Code Changes using AWS
CodeCommit
Requirement: Internet
Theory:
Steps to be followed:
1. Go to the AWS portal and create an IAM user with administrative permissions.
4. After creating the repository, we will go inside it and clone it to the local system by clicking Clone URL > Clone HTTPS.
5. Now, go to your command line in the local system and type the clone command (sketched below). Here you need to give the username and password of the IAM user that you have created.
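A sketch of the clone command, assuming a hypothetical region in the repository URL:

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-webpage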
6. And it will clone an empty repository. Just run the "ls -l" command and you will find an empty directory named "my-webpage".
8. Copy all the files that you have got from the link "https://github.com/chxtan/my-webpage" and paste them inside the "my-webpage" folder created by cloning the empty repository.
9. Now, with the command line, browse inside the my-webpage folder and run "ls"; it will list all the files inside.
10. Run the "git status" command and it will show that all the files are untracked.
11. We will use the "git add ." command to track all the files that were untracked.
12. Now we will make the first commit: "git commit -m "first commit"".
13. We made a commit, but it was made locally. We still need to push our changes to the repository.
14. Run the "git push" command.
15. And the code will be uploaded to the AWS repository.
16. Now we will go to the index.html file and modify the content inside.
17. In CMD, we will run "git status" again, and the index file will be in the untracked category.
18. To make the file tracked, we will run “git add .\index.html”, and the file will be
tracked again.
19. And now we will push the file again with the "git push" command.
22. Now we will make some changes again to the "index.html" file, changing "Congratulations V2" to "Congratulations V3".
23. Run the "git status" command and "index.html" will be in untracked mode. Keep in mind that we are in the "my-feature" branch.
24. Move the file to tracked mode and run the command "git push --set-upstream origin my-feature".
26. Now we need to add the new feature that is in the "my-feature" branch; for that, we will create a pull request.
27. Compare both branches.
Experiment No. 10
Objective: Implement continuous integration using AWS CodeBuild.
Requirement: Internet
Theory:
Step 1: First log into the AWS console
Step 3: Add a build name and description, and configure for Github
Step 4: Select your repository and specify a branch (in our case develop)
Step 7: You may need to add a specific variable for running the tests in CI mode
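As a sketch, a minimal buildspec.yml that sets such a variable for a Node.js project might look like this (the runtime version and commands are assumptions):

version: 0.2
env:
  variables:
    CI: "true"   # many test runners switch to non-interactive CI mode when this is set
phases:
  install:
    runtime-versions:
      nodejs: 14
  build:
    commands:
      - npm install
      - npm test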
Step 9: Modifying the build spec after the project has been built
Step 10: You require admin privileges to reach the settings of a GitHub repo.
Step 11: Then, in the options, select "Require status checks to pass before merging". The new AWS CodeBuild check that we created should be there as an option (the name we assigned will be in brackets, so we can be sure we have the correct one). Tick the option.
Step 12: Successfully Started the Build:
Experiment No. 11
Objective: Automate application deployment into EC2 using AWS Code Deploy
Requirement: Internet
Theory:
2. Install git and python 3
sudo yum install git -y
sudo yum install python3-pip python3-devel python3-setuptools -y
4. Install authbind and configure port 80 (to enable the production server to run the app on port 80).
5. Clone GitHub Repository. (You may change to your own working GitHub repository)
git clone https://github.com/azzan-amin-97/FlaskAppCodeDeploy.git
Create the Deployment Group:
Add a scripts folder in the root directory and add the deployment script files below.
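As a sketch, the appspec.yml that CodeDeploy reads alongside such scripts typically looks like this (the destination path and script names are assumptions):

version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/app
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      runas: ec2-user
  ApplicationStart:
    - location: scripts/start_app.sh
      runas: ec2-user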
Step 9: Successfully created a working CI/CD pipeline for Amazon EC2 Deployment using
GitHub Actions and AWS CodeDeploy.
Experiment No. 12
Objective: Implement CI/CD using AWS CodePipeline to automate the build, test, and deployment phases.
Requirement: Internet
Theory:
Enter the pipeline name and the role name. Click Next to move to the next screen.
Step 3: Link your GitHub repository with the pipeline. Select GitHub version 2 and click on the button "Connect to GitHub". Here, a new popup will open where you will enter your GitHub credentials. After successfully logging into GitHub, select the repository and branch name. Then click Next to move to the next stage.
Step 4: On the build stage, select “AWS Codebuild” as your build provider. However,
you can select any other build provider as well. Then click on “Create Project” to create a
new build project. Specify the OS, runtime, service role, environment image, and image version on this screen. After specifying these details, click "Continue to pipeline".
Step 5: The next stage is the deploy stage. Here you will mention Amazon S3 as deploy
provider because the build will be deployed to S3 bucket. You will select the S3 bucket
for build deployment. S3 bucket allows static web hosting, so the build will be deployed
to S3, and the project will be accessible from S3 URL.
Step 6: Click “Next” And then click “Create pipeline”.
Step 7: Now, your pipeline is created. Next, select CodeCommit for source provider
(or Github if you are using it, for this, you have to log in to Github) and select the
repository and branch to configure for CI/CD.
Step 8: Next, for the build provider select CodeBuild and create a new build project,
give it a name and configure it as follows
Step 9: Search for ECR and select the policy as below and click Attach policy
For the Deploy provider, select Amazon ECS, the cluster, and the service name. Also add the name of the image definitions file as 'images.json', which we will create during the build process (a sketch follows below).
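As a sketch, the image definitions file read by the ECS deploy action is a small JSON array (the container name and image URI are placeholders):

[
  {
    "name": "my-container",
    "imageUri": "111111111111.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"
  }
]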
Step 10: Test the complete pipeline by changing the source code and pushing the changes to the CodeCommit repository.
Once the changes are pushed, CodePipeline will trigger the CI/CD process and create a new deployment of the AWS Fargate service with the new image build.
To verify that our changes are made and deployed successfully, visit the DNS name in the browser.