ClassNotes
----------
* Attend standup meetings (Kanban or Scrum)
* Create JIRA/Bugzilla tickets to track all the work
* Use a Git client to store and version the files
* Review code for peers
Responsibilities:
-----------------
* Support Continuous Development
  1. Support GitLab maintenance - group, project, user and branch management
Merge:
1. Raise a merge request
2. Review
3. Accept the merge
Dev Build:
- done by developers
- runs in the developer's own workspace
- builds only the files he/she modified
- used for running unit tests
Nightly Build:
- automated builds
- separate infrastructure
- full build
- used for smoke tests and functional tests
https://gitlab.com/scmlearningcentre/Ansible.git
[email protected]:scmlearningcentre/Ansible.git
Permissions:
- guest: read-only
- developer: read-write
- maintainer: read-write + manage the group/project
- owner: administration privileges
=============DOCKER=============
$ sudo su
$ apt update
$ apt install -y docker.io
$ systemctl status docker                # verify the daemon is running
$ docker info                            # daemon and host details
$ docker ps -a                           # list all containers, running or stopped
$ docker start <ContainerID/Name>
$ docker attach <ContainerID/Name>       # attach to the container's main process
$ docker stop <ContainerID/Name>
$ docker rm <ContainerID/Name>
$ docker logs -f <ContainerID/Name>      # follow the container logs
$ docker exec <ContainerID/Name> <CMD>   # run a command inside a running container
$ docker stats                           # live resource usage of containers
Examples:
$ docker run -it centos
$ docker run --name test00 -it centos
$ docker run --name test00 -it centos /bin/sh
$ docker run --name testd -d centos /bin/sh -c "while true; do echo hello Adam; sleep 5; done"
$ docker exec testd yum -y update
$ docker exec -it testd /bin/bash
++++++++++++++++++++++++++++++++++++++++++++++++++
=============TERRAFORM=============
$ curl -O https://releases.hashicorp.com/terraform/0.13.0/terraform_0.13.0_linux_amd64.zip
$ apt install -y unzip
$ unzip terraform_0.13.0_linux_amd64.zip -d /usr/local/bin/
$ terraform version
$ mkdir -p terraform/doc
$ cd terraform/doc
$ vi main.tf
variable "container_name" {
default = "noname"
}
variable "container_name" {
default = "noname"
}
variable "index" {
default = "0"
}
------------------------MAP-------------
variable "image_name" {
  type = map
  default = {
    "test" = "nginx:latest"
    "prod" = "httpd:latest"
  }
}
variable "container_name" {
  default = "noname"
}
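A sketch of referencing the map variable (the "test" key and the docker_image resource are assumptions for illustration):
resource "docker_image" "myimage" {
  name = var.image_name["test"]                         # direct key lookup
  # or: lookup(var.image_name, "test", "nginx:latest")  # lookup with a default
}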
output "azlist" {
value = data.aws_availability_zones.example.names[0]
}
------------------------------------
provider "aws" {
region = "ap-south-1"
}
output "ec2list" {
value = data.aws_instances.test.ids[0]
}
++++++++++++++++++++++++++++++++++++++++++++++++++
----------------------------STATE FILE-----------------
$ terraform destroy -target <ResourceType.Name.id[index]>
terraform {
  backend "s3" {
    bucket = "wezvatech-adam-demo-s3"
    key    = "default/terraform.tfstate"   # path & file which will hold the state
    region = "ap-south-1"
  }
}
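After adding the backend block, rerun init; Terraform offers to copy the existing local state into the S3 backend:
$ terraform init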
----------WORKSPACE-------
$ terraform workspace show
$ terraform workspace list
$ terraform workspace new test
$ terraform workspace select test
$ terraform workspace delete test
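A sketch of using the workspace name inside a config (the resource wiring is an assumption, following the notes' Docker examples):
resource "docker_container" "web" {
  name  = "web-${terraform.workspace}"   # e.g. web-test in the test workspace
  image = "nginx:latest"
}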
---------------------MODULES-------------
$ mkdir modulex
$ cd modulex
$ touch main.tf variables.tf
$ mkdir modules
$ cd modules
$ mkdir image container
--image module--
$ cd image
$ vi main.tf
resource "docker_image" "myimage" {
  name = var.image_name
}
$ vi variables.tf
variable "image_name" {
  default = "nginx:latest"
}
$ vi outputs.tf
output "image_out" {
  value = docker_image.myimage.latest
}
--container module---
$ cd modules/container
$ vi main.tf
resource "docker_container" "container_id" {
  name  = var.container_name
  image = var.image_name
  ports {
    internal = 80
    external = 80
  }
}
$ vi variables.tf
variable "container_name" {
  default = "noname"
}
variable "image_name" {}
--root module--
$ vi main.tf
# Download the image
module "image" {
  source     = "./modules/image"
  image_name = var.rootimage_name
}
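The variables.tf below also defines rootcontainer_name, so the root main.tf presumably calls the container module too; a sketch (wiring assumed):
# Create the container from the downloaded image
module "container" {
  source         = "./modules/container"
  container_name = var.rootcontainer_name
  image_name     = module.image.image_out   # output exposed by the image module
}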
$ vi variables.tf
variable "rootimage_name" {
  default = "httpd:latest"
}
variable "rootcontainer_name" {
  default = "wezvatech"
}
++++++++++++++++++++++++++++++++++++++++++++++++++
$ terraform console
max(1, 31, 12)                           # => 31
upper("hello")                           # => "HELLO"
split("a", "tomato")                     # => ["tom", "to"]
substr("hello world", 1, 4)              # => "ello"
index(["a", "b", "c"], "b")              # => 1
length("adam")                           # => 4
length(["a", "b"])                       # => 2
lookup({a="1", b="2"}, "a", "novalue")   # => "1"
-------------------LOOPS-----------------
#AWS details
provider "aws" {
  region = "ap-south-1"
}
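The output below indexes aws_iam_user.example by key, which implies a for_each resource; a sketch (the user names are assumptions):
resource "aws_iam_user" "example" {
  for_each = toset(["robin", "balkar", "om"])   # one IAM user per name
  name     = each.value
}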
output "user_uniqueid" {
value = aws_iam_user.example["robin"].unique_id
}
--------------------------CONDITIONS----------
provider "aws" {
region = "ap-south-1"
}
variable "enable_usercreation" {
description = "enable or disable user creation"
}
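A sketch of the conditional itself, using a count ternary (resource and name assumed):
resource "aws_iam_user" "example" {
  count = var.enable_usercreation ? 1 : 0   # create the user only when enabled
  name  = "demouser"
}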
++++++++++++++++++++++++++++++++++++++++++++++++++
----------------------PROVISIONER------------
resource "<TYPE>" "<NAME>" {
  provisioner "<type>" {
    # creation time
  }
  provisioner "<type>" {
    when = destroy
    # destroy time
  }
}
Creation-time provisioner: the resource is created first, and the provisioner is invoked at the end.
Destroy-time provisioner: the provisioner is invoked first, and the resource is destroyed at the end.
# local-exec provisioner
provisioner "local-exec" {
  command = "echo ${aws_instance.example.private_ip} >> private_ips.txt"
}
provisioner "local-exec" {
  command    = "exit 1"
  on_failure = continue
}
-----------FILE provisioner-----
resource "aws_instance" "example" {
  ami           = "ami-0b99c7725b9484f9e"
  instance_type = "t2.micro"
  key_name      = "master"
  provisioner "file" {
    source      = "test.conf"
    destination = "/tmp/myapp.conf"
  }
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("master.pem")
    host        = self.public_ip
  }
}
---------REMOTE-EXEC PROVISIONER-----
provisioner "local-exec" {
  command = "echo 'while true; do echo hi-students; sleep 5; done' > myscript.sh"
}
provisioner "file" {
  source      = "myscript.sh"
  destination = "/tmp/myscript.sh"
}
provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/myscript.sh",
    "/tmp/myscript.sh 2>&1 &",
  ]
}
---------NULL RESOURCE------
resource "null_resource" "cluster" {
  provisioner "local-exec" {
    command = "echo hello > test.txt"
  }
  provisioner "local-exec" {
    when    = destroy
    command = "rm test.txt"
  }
}
$ export TF_LOG=TRACE
$ export TF_LOG_PATH=terraformlog.txt
++++++++++++++++++++++++++++++++++++++++++++++++++
Install AWS CLI
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ sudo apt install unzip && unzip awscliv2.zip
$ sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update
$ aws --version
Configure AWS CLI with Access/Secret Key
$ aws configure
provider "aws" {
alias = "virginia" # Alias name for reference #
region = "us-east-1"
profile = "QA"
}
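The first resource below references aws.mumbai, so a second aliased provider is implied; a sketch (alias assumed to match that reference):
provider "aws" {
  alias  = "mumbai"        # assumed alias for the ap-south-1 (Mumbai) region
  region = "ap-south-1"
}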
resource "aws_instance" "example" {
ami = "ami-0742b4e673072066f"
instance_type = "t2.micro"
provider = aws.mumbai # Alias name to
pick the provider #
}
resource "aws_instance" "example" {
ami = "ami-0742b4e673072066f"
instance_type = "t2.micro"
provider = aws.virginia # Alias name to pick
the provider #
}
++++++++++++++++++++++++++++++++++++++++++++++++++
YAML/YML
========
---
- key1: value
  key2: value
  key3:
    key3.1: value
    key3.2:
      key3.2.1: value
    key3.3: value
- key22: value
  key33: value
  key44:
    - keys4.1: value
      keys4.2: value
    - keys4.3: value
      keys4.4: value
$ vi hosts
[groupname]
<alias-name> <IP/FQDN> <USER> <PASSWD>
<MachineName> ansible_host=<<ec2-private-ip>> ansible_user=<<ec2-user>> ansible_ssh_private_key_file=/location/of/the/keypair/your-key.pem
[demo]
node1 ansible_host=172.31.35.52 ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/master.pem
[local]
master ansible_host=172.31.42.167 ansible_user=ubuntu ansible_ssh_private_key_file=/etc/ansible/master.pem
Node Machine:
$ apt update
$ apt install -y python3
Verify:
$ ansible node1 -m ping
$ ansible master -m ping
ADHOC CMDS
----------
$ ansible <host> -b -m <module> -a <arbitrary options|OS cmds>
Host - Machine | Group | All
Arbitrary Options - state
$ ansible demo --list-hosts
$ ansible demo -m copy -a "src=test.txt dest=/tmp/test.txt"
$ ansible demo -m copy -a "src=test.txt dest=test.txt"
$ ansible demo -b -m apt -a "name=git state=present"
present|absent|latest
$ ansible demo -b -m apt -a "name=apache2 state=present"
$ ansible demo -b -m service -a "name=apache2 state=started"
started|stopped|restarted
$ ansible demo -b -m user -a "name=adam state=present"
$ ansible demo -m command -a "ls"
$ ansible demo -m shell -a "ls | wc -l"
Assignment 2:
------------
Develop a Terraform module to create an EC2 machine, with parameters for AMI, Type and Key.
Post machine creation, execute a script to add the machine details into the inventory file on the Ansible host.
++++++++++++++++++++++++++++++++++++++++++++++++++
PLAYBOOK
--------
Syntax:
--- # this will be a comment
- hosts: <host>
  become: <yes|no>                # no is default
  connection: <ssh|winrm|local>   # ssh is default
  become_user: <username>         # user in the inventory is default
  gather_facts: <yes|no>          # yes is default
  vars:
    - <variablename>: value
    - <variablename>: value       # to retrieve: '{{variablename}}'
  tasks:
    - name: <name for your task1>
      <module>: <arbitrary options>
    - name: <name for your task2>
      <module>: <arbitrary options>
      notify: <name for your task to handle>
  handlers:
    - name: <name for your task to handle>
      <module>: <arbitrary options>
Example:
---
- hosts: demo
  become: yes
  tasks:
    - name: Install Git
      apt: name=git state=present
    - name: Install Apache2
      apt: name=apache2 state=present
    - name: Run Apache2
      service: name=apache2 state=started
---
- hosts: demo
  become: yes
  tasks:
    - name: Install Git
      apt: name=git state=present
    - name: Install Apache2
      apt: name=apache2 state=present
      notify: Run Apache2
    - name: Print Hi
      command: echo Hi
      notify: Run Apache2
  handlers:
    - name: Run Apache2
      service: name=apache2 state=started
=========LOCAL VARIABLES===============
---
- hosts: demo
  become: yes
  vars:
    - mypkg: apache2
  tasks:
    - name: Install Git
      apt: name=git state=present
    - name: Install Apache2
      apt: name={{mypkg}} state=present
    - name: Run Apache2
      service: name={{mypkg}} state=started
$ ansible-playbook vars.yml -v
---
- hosts: demo
  become: yes
  vars:
    - myname: ADAM
  tasks:
    - name: Print Name
      command: echo {{myname}}
---
- hosts: demo
  become: yes
  vars_files:
    - myvars.yml
  tasks:
    - name: Print Name
      command: echo {{myname}}
--- #myvars.yml
myname: ADAM
================GLOBAL VARIABLES========
$ mkdir /etc/ansible/group_vars
$ cd /etc/ansible/group_vars
$ touch demo local
$ vi demo
---
myname: DEMOGROUP
$ vi local
---
myname: LOCALGROUP
$ mkdir /etc/ansible/host_vars
$ cd /etc/ansible/host_vars
$ touch node1 master
$ vi node1
---
myname: NODE1
$ vi master
---
myname: MASTER
Variable precedence (highest to lowest):
1. runtime variable
2. local variable
3. host variable
4. group variable
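Runtime variables are passed on the command line with -e, e.g.:
$ ansible-playbook vars.yml -e "myname=RUNTIME"   # overrides all of the above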
++++++++++++++++++++++++++++++++++++++++++++++++++
------------------REGISTER-----------------
---
- hosts: demo
  become: yes
  tasks:
    - name: Print Name
      command: echo hi
      register: output   # any string can be given as the variable name
    - debug: var=output.rc
    - debug: var=output.stdout
----------------ASYNC---------
demo group:
  node1
  node2
task1:
  - node1 X items
  - node2 X items
task2:
  - node1
  - node2
---
- hosts: all
  become: yes
  tasks:
    - name: Sleep
      command: sleep 30
      async: 30   # timeout in sec
      poll: 10    # poll interval in sec
    - name: EOT
      command: echo EOT
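Setting poll: 0 fires the task and moves on immediately; a fragment that could be added to the same play (values illustrative):
    - name: Long task in background
      command: sleep 300
      async: 600
      poll: 0   # fire and forget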
-------------RUN ONCE---
---
- hosts: all
  tasks:
    - name: Print
      command: echo Hi-Adam
      run_once: true
      delegate_to: master
    - name: EOT
      command: echo EOT
-------------LOOPS-----
---
- hosts: demo
  become: yes
  tasks:
    - name: Create User
      user: name={{item}} state=present
      with_items:
        - Robin
        - Balkar
        - Om
        - Karthick
        - Harshitha
===========
--- # Loop Playbook
- hosts: demo
  become: yes
  tasks:
    - name: add a list of users
      user: name={{ item.name }} groups={{ item.groups }} state=present
      with_items:
        - { name: testuser1, groups: nogroup }
        - { name: testuser2, groups: root }
============
---
- hosts: demo
  tasks:
    - debug:
        msg: "{{ item }}"
      with_file:
        - myfile
        - myfile2
=====================CONDITIONS==============
--- # When playbook example
- hosts: demo
  become: yes
  vars:
    myvalue: false
  tasks:
    - name: Install apache for Debian
      apt: name=apache2 state=present
      when: ansible_os_family == "Debian"
    - name: Install apache for Redhat
      yum: name=httpd state=present
      when: ansible_os_family == "RedHat"
    - name: print numbers greater than 5
      command: echo {{ item }}
      with_items: [ 0, 2, 4, 6, 8, 10 ]
      when: item > 5
    - name: Boolean true
      command: echo true
      when: myvalue
    - name: Boolean false
      command: echo false
      when: not myvalue
===================
---
- hosts: demo
  tasks:
    - name: get stats
      stat: path=/tmp/thefile
      register: st
    - debug: var=st
    - name: Create file if it doesn't exist
      shell: touch /tmp/thefile
      when: not st.stat.exists
++++++++++++++++++++++++++++++++++++++++++++++++++
======================ERROR HANDLING==========
---
- hosts: demo
  tasks:
    - name: dummy cmd
      command: nosuchcommand
      ignore_errors: yes
    - name: EOT
      command: echo EOT
--------------------------------------
---
- hosts: demo
  tasks:
    - name: Fail task when the command error output prints FAILED
      command: /usr/bin/example-command
      register: command_result
      ignore_errors: yes
    - debug: var=command_result
    - name: fail the play if the previous command did not succeed
      fail: msg="the command failed"
      when: "'Err' in command_result.msg and command_result.rc > 2"
    - name: Fail task when both files are identical
      command: diff file1 file2   # checks the files in the home dir of the user
      register: diff_cmd
      failed_when: diff_cmd.rc == 2
    - debug: var=diff_cmd
    - name: EOT
      command: echo EOT
------------------------JINJA TEMPLATES---------
---
- hosts: demo
  tasks:
    - name: copy module
      copy: src=test.conf dest=/tmp/test.conf
---
- hosts: demo
  tasks:
    - name: Template
      template: src=test.conf.j2 dest=/tmp/test.conf
$ vi test.conf.j2
YOU CAN SEE ME {{myname}}
$ vi test.conf.j2
{% for i in range(3) %}
hello {{myname}} - {{i}}
{% endfor %}
++++++++++++++++++++++++++++++++++++++++++++++++++
==================ROLES=============
$ touch master.yml
$ mkdir roles
$ cd roles
$ mkdir dev                     # dev is our role name
$ cd dev
$ mkdir tasks vars handlers     # a folder for each section
$ vi tasks/main.yml
- name: Print name
  command: echo {{myname}}
- name: EOT
  command: echo EOT
$ vi vars/main.yml
myname: ADAM
$ vi handlers/main.yml
- name: handler
  command: echo handler
$ vi master.yml
---
- hosts: demo
  roles:
    - { role: dev, when: ansible_os_family == "RedHat" }
    - { role: test, when: ansible_os_family == "Debian" }
---
- hosts: demo
  pre_tasks:
    - name: PRETASK
      command: echo pretask
  roles:
    - dev   # call dev role
  post_tasks:
    - name: POSTTASK
      command: echo posttask
=================PASSWORDLESS SSH===========
# Go to the controller server & run as ubuntu
$ ssh-keygen -t rsa
* Take the content of ~/.ssh/id_rsa.pub (as the ubuntu user) from the controller & put it in ~/.ssh/authorized_keys on the node, as the ubuntu user
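Alternatively, ssh-copy-id does the same copy in one step:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@<node-ip>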
++++++++++++++++++++++++++++++++++++++++++++++++++
Assignment 3:
------------
* Develop Terraform/Ansible modules to set up a Jenkins Master/Slave, which should take parameters - AMI, Type, Key
- If it's the Jenkins Master:
  1. Create the server
  2. Install JDK, set key, update repo, install Jenkins (develop an Ansible playbook)
  3. Generate SSH keys
- If it's a Jenkins Slave:
  1. Create the server
  2. Install JDK
  3. Copy the SSH public key from the Jenkins Master
- Status of Jenkins:
$ systemctl status jenkins
++++++++++++++++++++++++++++++++++++++++++++++++++
Jenkinsfile
===========
pipeline { // this is a comment
stages {
stage('name') {
agent { }
steps {}
}
stage('name') {
agent { }
steps {}
}
}
}
-------------------
pipeline {
agent any
stages {
stage('Stage1') {
steps {
echo 'First Stage'
}
}
}
}
------------------
pipeline {
agent { label 'demo' }
stages {
stage('Stage1') {
steps {
echo 'First Stage'
}
}
stage('Stage2') {
steps {
echo 'Second Stage'
}
}
}
}
------------------
pipeline {
agent none
stages {
stage('Stage1') {
agent { label 'demo' }
steps {
echo 'First Stage'
}
}
stage('Stage2') {
agent any
steps {
echo 'Second Stage'
}
}
}
}
--------------------
pipeline {
agent none
stages {
stage('Stage1') {
agent {
node {
label 'demo'
customWorkspace '/tmp'
}
}
steps {
echo 'First Stage'
}
}
stage('Stage2') {
agent any
steps {
echo 'Second Stage'
}
}
}
}
----------------------
pipeline {
agent { label 'demo' }
environment {
MYNAME = 'Adam'
}
stages {
stage('Stage1') {
steps {
sh "echo 'Your name: $MYNAME'"
}
}
stage('Stage2') {
steps {
echo env.MYNAME
}
}
}
}
-------------------
pipeline {
agent { label 'demo' }
environment {
VARVAL = 'global'
}
stages {
stage('Stage1') {
environment {
VARVAL = 'local'
}
steps {
sh "echo 'Your name: $VARVAL'"
}
}
stage('Stage2') {
steps {
echo env.VARVAL
}
}
}
}
---------------------
pipeline {
  agent any
  parameters {
    string(name: 'PERSON', defaultValue: 'Mr Adam', description: 'Who are you?')
  }
  stages {
    stage('Stage1') { steps { echo "Hello ${params.PERSON}" } }   // body assumed; the notes are cut off here
  }
}
++++++++++++++++++++++++++++++++++++++++++++++++++
-----------------------
pipeline {
agent { label 'demo' }
triggers {
cron('* * * * *')
}
stages {
stage('Stage1') {
steps {
echo 'test'
}
}
}
}
-----------------------
pipeline {
  agent { label 'demo' }
  triggers {
    upstream(upstreamProjects: 'FirstJob', threshold: hudson.model.Result.SUCCESS)
  }
  stages {
    stage('Stage1') {
      steps {
        echo 'test'
      }
    }
    stage('git') {
      steps {
        git changelog: false, credentialsId: 'gitlabCred', poll: false, url: 'https://gitlab.com/wezvatechprojects/Ansible.git'
      }
    }
  }
}
----------------------
pipeline {
agent { label 'demo' }
stages {
stage('Calling FirstJob') {
steps {
echo " == Calling FirstJob =="
build job: 'FirstJob', wait: false
echo " == Completed FirstJob =="
}
}
stage('Stage2') {
steps {
echo 'Testing'
sh 'sleep 30'
}
}
}
}
-------------------
pipeline {
  agent { label 'demo' }
  stages {
    stage('mail') {
      steps {
        mail bcc: '', body: 'Hi Adam', cc: '', from: '', replyTo: '', subject: 'Test Mail', to: '[email protected]'
      }
    }
  }
}
-------------------
pipeline {
agent { label 'demo' }
stages {
stage('Stage1') {
steps {
sh 'touch FILE1'
dir('/tmp/jenkins') {
sh 'touch FILENEW'
}
sh 'touch FILE2'
}
}
}
}
----------------
pipeline {
  agent any
  stages {
    stage('Stage1') {
      steps {
        catchError(buildResult: 'UNSTABLE', message: 'ERROR', stageResult: 'FAILURE') {
          sh "exit 1"
        }
      }
    }
    stage('Stage2') {
      steps {
        echo 'Running Stage2'
      }
    }
  }
}
---------------
pipeline {
  agent any
  environment { DEPLOY_TO = 'qa' }
  stages {
    stage('Stage1') {
      when {
        environment name: 'DEPLOY_TO', value: 'qa'
      }
      steps {
        echo 'Running Stage1 for QA'
      }
    }
    stage('Stage2') {
      when {
        environment name: 'DEPLOY_TO', value: 'production'
      }
      steps {
        echo 'Running Stage2 for production'
      }
    }
  }
}
-------------
pipeline {
  agent any
  parameters {
    booleanParam(name: 'TOGGLE', defaultValue: true, description: 'Toggle this value')
  }
  stages {
    stage('Stage1') {
      when {
        expression { return params.TOGGLE }
      }
      steps {
        echo 'Testing'
      }
    }
  }
}
-------------
pipeline {
  agent any
  parameters {
    string(name: 'PERSON', defaultValue: 'Mr Adam', description: 'Who are you?')
  }
  stages {
    stage('Stage1') {
      when { equals expected: 'adam', actual: params.PERSON }
      steps {
        echo 'Hi Adam !!'
      }
    }
  }
}
-------------
pipeline {
  agent any
  parameters {
    string(name: 'PERSON', defaultValue: 'Mr Adam', description: 'Who are you?')
  }
  stages {
    stage('Stage1') {
      when { not { equals expected: 'adam', actual: params.PERSON } }
      steps {
        echo 'Hi Students !!'
      }
    }
  }
}
----------
pipeline {
  agent any
  parameters {
    string(name: 'PERSON', defaultValue: 'Mr Adam', description: 'Who are you?')
    booleanParam(name: 'TOGGLE', defaultValue: true, description: 'Toggle this value')
  }
  stages {
    stage('Stage1') {
      when {
        allOf {
          equals expected: 'adam', actual: params.PERSON
          expression { return params.TOGGLE }
        }
      }
      steps {
        echo 'Hi Adam !!'
      }
    }
  }
}
-----------
pipeline {
  agent any
  parameters {
    string(name: 'PERSON', defaultValue: 'Mr Adam', description: 'Who are you?')
    booleanParam(name: 'TOGGLE', defaultValue: true, description: 'Toggle this value')
  }
  stages {
    stage('Stage1') {
      when {
        anyOf {
          equals expected: 'adam', actual: params.PERSON
          expression { return params.TOGGLE }
        }
      }
      steps {
        echo 'Hi Adam !!'
      }
    }
  }
}
----------
pipeline {
agent any
stages {
stage('Stage 1') {
steps { sh 'sleep 10' }
}
stage('Stage 2') {
steps { sh 'sleep 10' }
}
stage('Stage 3') {
parallel {
stage('Parallel 3.1') {
steps { sh 'sleep 10' }
}
stage('Parallel 3.2') {
steps { sh 'sleep 10' }
}
}
}
}
}
-----------
pipeline {
agent any
stages {
stage('Example1') {
steps { echo 'Hello Students' }
}
stage('Example2') {
steps { echo 'Hello ADAM' }
}
}
post {
always {
echo 'Hello again!'
}
}
}
-----------
pipeline {
agent any
stages {
stage('Stage1') {
steps { echo 'Stage 1' }
post {
always { echo 'Hello again!' }
}
}
stage('Stage2') {
steps { echo 'Stage 2' }
post {
always { echo 'Hello again!' }
}
}
}
}
======DEMO CI PIPELINE=======
pipeline {
  agent none
  options {
    timeout(time: 1, unit: 'HOURS')
  }
  parameters {
    booleanParam(name: 'UNITTEST', defaultValue: true, description: 'Enable UnitTests ?')
  }
  stages {
    stage('Checkout') {
      agent { label 'demo' }
      steps {
        git credentialsId: 'GitlabCred', url: 'https://gitlab.com/scmlearningcentre/wezvatech-cicd.git'
      }
    }
    stage('PreCheck') {
      agent { label 'demo' }
      when {
        anyOf {
          changeset "samplejar/**"
          changeset "samplewar/**"
        }
      }
      steps {
        script {
          env.BUILDME = "yes"   // Set env variable to enable further Build Stages
        }
      }
    }
    stage('Build') {
      when { environment name: 'BUILDME', value: 'yes' }
      agent { label 'demo' }
      steps {
        script {
          if (params.UNITTEST) {
            unitstr = ""
          } else {
            unitstr = "-Dmaven.test.skip=true"
          }
          // The notes break off here; a Maven build step along the lines of
          // sh "mvn clean package ${unitstr}" presumably follows.
        }
      }
    }
  }
}
++++++++++++++++++++++++++++++++++++++++++++++++++
===============DEMO CI Pipeline====================
pipeline {
  agent none
  options {
    timeout(time: 1, unit: 'HOURS')
  }
  parameters {
    booleanParam(name: 'UNITTEST', defaultValue: true, description: 'Enable UnitTests ?')
    booleanParam(name: 'CODEANALYSIS', defaultValue: true, description: 'Enable CODE-ANALYSIS ?')
  }
  stages {
    stage('Checkout') {
      agent { label 'demo' }
      steps {
        git credentialsId: 'GitlabCred', url: 'https://gitlab.com/scmlearningcentre/wezvatech-cicd.git'
      }
    }
    stage('PreCheck') {
      agent { label 'demo' }
      when {
        anyOf {
          changeset "samplejar/**"
          changeset "samplewar/**"
        }
      }
      steps {
        script {
          env.BUILDME = "yes"   // Set env variable to enable further Build Stages
        }
      }
    }
    stage('Build') {
      when { environment name: 'BUILDME', value: 'yes' }
      agent { label 'demo' }
      steps {
        script {
          if (params.UNITTEST) {
            unitstr = ""
          } else {
            unitstr = "-Dmaven.test.skip=true"
          }
          // Build command assumed; the notes jump straight to the SonarQube stage
          sh "mvn clean package ${unitstr}"
        }
      }
    }
    stage('SonarQube Analysis') {
      agent { label 'demo' }
      when { environment name: 'BUILDME', value: 'yes' }
      steps {
        withSonarQubeEnv('demosonarqube') {
          dir("./samplejar") {
            sh 'mvn sonar:sonar'
          }
        }
      }
    }
    // pipeline continues: the Quality Gate and Stage Artifacts stages follow
++++++++++++++++++++++++++++++++++++++++++++++++++
stage("Quality Gate"){
when {environment name: 'BUILDME', value: 'yes'}
steps{
script {
timeout(time: 10, unit: 'MINUTES') {
def qg = waitForQualityGate()
if (qg.status != 'OK') {
error "Pipeline aborted due to quality gate failure: $
{qg.status}"
}
}
}
}
}
stage('Stage Artifacts')
{
agent { label 'demo' }
when {environment name: 'BUILDME', value: 'yes'}
steps {
script {
/* Define the Artifactory Server details */
def server = Artifactory.server 'defaultjfrog'
def uploadSpec = """{
"files": [{
"pattern": "samplewar/target/samplewar.war",
"target": "DEMOCI"
}]
}"""
/* Upload the war to Artifactory repo */
server.upload(uploadSpec)
}
}
}
=======================================
CI Pipeline:
- Triggered on Commit, Gitlab webhook
- checkout
- validate code
- Build & UnitTest
- Upload Artifacts to Jfrog
++++++++++++++++++++++++++++++++++++++++++++++++++
======================DOCKERFILE==============
1. Create a container from a Base Image
2. Inside the container run all the cmds or instructions
3. Commit the container to generate a new Image
INSTRUCTION                               OS-CMD/ARGUMENTS
-----------                               ----------------
FROM <BASE-IMAGE>                         # creates a temp container in the background
RUN <OS-CMD>                              # executes the cmds inside the temp container
CMD ["executable","arg1","arg2"]          # gives the default startup cmd, but allows the user's cmd to overwrite it
ENTRYPOINT ["executable","arg1","arg2"]   # gives the default startup cmd; the user's cmd is passed as options to it
COPY <HOST-SRC> <IMAGE-DEST>              # Copies a single file
ADD <HOST-SRC> <IMAGE-DEST>               # Extracts an archive
ENV <VARIABLENAME> <VALUE>
USER <USERNAME>
WORKDIR <PATH>
EXPOSE <PORT>
example:
-------
FROM centos
RUN yum -y update
RUN yum install -y vim
RUN touch /tmp/test
CMD ["/bin/bash"]
COPY dummyfile /tmp/dummyfile
ADD demo.tar /tmp
ENV JAVA_HOME /opt/jdk1.8/java
USER nobody
WORKDIR /tmp
EXPOSE 8081
EXPOSE 8082
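To build and test the image from this Dockerfile (the tag name is assumed):
$ docker build -t mydemo:v1 .
$ docker run -it mydemo:v1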
++++++++++++++++++++++++++++++++++++++++++++++++++
Install Docker
$ sudo apt update && apt -y install docker.io
Install kubectl
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x ./kubectl && sudo mv ./kubectl /usr/local/bin/kubectl
Install Minikube
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
Start Minikube
$ apt install conntrack
$ minikube start --vm-driver=none
$ minikube status
==============================================
$ kubectl get nodes
$ kubectl describe node <nodename>
==============================pod1.yml========
kind: Pod                  # Object Type
apiVersion: v1             # API version
metadata:                  # Set of data which describes the Object
  name: testpod            # Name of the Object
spec:                      # Data which describes the state of the Object
  containers:              # Data which describes the Container details
  - name: c00              # Name of the Container
    image: ubuntu          # Base Image which is used to create the Container
    command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5 ; done"]
  restartPolicy: Never     # Defaults to Always
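Create and inspect the pod:
$ kubectl apply -f pod1.yml
$ kubectl get pods
$ kubectl logs -f testpod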
--------------deploy.yml-----
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeployments
spec:
  replicas: 2
  selector:            # tells the controller which pods to watch/belong to
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod8
      labels:
        name: deployment
    spec:
      containers:
      - name: c00
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5; done"]
++++++++++++++++++++++++++++++++++++++++++++++++++
--------------------------svc.yml--------
kind: Service        # Defines a Service type Object
apiVersion: v1
metadata:
  name: demoservice
spec:
  ports:
  - port: 80         # Container's port exposed
    targetPort: 80   # Pod's port
  selector:
    myvalue: demo    # Apply this service to any pods which have this label
  type: ClusterIP
-------------------emptydir.yml----
apiVersion: v1
kind: Pod
metadata:
  name: myvolemptydir
spec:
  containers:
  - name: c1
    image: centos
    command: ["/bin/bash", "-c", "sleep 10000"]
    volumeMounts:                 # Mount definition inside the container
    - name: xchange
      mountPath: "/tmp/xchange"   # Path inside the container to share
  - name: c2
    image: centos
    command: ["/bin/bash", "-c", "sleep 10000"]
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/data"
  volumes:                        # Definition for the host
  - name: xchange
    emptyDir: {}
---------------------hostpath.yml-------------
apiVersion: v1
kind: Pod
metadata:
  name: myvolhostpath
spec:
  containers:
  - image: centos
    name: testc
    command: ["/bin/bash", "-c", "sleep 10000"]
    volumeMounts:
    - mountPath: /tmp/hostpath
      name: testvolume
  volumes:
  - name: testvolume
    hostPath:
      path: /tmp/data
-----------------------------pv.yml-------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myebsvol
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  awsElasticBlockStore:
    volumeID: vol-0a742cfe33fb198ae
    fsType: ext4
$ kubectl get pv
$ kubectl describe pv myebsvol
----------------------------pvc.yml-------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myebsvolclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
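A sketch of a pod mounting the claim (pod and volume names are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: mypvpod
spec:
  containers:
  - name: c1
    image: centos
    command: ["/bin/bash", "-c", "sleep 10000"]
    volumeMounts:
    - name: ebsvol
      mountPath: /tmp/persistent
  volumes:
  - name: ebsvol
    persistentVolumeClaim:
      claimName: myebsvolclaim   # matches the PVC above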
++++++++++++++++++++++++++++++++++++++++++++++++++
-------------------livenessprobe.yml------------
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: mylivenessprobe
spec:
  containers:
  - name: liveness
    image: ubuntu
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 1000
    livenessProbe:              # define the health check
      exec:
        command:                # command to run periodically
        - ls
        - /tmp/healthy
      initialDelaySeconds: 30   # Wait for the specified time before the first probe
      periodSeconds: 5          # Run the above command every 5 sec
      timeoutSeconds: 30
----------------------------------
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: mylivenessprobeurl
spec:
  containers:
  - name: c00
    image: httpd
    ports:
    - containerPort: 80
    livenessProbe:
      initialDelaySeconds: 5
      periodSeconds: 5
      httpGet:      # HTTP URL to check periodically
        path: /     # Endpoint to check inside the container; / means http://localhost/
        port: 80
--------------------------------
kind: Pod
apiVersion: v1
metadata:
  name: testservice
  labels:
    myvalue: demo
spec:
  containers:
  - name: c00
    image: httpd
    ports:
    - containerPort: 80
    livenessProbe:
      initialDelaySeconds: 2
      periodSeconds: 5
      httpGet:
        path: /
        port: 80
    readinessProbe:   # Healthcheck for readiness
      initialDelaySeconds: 10
      httpGet:
        path: /
        port: 80
--------------------------------
$ echo "root" > username.txt; echo "password" > password.txt
$ kubectl create secret generic mysecret --from-file=username.txt --from-file=password.txt
$ kubectl get secret
$ kubectl describe secret mysecret
apiVersion: v1
kind: Pod
metadata:
  name: myvolsecret
spec:
  containers:
  - name: c1
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5 ; done"]
    volumeMounts:
    - name: testsecret
      mountPath: "/tmp/mysecrets"   # the secret files are mounted read-only by default here
  volumes:
  - name: testsecret
    secret:
      secretName: mysecret
-----------------------
apiVersion: v1
kind: Pod
metadata:
  name: myenvsecret
spec:
  containers:
  - name: c1
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5 ; done"]
    env:
    - name: MYENVUSER           # env name in which the value of the key is stored
      valueFrom:
        secretKeyRef:
          name: mysecret        # name of the secret created
          key: username.txt     # name of the key
--------------------------------
$ kubectl create configmap mymap --from-file=sample.conf
$ kubectl get cm
$ kubectl describe configmaps mymap
$ kubectl get configmap mymap -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: myvolconfig
spec:
  containers:
  - name: c1
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5 ; done"]
    volumeMounts:
    - name: testconfigmap
      mountPath: "/tmp/config"   # the config files are mounted read-only by default here
  volumes:
  - name: testconfigmap
    configMap:
      name: mymap                # must match the configmap name created in the first step
      items:
      - key: sample.conf         # the name of the file used when creating the map
        path: sample.conf
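An env-variable variant (assumed, mirroring the secret example above):
    env:
    - name: MYCONFVAL
      valueFrom:
        configMapKeyRef:
          name: mymap          # the configmap created earlier
          key: sample.conf     # key whose value is injected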
-------------------------namespace------------
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    name: development
$ kubectl get ns
$ kubectl get pods -n demo
$ kubectl apply -f pod1.yml -n demo
$ kubectl delete -f pod1.yml -n demo
$ kubectl config set-context $(kubectl config current-context) --namespace=demo
$ kubectl config view | grep namespace:
++++++++++++++++++++++++++++++++++++++++++++++++++
------------
apiVersion: v1
kind: Pod
metadata:
  name: resources
spec:
  containers:
  - name: resource
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5 ; done"]
    resources:           # Describes the type of resources to be used
      requests:
        memory: "64Mi"   # A mebibyte is 1,048,576 bytes; ex: 64Mi
        cpu: "100m"      # A CPU core is split into 1000 milli-units; ex: 100m
      limits:
        memory: "200Mi"  # ex: 128Mi
        cpu: "200m"      # ex: 200m
----------resourcequota.yml--------
apiVersion: v1
kind: ResourceQuota
metadata:
  name: myquota
spec:
  hard:
    limits.cpu: "400m"
    limits.memory: "400Mi"
    requests.cpu: "200m"
    requests.memory: "200Mi"
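Apply and inspect the quota (the demo namespace is an assumption):
$ kubectl apply -f resourcequota.yml -n demo
$ kubectl describe quota myquota -n demo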
----------
kind: Deployment
apiVersion: apps/v1
metadata:
  name: deployments
spec:
  replicas: 3
  selector:
    matchLabels:
      objtype: deployment
  template:
    metadata:
      name: testpod8
      labels:
        objtype: deployment
    spec:
      containers:
      - name: c00
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 5 ; done"]
        resources:
          requests:
            cpu: "200m"
------limitrange---------
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
--------
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-2
spec:
  containers:
  - name: default-cpu-demo-2-ctr
    image: nginx
    resources:
      limits:
        cpu: "1"
----------
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-3
spec:
  containers:
  - name: default-cpu-demo-3-ctr
    image: nginx
    resources:
      requests:
        cpu: "0.75"
-------------------------
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr
spec:
  limits:
  - max:
      memory: 1Gi
    min:
      memory: 500Mi
    type: Container
---------
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo
spec:
  containers:
  - name: constraints-mem-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
      requests:
        memory: "600Mi"
------------
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo-2
spec:
  containers:
  - name: constraints-mem-demo-2-ctr
    image: nginx
    resources:
      limits:
        memory: "1.5Gi"
      requests:
        memory: "800Mi"
----------
apiVersion: v1
kind: Pod
metadata:
  name: constraints-mem-demo-3
spec:
  containers:
  - name: constraints-mem-demo-3-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
      requests:
        memory: "100Mi"
-------daemonset---------------
apiVersion: apps/v1
kind: DaemonSet      # Type of Object
metadata:
  name: demodaemonset
  namespace: default
  labels:
    env: demo
spec:
  selector:
    matchLabels:
      env: demo
  template:
    metadata:
      labels:
        env: demo
    spec:
      containers:
      - name: demoset
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo Hello-Adam; sleep 8 ; done"]
Daemonset
---------
* Use this when you need your pod to run on each and every node in the cluster
* Replicas are not specified
* The replicas cannot be scaled
-------------STATEFULSET------------------------
# Creating a StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webapp
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
# Headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
$ kubectl get sc
$ kubectl get sts
$ kubectl exec webapp-0 -- sh -c 'echo POD0 > /usr/share/nginx/html/index.html'
$ kubectl exec webapp-1 -- sh -c 'echo POD1 > /usr/share/nginx/html/index.html'
$ kubectl exec webapp-1 -- curl webapp-0.nginx
$ kubectl exec webapp-0 -- curl webapp-1.nginx
$ kubectl delete pvc -l app=nginx
# podname.headless-servicename
++++++++++++++++++++++++++++++++++++++++++++++++++
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
$ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
++++++++++++++++++++++++++++++++++++++++++++++++++
-------------------HELM-------------------
$ curl https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz > helm.tar.gz
$ tar xzvf helm.tar.gz
$ mv linux-amd64/helm /usr/local/bin
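Verify:
$ helm version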
++++++++++++++++++++++++++++++++++++++++++++++++++
Create a user for Prometheus on your system
$ useradd -rs /bin/false prometheus
$ vi /etc/prometheus/prometheus.yml
global:
  scrape_interval: 5s
  evaluation_interval: 1m
# A scrape configuration scraping a Node Exporter and the Prometheus server itself
scrape_configs:
  # Scrape Prometheus itself every 10 seconds.
  - job_name: 'prometheus'
    scrape_interval: 10s
    static_configs:
      - targets: ['localhost:9090']
-----
$ vi /etc/prometheus/prometheus.yml
  - job_name: 'BuildMachine01'
    static_configs:
      - targets: ['172.31.44.44:9100']
- Create a Dockerfile
FROM tomcat
RUN mkdir /data
COPY catalina.sh /usr/local/tomcat/bin/catalina.sh
ADD jmx_prometheus_javaagent-0.12.0.jar /data/jmx_prometheus_javaagent-0.12.0.jar
ADD prometheus-jmx-config.yaml /data/prometheus-jmx-config.yaml
++++++++++++++++++++++++++++++++++++++++++++++++++
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm pull --untar prometheus-community/kube-prometheus-stack
1. Edit kube-prometheus-stack/charts/grafana/values.yaml and set the values under the service key:
   type: NodePort
   port: 3000
2. Edit kube-prometheus-stack/values.yaml, search for "Configuration for Prometheus service" and under it set the service value:
   type: NodePort
3. Remove the charts/kube-state-metrics dir & its entry from Charts.yaml
Edit values.yaml, set the below values for rbac & kibana service type:
rbac:
  enabled: true
====================HPA====================
kind: Deployment
apiVersion: apps/v1
metadata:
  name: mydeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod8
      labels:
        name: deployment
    spec:
      containers:
      - name: c00
        image: httpd
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
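The notes stop at the Deployment; the HPA itself can be created imperatively (the thresholds are assumptions):
$ kubectl autoscale deployment mydeploy --cpu-percent=50 --min=1 --max=5
$ kubectl get hpa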
Responsibilities:
-----------------
* Support Continuous Development
  1. Support GitLab maintenance - group, project, user and branch management
* Automating Infrastructure Management using IaC
* Automating Configuration Management of different servers like Dev, QA, Build using Ansible
* Automate Continuous Integration, Nightly Build pipelines
* Automate Continuous Delivery & Deployment pipelines
* Containerization of Products - Develop Base Image, App Image
* Automated Kubernetes Deployments using GitOps & Helm
* Continuous Monitoring of Build server, Deployment server, Kubernetes Cluster, Application, Services
++++++++++++++++++++++++++++++++++++++++++++++++++