Niraja AWSDevOps
+1-774-333-2278 | [email protected]
https://www.linkedin.com/in/niraja-ivar-9a7961126/
AWS DevOps Engineer & Cloud Infrastructure Manager.
Summary
12+ years of strong experience across functional industry domains including
Manufacturing and Hospitality, with 7 years specialized in DevOps
implementation and crafting cloud strategies on Amazon Web Services (AWS).
As a DevOps engineer, experienced in designing, implementing, and maintaining tools
and processes for continuous integration, delivery, and deployment of software.
Very good understanding of and experience with the SDLC. Worked closely with
developers, testers, and admin teams to ensure the entire software life cycle is
smooth, error-free, and efficient.
Design and Implement solutions for deploying, managing and monitoring
applications on AWS, both in automated and manual modes.
Develop and Maintain CI/CD pipelines for the applications running on AWS.
Design and implement automated solutions for provisioning, configuring, and
managing AWS services using Infrastructure as Code (IaC) tools such as
Terraform and CloudFormation templates (CFT).
Design and Implement automated solutions for monitoring and logging of AWS
services.
Design and maintain automation scripts for continuous integration using AWS
CodeBuild and AWS CodeCommit, and deploy applications with AWS
CodeDeploy.
Design and Maintain automation solutions for scaling applications on AWS.
Design and Implement automated solutions for security and compliance of AWS
services.
Proficient in AWS services including EC2, VPC, EBS, AMI, SNS, RDS,
CloudWatch, CloudTrail, CloudFormation, Auto Scaling, CloudFront, IAM,
S3, and Route 53.
Skilled in configuring AWS tools like CloudWatch, CloudTrail, and SNS for real-
time monitoring and alerting, as well as implementing high availability using Elastic
Load Balancer (ELB) across multiple zones.
Expertise in static code analysis tools like SonarQube for scanning code against
standard rule sets. Configured the SonarQube plugin for Jenkins and installed the
SonarQube server on EC2 instances.
Expert in configuration management with Ansible, deploying code to servers such as
Apache Tomcat and WebSphere. As a DevOps engineer, defined the desired state of
infrastructure and applications in YAML playbooks, each containing a series of
tasks to be performed on target systems.
Experienced in GitHub for source code management, enabling seamless
collaboration through templates, integrations, and repositories.
Experienced with Git commands (git init, git clone, git add, git commit, git push,
git pull, git branch, git checkout, git merge, git log) for managing source code,
collaborating with teams, and facilitating version control, code tracking, and
smooth deployment processes.
Proficient with build tools such as MAVEN and writing pom.xml with the
dependencies required for building the code.
Expert in scripting languages such as shell and Perl. Experienced in writing shell
scripts to list the resources used in AWS, and wrote shell scripts to retrieve the
list of users from GitHub by passing the repository name and username as
arguments.
Hands-on experience in Agile development with JIRA for managing projects, as
well as application clusters, load balancing, and failover functionality in
clustered environments.
Familiarity with networking load balancers like Nginx and version control tools such
as SVN and GIT.
Extensive knowledge in designing and implementing CI/CD pipelines through
Jenkins, along with expertise in monitoring using Nagios for issue identification
and troubleshooting on servers.
Skilled in remote system administration using network protocol tools such as
SSH, HTTP, Telnet, FTP, and SSL, and in applying security patches on Red
Hat Linux servers.
Experience in installing and configuring Apache/WebLogic on Solaris and Red Hat
Linux, and in working with Docker and container orchestration systems such as
AWS ECS and EKS.
Worked on agile projects integrating Jenkins, Ansible, Docker, Terraform, and
AWS for end-to-end CI/CD delivery pipelines.
Designed, implemented, and maintained scalable Kubernetes clusters on AWS and
Azure, and optimized Amazon Redshift data warehouse solutions.
Experienced in command-line utilities such as kubectl, eksctl, and the AWS CLI.
Experienced with Helm, a package manager for Kubernetes that streamlines the
installation and management of applications on Kubernetes clusters.
Technical Skills
AWS Services: EC2, Elastic Beanstalk, EFS, VPC, RDS, S3, Glacier, IAM, CloudFront,
CloudWatch, CloudTrail, CloudFormation, Lambda Function, Glue, Route53, SNS, SQS,
API Gateway, and more.
Configuration Management Tool: Ansible
Build & CI/CD Tools: Maven, Jenkins
Scripting Languages: Python, Shell scripting, YAML
Version Control Tools: Git, GitHub
Monitoring Tools: Microfocus, CloudWatch, ServiceNow, ITSM, Prometheus, Grafana,
IBM Maximo
Containers & Orchestration: Docker, Kubernetes
Infrastructure & Configuration Tools: CloudFormation, Terraform
OS Platforms: Red Hat Linux, Windows
Projects Experience
About Company: Apple Leisure Group (ALG) is an American travel and hospitality group
focused on travel, destination and resort brand management, coupled with IT solutions.
Role and Responsibilities: As a DevOps Engineer at ALG, delivered a project that creates
dedicated security for the instances in the production environment.
To improve resiliency, servers are deployed across availability zones using an Auto
Scaling group and an Application Load Balancer. The EKS cluster is built with a Fargate
profile using the region and default output settings. Using an NGINX Ingress Controller,
the application is exposed to the outside world.
Tools & Resources: AWS, Terraform, VPC, Subnet, Load Balancer, Target Groups,
Lambda, S3, IAM, ELB, NAT Gateway, EKS, Helm, kubectl, eksctl, kubeconfig, AWS CLI
The application team raised a request in Jira to create an EKS cluster for deploying
an application that must be accessible from outside the cluster via the
AWS Load Balancer Controller.
The project was implemented using Agile Scrum, where the work was broken down into
user stories. In meetings with the team and Scrum Master, the user stories were
prioritized and sprints were planned.
Installed the AWS CLI to connect from on-premises to the AWS cloud.
Installed eksctl, a CLI tool for creating and managing clusters on Amazon EKS,
AWS's managed Kubernetes service.
Installed kubectl, a CLI tool for communicating with the Kubernetes cluster control
plane via the Kubernetes API.
Configured the AWS CLI with an access key and secret access key to interact
with AWS.
Installed Terraform, an Infrastructure as Code tool that enables automated
provisioning of cloud infrastructure.
Using Visual Studio Code as the editor, connected Terraform from on-premises to the
AWS cloud using credentials.
The configuration defines a new VPC in which to provision the cluster and uses the
public EKS module to create the required resources, including Auto Scaling groups,
security groups, and IAM roles and policies.
Using Terraform, created the main.tf file to define the module configuration, which
includes a cluster with three nodes across two node groups.
Using Terraform, created the provider.tf file to define the cloud provider name,
source, version, and region for cluster creation.
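The provider and module configuration described above can be sketched as follows. This is a minimal illustration: the module source is the public terraform-aws-modules EKS module, and the cluster name, versions, region, and the referenced VPC module are assumed placeholders.

```hcl
# provider.tf: provider name, source, version, and region (illustrative values)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"   # region used for cluster creation
}

# main.tf: public EKS module creating the cluster with three nodes
# across two node groups; module "vpc" is assumed defined elsewhere.
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "demo-eks"
  cluster_version = "1.29"
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets

  eks_managed_node_groups = {
    one = { min_size = 1, max_size = 2, desired_size = 2, instance_types = ["t3.small"] }
    two = { min_size = 1, max_size = 1, desired_size = 1, instance_types = ["t3.small"] }
  }
}
```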
Ran the aws eks update-kubeconfig command to configure kubectl to connect to the
Amazon EKS cluster.
Used kubectl commands to retrieve cluster information and verify the nodes.
EKS cluster got created with VPC, Subnets, Security Groups, Node Groups and Nodes.
To associate an IAM OIDC identity provider (IdP) with the Kubernetes cluster using
Terraform, used the aws_iam_openid_connect_provider resource.
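A common Terraform sketch for this association is shown below; the cluster name is an illustrative placeholder, and the hashicorp/tls provider is assumed available to supply the certificate thumbprint.

```hcl
# Look up the cluster and its OIDC issuer URL (cluster name is illustrative).
data "aws_eks_cluster" "this" {
  name = "demo-eks"
}

# Fetch the issuer's TLS certificate to obtain the thumbprint.
data "tls_certificate" "oidc" {
  url = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
}

# Register the cluster's OIDC issuer as an IAM identity provider,
# allowing service accounts to assume IAM roles via STS.
resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}
```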
Created a Terraform module lb_role to install the AWS Load Balancer Controller and
grant access through an IAM role; the module attaches an IAM policy to the cluster
and allows creation of the controller by supplying the controller's name and the
namespace it runs in.
Created the Kubernetes provider, which configures Kubernetes and the service
account to get it ready for the controller.
Deployed an application on the Amazon EKS cluster using Helm charts in Terraform,
via the helm provider and the helm_release resource.
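A minimal sketch of such a helm_release, assuming the controller chart from the public eks-charts repository and a pre-created service account; the cluster name is a placeholder.

```hcl
# Install the AWS Load Balancer Controller chart via the helm provider.
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = "demo-eks"                         # placeholder cluster name
  }
  set {
    name  = "serviceAccount.create"
    value = "false"                            # service account created separately
  }
  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }
}
```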
A YAML manifest file is created that defines a Namespace (kind and name), a
Deployment (kind, name, replicas, selector, labels, and container details
including the image to deploy), a Service (service type, port, target port, and
protocol), and an Ingress resource (namespace, name, annotations, ingress class
name, and rules).
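The manifest described above might look like the following sketch; all names, the image, and the ALB annotation are illustrative placeholders.

```yaml
# Namespace, Deployment, Service, and Ingress in one manifest (illustrative).
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: public.ecr.aws/nginx/nginx:latest   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  namespace: demo
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: demo-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
```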
Used Terraform to manage Kubernetes resources defined in YAML files via the
kubernetes_manifest resource, which applies Kubernetes manifests directly from
within Terraform.
Client: Spectrum Brands
Location: Miramar, FL
Role: DevOps Engineer
Duration: October 2021 to May 2023
Tools: Git, GitHub, Maven, Jenkins, SonarQube, ArgoCD, shell script, Ansible,
Docker, Kubernetes
This project uses Terraform (Infrastructure as Code) to configure resources in the
AWS cloud. As per business needs, a VPC was created with a public subnet in two
availability zones. An Internet Gateway is attached, and a route table associated
with it routes traffic from the internet to EC2 instances in the public subnet.
The application, versioned in a GitHub source code repository, is integrated with
Jenkins and deployed onto Kubernetes using ArgoCD.
All the resources required for this project are created using Terraform tool in AWS
cloud.
Created the main.tf file, which details the resources to be created in the AWS
cloud.
The VPC resource is configured with a CIDR block; the generated VPC ID is used in
the other resource configurations.
The Internet Gateway resource is configured with the VPC.
The subnet resource is configured with the VPC, a CIDR block, and an availability
zone so the subnet can be accessed from anywhere.
The route table resource is configured with the VPC, the Internet Gateway, and a
CIDR block as a public route for internet access, plus another route as private for
internal access.
Configured subnet associations in the route table so the public subnet is reachable
from anywhere over the internet.
The EC2 instance resource is configured with an AMI, instance type, and key pair.
Network settings (VPC, subnet) are configured for the instance, and a public IP is
enabled so the instance is reachable from the internet.
Using Terraform, ingress rules are defined for the SSH and Jenkins ports.
The variables.tf file defines the variables used in main.tf and provider.tf.
A tfvars file initializes values for those variables.
The output.tf file retrieves values from the cloud.
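The resources described in the bullets above can be sketched in HCL as follows; the CIDR blocks, availability zone, and names are illustrative.

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true          # instances receive public IPs
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"              # public route for internet access
    gateway_id = aws_internet_gateway.igw.id
  }
}

# Associate the public subnet with the public route table.
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}
```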
The EC2 instance is configured with Java, Jenkins, and SonarQube server software to
run the CI pipeline; inbound rules are enabled in the security group to access
Jenkins and SonarQube.
Installed Docker on the EC2 instance and, once installation was done, granted the
jenkins and ubuntu users permission to access the Docker daemon.
The CI pipeline is written in Groovy; the script is stored in a Git repository and
used by Jenkins for continuous integration of the source code with the other
tools.
Configured GitHub, DockerHub, and a security token for the SonarQube static code
analysis tool to run from Jenkins, and installed the necessary Docker Pipeline and
SonarQube plugins in Jenkins.
Using the Maven build tool, pom.xml, and the mvn clean package command, all
dependency files are downloaded and the jar file is created.
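A minimal pom.xml of the kind described might look like this sketch; the coordinates and the single dependency are placeholders, not the project's actual build file.

```xml
<!-- Minimal illustrative pom.xml; run `mvn clean package` to build the jar. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```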
Once the artifact is created, SonarQube analyzes it using the server URL and an
authentication token, and generates a report.
Docker is used as the Jenkins agent, with the image stored in DockerHub. A Docker
container is created as soon as the pipeline is triggered; all pipeline stages
execute inside it, and the container is deleted afterwards, releasing resources and
freeing up the instance.
Using Docker credentials, the Docker image is built and pushed to DockerHub; once
the Jenkins manifest repo is updated, ArgoCD deploys onto Kubernetes.
A shell script retrieves the deployment.yaml file from GitHub, replaces the image
tag for the Docker image in deployment.yaml with the build number, and pushes
deployment.yaml back to GitHub.
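The image-tag replacement step can be sketched with sed; the file contents, image name, and build number below are illustrative, and in a real pipeline Jenkins supplies BUILD_NUMBER as an environment variable.

```shell
# Sketch of the replace-image-tag step in a throwaway directory.
set -e
tmp=$(mktemp -d)
cd "$tmp"
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: example/demo-app:replaceImageTag
EOF
BUILD_NUMBER=42   # Jenkins provides this in a real pipeline run
sed -i "s/replaceImageTag/${BUILD_NUMBER}/g" deployment.yaml
grep "image:" deployment.yaml   # the image line now ends in :42
```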
When the build pipeline runs in Jenkins, the Docker image is not found initially, so
it is downloaded and the Docker container is started. The report appears on the
SonarQube server at its IP and port; the shell script replaces the image tag with
the build number and the image is pushed to DockerHub.
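A declarative Jenkinsfile for a pipeline of this shape might be sketched as follows; the image names, credentials IDs, and SonarQube URL are assumed placeholders rather than the actual project values.

```groovy
// Illustrative declarative pipeline: build, analyze, containerize, push.
pipeline {
  agent {
    docker { image 'maven:3.9-eclipse-temurin-17' }   // ephemeral build container
  }
  stages {
    stage('Checkout') {
      steps { git url: 'https://github.com/example/demo-app.git', branch: 'main' }
    }
    stage('Build') {
      steps { sh 'mvn clean package' }   // downloads dependencies, creates the jar
    }
    stage('Static Analysis') {
      steps {
        withCredentials([string(credentialsId: 'sonar-token', variable: 'SONAR_TOKEN')]) {
          sh 'mvn sonar:sonar -Dsonar.host.url=http://sonarqube:9000 -Dsonar.login=$SONAR_TOKEN'
        }
      }
    }
    stage('Docker Build and Push') {
      steps {
        withCredentials([usernamePassword(credentialsId: 'dockerhub',
                         usernameVariable: 'USER', passwordVariable: 'PASS')]) {
          sh 'docker build -t example/demo-app:${BUILD_NUMBER} .'
          sh 'echo $PASS | docker login -u $USER --password-stdin'
          sh 'docker push example/demo-app:${BUILD_NUMBER}'
        }
      }
    }
  }
}
```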
The Kubernetes cluster is created using minikube.
Installed the ArgoCD operator from operatorhub.io, a three-step procedure in which
the Operator Lifecycle Manager (OLM) is installed first to manage operators running
on the cluster; the argocd-operator is then installed using a YAML file.
Used kubectl commands to check for the ArgoCD operator controller in the operators
namespace.
Checked the Kubernetes cluster status and created the ArgoCD controller using a
YAML file.
About Company: Amgen strives to serve patients by transforming the promise of science
and biotechnology into therapies that have the power to restore health or save lives.
Role and Responsibilities: As a DevOps Engineer, responsible for the design,
implementation, and maintenance of products and projects.
Gathered requirements from the Business Analyst, discussed them with the team, and
prepared a clarification log for open questions on the requirements.
Wrote SQL queries per the business requirements as part of the design.
Identified the tasks to be performed per the business requirements.
Designed tables and the required constraints per the business requirements.
Constructed Perl scripts and SQL scripts per the requirements and performed unit
testing on the scripts.
Delivered the scripts to the QA team for testing, using SVN as the code repository.
Attended daily client calls for status updates.
Education:
Master of Computer Applications, Osmania University, Hyderabad, Telangana, India