
ADITYA KASALA

SUMMARY:

With over 14 years of experience as a Build, Release, Deployment, Configuration Management (CM), and DevOps Engineer,
I specialize in AWS, Kubernetes, and Terraform. My expertise spans configuration management, automated
build and deployment processes, and release management.

 Experienced in various AWS services, including IAM, S3, EC2, EKS, Lambda, API Gateway, Route 53, VPC, subnets,
route tables, CloudWatch, and CloudTrail.
 Proficient in managing and administering AWS RDS, DynamoDB, and DocumentDB, including tasks such as
provisioning, configuring, and optimizing database instances for high availability, disaster recovery, and performance.
 Extensive experience in managing and automating cloud infrastructure using Terraform and CloudFormation.
 Experience in managing Kubernetes clusters for scalable and resilient container orchestration.
 Experience utilizing EKS Fargate for serverless compute, enabling scalable containerized applications without
managing servers.
 Expertise in implementing the GitOps deployment model for EKS clusters.
 Expertise in provisioning and managing cloud resources consistently and repeatably using IaC tools like Terraform
and CloudFormation.
 Experienced in continuous integration and continuous delivery tools like GitLab and Jenkins.
 Designed and deployed serverless functions with AWS Lambda to reduce operational overhead and improve scalability.
 Implemented monitoring solutions for insights into application performance, resource utilization, and system health
using Datadog, Prometheus, and Grafana.
 Experienced in setting up the ELK stack for centralized logging, enabling efficient troubleshooting and log analysis.
 Experienced in deploying and managing microservices using Docker containers for portability and consistency.
 Experienced in implementing cost-effective solutions in the AWS cloud using AWS Cost Explorer and AWS Trusted
Advisor.
 Extensive experience in setting up service mesh tools like Istio and Linkerd in Kubernetes clusters.
 Strong experience in monitoring and observability using Splunk and Datadog.
 Strong problem-solving skills with a track record of delivering efficient, reliable, and scalable systems.
 Experienced in extensive Python scripting in DevOps environments to automate deployment processes; proficient in
creating custom Python scripts for continuous integration and delivery pipelines, enabling efficient and scalable
software delivery practices (see the sketch after this list).
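As a minimal sketch of the kind of Python deployment-automation scripting described above (not any employer's actual code): a helper that publishes a build artifact to S3 under a content-hash key so repeated deploys are idempotent. The bucket name and prefix are placeholders, and it assumes boto3 with AWS credentials already configured.

    import hashlib
    import pathlib
    import sys

    import boto3  # assumes AWS credentials are configured in the environment

    s3 = boto3.client("s3")

    def publish_artifact(path: str, bucket: str, prefix: str) -> str:
        """Upload a build artifact under a content-hash key so re-uploads are idempotent."""
        artifact = pathlib.Path(path)
        data = artifact.read_bytes()
        digest = hashlib.sha256(data).hexdigest()[:12]
        key = f"{prefix}/{artifact.stem}-{digest}{artifact.suffix}"
        s3.put_object(Bucket=bucket, Key=key, Body=data)
        return key

    if __name__ == "__main__":
        # Usage: python publish_artifact.py app.zip  (bucket/prefix below are placeholders)
        print(publish_artifact(sys.argv[1], "my-release-bucket", "builds"))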

Education:

 Bachelor's in Instrumentation Engineering, JNTU, India - 2007


 Master’s in Electrical Engineering, Northwestern Polytechnic University, California - 2010

SKILL SET:

Source Code Management: Subversion, Perforce, ClearCase, Git
Configuration Management: Ansible, Chef, Puppet
Build/Release Management: Ant, Maven, Gradle, UCD, CruiseControl, AnthillPro
Change/Defect Management: ClearQuest, JIRA, Bugzilla
Scripts: Perl, Python, Unix shell scripting, Go
Web/Application Servers: Tomcat, WebLogic
Languages: C, C++, Java, XML, HTML, CSS
Operating Systems: Windows Server 2003, UNIX (Solaris), Windows XP/NT/2000/9x, Linux, MS-DOS
Development Tools: Eclipse, various IDEs
Databases: Oracle, MySQL, Cassandra
Cloud Computing Platforms: AWS, Azure, GCP
AWS Services: EC2, S3, Route 53, VPC, RDS, IAM, CloudWatch, DynamoDB, DocumentDB
CI/CD: GitLab, GitHub Actions, ArgoCD, Jenkins
GitOps Tools: ArgoCD, Flux
Container Orchestration: Kubernetes, AWS ECS, OpenShift, AKS
Serverless: API Gateway, Lambda, Elastic Beanstalk, Fargate
Messaging: RabbitMQ, Kafka, SNS, SQS
Networking: VPC, Subnets, Route Tables, NAT Gateways, Transit Gateways, Direct Connect
Monitoring & Observability: Splunk, Datadog, Grafana, Prometheus
IaC: Terraform, CloudFormation

PROFESSIONAL WORK EXPERIENCE:


Role: Sr DevOps Engineer / SRE
OKTA June’2024 – Jan’2025
Okta, Inc. (formerly SaaSure Inc.) is an American identity and access management company based in San Francisco. It
provides cloud software that helps companies manage and secure user authentication into applications, and helps developers
build identity controls into applications, websites, web services, and devices. Okta sells six services, including
a single sign-on service that allows users to log into a variety of systems using a single centralized process. For example,
the company claims the ability to log into Gmail, Workday, Salesforce, and Slack with one login. It also offers API
authentication services.

 Developed and maintained interactive dashboards in Grafana to visualize and monitor application performance
metrics, enhancing data-driven decision-making.
 Integrated Grafana with various data sources, including Prometheus, to create real-time monitoring solutions for
system performance and availability.
 Configured alerting mechanisms in Grafana, enabling proactive incident management and reducing response times to
system anomalies.
 Implemented Prometheus for metrics collection and monitoring, enabling detailed analysis of system performance and
reliability.
 Utilized Terraform for infrastructure as code (IaC) to create various resources in AWS; also created Terraform
templates to spin up EKS clusters and install add-ons.
 Created and maintained infrastructure using Terraform, resulting in a more streamlined and efficient deployment
process.
 Implemented Disaster Recovery solutions using Terraform IaC tools to automate deployment across regions.
 Designed and implemented automated infrastructure provisioning using tools like Terraform and Ansible, reducing
deployment time and ensuring consistent environments.
 Utilized Terraform to provision and manage Vault infrastructure, such as clusters, storage, and networking
components.
 Designed and implemented scalable and resilient infrastructure on AWS using EKS and Fargate.
 Designed and configured Prometheus exporters to gather metrics from various services and applications, ensuring
comprehensive monitoring coverage (see the sketch after this list).
 Managed and administered AWS RDS instances for relational databases such as MySQL, PostgreSQL, and SQL
Server, ensuring high availability, scalability, and performance optimization.
 Deployed infrastructure on AWS utilizing services such as EC2, RDS, VPC, Route 53, Direct Connect, IAM,
CloudFormation, AWS OpsWorks, Elastic Beanstalk, S3, Glacier, and CloudWatch, covering managed networking,
security, automation, cloud storage, and monitoring.
 Working knowledge of deploying CI/CD systems on Kubernetes container environments, utilizing Kubernetes and
Docker as the runtime environment for build, test, and deployment stages.
 Designed and automated AWS Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and SaaS capabilities,
including virtual machines, container services, virtual networks, and cloud services.
 Optimized Prometheus queries and dashboards, improving performance and response times for large-scale data sets.
 Automated the deployment of Prometheus using Kubernetes, ensuring scalable and resilient monitoring solutions.
 Migrated Dashboards & alerts between different environments.
 Designed detailed Grafana dashboards to visualize essential Kubernetes metrics (e.g., CPU/memory utilization,
request latency, deployment rollouts) for effective monitoring.
 Configured Prometheus to collect metrics from Kubernetes deployments, pods, nodes, and the API server using
specific scraping targets.
 Created Prometheus alerting rules to monitor Kubernetes resources, including high CPU/memory usage, pod restarts,
and container crashes.
 Deployed and managed Istio in production-grade Kubernetes clusters to enhance service discovery, traffic
control, and security.
 Configured ReplicaSets, DaemonSets, Deployments, and Services on the Kubernetes cluster to host containerized
microservices.
 Integrated Alertmanager with Prometheus to route and group alerts based on severity and source.
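As a hedged illustration of the exporter work above (not Okta's actual code), a custom Prometheus exporter in Python might look like the following; the metric name and collection logic are hypothetical, and it assumes the prometheus_client library.

    import time

    from prometheus_client import Gauge, start_http_server  # pip install prometheus-client

    # Hypothetical metric; a real exporter would read from the service being monitored.
    QUEUE_DEPTH = Gauge("app_queue_depth", "Current depth of the application work queue")

    def read_queue_depth() -> int:
        # Placeholder: in practice this would query the service or its admin API.
        return 0

    def collect() -> None:
        QUEUE_DEPTH.set(read_queue_depth())

    if __name__ == "__main__":
        start_http_server(9100)  # Prometheus scrapes http://host:9100/metrics
        while True:
            collect()
            time.sleep(15)  # refresh interval between scrapes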

Client: CHARTER Communications May’ 2015 – May’ 2024


Role: Principal DevOps Engineer / Kubernetes Admin

Charter Communications is an American cable telecommunications company that offers its services to consumers
and businesses under the Charter Spectrum brand. Providing services to 5.9 million customers in 29 states, it is the
fourth-largest cable operator in the United States by subscribers, behind Comcast, Time Warner Cable, and Cox
Communications, and the tenth-largest telephone provider by residential subscriber lines.

Responsibilities
 Utilized Terraform for infrastructure as code (IaC) to create various resources in AWS; also created Terraform
templates to spin up EKS clusters and install add-ons.
 Designed and implemented scalable and resilient infrastructure on AWS using EKS and Fargate.
 Implemented Karpenter for optimizing Kubernetes cluster resource usage and cost-effective scaling.
 Developed and maintained CI/CD pipelines using GitLab CI for automated build, test, and deployment processes.
 Designed and maintained solution architecture documentation, technical designs, and implementation plans to ensure
clarity and alignment with business objectives.
 Provided technical leadership by evaluating and recommending optimal cloud solutions on AWS for scalability and
security.
 Designed and implemented Kubernetes-based solutions using Amazon EKS to orchestrate microservices architecture
for enhanced operational efficiency.
 Automated infrastructure provisioning and management using Terraform, enabling consistent and repeatable
deployments.
 Implemented GitOps practices with ArgoCD to streamline application deployments and manage cluster state
effectively.
 Managed and deployed Amazon WorkSpaces for secure, scalable, and cost-effective virtual desktop environments.
 Configured AutoStop and AlwaysOn WorkSpaces to optimize cost and performance based on user needs.
 Monitored WorkSpaces performance and availability using Amazon CloudWatch Metrics and forwarded metrics to
Datadog.
 Integrated AWS WorkSpaces with Active Directory (AD) for seamless user authentication and access control.

 Monitored application performance and system health using Datadog, ensuring proactive incident management and
root cause analysis.
 Collaborated with multi-disciplinary teams to ensure alignment with enterprise architecture principles and
frameworks.
 Evaluated and optimized cost-effectiveness of proposed solutions by analyzing AWS resource utilization and
recommending improvements.
 Ensured compliance with security standards by implementing IAM policies, securing Kubernetes clusters, and
leveraging AWS services like Secrets Manager for sensitive data.
 Facilitated cross-functional collaboration with stakeholders to gather requirements, develop project roadmaps, and
align solutions with business goals.
 Integrated ArgoCD for continuous delivery, enabling seamless application deployment and version control in
Kubernetes.
 Deployed RabbitMQ as a StatefulSet in EKS; also configured and maintained RabbitMQ for reliable messaging
between microservices.
 Deployed serverless functions using AWS Lambda to reduce operational overhead and improve scalability.
 Configured API Gateway to create and manage APIs, providing secure and scalable access to backend services.
 Implemented Fluentd in the Kubernetes cluster to stream container logs to Splunk, and created dashboards in Splunk
to visualize application health.
 Provisioned EKS Kubernetes clusters using CAPI (Cluster API), CAPA (Cluster API Provider for AWS), and ArgoCD
in a declarative GitOps workflow.
 Deployed Helm charts to the Kubernetes cluster the GitOps way using ArgoCD.
 Configured ReplicaSets, DaemonSets, Deployments, and Services on the Kubernetes cluster to host containerized
microservices.
 Provisioned EKS clusters using Terraform modules.
 Deployed and configured Istio and Linkerd as a service mesh to enhance security, observability, and traffic
management within Kubernetes clusters.
 Designed and deployed microservices architecture using Istio for secure and efficient service-to-service
communication.
 Strong understanding of sidecar proxy architecture and Envoy integration.
 Hands-on experience with Kubernetes-based service mesh deployments.
 Deployed and managed Istio in production-grade Kubernetes clusters to enhance service discovery, traffic
control, and security.
 Implemented security best practices, including network policies and role-based access controls (RBAC) within
Kubernetes.
 Collaborated with cross-functional teams to gather requirements, design solutions, and ensure successful project
delivery.
 Documented infrastructure designs, deployment procedures, and operational guidelines to support knowledge sharing
and onboarding of new team members.
 Implemented Canary and Blue/Green deployment strategies in the AWS cloud.
 Involved in troubleshooting various pod-related and networking (Calico) issues in the Kubernetes cluster.
 Created A, CNAME, and alias records in Route 53 and configured various routing policies, including simple, failover,
weighted, and latency-based routing (see the boto3 sketch after this list).
 Implemented a robust open-source monitoring stack leveraging Prometheus, Grafana, and Alertmanager to obtain
comprehensive insights into the health and performance of Kubernetes clusters and applications.
 Configured Prometheus to collect metrics from Kubernetes deployments, pods, nodes, and the API server using
specific scraping targets.
 Created Prometheus alerting rules to monitor Kubernetes resources, including high CPU/memory usage, pod restarts,
and container crashes.
 Designed detailed Grafana dashboards to visualize essential Kubernetes metrics (e.g., CPU/memory utilization,
request latency, deployment rollouts) for effective monitoring.
 Integrated Alertmanager with Prometheus to route and group alerts based on severity and source.
 Administered MongoDB and DocumentDB for NoSQL database solutions, focusing on data storage, retrieval, and
performance tuning.
 Managed and administered AWS RDS instances for relational databases such as MySQL, PostgreSQL, and SQL
Server, ensuring high availability, scalability, and performance optimization.
 Implemented backup and recovery strategies for AWS RDS databases using automated snapshots and maintained
disaster recovery plans.
 Monitored and troubleshot AWS RDS performance using Amazon CloudWatch metrics, Enhanced Monitoring,
and Performance Insights.
 Implemented DynamoDB Streams for real-time data processing and change capture, enabling event-driven
architectures (see the Lambda sketch after this list).
 Leveraged Python for scripting and automation tasks, including infrastructure management, deployment automation,
and the development of custom DevOps tools.
 Implemented monitoring solutions in EKS using Datadog for insights into application performance, resource
utilization, and system health.
 Strong experience in monitoring & observability using Splunk & Datadog.
 Installed Datadog agents and ensured log forwarding from CloudWatch to Datadog.
 Designed API monitoring metrics using service container logs on Datadog.
 Set up dashboards on Datadog, including APM metrics, system metrics, monitoring, and alerts.
 Strong experience in operating production web applications like spectrum.net, spectrumbusiness.net &
spectrumenterprise.net as a 24/7 production support engineer.
 Strong experience handling production issues as a first responder for production-grade, customer-facing
applications, continuously monitoring during on-call rotations.
 Strong experience driving and leading production issues and providing RCA (root cause analysis) and DOA
(department of authority) reports to leadership, with data metrics on impact and solutions for mitigating future risk.
 Extensive experience leading, participating in, and managing on-call rotations with tools like xMatters and
PagerDuty.
 Integrated Datadog alerts with xMatters.
 Built custom Datadog dashboards based on business requirements.
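For illustration of the Route 53 work above, a weighted-routing UPSERT via boto3 might look like this sketch; the hosted zone ID, record name, IP, and helper name are placeholders, not production values.

    import boto3

    route53 = boto3.client("route53")

    def upsert_weighted_a_record(zone_id: str, name: str, ip: str, set_id: str, weight: int) -> None:
        """UPSERT a weighted A record, e.g. to shift a fraction of traffic to a canary."""
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Comment": "weighted routing for canary rollout",
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": name,
                        "Type": "A",
                        "SetIdentifier": set_id,  # distinguishes records sharing the same name
                        "Weight": weight,         # relative share of traffic for this record
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip}],
                    },
                }],
            },
        )

    # Hypothetical usage: send roughly 10% of traffic to a canary target.
    # upsert_weighted_a_record("Z123EXAMPLE", "app.example.com", "203.0.113.10", "canary", 10)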
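Similarly, a minimal sketch of the event-driven DynamoDB Streams processing mentioned above: a Python Lambda handler wired to a stream trigger. The downstream action is hypothetical, and the batchItemFailures return assumes the ReportBatchItemFailures setting is enabled on the trigger.

    def handler(event, context):
        """Lambda handler invoked by a DynamoDB Streams trigger (NEW_AND_OLD_IMAGES view)."""
        for record in event["Records"]:
            if record["eventName"] in ("INSERT", "MODIFY"):
                new_image = record["dynamodb"].get("NewImage", {})
                # Placeholder downstream action; a real handler might publish to SNS or Kafka.
                print(f"change captured: keys={record['dynamodb'].get('Keys')} image={new_image}")
            elif record["eventName"] == "REMOVE":
                print(f"item deleted: keys={record['dynamodb'].get('Keys')}")
        return {"batchItemFailures": []}  # report no partial failures in this sketch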
Environment: EC2, S3, IAM, VPC, CloudWatch, CloudFormation, Terraform, SNS, SQS, EBS, Route 53, ELB,
ALB, Ansible, shell scripting, Docker, Maven, Ant, Jenkins, GitLab CI, Helm, Node.js, AppDynamics, Instana, Splunk,
Zuul router, Eureka discovery services, Kubernetes, Linkerd, Calico, Amazon Linux, CentOS Linux, Rancher,
EKS, Karpenter, Datadog, Prometheus, Grafana, Kafka, Python, RDS, Azure.

Client: Lowe's Companies, Inc. October’14 – April’15


Role: Sr. Devops Engineer

Lowe's Companies, Inc., often shortened to Lowe's, is an American retail company specializing in home improvement.
Headquartered in Mooresville, North Carolina, the company operates a chain of retail stores in the United
States and Canada. As of February 2021, Lowe's and its related businesses operated 2,197 home improvement and
hardware stores in North America.
Lowe's is the second-largest hardware chain in the United States (previously the largest in the U.S. until surpassed by The
Home Depot in 1989) behind rival The Home Depot and ahead of Menards. It is also the second-largest hardware chain in
the world, again behind The Home Depot but ahead of European retailers Leroy Merlin, B&Q, and OBI.
Responsibilities
 Implemented and maintained a highly automated build and deployment process.
 Created an automated build process for Rational Team Concert using Python and Selenium scripts.
 Integrated the automated build scripts on Jenkins for driving daily and nightly builds.
 Installed the UCD server and agents on the required boxes to automate the deployment process.
 Automated the deployment process using UCD (UrbanCode Deploy) across various environments, i.e., Dev, IST, QA,
and Perf.
 Created component-process and application-process templates, making them reusable for other applications.
 Created Ant targets for publishing artifacts to UCD's CodeStation for the required deployment components.
 Worked on the end-to-end automation process with various teams, such as the WebSphere middleware team, to
understand the manual deployment process and drive it through automation.
 Created UCD components that publish and copy artifacts to multiple servers, and made them generic, reusable
components for any application.
Environment: Jenkins, Selenium, Python, UCD, Ant

Client: JPMorgan Chase December’10 – September’14


Role: SCM, Build/Release Engineer

JPMorgan Chase & Co. is an American multinational banking corporation offering securities, investment, and retail
banking services. It is the largest bank in the United States by assets and market capitalization. It is a major provider of
financial services, with assets of $2 trillion, and according to Forbes magazine it is the world's largest public company
based on a composite ranking.

Responsibilities
 Implemented and maintained a highly automated build and deployment process.
 Led the application teams in adopting best practices for source code management and traceability.
 Assisted with supporting source code management tools and automating builds with Maven.
 Ensured proper management of the product release life cycle.
 Developed deployment plans and schedules for the Change Review meeting.
 Responsible for maintaining and managing the software configuration across various environments.
 Responsible for maintaining the integrity between development, test, and production environments.
 Maintained Dev/QC/PROD application environments to ensure all business rules, print logic and compliance issues
are well-managed and documented prior to pushing to production.
 Worked with the development team to resolve code and integration issues while maintaining the integrity of various
environments.
 Created deployment notes along with the development team and released the deployment instructions to
Application Support.
 Maintained Defect Fix Deployments and documented the deployed files in the appropriate Environment Migration
log.
 Automated the build & deploy process using JENKINS continuous integration tool.
 Worked with other dependency teams to resolve environment and configuration issues.
 Maintained and tracked all configuration files in the Dev/IST/QA regions.
 Responsible for preparing configurations and code revisions for PROD readiness.
 Responsible for leading the weekly build and deployment windows to ensure a smooth deployment process across
all environments.
 Responsible for sanity tests after each code drop to the Dev/IST/QA regions and for tracking defects in every week's
QA drop.
 Responsible for automated hotfix deployments using Jenkins jobs in the Dev/IST/QA regions (see the sketch below).
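A minimal sketch of triggering such a parameterized Jenkins hotfix job from Python; the URL, job name, and parameter names are placeholders, and it assumes the requests library and a Jenkins user API token.

    import requests  # pip install requests

    JENKINS_URL = "https://jenkins.example.com"  # placeholder host
    JOB = "hotfix-deploy"                        # hypothetical job name

    def trigger_hotfix(env: str, build_label: str, user: str, api_token: str) -> None:
        """POST to Jenkins' buildWithParameters endpoint to queue a hotfix deployment."""
        resp = requests.post(
            f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
            auth=(user, api_token),  # API-token auth avoids the CSRF crumb requirement
            params={"ENV": env, "BUILD_LABEL": build_label},
            timeout=30,
        )
        resp.raise_for_status()  # Jenkins returns 201 with a queue-item Location header

    # Example: trigger_hotfix("QA", "release-1.4.2-hotfix3", "builduser", "token")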
Environment: Subversion, Maven, Ant, Jenkins, W2K/NT, Windows 2003, UNIX, Sun Solaris, HP-UX, Agile, Mercury
Quality Center, Apache Tomcat, Java, WebLogic, Oracle.
