AWS DevOps Engineer Exam Guide
AWS CERTIFIED DEVOPS ENGINEER PROFESSIONAL
TABLE OF CONTENTS
INTRODUCTION 7
AWS CERTIFIED DEVOPS ENGINEER PROFESSIONAL EXAM OVERVIEW 9
Exam Details 10
Exam Domains 11
Exam Domain I: SDLC Automation 12
Exam Domain II: Configuration Management and IaC 14
Exam Domain III: Resilient Cloud Solutions 15
Exam Domain IV: Monitoring and Logging 17
Exam Domain V: Incident and Event Response 19
Exam Domain VI: Security and Compliance 20
Old DOP-C01 vs the New DOP-C02 Exam Version 22
Exam Scoring System 23
Related Exam Topics 24
Excluded Exam Topics 27
Exam Benefits 27
AWS CERTIFIED DEVOPS ENGINEER PROFESSIONAL EXAM - STUDY GUIDE AND TIPS 28
Study Materials 28
AWS Services to Focus On 30
Common Exam Scenarios 31
Validate Your Knowledge 37
Sample Question 1 38
Sample Question 2 40
Domain 1: Software Development Life Cycle (SDLC) Automation 44
Overview 45
What is DevOps? 46
A Brief History of the DevOps Exam in AWS 48
Why Automate? 49
Types of Blue Green Deployment via ELB, Route 53, Elastic Beanstalk 50
AWS Lambda Function Alias Traffic Shifting 55
Basic Blue/Green Deployment using Route 53 58
AWSCodeCommitFullAccess, AWSCodeCommitPowerUser, AWSCodeCommitReadOnly - Permissions 60
Lifecycle Event Hook Availability (CodeDeploy Concept) 61
Automatically Run CodeBuild Tests After a Developer Creates a CodeCommit Pull Request 64
Managing Artifacts in AWS CodeBuild and CodePipeline 67
EC2 Instance Health Check vs ELB Health Check vs Auto Scaling and Custom Health Check 390
Elastic Beanstalk vs CloudFormation vs CodeDeploy 393
Service Control Policies vs IAM Policies 395
FINAL REMARKS AND TIPS 396
ABOUT THE AUTHORS 398
INTRODUCTION
As more companies build their DevOps practices, there is an ever-growing demand for certified IT professionals who can handle agile software development, configuration management, task automation, and continuous integration/continuous delivery (CI/CD). This Study Guide and Cheat Sheets eBook for the AWS Certified DevOps Engineer - Professional exam aims to equip you with the necessary knowledge and practical skill sets needed to pass the latest version of the AWS Certified DevOps Engineer – Professional exam.
This eBook contains the essential concepts, exam domains, exam tips, sample questions, cheat sheets, and other relevant information about the AWS Certified DevOps Engineer – Professional exam. This study guide begins with a presentation of the exam structure, giving you insight into the question types, exam domains, scoring scheme, and the list of benefits you'll receive once you pass the exam. We used the official AWS exam guide to structure the contents of this guide, where each section discusses a particular exam domain. Various DevOps concepts, related AWS services, and technical implementations are covered to give you an idea of what to expect on the actual exam.
Don't forget to read the boxed "exam tips" (like this one) scattered throughout the eBook, as these are the key concepts that you will likely encounter on your test. After covering the six domains, we have added a bonus section containing a curated list of AWS Cheat Sheets to fast-track your review. The last part of this guide includes a collection of articles that compare two or more similar AWS services to supplement your knowledge.
The AWS Certified DevOps Engineer - Professional certification exam is a difficult test to pass; therefore, anyone who wants to take it must allocate ample time for review. The exam registration cost is not cheap, which is why we spent considerable time and effort to ensure that this study guide provides you with the essential and relevant knowledge to increase your chances of passing the DevOps exam.
Note: This eBook is meant to be just a supplementary resource when preparing for the exam. We highly recommend working on hands-on sessions and practice exams to further expand your knowledge and improve your test-taking skills.
AWS CERTIFIED DEVOPS ENGINEER PROFESSIONAL EXAM OVERVIEW

This professional-level AWS certification exam validates your technical expertise in provisioning, operating, and managing distributed systems and services on the AWS Cloud. It also verifies your ability to complete the following DevOps tasks:
● Implement and manage continuous delivery systems and methodologies on AWS.
● Implement and automate security controls, governance processes, and compliance validation.
● Define and deploy monitoring, metrics, and logging systems on AWS.
● Implement systems that are highly available, scalable, and self-healing on AWS.
● Design, manage, and maintain tools to automate operational processes.
Before we discuss the details of the new exam, it's important to know the history of this certification test to better understand the changes it entails. We will go back in time and revisit the history of the AWS Certified DevOps Engineer Professional exam and other exam-related information.
Amazon Web Services (AWS) started its Global Certification Program in 2013, about a decade ago. The primary purpose of this program is to validate the technical skills and knowledge required for building secure and reliable cloud-based applications using the AWS Cloud. By passing an AWS Certification exam, IT professionals can prove their expertise and knowledge in the AWS Cloud to their current employers or to prospective companies they wish to apply to. AWS unveiled the Professional and Specialty-level certifications in an effort to expand its certification program, and it continuously releases new updates.
These Professional-level exams have covered various domains, namely monitoring, security, SDLC, Infrastructure as Code (IaC), data analytics, advanced networking, machine learning, and many others. New and updated versions of AWS certification exams are released on a regular basis to include the new services offered by AWS as well as to incorporate new knowledge areas.
There are two Professional-level exams offered by AWS: the AWS Certified Solutions Architect – Professional and the AWS Certified DevOps Engineer – Professional. The first version of the AWS Certified Solutions Architect Professional exam (SAP-C00) was released in May 2014. This was followed by the first version of the AWS Certified DevOps Engineer Professional exam (DOP-C00) in February 2015. After 4 years, an updated version of the AWS Certified DevOps Engineer — Professional certification was launched in February 2019 with an exam code of DOP-C01.
After another 4 years, the AWS Certification and Training team released yet another version of this certification test with an exam code of DOP-C02. The latest version of the AWS Certified DevOps Engineer — Professional certification exam was released on March 7, 2023. Based on this trend, it can be assumed that the next version of the DevOps Professional exam will arrive around 2025 or 2026 with an exam code of DOP-C03.
Exam Details
The AWS Certified DevOps Engineer Professional certification is intended for IT professionals who perform a Solutions Architect or DevOps role and have substantial hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on the AWS platform. It is composed of scenario-based questions that can either be in multiple-choice or multiple-response format. The first question type has one correct answer and three incorrect responses, while the latter has two or more correct responses out of five or more options. You can take the exam at a local testing center or online from the comfort of your home.
Exam Code: DOP-C02
Release Date: March 2023
Prerequisites: None
No. of Questions: 75
Score Range: 100 - 1000
Passing Score: 750/1000
Time Limit: 3 hours (180 minutes)
Format: Scenario-based. Multiple choice/multiple answers.
Delivery Method: Testing center or online proctored exam.
Don't be confused if you see in your Pearson VUE booking that the duration is 190 minutes, since it includes an additional 10 minutes for reading the Non-Disclosure Agreement (NDA) at the start of the exam and the survey at the end of it.
Exam Domains
The AWS Certified DevOps Engineer Professional (DOP-C02) exam has 6 different domains, each with a corresponding weight and topic coverage. The exam domains are as follows:

● Exam Domain I: SDLC Automation – 22%
● Exam Domain II: Configuration Management and IaC – 17%
● Exam Domain III: Resilient Cloud Solutions – 15%
● Exam Domain IV: Monitoring and Logging – 15%
● Exam Domain V: Incident and Event Response – 14%
● Exam Domain VI: Security and Compliance – 17%

The list of exam domains can be found on the official Exam Guide for the AWS Certified DevOps Engineer - Professional exam. Each exam domain is comprised of several task statements. A task statement is a sub-category of the exam domain that contains the required cloud concepts, knowledge, and skills for you to accomplish a particular task or activity in AWS.
Exam Domain I: SDLC Automation

Task Statement 1: Implement CI/CD pipelines.

Knowledge of:
● Software development lifecycle (SDLC) concepts, phases, and models
● Pipeline deployment patterns for single- and multi-account environments

Skills in:
● Configuring code, image, and artifact repositories
● Using version control to integrate pipelines with application environments
● Setting up build processes (for example, AWS CodeBuild)
● Managing build and deployment secrets (for example, AWS Secrets Manager, AWS Systems Manager Parameter Store)
● Determining appropriate deployment strategies (for example, AWS CodeDeploy)
Task Statement 2: Integrate automated testing into CI/CD pipelines.

Knowledge of:
● Different types of tests (for example, unit tests, integration tests, acceptance tests, user interface tests,
security scans)
● Reasonable use of different types of tests at different stages of the CI/CD pipeline
Skills in:
● Running builds or tests when generating pull requests or code merges (for example, AWS CodeCommit,
CodeBuild)
● Running load/stress tests, performance benchmarking, and application testing at scale
● Measuring application health based on application exit codes
● Automating unit tests and code coverage
● Invoking AWS services in a pipeline for testing
Task Statement 3: Build and manage artifacts.

Knowledge of:
● Artifact use cases and secure management
● Methods to create and generate artifacts
● Artifact lifecycle considerations
Skills in:
● Creating and configuring artifact repositories (for example, AWS CodeArtifact, Amazon S3, Amazon
Elastic Container Registry [Amazon ECR])
● Configuring build tools for generating artifacts (for example, CodeBuild, AWS Lambda)
● Automating Amazon EC2 instance and container image build processes (for example, EC2 Image
Builder)
Task Statement 4: Implement deployment strategies for instance, container, and serverless environments.
Knowledge of:
● Deployment methodologies for various platforms (for example, Amazon EC2, Amazon Elastic Container
Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS], Lambda)
● Application storage patterns (for example, Amazon Elastic File System [Amazon EFS], Amazon S3,
Amazon Elastic Block Store [Amazon EBS])
● Mutable deployment patterns in contrast to immutable deployment patterns
● Tools and services available for distributing code (for example, CodeDeploy, EC2 Image Builder)
Skills in:
● Configuring security permissions to allow access to artifact repositories (for example, AWS Identity and
Access Management [IAM], CodeArtifact)
● Configuring deployment agents (for example, CodeDeploy agent)
● Troubleshooting deployment issues
● Using different deployment methods (for example, blue/green, canary)
Exam Domain II: Configuration Management and IaC

Task Statement 1: Define cloud infrastructure and reusable components to provision and manage systems throughout their lifecycle.
Knowledge of:
Skills in:
● Composing and deploying IaC templates (for example, AWS Serverless Application Model [AWS SAM],
AWS CloudFormation, AWS Cloud Development Kit [AWS CDK])
● Applying AWS CloudFormation StackSets across multiple accounts and AWS Regions
● Determining optimal configuration management services (for example, AWS Systems Manager, AWS
Config, AWS AppConfig)
● Implementing infrastructure patterns, governance controls, and security standards into reusable IaC
templates (for example, AWS Service Catalog, CloudFormation modules, AWS CDK)
Task Statement 2: Deploy automation to create, onboard, and secure AWS accounts in a multi-account/multi-Region environment.
Knowledge of:
● AWS account structures, best practices, and related AWS services
Skills in:
● Standardizing and automating account provisioning and configuration
● Creating, consolidating, and centrally managing accounts (for example, AWS Organizations, AWS
Control Tower)
● Applying IAM solutions for multi-account and complex organization structures (for example, SCPs,
assuming roles)
● Implementing and developing governance and security controls at scale (AWS Config, AWS Control
Tower, AWS Security Hub, Amazon Detective, Amazon GuardDuty, AWS Service Catalog, SCPs)
Task Statement 3: Design and build automated solutions for complex tasks and large-scale environments.
Knowledge of:
● AWS services and solutions to automate tasks and processes
● Methods and strategies to interact with the AWS software-defined infrastructure
Skills in:
● Automating system inventory, configuration, and patch management (for example, Systems Manager,
AWS Config)
● Developing Lambda function automation for complex scenarios (for example, AWS SDKs, Lambda,
AWS Step Functions)
● Automating the configuration of software applications to the desired state (for example, OpsWorks,
Systems Manager State Manager)
● Maintaining software compliance (for example, Systems Manager)
Exam Domain III: Resilient Cloud Solutions

Task Statement 1: Implement highly available solutions to meet resilience and business requirements.
Knowledge of:
● Multi-AZ and multi-Region deployments (for example, compute layer, data layer)
● Service Level Agreements (SLAs)
● Replication and failover methods for stateful services
● Techniques to achieve high availability (for example, Multi-AZ, multi-Region)
Skills in:
● Translating business requirements into technical resiliency needs
● Identifying and remediating single points of failure in existing workloads
● Enabling cross-Region solutions where available (for example, Amazon DynamoDB, Amazon RDS,
Amazon Route 53, Amazon S3, Amazon CloudFront)
● Configuring load balancing to support cross-AZ services
● Configuring applications and related services to support multiple Availability Zones and Regions while
minimizing downtime
Task Statement 2: Implement solutions that are scalable to meet business requirements.
Knowledge of:
● Appropriate metrics for scaling services
● Loosely coupled and distributed architectures
● Serverless architectures
● Container platforms
Skills in:
● Identifying and remediating scaling issues
● Identifying and implementing appropriate auto-scaling, load balancing, and caching solutions
● Deploying container-based applications (for example, Amazon ECS, Amazon EKS)
● Deploying workloads in multiple AWS Regions for global scalability
● Configuring serverless applications (for example, Amazon API Gateway, Lambda, AWS Fargate)
Task Statement 3: Implement automated recovery processes to meet RTO and RPO requirements.

Knowledge of:
● Disaster recovery concepts (for example, RTO, RPO)
● Backup and recovery strategies (for example, pilot light, warm standby)
● Recovery procedures
Skills in:
● Testing failover of Multi-AZ/multi-Region workloads (for example, Amazon RDS, Amazon Aurora, Route
53, CloudFront)
● Identifying and implementing appropriate cross-Region backup and recovery strategies (for example,
AWS Backup, Amazon S3, Systems Manager)
● Configuring a load balancer to recover from backend failure
Exam Domain IV: Monitoring and Logging

Task Statement 1: Configure the collection, aggregation, and storage of logs and metrics.
Knowledge of:
● How to monitor applications and infrastructure
● Amazon CloudWatch metrics (for example, namespaces, metrics, dimensions, and resolution)
● Real-time log ingestion
● Encryption options for at-rest and in-transit logs and metrics (for example, client-side and server-side,
AWS Key Management Service [AWS KMS])
● Security configurations (for example, IAM roles and permissions to allow for log collection)
Skills in:
● Securely storing and managing logs
● Creating CloudWatch metrics from log events by using metric filters
● Creating CloudWatch metric streams (for example, Amazon S3 or Amazon Data Firehose options)
● Collecting custom metrics (for example, using the CloudWatch agent)
● Managing log storage lifecycles (for example, S3 lifecycles, CloudWatch log group retention)
● Processing log data by using CloudWatch log subscriptions (for example, Kinesis, Lambda, Amazon
OpenSearch Service)
● Searching log data by using filter and pattern syntax or CloudWatch Logs Insights
● Configuring encryption of log data (for example, AWS KMS)
Task Statement 2: Audit, monitor, and analyze logs and metrics to detect issues.
Knowledge of:
● Anomaly detection alarms (for example, CloudWatch anomaly detection)
● Common CloudWatch metrics and logs (for example, CPU utilization with Amazon EC2, queue length
with Amazon RDS, 5xx errors with an Application Load Balancer)
● Amazon Inspector and common assessment templates
● AWS Config rules
● AWS CloudTrail log events
Skills in:
● Building CloudWatch dashboards and Amazon QuickSight visualizations
● Associating CloudWatch alarms with CloudWatch metrics (standard and custom)
● Configuring AWS X-Ray for different services (for example, containers, API Gateway, Lambda)
● Analyzing real-time log streams (for example, using Kinesis Data Streams)
● Analyzing logs with AWS services (for example, Amazon Athena, CloudWatch Logs Insights)
Task Statement 3: Automate monitoring and event management of an environment.

Knowledge of:
● Event-driven, asynchronous design patterns (for example, S3 Event Notifications or Amazon EventBridge events to Amazon Simple Notification Service [Amazon SNS] or Lambda)
● Capabilities of auto scaling a variety of AWS services (for example, EC2 Auto Scaling groups, RDS storage auto scaling, DynamoDB, ECS capacity provider, EKS autoscalers)
● Alert notification and action capabilities (for example, CloudWatch alarms to Amazon SNS, Lambda,
EC2 automatic recovery)
● Health check capabilities in AWS services (for example, Application Load Balancer target groups, Route
53)
Skills in:
● Configuring solutions for auto scaling (for example, DynamoDB, EC2 Auto Scaling groups, RDS storage
auto scaling, ECS capacity provider)
● Creating CloudWatch custom metrics and metric filters, alarms, and notifications (for example, Amazon
SNS, Lambda)
● Configuring S3 events to process log files (for example, by using Lambda), and deliver log files to
another destination (for example, OpenSearch Service, CloudWatch Logs)
● Configuring EventBridge to send notifications based on a particular event pattern
● Installing and configuring agents on EC2 instances (for example, AWS Systems Manager Agent [SSM
Agent], CloudWatch agent)
● Configuring AWS Config rules to remediate issues
● Configuring health checks (for example, Route 53, Application Load Balancer)
Exam Domain V: Incident and Event Response

Task Statement 1: Manage event sources to process, notify, and take action in response to events.
Knowledge of:
● AWS services that generate, capture, and process events (for example, AWS Health, EventBridge,
CloudTrail)
● Event-driven architectures (for example, fan out, event streaming, queuing)
Skills in:
● Integrating AWS event sources (for example, AWS Health, EventBridge, CloudTrail)
● Building event processing workflows (for example, Amazon Simple Queue Service [Amazon SQS],
Kinesis, Amazon SNS, Lambda, Step Functions)
Task Statement 2: Implement configuration changes in response to events.

Knowledge of:
● Fleet management services (for example, Systems Manager, AWS Auto Scaling)
● Configuration management services (for example, AWS Config)
Skills in:
● Applying configuration changes to systems
● Modifying infrastructure configurations in response to events
● Remediating a non-desired system state
Task Statement 3: Troubleshoot system and application failures.

Knowledge of:
● AWS metrics and logging services (for example, CloudWatch, X-Ray)
● AWS service health services (for example, AWS Health, CloudWatch, Systems Manager OpsCenter)
● Root cause analysis
Skills in:
● Analyzing failed deployments (for example, AWS CodePipeline, CodeBuild, CodeDeploy,
CloudFormation, CloudWatch synthetic monitoring)
● Analyzing incidents regarding failed processes (for example, auto-scaling, Amazon ECS, Amazon EKS)
Exam Domain VI: Security and Compliance

Task Statement 1: Implement techniques for identity and access management at scale.
Knowledge of:
● Appropriate usage of different IAM entities for human and machine access (for example, users, groups,
roles, identity providers, identity-based policies, resource-based policies, session policies)
● Identity federation techniques (for example, using IAM identity providers and AWS IAM Identity Center)
● Permission management delegation by using IAM permissions boundaries
● Organizational SCPs
Skills in:
● Designing policies to enforce least privilege access
● Implementing role-based and attribute-based access control patterns
● Automating credential rotation for machine identities (for example, Secrets Manager)
● Managing permissions to control access to human and machine identities (for example, enabling
multi-factor authentication [MFA], AWS Security Token Service [AWS STS], IAM profiles)
Task Statement 2: Apply automation for security controls and data protection.
Knowledge of:
● Network security components (for example, security groups, network ACLs, routing, AWS Network
Firewall, AWS WAF, AWS Shield)
● Certificates and public key infrastructure (PKI)
● Data management (for example, data classification, encryption, key management, access
controls)
Skills in:
● Automating the application of security controls in multi-account and multi-Region environments (for
example, Security Hub, Organizations, AWS Control Tower, Systems Manager)
● Combining security controls to apply defense in depth (for example, AWS Certificate Manager [ACM],
AWS WAF, AWS Config, AWS Config rules, Security Hub, GuardDuty, security groups, network ACLs,
Amazon Detective, Network Firewall)
● Automating the discovery of sensitive data at scale (for example, Amazon Macie)
● Encrypting data in transit and data at rest (for example, AWS KMS, AWS CloudHSM, ACM)
Task Statement 3: Implement security monitoring and auditing solutions.

Knowledge of:
● Security auditing services and features (for example, CloudTrail, AWS Config, VPC Flow Logs,
CloudFormation drift detection)
● AWS services for identifying security vulnerabilities and events (for example, GuardDuty,
Amazon Inspector, IAM Access Analyzer, AWS Config)
● Common cloud security threats (for example, insecure web traffic, exposed AWS access keys,
S3 buckets with public access enabled or encryption disabled)
Skills in:
● Implementing robust security auditing
● Configuring alerting based on unexpected or anomalous security events
● Configuring service and application logging (for example, CloudTrail, CloudWatch Logs)
● Analyzing logs, metrics, and security findings
Old DOP-C01 vs the New DOP-C02 Exam Version

The biggest exam domain is still the SDLC (Software Development Lifecycle) Automation domain, which retains its 22% exam coverage. The same goes for the Monitoring and Logging domain, which still has 15%. The Configuration Management and Infrastructure as Code (IaC) domain is down to 17% exam coverage from the previous 19%. The Incident and Event Response domain has a significant 4% decline, as it now only has 14% coverage, down from 18% in the previous version.
You will also notice that two exam domains have changed their names:
● The “High Availability, Fault Tolerance, and Disaster Recovery” domain has been renamed and is now called the “Resilient Cloud Solutions” domain.
● The “Policies and Standards Automation” domain is now “Security and Compliance.”
The concept of resiliency is related to High Availability, Fault Tolerance, and Disaster Recovery. This is the primary reason why AWS renamed this lengthy domain to “Resilient Cloud Solutions” for brevity. This exam domain's coverage slightly decreased from 16% to 15%.
Security in AWS can be implemented through IAM policies, Service Control Policies (SCPs), bucket policies, VPC endpoint policies, and other types of policies. The term “standards” is synonymous with the word “compliance” in the IT industry. The name of the Policies and Standards Automation exam domain was simplified and is now officially the Security and Compliance domain. It's interesting to note that in the previous DOP-C01 version, this domain had the lowest exam coverage at 10%, but it has now become the second largest exam domain of DOP-C02 with 17% coverage.
As you can see, the DevOps Pro exam now includes significantly more security-related topics based on its new exam domain content distribution. This means that you have to focus on the various security topics and security services offered by AWS.
Exam Scoring System

You can get a score from 100 to 1,000, with a minimum passing score of 750, when you take the DevOps Engineer Professional exam. AWS uses a scaled scoring model to equate scores across multiple exam forms that may have different difficulty levels. The complete score report will be sent to you by email after a few days. Right after you complete the actual exam, you'll immediately see a pass or fail notification on the testing screen. A “Congratulations! You have successfully passed...” message will be shown if you pass the exam.
Individuals who unfortunately do not pass the AWS exam must wait 14 days before they are allowed to retake
the exam. Fortunately, there is no hard limit on exam attempts until you pass the exam. Take note that on each
attempt, the full registration price of the AWS exam must be paid.
Within 5 business days of completing your exam, your AWS Certification Account will have a record of your complete exam results. The score report contains a table of your performance in each section/domain, which indicates whether you met the competency level required for that domain. AWS uses a compensatory scoring model, which means that you do not necessarily need to pass each individual section, only the overall examination. Each section has a specific score weighting that translates to the number of questions; hence, some sections have more questions than others. The score performance table highlights your strengths as well as the weaknesses that you need to improve on.
Related Exam Topics

The new AWS Certified DevOps Engineer – Professional exam (DOP-C02) is focused on the various tools, services, and knowledge areas that revolve around DevOps in AWS. The official exam guide provides a list of AWS services, general tools, and technologies that are grouped according to their primary functions. Keep in mind that even though some of these topics will likely be covered more than others on the exam, the placement or order of these exam topics/AWS services in this list is not an indication of their relative weight or importance.
The relevant exam topics that you should be familiar with on your upcoming DOP-C02 exam are:
Here is the list of relevant AWS services that are covered in the AWS Certified DevOps Engineer – Professional (DOP-C02) exam based on the official exam guide. You should focus on these AWS services and their respective features for your upcoming test:
Developer Tools:
● AWS Cloud Development Kit (AWS CDK)
● AWS CloudShell
● AWS CodeArtifact
● AWS CodeBuild
● AWS CodeCommit
● AWS CodeDeploy
● Amazon CodeGuru
● AWS CodePipeline
● AWS Command Line Interface (AWS CLI)
● AWS Fault Injection Simulator
● AWS SDKs and Tools
● AWS X-Ray

Security, Identity, and Compliance:
● Amazon GuardDuty
● AWS Identity and Access Management (IAM)
● Amazon Inspector
● AWS Key Management Service (AWS KMS)
● Amazon Macie
● AWS Network Firewall
● AWS Resource Access Manager (AWS RAM)
● AWS Secrets Manager
● AWS Security Hub
● AWS Security Token Service (AWS STS)
● AWS Shield
● AWS IAM Identity Center
● AWS WAF
Storage:
● AWS Backup
● Amazon Elastic Block Store (Amazon EBS)
● AWS Elastic Disaster Recovery (AWS DRS)
● Amazon Elastic File System (Amazon EFS)
● Amazon FSx for Lustre
● Amazon FSx for NetApp ONTAP
● Amazon FSx for OpenZFS
● Amazon FSx for Windows File Server
● Amazon S3
● Amazon S3 Glacier
● AWS Storage Gateway
Excluded Exam Topics

Just a friendly reminder that the following AWS services and features do not represent every AWS offering that is excluded from the DOP-C02 exam content. This list is only a hint of the topics that are not covered on the AWS Certified DevOps Engineer — Professional exam and that you should not focus on:
● Machine Learning
● Internet-of-Things (IoT)
● Frontend development for mobile apps
● 12-factor app methodology
● AWS Direct Connect
Exam Benefits
If you successfully pass any AWS exam, you will be eligible for the following benefits:
● Exam Discount - You'll get a 50% discount voucher that you can apply to your recertification or any other exam you plan to pursue. To access your discount voucher code, go to the “Benefits” section of your AWS Certification Account, and apply the voucher when you register for your next exam.
● AWS Certified Store - All AWS-certified professionals will be given access to exclusive AWS Certified merchandise. You can get your store access from the “Benefits” section of your AWS Certification Account.
● Certification Digital Badges - You can showcase your achievements to your colleagues and employers with digital badges on your email signature, LinkedIn profile, or social media accounts. You can also show your digital badge to gain exclusive access to Certification Lounges at AWS re:Invent, regional Appreciation Receptions, and select AWS Summit events. To view your badges, simply go to the “Digital Badges” section of your AWS Certification Account.
● Eligibility to join AWS IQ - With the AWS IQ program, you can monetize your AWS skills online by providing hands-on assistance to customers around the globe. AWS IQ will help you stay sharp and be well-versed in various AWS technologies. You can work from the comfort of your home and decide when or where you want to work. Interested individuals must have an Associate, Professional, or Specialty AWS Certification and be over 18 years of age.
AWS CERTIFIED DEVOPS ENGINEER PROFESSIONAL EXAM - STUDY GUIDE AND TIPS

Generally, AWS recommends that you first take (and pass) both the AWS SysOps Administrator Associate and AWS Developer Associate certification exams before taking on this certification. Previously, it was a prerequisite to obtain the associate-level certifications before you were allowed to go for the professional level. In October 2018, AWS removed this requirement to provide customers with a more flexible approach to the certifications.
Study Materials
For virtual classes, you can attend the DevOps Engineering on AWS and Systems Operations on AWS classes since they will teach you concepts and practices that are expected to be in your exam.

1. Running Containerized Microservices on AWS
2. Microservices on AWS
3. Infrastructure as Code
4. Introduction to DevOps
5. Practicing Continuous Integration and Continuous Delivery on AWS
6. Jenkins on AWS
7. Blue/Green Deployments on AWS whitepaper
8. Development and Test on AWS
Almost all online training you need can be found on the AWS web page. One digital course that you should check out is the Exam Readiness: AWS Certified DevOps Engineer – Professional course.
This digital course contains lectures on the different domains of your exam, and it also provides a short quiz right after each lecture to validate what you have just learned.
Lastly, do not forget to study the AWS CLI, SDKs, and APIs. Since DevOps Pro is also an advanced certification for the Developer Associate, you need to have knowledge of programming and scripting in AWS. Go through the AWS documentation to review the syntax of the CloudFormation template, Serverless Application Model template, CodeBuild buildspec, CodeDeploy appspec, and IAM policy.
AWS Services to Focus On

Since this exam is a professional-level one, you should already have a deep understanding of the AWS services listed in our SysOps Administrator Associate and Developer Associate review guides. In addition, you should familiarize yourself with the following services since they commonly come up in the DevOps Pro exam:
1. AWS CloudFormation
2. AWS Lambda
3. Amazon CloudWatch
4. Amazon EventBridge
5. Amazon CloudWatch Alarms
6. AWS CodePipeline
7. AWS CodeDeploy
8. AWS CodeBuild
9. AWS CodeCommit
10. AWS Config
11. AWS Systems Manager
12. Amazon ECS
13. AWS Elastic Beanstalk
14. AWS CloudTrail
15. AWS Trusted Advisor
The FAQs provide a good summary for each service; however, the AWS documentation contains more detailed information that you'll need to study. These details will be the deciding factor in distinguishing the correct choice from the incorrect choices in your exam. To supplement your review of the services, we recommend that you take a look at Tutorials Dojo's AWS Cheat Sheets. Their contents are well-written and straight to the point, which will help reduce the time spent going through FAQs and documentation.
Common Exam Scenarios

Scenario: In an AWS Lambda application deployment, only 10% of the incoming traffic should be routed to the new version to verify the changes before eventually allowing all production traffic.

Solution: Set up a canary deployment for AWS Lambda. Create a Lambda alias pointed to the new version and set the weighted alias value for this alias to 10%.
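To make that solution more concrete, here is a minimal boto3 sketch of weighted alias routing for a Lambda canary; the function name, alias name, and version numbers are hypothetical placeholders, and error handling is omitted.

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish the new code as an immutable version (for example, this may return "2").
new_version = lambda_client.publish_version(FunctionName="my-function")["Version"]

# Keep the alias pointed at the stable version, but shift 10% of invocations
# to the new version through AdditionalVersionWeights.
lambda_client.update_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion="1",  # stable version still receives 90% of the traffic
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)

# After the new version is verified, promote it to receive all traffic.
lambda_client.update_alias(
    FunctionName="my-function",
    Name="live",
    FunctionVersion=new_version,
    RoutingConfig={"AdditionalVersionWeights": {}},
)
```

AWS CodeDeploy can also automate this same traffic shift (and the rollback) for you through its predefined Lambda canary deployment configurations.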
Scenario: In the event of an AWS Region outage, you have to make sure that both your application and database will still be running to avoid any service outages.

Solution: Create a copy of your deployment in the backup AWS Region. Set up an RDS read replica in the backup Region.
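The following boto3 sketch illustrates one way to stand up the database portion of that solution by creating a cross-Region read replica; the instance identifiers, Regions, ARN, and instance class are hypothetical placeholders.

```python
import boto3

# Call RDS in the *destination* (backup) Region and reference the source
# instance by ARN; RDS then replicates asynchronously across Regions.
rds_backup = boto3.client("rds", region_name="us-west-2")

rds_backup.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:app-db",
    SourceRegion="us-east-1",  # lets boto3 pre-sign the cross-Region request
    DBInstanceClass="db.r5.large",
    MultiAZ=True,
)

# During a Regional outage, the replica can be promoted to a standalone,
# writable instance that the recovered application stack points to.
# rds_backup.promote_read_replica(DBInstanceIdentifier="app-db-replica")
```

The promotion call is commented out because it would only be invoked during an actual failover, typically together with a Route 53 failover record that redirects traffic to the backup Region.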
Validate Your Knowledge

After your review, you should take some practice tests to measure your preparedness for the real exam. AWS offers a sample practice test for free, which you can find here. You can also opt to buy the longer AWS sample practice test at aws.training and use the discount coupon you received from any previously taken certification exam. Be aware, though, that the sample practice tests do not mimic the difficulty of the real DevOps Pro exam.
Therefore, we highly encourage using other mock exams, such as our very own AWS Certified DevOps Engineer Professional Practice Exam course, which contains high-quality questions with complete explanations of the correct and incorrect answers, visual images and diagrams, YouTube videos as needed, and reference links to the official AWS documentation as well as our cheat sheets and study guides. You can also pair our practice exams with our AWS Certified DevOps Engineer Professional Exam Study Guide eBook to further help in your exam preparations.
Sample Question 1
An application is hosted in an Auto Scaling group of Amazon EC2 instances with public IP addresses in a public subnet. The instances are configured with a user data script that fetches and installs the required system dependencies of the application from the Internet upon launch. A change was recently introduced to prohibit any Internet access from these instances to improve security, but after its implementation, the instances could no longer get the external dependencies. Upon investigation, all instances are properly running, but the hosted application is not starting up completely due to the incomplete installation.
Which of the following is the MOST secure solution to solve this issue and also ensure that the instances do not have public Internet access?
1. Download all of the external application dependencies from the public Internet and then store them in
an S3 bucket. Set up a VPC endpoint for the S3 bucket and then assign an IAM instance profile to the
instances in order to allow them to fetch the required dependencies from the bucket.
2. Deploy the Amazon EC2 instances in a private subnet and associate Elastic IP addresses on each of
them. Run a custom shell script to disassociate the Elastic IP addresses after the application has been
successfully installed and is running properly.
3. Use a NAT gateway to disallow any traffic to the VPC which originated from the public Internet. Deploy
the Amazon EC2 instances to a private subnet then set the subnet's route table to use the NAT gateway
as its default route.
4. Set up a brand new security group for the Amazon EC2 instances. Use a whitelist configuration to only
allow outbound traffic to the site where all of the application dependencies are hosted. Delete the
security group rule once the installation is complete. Use AWS Config to monitor the compliance.
Correct Answer: 1
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
There are two types of VPC endpoints: interface endpoints and gateway endpoints. You can create the type of VPC endpoint required by the supported service. S3 and DynamoDB use gateway endpoints, while most other services use interface endpoints.
You can use an S3 bucket to store the required dependencies and then set up a VPC endpoint to allow your EC2 instances to access the data without having to traverse the public Internet.
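As a rough illustration of that setup, the boto3 sketch below creates a gateway endpoint for S3 and attaches it to the private subnet's route table; the VPC ID, route table ID, and Region are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds a route for the S3 prefix list to the chosen route
# tables, so instances reach the bucket over the AWS network, not the Internet.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)

print(response["VpcEndpoint"]["VpcEndpointId"])
```

The instances would then need an instance profile whose IAM role allows read access (for example, s3:GetObject) on the dependency bucket, which is the other half of the correct option below.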
Hence, the correct answer is the option that says: Download all of the external application dependencies from the public Internet and then store them in an S3 bucket. Set up a VPC endpoint for the S3 bucket and then assign an IAM instance profile to the instances in order to allow them to fetch the required dependencies from the bucket.
The option that says: Deploy the Amazon EC2 instances in a private subnet and associate Elastic IP addresses on each of them. Run a custom shell script to disassociate the Elastic IP addresses after the application has been successfully installed and is running properly is incorrect because it is possible that the custom shell script may fail and the disassociation of the Elastic IP addresses might not be fully implemented, which will allow the EC2 instances to access the Internet.
The option that says: Use a NAT gateway to disallow any traffic to the VPC which originated from the public Internet. Deploy the Amazon EC2 instances to a private subnet then set the subnet's route table to use the NAT gateway as its default route is incorrect because although a NAT gateway can safeguard the instances from incoming traffic initiated from the Internet, it still permits them to send outgoing requests externally.
The option that says: Set up a brand new security group for the Amazon EC2 instances. Use a whitelist configuration to only allow outbound traffic to the site where all of the application dependencies are hosted.
Delete the security group rule once the installation is complete. Use AWS Config to monitor the compliance is incorrect because this solution has a high operational overhead since the actions are done manually. This is susceptible to human error, such as in the event that the DevOps team forgets to delete the security group rule. The use of AWS Config will just monitor and inform you about the security violation, but it won't do anything to remediate the issue.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
Sample Question 2
Due to the growth of its regional e-commerce website, the company has decided to expand its operations globally in the coming months. The REST API web services of the app are currently running in an Auto Scaling group of EC2 instances across multiple Availability Zones behind an Application Load Balancer. For its database tier, the website is using a single Amazon Aurora MySQL database instance in the AWS Region where the company is based. The company wants to consolidate and store the data of its offerings in a single data source for its product catalog across all regions. For data privacy compliance, it needs to ensure that the personal information of its users, as well as their purchases and financial data, are kept in their respective regions.
Which of the following options can meet the above requirements and entails the LEAST amount of change to the application?
1. Set up a new Amazon Redshift database to store the product catalog. Launch a new set of Amazon
DynamoDB tables to store the personal information and financial data of their customers.
2. Set up a DynamoDB global table to store the product catalog data of the e-commerce website. Use
regional DynamoDB tables for storing the personal information and financial data of their customers.
3. Set up multiple read replicas in your Amazon Aurora cluster to store the product catalog data. Launch
additional local Amazon Aurora instances in each AWS Region for storing the personal information
and financial data of their customers.
4. Set up multiple read replicas in your Amazon Aurora cluster to store the product catalog data. Launch a
new DynamoDB global table for storing the personal information and financial data of their customers.
Correct Answer: 3
An Aurora global database consists of one primary AWS Region where your data is mastered, and one read-only, secondary AWS Region. Aurora replicates data to the secondary AWS Region with a typical latency of
under a second. You issue write operations directly to the primary DB instance in the primary AWS Region. An Aurora global database uses dedicated infrastructure to replicate your data, leaving database resources available entirely to serve application workloads. Applications with a worldwide footprint can use reader instances in the secondary AWS Region for low-latency reads. In the unlikely event that your database becomes degraded or isolated in an AWS Region, you can promote the secondary AWS Region to take full read-write workloads in under a minute.
The Aurora cluster in the primary AWS Region where your data is mastered performs both read and write operations. The cluster in the secondary Region enables low-latency reads. You can scale up the secondary cluster independently by adding one or more DB instances (Aurora Replicas) to serve read-only workloads. For disaster recovery, you can remove and promote the secondary cluster to allow full read and write operations.
Only the primary cluster performs write operations. Clients that perform write operations connect to the DB cluster endpoint of the primary cluster.
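To illustrate how an existing Aurora cluster can be extended this way, here is a rough boto3 sketch that creates a global database from the primary cluster and adds a secondary Region; all identifiers, Regions, instance classes, and the engine version are hypothetical placeholders and must match what the primary cluster actually runs.

```python
import boto3

# 1) Wrap the existing primary cluster in a global database.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="ecommerce-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:ecommerce-primary",
)

# 2) Add a secondary cluster in another Region; Aurora replicates the shared
#    product catalog there with typically sub-second latency.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="ecommerce-secondary",
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.04.0",  # must match the primary cluster
    GlobalClusterIdentifier="ecommerce-global",
)

# 3) Give the secondary cluster a reader instance for low-latency local reads.
rds_secondary.create_db_instance(
    DBInstanceIdentifier="ecommerce-secondary-reader",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
    DBClusterIdentifier="ecommerce-secondary",
)
```

The Region-local personal and financial data described in the correct option below would instead live in separate, standalone Aurora clusters created in each Region.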
Hence, the correct answer is: Set up multiple read replicas in your Amazon Aurora cluster to store the product catalog data. Launch additional local Amazon Aurora instances in each AWS Region for storing the personal information and financial data of their customers.
The option that says: Set up a new Amazon Redshift database to store the product catalog. Launch a new set of Amazon DynamoDB tables to store the personal information and financial data of their customers is incorrect because this solution entails a significant overhead of refactoring your application to use Redshift instead of Aurora. Moreover, Redshift is primarily used as a data warehouse solution and is not suitable for OLTP or e-commerce websites.
The option that says: Set up a DynamoDB global table to store the product catalog data of the e-commerce website. Use regional DynamoDB tables for storing the personal information and financial data of their customers is incorrect because although the use of global and regional DynamoDB tables is acceptable, this solution still entails a lot of changes to the application. There is no assurance that the application can work with a NoSQL database, and even so, you have to implement a series of code changes in order for this solution to work.
The option that says: Set up multiple read replicas in your Amazon Aurora cluster to store the product catalog data. Launch a new DynamoDB global table for storing the personal information and financial data of their customers is incorrect because although the use of read replicas is appropriate, this solution still requires you to make a lot of code changes since you will use a different database to store your regional data.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.advantages
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html
At this point, you should already be very knowledgeable on the following topics:
1. Continuous Integration/Continuous Delivery (CI/CD)
2. Application Development
3. Automation
4. Configuration Management and Infrastructure as Code
5. Monitoring and Logging
6. Incident Mitigation and Event Response
7. Implementing Resilient Cloud Solutions
8. Security and Compliance
As an AWS DevOps practitioner, you shoulder a lot of roles and responsibilities. Many professionals in the industry have attained proficiency through continuous practice and by producing results of value. Therefore, you should properly review all the concepts and details that you need to learn so that you can also achieve what others have achieved.
The day before your exam, be sure to double-check the schedule, the location, and the items to bring for your exam. During the exam itself, you have 180 minutes to answer all questions and recheck your answers. Be sure to manage your time wisely. It will also be very beneficial to review your notes before you go in to refresh your memory. The AWS DevOps Pro certification is very tough to pass, and the choices for each question can be very misleading if you do not read them carefully. Be sure to understand what is being asked in the questions and what options are offered to you. With that, we wish you all the best in your exam!
Domain 1: Software Development Life Cycle (SDLC) Automation

Overview

The first domain of the AWS Certified DevOps Engineer Professional exam checks your preparedness on how well you understand the integration between the AWS services necessary for code development and
deployment, such as AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. You will
also need a working knowledge of how they complement each other for software development as well as
integrations with Amazon EventBridge (Amazon CloudWatch Events), Amazon S3, and AWS Lambda. A big part
of a normal workday for a DevOps Engineer deals with the software development life cycle.
Roughly 22% of the questions in the actual DevOps exam revolve around these topics. The Software Development Life Cycle (SDLC) Automation domain is the biggest exam domain of the DOP-C02 exam, so ensure that you allocate ample time to reviewing the topics under this section.
In this chapter, we will cover all of the related topics for SDLC automation in AWS that will likely show up in your
DevOps Professional exam.
What is DevOps?
Do you ever wonder what DevOps is and why it is so popular in the IT industry today? Is it a tool, a process, a corporate culture, or a combination of all of these? Why are companies offering competitive salaries for this kind of role?
If you typed the word “DevOps” on any job market website today, you would see many available positions that
require knowledge of both programming and infrastructure management. You will usually see an
advertisement looking for a candidate who knows how to program in Python or any other language. The
requirements include being capable of managing servers with configuration management tools such as
Ansible, Chef, or Puppet, as well as the provisioning of entire cloud environments using Infrastructure-as-code
tools like Terraform or CloudFormation. The salary range offered for these positions is remarkably high too!
Traditional IT companies have a dedicated Development (Dev) team that builds enterprise applications and an Operations (Ops) team that handles the servers and network infrastructure. These two teams are often siloed or isolated from each other. While the Dev team writes the software code, the Ops team prepares the server, database, and other infrastructure needed to run the soon-to-be-released application. In this setup, the developers are entirely oblivious to what the system operators are doing and vice versa. A lot of time is wasted waiting for the Dev team to fix minor bugs while developing new features and for the Ops team to provision, deploy, and scale the needed server resources. When bugs and incompatibility issues are detected in the development cycle, the Ops team waits for the Dev team to address the issue since it is strictly the job of the developers to fix it. The same is true when there are issues during deployments: the Ops team is not familiar with the application and makes wrong assumptions, which can cause further delays in meeting the deployment targets. Due to this lack of coordination, both the business and its customers are impacted. This is where DevOps comes in!
DevOps is not just the combination of Development (Dev) and Operations (Ops). DevOps is the fusion of practices, processes, tools, and corporate culture that expedites the organization’s ability to deliver applications and services at a higher velocity, faster than traditional software development processes. It’s not merely a tool or a process that your team adopts, but a synergy of values, corporate structure, and internal processes to attain the digital transformation of the business enterprise. It tears down the traditional and isolated silos of the Development, Operations, IT Security, and other teams, enabling collaboration and improving overall business performance. With DevOps, developers are empowered to directly influence the deployment life cycle, and the IT Operations folks have the ability to report and fix possible bugs or incompatibilities in the application.
DevOps is not just a framework; rather, it’s a cultural approach and a mindset that combines operations and development skills to deliver a product (or service) from inception to retirement. Company executives also play a crucial role in allocating budgets and adopting this new status quo within their respective organizations.
With the advent of cloud computing, companies can easily unify their software development and system operation processes. AWS enables organizations to rapidly build, deliver, and manage their products following DevOps practices with just a click of a button. The efficiency of provisioning new resources, managing infrastructure, deploying application code, automating software release processes, and many other tasks in AWS contributes to overall productivity and business profitability. Because of this massive benefit, companies are willing to pay competitive remuneration for their DevOps Engineers, especially those who are AWS Certified.
In 2013, Amazon Web Services (AWS) began the Global Certification Program to validate the technical skills and knowledge for building secure and reliable cloud-based applications using the AWS platform. The first certification launched by Amazon was the AWS Certified Solutions Architect – Associate, followed by the SysOps Administrator and Developer Associate certifications. A year later, AWS released the first Professional-level certification, AWS Certified Solutions Architect – Professional, and in February 2015, it released the AWS Certified DevOps Engineer – Professional.
The AWS Certified DevOps Engineer Professional certification enables technology professionals to showcase their DevOps skills, and it allows companies to identify top candidates to lead their internal DevOps initiatives. It validates your technical expertise in provisioning, managing, and operating distributed application systems on the AWS Cloud platform. It tests your ability to implement and manage Continuous Integration/Continuous Delivery (CI/CD) systems and methodologies on AWS following the industry’s best practices, as well as to automate security controls, handle governance processes, and meet compliance requirements. The exam also covers core topics such as Software Development Life Cycle (SDLC) automation, security, compliance, monitoring, logging, configuration management, and incident/event response.
As Amazon Web Services continues to evolve, new and updated versions of the AWS certification exams are released regularly to reflect the service changes and to include new knowledge areas. Four years after its initial release, an updated version of the AWS Certified DevOps Engineer – Professional certification was launched in February 2019 with an exam code of DOP-C01.
The latest version of the AWS Certified DevOps Engineer – Professional certification exam was unveiled in March 2023 with an exam code of DOP-C02. AWS is continuously adding more services and features to help organizations and companies improve their DevOps processes.
Why Automate?
Automation is at the heart of every DevOps engineer’s work and is a key highlight of DevOps practice. There is a saying in the DevOps community to “automate everything,” and it spans code inception, releasing to production, application retirement, and everything in between. Eliminating repetitive tasks, reducing toil, and minimizing manual work are the key problems that you want to solve through automation. Automation in DevOps fosters speed, greater accuracy, consistency, reliability, and rapid delivery.
● Speed – innovate your product faster and adapt to changing market trends. Team members are empowered to make changes quickly as needed, either on the development side or the operational side.
● Rapid delivery – increase the pace of your releases by automating your entire deployment pipeline. This is the concept of “fail fast, iterate faster,” in which companies are incentivized to release minor changes as often as possible, which keeps them ahead of competitors.
● Reliability – continuous integration and continuous delivery processes allow you to reliably and consistently deliver your product to end users. This also reduces human error, as automation rarely makes the mistakes that humans do.
● Scale – infrastructure as code helps you manage your environments in a repeatable and more efficient manner and scale easily as needed. It gives you a robust system to manage your infrastructure no matter how big or small it is.
● Improved collaboration – reduce inefficiencies when collaborating with teams. Automation allows easier integration of development, testing, and deployment processes. It facilitates faster collaboration between Dev and Ops, which results in an improved turnaround time for bug fixing, deployment, etc.
● Security – reduce risk through integrated security testing tools and automated adoption of compliance requirements. It allows you to declare and script your security compliance requirements and make sure they are applied to the needed resources in your environments.
Types of Blue Green Deployment via ELB, Route 53, Elastic Beanstalk
Elastic Beanstalk, by default, performs an in-place update when you deploy a newer version of your application. This can cause a short downtime since your application will be stopped while Elastic Beanstalk performs the application update. A blue/green deployment avoids this by deploying the new version to a separate environment and then swapping the environment URLs. This approach has the following characteristics:
● No downtime during deployment because you are deploying the newer version on a separate environment.
● The CNAMEs of the environment URLs are swapped to redirect traffic to the newer version.
● Route 53 will swap the CNAMEs of the application endpoints.
● Fast deployment time and quick rollback: since both the old and new versions are running at the same time, you just have to swap back the URLs if you need to roll back.
● Useful if your newer version is incompatible with the current platform version of your application (e.g., jumping between major versions of Node.js, Python, Ruby, PHP, etc.).
● Your RDS database instance should be on a separate stack because the data will not transfer to your second environment. You should decouple your database from the web server stack.
To implement a blue/green deployment for your Elastic Beanstalk application, you can perform the following steps:
1. Create another environment on which you will deploy the newer version of your application. You can clone your current environment for easier creation.
2. Once the new environment is ready, deploy the new version of your application to it. Perform your tests on the URL endpoint of the new environment.
3. After testing, select your production environment and click Actions > Swap environment URLs.
4. On the Swap Environment URLs page, select the newer environment and click Swap to apply the changes.
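The same swap can also be scripted. Here is a minimal AWS CLI sketch, assuming hypothetical environment names for the blue and green environments:

# Swap the CNAMEs of the two environments (environment names are placeholders)
aws elasticbeanstalk swap-environment-cnames \
    --source-environment-name my-app-blue \
    --destination-environment-name my-app-green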
You can also implement blue/green deployments for your Lambda functions. The concept is the same as in the Elastic Beanstalk blue/green deployment, i.e., you will need to create two versions of your Lambda function and use function aliases to swap the traffic flow.
Lambda versions – let you publish a new version of a function that you can test without affecting the current application accessed by users. You can create multiple versions as needed for your testing environments. The ARN of a Lambda version is the same as the ARN of the Lambda function with an added version suffix.
arn:aws:lambda:aws-region:acct-id:function:helloworld:$LATEST
Lambda aliases – aliases are merely pointers to specific Lambda versions. You can’t select a Lambda alias and edit the function; you need to select the $LATEST version if you want to edit the function. Aliases are helpful for blue/green deployments because they allow you to use a fixed ARN and point it to a particular Lambda version that you want to deploy.
Remember the difference between Lambda $LATEST, Lambda versions, and Lambda aliases:
$LATEST – this is the latest version of your Lambda function. You can freely edit this version.
Lambda version – a fixed version of your function. You can’t edit this directly.
Lambda alias – a pointer to a specific Lambda version. You can perform blue/green deployments with aliases by pointing an alias to a newer version.
The following steps show how a blue/green deployment can be done on Lambda functions.
1. The current version of your Lambda function is deployed as Version 1. Create another version with your changes; this will be Version 2.
2. Create an alias that points to the current production version. Use this alias as your fixed production ARN.
3. Create another alias that you will use for your newer version. Perform your testing and validation on this newer version. Once testing is complete, edit the production alias to point to the newer version. Traffic will now instantly be shifted from the previous version to the newer version.
Sources:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
https://docs.aws.amazon.com/lambda/latest/dg/configuration-versions.html
On EC2 instances, you can perform a canary deployment by deploying a newer application version on a single EC2 instance and analyzing the production traffic flowing through it. If you are satisfied with it, you proceed to deploy the newer version on all EC2 instances. However, for your Lambda functions, you can’t use this instance-based approach since you don’t deploy your application directly on EC2 instances.
To provide similar functionality to a canary deployment, AWS Lambda gives you the ability to use function aliases to shift a percentage of traffic from one version to another. Essentially, you create an alias that points to the current version of the Lambda function, then use a weighted alias to define a newer version of the Lambda function. You can then define the weight (percentage of traffic) that you want to forward to this version. After validation, you can completely shift traffic to the newer version.
You can consult the previous section (Types of Blue/Green Deployment – AWS Lambda) on how to create AWS Lambda versions and aliases. Here’s an example of how to control the percentage of traffic flowing to different Lambda function versions using a function alias. This is similar to the way a canary deployment works.
1. Select the function alias pointing to the current production version.
3. On the weighted alias section, select the newer version of your Lambda function and assign the percentage of traffic to shift to the newer version. You can repeat this step multiple times if you want to slowly shift traffic from the older version to the newer version.
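The same traffic shifting can be done from the AWS CLI. The sketch below assumes the function name from the earlier ARN example (helloworld) and a hypothetical alias named production:

# Publish the new code as an immutable version (for example, version 2)
aws lambda publish-version --function-name helloworld

# Keep the alias on version 1 but route 10% of invocations to version 2
aws lambda update-alias \
    --function-name helloworld \
    --name production \
    --function-version 1 \
    --routing-config '{"AdditionalVersionWeights":{"2":0.1}}'

# After validation, point the alias fully at version 2 and clear the weights
aws lambda update-alias \
    --function-name helloworld \
    --name production \
    --function-version 2 \
    --routing-config '{"AdditionalVersionWeights":{}}'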
If you are using AWS CodeDeploy to deploy your Lambda functions, CodeDeploy uses aliases to shift traffic to the newer version. As you can see in the deployment configuration options, CodeDeploy can automate this gradual traffic shifting for your Lambda functions.
Source:
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-traffic-shifting/
Blue/green deployment on the AWS platform provides a safer way to upgrade production software. This deployment usually involves two environments: the production environment (blue) and the new, updated environment (green).
Once the new version is deployed on the green environment, you can validate the new software before going live. Then, you start shifting traffic away from the blue environment and sending it to the green one. Normally, you’d use a Route 53 weighted routing policy because it gives you an easy way to push incremental traffic to the green environment or revert traffic back to the blue environment in case of issues. If you want to, you can switch the traffic immediately by updating the production Route 53 record to point to the green endpoint. Users will not see that you changed the endpoint since, from their perspective, the production URL is the same.
You can also shift a small portion (like 10%) of traffic to the green environment by using a weighted routing policy on Route 53. This way, you can test live traffic on the new environment, analyze the new logs, and easily revert to the original environment if you find any problems. This process is also called a canary deployment.
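Here is a minimal sketch of what the weighted records could look like, applied with aws route53 change-resource-record-sets. The hosted zone ID, record name, and environment endpoints are placeholders:

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch file://weighted.json

# weighted.json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "blue",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "blue-env.us-east-1.elasticbeanstalk.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "green",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "green-env.us-east-1.elasticbeanstalk.com" }]
      }
    }
  ]
}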
Source:
https://aws.amazon.com/blogs/startups/upgrades-without-tears-part-2-bluegreen-deployment-step-by-step-on-aws/
AWS CodeCommit can be used to collaborate on code with several teams. Access to AWS CodeCommit requires credentials, and those credentials need to have specific permissions defining the level of access allowed for each individual or team.
For example, you have three teams – the Admin team (merges requests, approves production deployments, and creates/deletes repositories), the Development team (handles code development on their respective branches), and the Reviewers (review code changes on the repository).
AWS has predefined policies for common use cases such as these groups. Going into the exam, you need to know the key differences between each policy.
● AWSCodeCommitFullAccess – full admin policy for CodeCommit
● AWSCodeCommitPowerUser – users can’t create or delete CodeCommit repositories
● AWSCodeCommitReadOnly – read-only access for users
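As a quick sketch, these managed policies could be attached to IAM groups for each team like this (the group names are placeholders):

# Attach the AWS managed CodeCommit policies to example IAM groups
aws iam attach-group-policy --group-name Admins     --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
aws iam attach-group-policy --group-name Developers --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitPowerUser
aws iam attach-group-policy --group-name Reviewers  --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitReadOnly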
AWS CodeDeploy gives you several lifecycle event hooks to perform actions at different stages of the deployment. You can run scripts at your desired stage to perform specific actions needed for that stage.
For example, you can use the BeforeInstall lifecycle event hook to create a backup of the current version before CodeDeploy installs the new version on your instances. You can use the BeforeAllowTraffic lifecycle event hook to perform tasks or run scripts on the instances before they are registered on the load balancer.
Another example is when you are using a blue/green deployment and want to run validation scripts after the new instances have been registered on the load balancer, to validate the new version before you remove the old instances. For this scenario, you will use the AfterAllowTraffic lifecycle event hook.
The available stages depend on which deployment method you have chosen, such as in-place deployment or blue/green deployment.
The following table lists the lifecycle event hooks available for each deployment and rollback scenario.
Here’s a summary of the lifecycle event hooks and what actions are performed at each stage.
ApplicationStop – This deployment lifecycle event occurs even before the application revision is downloaded.
DownloadBundle – During this deployment lifecycle event, the CodeDeploy agent copies the application revision files to a temporary location.
BeforeInstall – You can use this deployment lifecycle event for preinstall tasks, such as decrypting files and creating a backup of the current version.
Install – During this deployment lifecycle event, the CodeDeploy agent copies the revision files from the temporary location to the final destination folder.
AfterInstall – You can use this deployment lifecycle event for tasks such as configuring your application or changing file permissions.
ApplicationStart – You typically use this deployment lifecycle event to restart services that were stopped during ApplicationStop.
ValidateService – This is the last deployment lifecycle event. It is used to verify that the deployment was completed successfully.
BeforeBlockTraffic – You can use this deployment lifecycle event to run tasks on instances before they are deregistered from a load balancer.
BlockTraffic – During this deployment lifecycle event, internet traffic is blocked from accessing instances that are currently serving traffic.
AfterBlockTraffic – You can use this deployment lifecycle event to run tasks on instances after they are deregistered from a load balancer.
BeforeAllowTraffic – You can use this deployment lifecycle event to run tasks on instances before they are registered with a load balancer.
AllowTraffic – During this deployment lifecycle event, internet traffic is allowed to access instances after a deployment.
AfterAllowTraffic – You can use this deployment lifecycle event to run tasks on instances after they are registered with a load balancer.
Going into the exam, you don’t have to remember all stages, but you need to know the important lifecycle hooks such as BeforeInstall, BeforeAllowTraffic, and AfterAllowTraffic.
In the appspec.yml file, each hook entry specifies the scripts to run for that lifecycle event:
hooks:
  deployment-lifecycle-event-name:
    - location: script-location
      timeout: timeout-in-seconds
      runas: user-name
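For instance, a minimal appspec.yml for an EC2 deployment might wire up the hooks discussed above like this (the destination path and script names are placeholders):

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/my-app
hooks:
  BeforeInstall:
    - location: scripts/backup_current_version.sh
      timeout: 300
      runas: root
  AfterAllowTraffic:
    - location: scripts/validate_new_version.sh
      timeout: 180
      runas: root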
Source:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-hooks-availability
Automatically Run CodeBuild Tests After a Developer Creates a CodeCommit Pull Request
AWS CodeCommit allows developers to create multiple branches, which can be used for developing new features or fixing application bugs. These changes can then be merged into the master branch, which is usually used for the production release. To merge the changes into the master branch on CodeCommit, the developers create a pull request. These code changes need to be validated to make sure that they integrate properly with the current code base.
To validate changes to the code base, you can run an AWS CodeBuild project to build and test the pull request and, based on the result, decide whether to accept the merge or reject the pull request.
You can automate the validation of AWS CodeCommit pull requests with AWS CodeBuild and AWS Lambda with the help of Amazon EventBridge. Basically, Amazon EventBridge will detect the pull requests on your CodeCommit repository and then trigger the AWS CodeBuild project, with a Lambda function updating the comments on the pull request. The results of the build are also detected by Amazon EventBridge, which will trigger the Lambda function to update the pull request with the results.
The following diagram shows an example workflow of how you can automate the validation of a pull request with AWS CodeCommit, AWS CodeBuild, Amazon EventBridge (CloudWatch Events), and AWS Lambda.
1. The AWS CodeCommit repository contains two branches: the master branch, which contains approved code, and the development branch, where changes to the code are developed.
2. The developer pushes the new code to the AWS CodeCommit development branch.
3. The developer creates a pull request to merge the changes into the master branch.
4. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect the pull request and have it trigger an AWS Lambda function that posts an automated comment to the pull request indicating that a build to test the changes is about to begin.
5. With the same Amazon EventBridge (Amazon CloudWatch Events) rule, trigger an AWS CodeBuild project that builds and validates the changes.
6. Create another Amazon EventBridge (Amazon CloudWatch Events) rule to detect the output of the build. Have it trigger another Lambda function that posts an automated comment to the pull request with the results of the build and a link to the build logs.
Based on this automated testing, the developer who opened the pull request can update the code to address any build failures and then update the pull request with those changes. The validation workflow will run again and produce updated results.
Once the pull request is successfully validated, you can accept the pull request to merge the changes into the master branch.
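To give you an idea of what the EventBridge rule in step 4 could match on, here is a sketch of an event pattern for newly created or updated pull requests; the repository ARN is a placeholder:

{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Pull Request State Change"],
  "resources": ["arn:aws:codecommit:us-east-1:123456789012:my-repo"],
  "detail": {
    "event": ["pullRequestCreated", "pullRequestSourceBranchUpdated"]
  }
}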
Sources:
https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html
https://aws.amazon.com/blogs/devops/validating-aws-codecommit-pull-requests-with-aws-codebuild-and-aws-lambda/
In AWS CodeBuild, you can run build projects that build and test your code. After the build process, you have the option to store the artifact in an Amazon S3 bucket, which can then be used by AWS CodeDeploy to deploy to your instances. You can create an AWS CodePipeline pipeline for the whole process to be automated – from building, testing, and validation up to the deployment of your artifact.
Here’s a reference snippet of a CodeBuild buildspec.yml file showing how to declare the output artifact file.
version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn install
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - target/CodeDeploySample.zip
The buildspec.yml file should be in the same folder as your source code. The source code will be built based on the contents of the buildspec.yml file, and the output will be sent to the S3 bucket that you have specified on the build project.
This is an example of a JSON file that you can use when creating a CodeBuild project. Notice that the input and output buckets are specified.
{
  "name": "sample-codedeploy-project",
  "source": {
    "type": "S3",
    "location": "my-codepipeline-website-bucket/CodeDeploySample.zip"
  },
"artifacts": {
"type": "S3",
"location": "
my-codepipeline-website-bucket
",
"packaging": "ZIP",
"name": "
CodeDeployOutputArtifact.zip
"
},
"environment": {
"type": "LINUX_CONTAINER",
"image": "aws/codebuild/standard:4.0",
"computeType": "BUILD_GENERAL1_SMALL"
},
"serviceRole": "arn:aws:iam::account-ID:role/role-name",
"encryptionKey": "arn:aws:kms:region-ID:account-ID:key/key-ID"
}
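If you save this JSON definition locally (for example, as project.json), you could create the same build project from the CLI:

aws codebuild create-project --cli-input-json file://project.json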
If you are using the AWS CodeBuild console, this is where you specify the S3 bucket for the artifact.
After the build runs, you can take the output artifact and deploy it manually with AWS CodeDeploy. Or, using CodePipeline, you can create another stage in which CodeDeploy automatically picks up this artifact and runs the deployment on your desired instances.
Here’s how it should look on your CodePipeline run with multiple stages.
By default, all the artifacts that you upload to an S3 bucket from AWS CodeBuild are encrypted. The default encryption is Amazon S3 server-side encryption using AES-256.
If you are using CodePipeline and you reference an Amazon S3 object artifact as a source (like the artifact produced by CodeBuild), you need to have versioning enabled. When you create your source bucket, make sure that you enable versioning on the bucket first. When you specify the S3 object name on your artifact parameter, you can specify the specific version ID that you want to deploy.
Also, remember that when you use the console to create or edit your pipeline, CodePipeline creates an Amazon EventBridge rule that starts your pipeline when a change occurs in the S3 source bucket or when the CodeBuild stage completes and successfully uploads the artifact to S3.
Sources:
https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-s3deploy.html
https://docs.aws.amazon.com/codebuild/latest/APIReference/API_ProjectArtifacts.html
https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started-output-console.html
https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-S3.html
A projection is the set of attributes that are copied (projected) from a table into a secondary index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
For example, consider a base table (GameScores) with a partition key (UserID) and sort key (GameTitle).
You can create a global secondary index (GameTitleIndex) with a new partition key (GameTitle) and sort key (TopScore). The base table's primary key attributes are always projected into an index, so the UserID attribute is also present. This improves searching when not using the primary keys of the base table.
When you query an index, Amazon DynamoDB can access any attribute in the projection as if those attributes were in a table of their own. When you create a secondary index, you need to specify the attributes that will be projected into the index. DynamoDB provides three different options for this:
● KEYS_ONLY – Each item in the index consists only of the table partition key and sort key values, plus the index key values. The KEYS_ONLY option results in the smallest possible secondary index.
● INCLUDE – In addition to the attributes described in KEYS_ONLY, the secondary index will include other non-key attributes that you specify.
● ALL – The secondary index includes all of the attributes from the source table. Because all of the table data is duplicated in the index, an ALL projection results in the largest possible secondary index.
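As a sketch, the GameTitleIndex definition with an INCLUDE projection could look like the fragment below; it would go in the --global-secondary-indexes parameter of aws dynamodb create-table (the non-key attributes Wins and Losses are hypothetical):

{
  "IndexName": "GameTitleIndex",
  "KeySchema": [
    { "AttributeName": "GameTitle", "KeyType": "HASH" },
    { "AttributeName": "TopScore", "KeyType": "RANGE" }
  ],
  "Projection": {
    "ProjectionType": "INCLUDE",
    "NonKeyAttributes": ["Wins", "Losses"]
  }
}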
You can project other base table attributes into the index if you want. When you query the index, DynamoDB can retrieve these projected attributes efficiently. However, global secondary index queries cannot fetch attributes from the base table. For example, if you query GameTitleIndex as described above, the query could not access any non-key attributes other than TopScore (although the key attributes GameTitle and UserID would automatically be projected).
Sources:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Projection.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html#GSI.Projections
Here are the steps on how to create a CodeBuild build project and save the artifact to an S3 bucket. We will also demonstrate how CodeBuild integrates with Amazon CloudWatch.
1. Go to AWS CodeBuild > Build projects and click Create build project. Input your details for this project.
2. CodeBuild supports several sources for your application code, including Amazon S3, AWS CodeCommit, GitHub, Bitbucket, etc. Select your source repository and branch.
3. Use the Amazon Linux runtime since we’ll build this for the Amazon Linux AMI.
4. By default, the build specification filename is buildspec.yml. This should be in the root folder of your application code.
5. Specify the S3 bucket to which you are going to send the output artifact of your build.
6. In the logs section, you have the option to send the build logs to a CloudWatch Logs log group or to an S3 bucket. The Amazon CloudWatch log group or S3 bucket must already exist before you specify it here.
7. Click “Create build project”. Select your project and click the “Start build” button.
8. After the build, you should see the artifact in the S3 bucket. You will also see the build logs of your project if you click on the build run ID of your project.
9. AWS CodeBuild is also integrated with CloudWatch Logs. When you go to the CloudWatch dashboard, you can select the pre-defined CodeBuild dashboard, where you will see metrics such as successful builds, failed builds, etc.
10. You should also be able to see the CloudWatch log group you created and confirm that the CodeBuild logs are delivered to it.
11. Using Amazon EventBridge (CloudWatch Events) rules, you can also create a rule to detect successful or failed builds and then have it invoke a Lambda function that sends you a Slack notification, or use an SNS topic target to send you an email notification about the build status.
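Such a rule could use an event pattern along the lines of the following sketch (the project name is a placeholder):

{
  "source": ["aws.codebuild"],
  "detail-type": ["CodeBuild Build State Change"],
  "detail": {
    "build-status": ["SUCCEEDED", "FAILED"],
    "project-name": ["my-sample-project"]
  }
}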
Sources:
https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html
https://docs.aws.amazon.com/codebuild/latest/userguide/create-project-console.html
AWS CodeDeploy allows you to automate software deployments to a variety of services such as EC2, AWS Fargate, AWS Lambda, and your on-premises servers.
For example, after you have produced an artifact from AWS CodeBuild, you can have CodeDeploy fetch the artifact from the Amazon S3 bucket and then deploy it to your instances. Please note that for this to be successful, the target instances need to have the AWS CodeDeploy agent installed on them and have the proper tags.
Here are the steps to create a deployment on AWS CodeDeploy, including a discussion on how it integrates with Amazon CloudWatch.
1. Go to AWS CodeDeploy > Applications and click “Create application”. Input the details of your application and the platform it is running on.
2. Select your application and create a deployment group. CodeDeploy needs an IAM role with permission to access your targets as well as to read the Amazon S3 bucket containing the artifact to be deployed.
3. Select your deployment type. You can use an in-place deployment if you already have the instances running. Specify the tags of those EC2 instances, for example, Environment:Dev. CodeDeploy will use this as an identifier for your target instances.
4. Select how you want the new code to be deployed, such as all-at-once or one-at-a-time. These deployment settings are discussed in the succeeding sections.
5. Now, on your application’s deployment group, create a deployment. Input the details for the artifact source. For an S3 source, you can specify the version ID of the artifact file.
6. After you click “Create deployment”, the deployment of the artifact to your EC2 instances will begin. You should see that the deployment succeeds.
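The equivalent deployment can be started from the CLI; the sketch below assumes hypothetical application, deployment group, and bucket names:

aws deploy create-deployment \
    --application-name my-sample-app \
    --deployment-group-name my-sample-dg \
    --deployment-config-name CodeDeployDefault.OneAtATime \
    --s3-location bucket=my-artifact-bucket,key=CodeDeploySample.zip,bundleType=zip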
7. You can click “View events” to see the stages of the deployment process. Here you can view the deployment events as well as the status of the deployment lifecycle hooks you have defined. See the Lifecycle Event Hooks section of this book for the details regarding each event.
8. CodeDeploy is integrated with Amazon EventBridge (Amazon CloudWatch Events). You can create a rule that detects CodeDeploy status changes, such as a successful or failed deployment, and have it invoke a Lambda function to perform a custom action, such as sending a notification to a Slack channel, or set an SNS topic as the target to send you an email about the status of your deployment.
9. CodeDeploy also has built-in notification triggers. Click “Notify” on your application.
10. For events happening on your deployment, you can configure targets much like Amazon EventBridge (Amazon CloudWatch Events); however, this is limited to an SNS topic or AWS Chatbot for Slack.
CodeDeploy also has built-in notification triggers to notify you of your deployment status; however, these are limited to an SNS topic or AWS Chatbot for Slack.
Aside from EC2 instances, AWS CodeDeploy also supports deployments to ECS and Lambda. The general steps for deployment are still the same – create an application, create a deployment group for your targets, and create a deployment for your application.
However, for ECS, the artifact is a Docker image, which can be pulled from Amazon ECR or from Docker Hub. For Lambda deployments, the artifact comes from a zip file in an S3 bucket. Be sure to use the proper filename for your Lambda handler file for successful code deployments.
Deployments for ECS also support deployment configurations such as all-at-once, one-at-a-time, and half-at-a-time. These deployment configurations are discussed in a separate section. Lambda, on the other hand, supports percentage-based traffic shifting, such as linear or canary deployments. This is also discussed in a separate section.
Sources:
https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-create-alarms.html
https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring.html
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-create-console-ecs.html
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-create-console-lambda.html
AWS CodePipeline is a continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. For example, you can use CodePipeline to have specific stages that build, test, and deploy your code to your target instances.
You can have CodePipeline trigger a full deployment pipeline when a developer pushes code to your CodeCommit repository, which will then start the series of pipeline stages:
● Pull the source code from the CodeCommit repository.
● Run the CodeBuild project to build and test the artifact file, then upload the artifact to the Amazon S3 bucket.
● Trigger a CodeDeploy deployment to fetch the artifact and deploy it to your instances.
This whole cascading process is triggered by a single CodeCommit repository push event.
Here are the steps to create a pipeline on AWS CodePipeline, as well as its integration with Amazon CloudWatch.
1. Go to AWS CodePipeline > Pipelines and click Create pipeline. Input the pipeline name, the service role for CodePipeline, and the S3 bucket that holds the artifacts for this pipeline.
2. Add a source stage, such as a CodeCommit repository and branch, to select which code version you want to deploy.
3. Create a build stage, such as CodeBuild, that will build and test the artifact for you. You must have an existing CodeBuild build project for this.
4. Add a deploy stage using AWS CodeDeploy. The details, such as the application and deployment group, must already exist before you proceed here.
5. After creating the pipeline, you should see your stages, and the pipeline starts the whole process, from CodeCommit and CodeBuild to CodeDeploy. You should be able to see the status of each stage.
With Amazon EventBridge, you can also detect the status of each stage of the pipeline. You can have a rule for a pipeline execution state change, such as a FAILED state, and have it invoke a Lambda function or an SNS topic to send you an email notification.
Here’s a screenshot of an EventBridge rule that targets a Lambda function and an SNS topic.
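Such a rule could use an event pattern similar to the sketch below (the pipeline name is a placeholder):

{
  "source": ["aws.codepipeline"],
  "detail-type": ["CodePipeline Pipeline Execution State Change"],
  "detail": {
    "pipeline": ["my-sample-pipeline"],
    "state": ["FAILED"]
  }
}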
Source:
https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create.html
When you deploy a new application version to your target compute platform (EC2, ECS, or Lambda), you have several options for shifting network traffic to the newer version.
Using CodeDeploy, you can control how many instances are updated at any given time during the deployment. This is important because, during deployments, the application is stopped while the new version is deployed. You want to make sure that enough instances are online to serve traffic, as well as retain the ability to roll back the changes when an error occurs during the deployment.
In the AWS Console, when you click CodeDeploy > Deployment configurations, you will see the list of AWS-defined deployment strategies that you can use for your deployments. You can create your own deployment configuration if you want to, but we’ll discuss the most common ones here, which you will likely encounter in the exam.
CodeDeployDefault.AllAtOnce – this is the fastest deployment. The application is stopped on all EC2 instances and CodeDeploy installs the newer version on all instances at once. The application stops serving traffic during the deployment, as all instances are offline.
CodeDeployDefault.OneAtATime – this is the slowest deployment. CodeDeploy will stop only one instance at a time. This will take time to deploy to all instances, but the application will remain online since only one instance is offline at any given time.
CodeDeployDefault.HalfAtATime – half, or 50%, of the instances will be offline during the deployment, while the other half stays online to serve traffic. This is a good balance between a fast and a safe deployment.
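If none of the predefined configurations fit, you can define your own minimum healthy hosts requirement. A minimal CLI sketch, assuming a hypothetical configuration name:

aws deploy create-deployment-config \
    --deployment-config-name CustomKeep75PercentHealthy \
    --minimum-healthy-hosts type=FLEET_PERCENT,value=75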
Source:
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations.html
AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (all at once, rolling, rolling with an additional batch, immutable, and traffic splitting) and options that let you configure the batch size and health check behavior during deployments. By default, your environment uses all-at-once deployments.
All at once – the quickest deployment method. Suitable if you can accept a short loss of service and if quick deployments are important to you. With this method, Elastic Beanstalk deploys the new application version to each instance.
Rolling deployments – Elastic Beanstalk splits the environment’s Amazon EC2 instances into batches and deploys the new version of the application to one batch at a time. During a rolling deployment, some instances serve requests with the old version of the application, while instances in completed batches serve other requests with the new version.
Rolling deployment with an additional batch – launches new batches during the deployment. To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances.
Immutable deployments – perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group, alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don’t pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched.
Traffic-splitting deployments – let you perform canary testing as part of your application deployment. In a traffic-splitting deployment, Elastic Beanstalk launches a full set of new instances just like during an immutable deployment. It then forwards a specified percentage of incoming client traffic to the new application version for a specified evaluation period. If the new instances stay healthy, Elastic Beanstalk forwards all traffic to them and terminates the old ones. If the new instances don’t pass health checks, or if you choose to abort the deployment, Elastic Beanstalk moves traffic back to the old instances and terminates the new ones.
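The deployment policy and batch size are configured in the aws:elasticbeanstalk:command namespace. A minimal .ebextensions sketch (the config file name is a placeholder):

# .ebextensions/deploy-policy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: RollingWithAdditionalBatch
    BatchSizeType: Percentage
    BatchSize: 25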
Here’s a summary of the deployment methods, how long each deployment takes, and how rollback is handled.
Rolling with an additional batch – Impact of a failed deployment: minimal if the first batch fails; otherwise, similar to Rolling. Zero downtime: yes. No DNS change: yes. Rollback process: manual redeploy†. Code deployed to: new and existing instances.
Traffic splitting – Impact of a failed deployment: the percentage of client traffic routed to the new version is temporarily impacted††. Zero downtime: yes. No DNS change: yes. Rollback process: reroute traffic and terminate the new instances. Code deployed to: new instances.
Sources:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.rolling-version-deploy.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
Overview
The second domain of the AWS Certified DevOps Engineer Professional exam focuses on the topics related to configuration management and infrastructure as code. You should have working knowledge of crafting CloudFormation templates and how to use them for deployments, operating Elastic Beanstalk environments, AWS Lambda deployments, and deployments on the Elastic Container Service. To be an effective DevOps Engineer, it is important that you understand these key concepts. Roughly 17% of the questions in the actual DevOps exam revolve around these topics.
In this chapter, we will cover all of the related topics for configuration management and infrastructure-as-code
in AWS that will likely show up in your DevOps Professional exam.
Configuration management is the process of standardizing resource configurations and maintaining the consistency of your application and server components. Configuration management deals with several areas such as source code repositories, artifact and image repositories, and configuration repositories.
For example, when deploying your application on EC2 instances, you want to make sure that the correct artifact version is deployed and that the required dependencies are installed on the servers. If your new application version requires another dependency package, you should configure all related servers to accommodate this change as well.
If you want to make changes to the OS configuration, for example, an updated logging configuration, you will want to apply it to all running servers as well as to new servers that will be created in the future, plus have the ability to roll back the changes in case you find any errors. Configuration management gives you the following benefits:
Scalability – you don’t have to manually configure your servers, such as installing OS updates, application dependencies, and security compliance configurations. The same process applies no matter how many servers you have.
Reliability – configuration management offers a reliable way for you to deploy your code. There is central management for your changes and updates, so it reduces human error compared to applying changes manually across your systems.
Disaster Recovery – if you happen to deploy an artifact with bad code or a new config file that causes an error, you will have a quick and easy way to roll back since you can go back to the last working version of your configuration.
With proper configuration management tools, you only have to make changes to your configuration code, and it will be applied to all related instances. This process is consistent and scalable in such a way that it is applicable from a range of a few instances to several hundred instances. Automation is a key component of configuration management, so there are several AWS tools available for you.
Infrastructure as code (IaC) takes the concept of configuration management to the next level. Imagine your entire AWS infrastructure and resources described inside a YAML or JSON file. Just like how your application source code outputs an artifact, IaC generates a consistent environment when you apply it.
For example, infrastructure as code enables DevOps teams to easily and quickly create test environments that are similar to the production environment. IaC allows you to deliver stable environments rapidly, consistently, and at scale.
Another example is when you need to create a disaster recovery site in another region. With IaC, you can quickly create resources in the new region and be assured that the environment is consistent with the current live environment because everything is defined and described in your JSON or YAML code. You can also save your code to repositories and version-control it to track changes to your infrastructure. AWS CloudFormation is the main service that you can use once you have codified your infrastructure.
AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use templates to describe the AWS resources and any associated dependencies or runtime parameters required to run your application.
● Extensibility – using the AWS CloudFormation Registry, you can model, provision, and manage third-party application resources alongside AWS resources with AWS CloudFormation.
● Authoring with JSON/YAML – allows you to model your entire infrastructure in a text file. You can use JSON or YAML to describe what AWS resources you want to create and configure.
● Safety controls – automates the provisioning and updating of your infrastructure in a safe and controlled manner. You can use rollback triggers to roll back in case errors are encountered during the update.
● Preview changes to your environment – change sets allow you to preview how proposed changes to a stack might impact your running resources, for example, whether your changes will delete or replace any critical resources.
● Dependency management – automatically manages dependencies between your resources during stack management actions. The sequence of creating, updating, or deleting the dependencies and resources is automatically taken care of.
● Cross-account and cross-region management – AWS CloudFormation StackSets lets you provision a common set of AWS resources across multiple accounts and regions with a single CloudFormation template.
Source:
https://aws.amazon.com/cloudformation/
CloudFormation allows you to reference resources from one CloudFormation stack and use those resources in another stack. This is called a cross-stack reference. It allows for layering of stacks, which is useful for separating your resources based on your services. Instead of putting all resources in one stack, you can create resources in one stack and reference those resources from other CloudFormation stacks.
This also allows you to reuse the same CloudFormation stacks so that you can build faster if you need a new environment with minimal changes.
Example:
● Network stack – contains the VPC, public and private subnets, and security groups.
● Web server stack – contains the web server and references the public subnets and security groups from the network stack.
● Database stack – contains your database server and references the private subnets and security groups from the network stack.
The requirement for a cross-stack reference is that you need to export the resources that you want to be referenced by other stacks. Use Export in the Outputs section of your main CloudFormation stack to define the resources that you want to expose to other stacks. In the other stacks, use the Fn::ImportValue intrinsic function to import the value that was previously exported.
Here's an example of CloudFormation exporting a subnet and a security group, and referencing them in another CloudFormation stack.
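The following is a minimal sketch of this pattern; the logical names and export names are illustrative only.

# Network stack: export a subnet and a security group
Outputs:
  PublicSubnet:
    Value: !Ref PublicSubnet
    Export:
      Name: NetworkStack-PublicSubnet
  WebServerSecurityGroup:
    Value: !Ref WebServerSecurityGroup
    Export:
      Name: NetworkStack-WebServerSG

# Web server stack: import the exported values
Resources:
  WebServerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0
      SubnetId: !ImportValue NetworkStack-PublicSubnet
      SecurityGroupIds:
        - !ImportValue NetworkStack-WebServerSG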
Source:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/walkthrough-crossstackref.html
Using AWS CloudFormation, you can deploy AWS Lambda functions, which is an easy way to reliably reproduce and version your application deployments.
Here is an example of a Node.js Lambda function that uses an artifact saved in an S3 bucket, declared in CloudFormation (JSON format).
"AMIIDLookup": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "index.handler",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
},
"Code": {
"S3Bucket": "lambda-functions",
"S3Key": "amilookup.zip"
},
"Runtime": "nodejs12.x",
"Timeout": 25,
"TracingConfig": {
"Mode": "Active"
}
}
}
Note that changes to a deployment package in Amazon S3 are not detected automatically during stack updates. To update the function code, change the object key or version in the template. Alternatively, you can use a
parameter in your CloudFormation template to pass in the name of the S3 artifact that contains the latest version of your code.
If you have a zip file on your local machine, you can use the aws cloudformation package command to upload the artifact to S3 and generate a transformed template that references it. The command saves the output to a new template that you can then deploy with CloudFormation.
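A minimal sketch of this workflow using the AWS CLI; the bucket, file, and stack names are placeholders.

# Upload local artifacts (e.g., the Lambda zip) to S3 and rewrite the template to reference them
aws cloudformation package \
  --template-file template.json \
  --s3-bucket my-artifact-bucket \
  --output-template-file packaged-template.json

# Deploy the transformed template
aws cloudformation deploy \
  --template-file packaged-template.json \
  --stack-name my-lambda-stack \
  --capabilities CAPABILITY_IAM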
For Node.js and Python functions, you can specify the function code inline in the template.
Note that you define the function with the AWS::Lambda::Function resource and its Code property, and the inline code should be enclosed inside the ZipFile: | section to ensure that CloudFormation correctly parses your code.
Here is an example of a Node.js Lambda function defined inline in a CloudFormation template using the YAML format.
AWSTemplateFormatVersion: '2010-09-09'
Description: Lambda function with cfn-response.
Resources:
  primer:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs12.x
      Role: arn:aws:iam::123456789012:role/lambda-role
      Handler: index.handler
      Code:
        ZipFile: |
          var aws = require('aws-sdk')
          var response = require('cfn-response')
          exports.handler = function(event, context) {
              console.log("REQUEST RECEIVED:\n" + JSON.stringify(event))
              // For Delete requests, immediately send a SUCCESS response.
              if (event.RequestType == "Delete") {
                  response.send(event, context, "SUCCESS")
                  return
              }
Sources:
https://docs.aws.amazon.com/lambda/latest/dg/deploying-lambda-apps.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-cli-package.html
When using AWS CloudFormation to provision your Auto Scaling groups, you can control how CloudFormation handles updates to your Auto Scaling group. You need to define the proper UpdatePolicy attribute for your ASG depending on your desired behavior during an update.
AutoScalingReplacingUpdate – creates a new Auto Scaling group with the new launch template. This is more like an immutable type of deployment.
AutoScalingRollingUpdate – replaces the instances in the current Auto Scaling group. You can control whether instances are replaced all at once or through a rolling update in batches. The default behavior is to delete instances first before creating the new instances.
"UpdatePolicy" : {
"AutoScalingReplacingUpdate" : {
"WillReplace" :
Boolean
}
}
The AutoScalingRollingUpdate policy specifies how AWS CloudFormation handles rolling updates for an Auto Scaling group. Rolling updates enable you to specify whether AWS CloudFormation updates the instances in an Auto Scaling group in batches or all at once.
"UpdatePolicy" : {
"
A
utoScalingRollingUpdate" : {
"
M
axBatchSize" :
Integer,
"
M
inInstancesInService " :
Integer
,
"
M
inSuccessfulInstancesPercent " :
Integer
,
"
P
auseTime
" :
String
,
"
S
uspendProcesses" : [
List of processes
],
"
W
aitOnResourceSignals " :
Boolean
}
}
Sources:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
The Application Discovery Agent and the Agentless Discovery Connector are helpful tools in the AWS Application Discovery Service that help you plan your migration from on-premises servers and VMs to the AWS cloud. Here's a quick description and summary of the differences between the two.
The Application Discovery Agent is software that you install on the on-premises servers and VMs targeted for discovery and migration. The agent is needed by the AWS Application Discovery Service to help you plan your migration to the AWS cloud by collecting usage and configuration data about your on-premises servers. The agent captures system configuration, system performance, running processes, and details of the network connections between systems.
You can then view the discovered servers, group them into applications, and then track the migration status of each application from the AWS Migration Hub console.
If you can't install the agent on your on-premises servers, AWS Application Discovery Service offers another way of performing discovery through the AWS Agentless Discovery Connector. This agentless discovery is performed by deploying an OVA file in VMware vCenter.
The Discovery Connector identifies virtual machines (VMs) and hosts associated with vCenter and collects static configuration data such as server hostnames, IP addresses, MAC addresses, and disk resource allocations. Additionally, it collects the utilization data for each VM and computes average and peak utilization for metrics such as CPU, RAM, and disk I/O.
● Application Discovery Agent – agent package installed on the on-premises VMs and servers targeted for migration.
● Agentless Discovery Connector – standalone VM deployed in the on-premises data center to collect information for migration.
Sources:
https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-agent.html
https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-connector.html
Amazon Elastic Container Service (ECS) allows you to manage and run Docker containers on clusters of EC2 instances. You can also configure ECS to use the Fargate launch type, which eliminates the need to manage EC2 instances.
With CloudFormation, you can define your ECS clusters and task definitions to easily deploy your containers. For high availability of your Docker containers, ECS clusters are usually configured with an Auto Scaling group behind an Application Load Balancer. These resources can also be declared in your CloudFormation template.
Going into the exam, be sure to remember the syntax needed to declare your ECS cluster, Auto Scaling group, and Application Load Balancer. The AWS::ECS::Cluster resource creates an ECS cluster and the AWS::ECS::TaskDefinition resource creates a task definition for your container. The AWS::ElasticLoadBalancingV2::LoadBalancer resource creates an Application Load Balancer and the AWS::AutoScaling::AutoScalingGroup resource creates an EC2 Auto Scaling group.
AWS provides an example template that you can use to deploy a web application in an Amazon ECS container with auto scaling and an Application Load Balancer. Here's a snippet of the template with the core resources:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "ECSCluster" : {
      "Type" : "AWS::ECS::Cluster"
    },
    ......
    "taskdefinition" : {
      "Type" : "AWS::ECS::TaskDefinition",
      "Properties" : {
        ......
    "ECSALB" : {
      "Type" : "AWS::ElasticLoadBalancingV2::LoadBalancer",
      "Properties" : {
        ......
    "ECSAutoScalingGroup" : {
      "Type" : "AWS::AutoScaling::AutoScalingGroup",
      "Properties" : {
        "VPCZoneIdentifier" : { "Ref" : "SubnetId" },
        ......
Sources:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-ecs.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-service.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-service-loadbalancers.html
Overview
The third exam domain of the AWS Certified DevOps Engineer Professional exam covers the topic of Resilient Cloud Solutions. This knowledge area is related to the concepts of High Availability, Fault Tolerance, and Disaster Recovery. 15% of the DevOps exam questions are related to the subject of resiliency, so make sure that you allocate ample time to studying this domain.
● Translating business requirements into technical resiliency needs
● Identifying and remediating single points of failure in existing workloads
● Enabling cross-Region solutions for certain scenarios using Amazon DynamoDB Global Tables, Amazon
RDS Read Replica, Amazon Aurora Global Database, Amazon Route 53, Amazon S3, Amazon
CloudFront, Amazon S3 Cross-Region Replication and other replication features.
● Configuring load balancing to support cross-AZ services
● Configuring applications and related services to support multiple Availability Zones and Regions while
minimizing downtime
● Identifying and remediating scaling issues
● Identifying and implementing appropriate auto-scaling, load balancing, and caching solutions
● Deploying container-based applications using Amazon ECS, Amazon EKS and AWS Fargate
● Deploying workloads in multiple AWS Regions for global scalability
● Configuring serverless applications using Amazon API Gateway, AWS Lambda and AWS Fargate
● Testing failover of Multi-AZ/multi-Region workloads using Amazon RDS, Amazon Aurora, Route 53,
CloudFront and other related features
● Identifying and implementing appropriate cross-Region backup and recovery strategies via AWS Backup, Amazon S3, AWS Systems Manager, and the like.
● Configuring a load balancer to recover from backend failure
Let’s discover how to implement resiliency in your AWS cloud architecture in this section.
Both High Availability and Fault Tolerance have the same objective of ensuring that your application runs all the time without any system degradation. However, these concepts have unique attributes that differentiate them from each other. The two differ in cost, design, redundancy level, and behavior when component faults or failures occur.
High Availability aims for your application to run 99.999% of the time. Its design ensures that the entire system can quickly recover if one of its components crashes. It has an ample number of redundant resources to allow failover to another resource if one fails. This concept accepts that a failure will occur but provides a way for the system to recover fast.
Fault Tolerance, on the other hand, has the goal of keeping your application running with zero downtime. It has a more complex design and higher redundancy to sustain a fault in any of its components. Think of it as an upgraded version of High Availability. As its name implies, it can tolerate a component fault and avoid any performance impact, data loss, or system crash by having redundant resources beyond what is typically needed. The caveat of implementing a fault-tolerant system is its cost, as companies have to shoulder the capital and operating expenses of running the numerous resources it requires.
A system can be highly available but not fault-tolerant, or it can be both. If an application is said to be fault-tolerant, then it is also considered highly available. However, there are situations in which a highly available application is not considered fault-tolerant.
There are various services, features, and techniques in AWS that you can use to implement a highly available and fault-tolerant architecture. You can ensure high availability by deploying your application to multiple Availability Zones or several AWS Regions. Auto Scaling can dynamically scale your systems depending on the incoming demand, and an active-active or active-passive failover policy can be implemented in Route 53 to reduce downtime. Amazon RDS offers Automated Snapshots, Read Replicas, and Multi-AZ Deployments to strengthen your database tier and remove a single point of failure in your system. Alternatively, you can opt to use an Amazon Aurora Global Database or DynamoDB Global Tables for your globally accessible applications. You can also leverage the self-healing capabilities of AWS services to achieve fault tolerance.
You can adopt these types of architectures in both your application and database tiers. AWS Auto Scaling helps your web servers handle sudden or expected demand on your application. For your database, you can deploy one or more Read Replicas to offload the surge of incoming read load from the primary database instance. For Multi-AZ, OS patching or DB instance scaling operations are applied first on the standby instance before triggering the automatic failover, which reduces interruptions and downtime.
Multi-AZ deployments vs. Multi-Region deployments vs. Read Replicas:

REPLICATION
● Multi-AZ deployments: Non-Aurora – synchronous replication; Aurora – asynchronous replication
● Multi-Region deployments: asynchronous replication
● Read Replicas: asynchronous replication

OPERABILITY
● Multi-AZ deployments: Non-Aurora – only the primary instance is active; Aurora – all instances are active
● Multi-Region deployments: all regions are accessible and can be used for reads
● Read Replicas: all read replicas are accessible and can be used for read scaling

BACKUPS
● Multi-AZ deployments: Non-Aurora – automated backups are taken from the standby
● Multi-Region deployments: automated backups can be taken in each region
● Read Replicas: no backups configured by default

SCOPE
● Multi-AZ deployments: always span at least two Availability Zones within a single region
● Multi-Region deployments: each region can have a Multi-AZ deployment
● Read Replicas: can be within an Availability Zone, cross-AZ, or cross-region

FAILOVER PROCESS
● Multi-AZ deployments: automatic failover to the standby (non-Aurora) or a read replica (Aurora) when a problem is detected
● Multi-Region deployments: Aurora allows promotion of a secondary region to be the master
● Read Replicas: can be manually promoted to a standalone database instance (non-Aurora) or to be the primary instance (Aurora)
There are two key objectives to consider in disaster recovery planning:
1. RTO or Recovery Time Objective
2. RPO or Recovery Point Objective
Basically, RTO refers to time and RPO refers to data. RTO is the time it takes after a disruption to restore a business process to its service level, as defined by the operational level agreement (OLA).
For example, if a disaster occurs at 12 noon and the RTO is 3 hours, the disaster recovery process should restore service to the acceptable service level on or before 3 PM.
RPO, on the other hand, is the acceptable amount of data loss measured in time. For example, if a disaster occurs at 12 noon and the RPO is one hour, the system should recover all data that was in the system before 11 AM. The acceptable data loss is only one hour, between 11:00 AM and 12:00 noon. If you cannot recover a transaction or data made before 11:00 AM, say at 10:30 or 9:30, then you have failed to meet your RPO.
In short, RTO refers to the amount of time it takes for the system to recover from an outage, while RPO is the specific point, or state, of your data store that needs to be recoverable.
Amazon Route 53 is a global Domain Name System (DNS) service that allows you to route traffic across various AWS Regions and external systems outside of AWS. It provides a variety of routing policies that you can implement to meet your use cases, and it can automatically monitor the state and performance of your applications, servers, and other resources using health checks. You can combine two or more routing policies to comply with your company's strict RTO and RPO requirements. It simplifies the process of setting up an active-passive or active-active failover for your disaster recovery plan by intelligently routing traffic from your primary resources to the secondary resources based on the rules you specify.
Your globally distributed resources can be considered either active or passive. A resource is active if it accepts live production traffic and passive if it is just on standby, only activated during a failover event. You can set up an active-active failover to improve your systems' fault tolerance and performance. By having several active environments, you can ensure the high availability and resiliency of your global applications. To set up an active-active failover, you can use a single routing policy or a combination of routing policies such as latency, geolocation, geoproximity, and others to configure Route 53 to respond to a DNS query using any healthy record.
Below are the different types of Amazon Route 53 routing policies that you can use in your architecture:
● Simple – This routing policy is commonly used for a single resource that performs a straightforward function for your domain records. For example, you can use this policy to route traffic from the tutorialsdojo.com apex domain to an NGINX web server running on an Amazon EC2 instance.
● Failover – As the name implies, you can use this policy to set up an active-passive failover for your network architecture.
● Geolocation – Amazon Route 53 can detect the geographical location where the DNS queries originated. This routing policy lets you choose the specific resources that serve incoming traffic based on your users' geographic location. Say, you might want all user traffic from North America routed to an Application Load Balancer in the Singapore region. It works by mapping IP addresses to geographical areas using the Extension Mechanisms for DNS version 0 (EDNS0).
● Geoproximity – This one is similar to the Geolocation routing policy except that it uses the Traffic Flow feature of Route 53 and has the added capability of shifting more or less traffic to your AWS services in one geographical location using a bias. It concentrates on the proximity of the resource in a given geographic area rather than its exact location.
● Latency – You can improve application performance for your global users by serving their requests from the AWS Region that provides the lowest latency. This routing policy is suitable for organizations that have resources in multiple AWS Regions.
● Multivalue Answer – Unlike the Simple routing policy, this type can route traffic to numerous resources in response to DNS queries, with up to eight healthy records selected at random. This policy is perfect if you are configuring an active-active failover for your network.
● Weighted – This policy allows you to route traffic to multiple resources in proportions that you specify. It acts as a load balancer that routes requests to a record based on the relative percentage of traffic, or weight, that you specify.
To monitor system status or health, you can use Amazon Route 53 health checks to properly execute automated tasks and ensure the availability of your system. Health checks can also track the status of another health check or an Amazon CloudWatch alarm.
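As a quick illustration of how routing policies are expressed, here is a sketch of creating two weighted records with the AWS CLI; the hosted zone ID, domain name, and IP addresses are placeholders.

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1234567890ABC \
  --change-batch '{
    "Changes": [
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.tutorialsdojo.com",
          "Type": "A",
          "SetIdentifier": "primary-region",
          "Weight": 80,
          "TTL": 60,
          "ResourceRecords": [{ "Value": "203.0.113.10" }]
        }
      },
      {
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "app.tutorialsdojo.com",
          "Type": "A",
          "SetIdentifier": "secondary-region",
          "Weight": 20,
          "TTL": 60,
          "ResourceRecords": [{ "Value": "203.0.113.20" }]
        }
      }
    ]
  }'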
Sources:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/disaster-recovery-resiliency.html
https://tools.ietf.org/html/rfc2671
Amazon Aurora is a fully managed MySQL- and PostgreSQL-compatible relational database that provides high performance, availability, and scalability to your applications. Since it is a fully managed service, Amazon handles all of the underlying resources in your Aurora database and ensures that your cluster is highly available to meet your disaster recovery objectives and achieve fault tolerance. Aurora is excellent, but it has certain limitations, which compels some companies to choose Amazon RDS for their database tier. Aurora does not use a native MySQL or PostgreSQL engine like RDS and can't directly run Oracle and Microsoft SQL Server databases unless you migrate them using the AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT). These constraints are the reasons why thousands of companies still use Amazon RDS in their cloud architecture.
Amazon RDS is a managed database service. Unlike its "fully managed" counterparts, AWS does not entirely manage or control all of the components of an Amazon RDS database compared to what it does for Amazon Aurora. If you launch an RDS database, you are responsible for making it highly scalable and highly available by deploying Read Replicas or using Multi-AZ Deployments. You can also improve the data durability of your database tier by taking automated or manual snapshots in RDS. For disaster recovery planning, you can set up a disaster recovery (DR) site in another AWS Region in case the primary region becomes unavailable.
Cost, scope, RTO, and RPO are the key dimensions for comparing the disaster recovery options discussed below.
An RDS Read Replica is mainly used to horizontally scale your application by offloading read requests from your primary DB instance. But did you know that it is tremendously useful for disaster recovery too? It uses asynchronous replication to mirror all the changes from the primary instance to the replica, which can be located in the same or a different AWS Region. In contrast, the Multi-AZ Deployments configuration uses synchronous replication to keep its standby instance up to date. As its name implies, the standby instance is just on standby, meaning it accepts neither read nor write requests. This standby instance can only run in the same AWS Region, unlike a Read Replica, which has cross-region capability. These unique attributes enable the Read Replica to provide the best RTO and RPO for your disaster recovery plan. You can deploy a Read Replica of your RDS database to another AWS Region to expedite the application failover if the primary region becomes unavailable, without having to wait for hours to copy and restore the automated/manual RDS snapshots in the other region.
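A sketch of setting up such a cross-Region Read Replica with the AWS CLI; the instance identifiers and Regions are placeholders.

# Run in the DR Region (e.g., us-west-2), pointing at the primary instance's ARN
aws rds create-db-instance-read-replica \
  --db-instance-identifier mydb-replica-west \
  --source-db-instance-identifier arn:aws:rds:us-east-1:111122223333:db:mydb-primary \
  --region us-west-2

# During a disaster, promote the replica to a standalone primary
aws rds promote-read-replica \
  --db-instance-identifier mydb-replica-west \
  --region us-west-2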
You should also know the difference between automated backups, manual snapshots, and Read Replicas for your Business Continuity Plan (BCP). Amazon RDS has a built-in automated backups feature that regularly takes snapshots of your database and stores them in an Amazon S3 bucket that is owned and managed by AWS. The retention period of these backups varies between 0 and 35 days. It provides a low-cost DR solution for your database tier but is limited to a single AWS Region. Manual snapshots are the ones that you take yourself, hence the name. In contrast with automated backups, manual snapshots are kept until you delete them, which means you control their retention period and can copy them across regions. Since you manage your own RDS snapshots, you can move these across AWS Regions using a shell script or a Lambda function run regularly by Amazon EventBridge (Amazon CloudWatch Events).
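For example, a cross-Region snapshot copy might look like the sketch below; the snapshot names and Regions are placeholders.

# Copy a manual snapshot from us-east-1 to the DR Region (run against the destination Region)
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier arn:aws:rds:us-east-1:111122223333:snapshot:mydb-snap-2023-01-01 \
  --target-db-snapshot-identifier mydb-snap-2023-01-01-drcopy \
  --region us-west-2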
The advantage of using Read Replicas over automated backups and manual snapshots is their near real-time asynchronous replication. To put it into perspective, the replication lag from the primary DB instance to the replica instance can be less than a second. Compare that to the time required to copy an RDS snapshot to another region and wait for it to restore and start up. Hence, Read Replicas provide the fastest RTO and the best RPO for your architecture. The only drawback is the higher cost, since you have to run your replica continuously.
Sources:
https://aws.amazon.com/blogs/database/implementing-a-disaster-recovery-strategy-with-amazon-rds/
https://d0.awsstatic.com/whitepapers/Backup_and_Recovery_Approaches_Using_AWS.pdf
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html
With Auto Scaling, you can set the number of EC2 instances that you need depending on the traffic of your application. However, there will be scenarios on the exam where you will need to maintain a fixed number of instances.
For example, you have a legacy application hosted on an EC2 instance. The application does not support running on a cluster of EC2 instances, so it needs to run on a single EC2 instance, and you need to make sure that the application is available and will heal itself even if that EC2 instance crashes.
In this scenario, you will create an Auto Scaling group using the EC2 AMI in the launch template and set the size of the group to a minimum of 1 and a maximum of 1. This ensures that only one instance of the application is running. Auto Scaling will perform health checks on the EC2 instance periodically. If the EC2 instance fails the health check, Auto Scaling will replace the instance.
Hence, the application will always be available and self-healing, which makes it fault-tolerant.
To set the MinSize and MaxSize of the Auto Scaling group:
1. Go to the EC2 Console page, on the left pane, choose Auto Scaling Groups.
2. Select the check box next to your Auto Scaling group. The bottom pane will show information about the selected Auto Scaling group.
3. On the Details tab, view or change the current settings for minimum, maximum, and desired capacity. Set the Desired capacity to 1, Minimum capacity to 1, and Maximum capacity to 1. Make sure that you don't have any automatic scaling policies configured for this group.
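The same settings can also be applied from the AWS CLI; a sketch, assuming a hypothetical group named legacy-app-asg.

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name legacy-app-asg \
  --min-size 1 \
  --max-size 1 \
  --desired-capacity 1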
Sources:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-maintain-instance-levels.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
As your Auto Scaling group scales your EC2 instances out or in, you may want to perform custom actions before they start accepting traffic or before they get terminated. Auto Scaling lifecycle hooks allow you to perform custom actions during these stages. A lifecycle hook puts your EC2 instance into a wait state (Pending:Wait or Terminating:Wait) until your custom action has been performed or the timeout period ends. The EC2 instance stays in the wait state for one hour by default, and then the Auto Scaling group resumes the launch or terminate process (Pending:Proceed or Terminating:Proceed).
For example, during a scale-out event of your ASG, you want to make sure that new EC2 instances download the latest code base from the repository and that your EC2 user data script has completed before they start accepting traffic. You can use the Pending:Wait hook for this. This way, the new instances will be fully ready and will quickly pass the load balancer health check when they are added as targets.
Another example: during a scale-in event of your ASG, suppose your instances upload data logs to S3 every minute. You may want to pause the instance termination for a certain amount of time to allow the EC2 instance to upload all data logs before it gets completely terminated.
The following diagram shows the transitions between the EC2 instance states with lifecycle hooks.
During the paused state (either launch or terminate), you can do more than just run custom scripts or wait for timeouts. Amazon EventBridge can receive the scaling action, and you can define a target to invoke a Lambda function that performs custom actions, have it send a notification to your email via SNS, or trigger an SSM Run Command or SSM Automation document to perform specific EC2-related tasks.
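A lifecycle hook can be attached to the group with a single CLI call; a sketch, reusing the same hypothetical group name as the earlier example.

# Pause instance termination for up to 5 minutes so logs can be flushed to S3
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name flush-logs-before-terminate \
  --auto-scaling-group-name legacy-app-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 300 \
  --default-result CONTINUE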
Sources:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
https://docs.aws.amazon.com/cli/latest/reference/autoscaling/put-lifecycle-hook.html
However, one of the challenges of managing a Kubernetes cluster on any deployment is configuring how the cluster scales up or down depending on demand. Adding or removing compute nodes in the Kubernetes cluster is a major function needed to meet changing application demands.
In this section, we’ll talk about the autoscaling options that are supported by Amazon Elastic Kubernetes
Service (Amazon EKS):
● Kubernetes Cluster Autoscaler
● Karpenter
The Kubernetes Cluster Autoscaler can be deployed on an existing Amazon EKS cluster, but it needs permission to examine and modify EC2 Auto Scaling groups. Using IAM roles for service accounts with an IAM OIDC provider is the recommended approach for granting the proper permissions. The following are the steps for deploying the Cluster Autoscaler on Amazon EKS.
1. Create an IAM policy to allow the Cluster Autoscaler to describe and modify the capacity of the Auto Scaling groups. Use the example below as cluster-autoscaler-policy.json.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/k8s.io/cluster-autoscaler/my-cluster": "owned"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:DescribeLaunchTemplateVersions",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations",
                "ec2:DescribeInstanceTypes"
            ],
            "Resource": "*"
        }
    ]
}
2. Create the IAM policy using the following AWS CLI command:
aws iam create-policy \
--policy-name AmazonEKSClusterAutoscalerPolicy \
--policy-document file://cluster-autoscaler-policy.json
3. Use the eksctl command to create an IAM role and attach the IAM policy. Update the name of the cluster and the name of the policy created in the previous step.
eksctl create iamserviceaccount \
  --cluster=my-cluster \
  --namespace=kube-system \
  --name=cluster-autoscaler \
  --attach-policy-arn=arn:aws:iam::111122223333:policy/eksctl-my-cluster-nodegroup-ng-xxxxxxxx-PolicyAutoScaling \
  --override-existing-serviceaccounts \
  --approve
4. You can deploy the Cluster Autoscaler by downloading the YAML file from GitHub and then using kubectl to apply the deployment. Ensure that the cluster name is updated in the YAML file.
curl -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl apply -f cluster-autoscaler-autodiscover.yaml
5. Once the deployment is applied, annotate the cluster-autoscaler service account with the ARN of the IAM role that you created previously.
kubectl annotate serviceaccount cluster-autoscaler \
  -n kube-system \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/AmazonEKSClusterAutoscalerRole
6. Add an annotation to the cluster-autoscaler deployment using the kubectl patch command:

kubectl patch deployment cluster-autoscaler \
  -n kube-system \
  -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"}}}}}'
7. Edit the Cluster Autoscaler deployment using the kubectl command and add the container commands below to ensure that compute nodes are distributed across Availability Zones.

kubectl -n kube-system edit deployment.apps/cluster-autoscaler

--balance-similar-node-groups
--skip-nodes-with-system-pods=false
8. Ensure that the image deployed for the Cluster Autoscaler matches the version of your Kubernetes cluster. If you need to update the image version, use the following kubectl command:

kubectl set image deployment cluster-autoscaler \
  -n kube-system \
  cluster-autoscaler=registry.k8s.io/autoscaling/cluster-autoscaler:v1.25.n
Karpenter
Karpenter is an open-source project from AWS built to handle node provisioning on Kubernetes clusters. Karpenter simplifies cluster scaling by watching the aggregate resource requests of unscheduled pods and making decisions to launch new nodes. It evaluates the scheduling constraints requested by the pods to properly provision compute nodes as they are required. Karpenter can also remove nodes when they are no longer needed or when there is excess capacity in the cluster.
Karpenter can be deployed on an existing Amazon EKS cluster, and it also needs proper IAM permissions in order to provision and terminate EC2 instances on your behalf. The following steps walk through the deployment of Karpenter on an existing EKS cluster.
1. Karpenter provides a CloudFormation template to create the required IAM policy and roles. Download the template, modify the parameters, and deploy the stack.
curl -fsSL https://karpenter.sh/v0.27.0/getting-started/getting-started-with-eksctl/cloudformation.yaml > karpenter-cfn.yaml \
  && aws cloudformation deploy \
  --stack-name "Karpenter-Demo" \
  --template-file karpenter-cfn.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=eks-karpenter-demo"
2. Use Helm charts to deploy Karpenter to the existing EKS cluster. Add the Helm repo and install Karpenter to the EKS cluster.

$ helm repo add karpenter https://charts.karpenter.sh
$ helm repo update
$ helm upgrade --install --skip-crds karpenter karpenter/karpenter \
  --namespace karpenter \
  --create-namespace --set serviceAccount.create=false --version 0.5.0 \
  --set controller.clusterName=eks-karpenter-demo \
Sources:
https://karpenter.sh/
https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html
When deploying Kubernetes clusters on Amazon Elastic Kubernetes Service (Amazon EKS), a DevOps engineer must understand how to configure Amazon VPC and Kubernetes networking.
Amazon EKS uses a networking model similar to upstream Kubernetes for its pod networking. This model is a flat network that allows pods to communicate with each other regardless of which node in the cluster they are running on.
This section provides an overview of how Amazon EKS networking works and the networking add-ons that extend the features of your Kubernetes cluster.
There are two VPCs involved when deploying a Kubernetes cluster on Amazon EKS. The first VPC is AWS-managed and is not visible to the user. This VPC hosts the Kubernetes control plane, which contains the Kubernetes API server. All user commands using kubectl, or any API calls for the cluster, are sent to the API server endpoint. The second VPC hosts the Kubernetes worker nodes, which could be Amazon EC2 instances. The worker nodes host the application workloads in pods. All worker nodes must be able to communicate with the API server.
See the diagram below for reference:
Amazon EKS creates a public API endpoint through which kubectl commands are received. AWS also creates EKS-managed elastic network interfaces (ENIs) to allow internal communication with the nodes inside the VPC.
You can choose whether to use the public endpoint, which is reachable via the internet, or the private endpoint from inside the VPC. You can also enable both endpoints, depending on your requirements.
By default, only the public endpoint is enabled, and you can control who can connect to it using CIDR restrictions that allow only certain IPs to reach the Kubernetes control plane. When you enable the private endpoint, traffic must come from within the cluster's VPC or a connected network, such as AWS VPN or AWS Direct Connect. You can use AWS PrivateLink to create a private connection from another VPC to Amazon EKS. You have to create an interface endpoint for AWS PrivateLink and enable the interface for each subnet in the VPC.
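Endpoint access can be adjusted after cluster creation; a sketch using the AWS CLI, with a hypothetical cluster name and CIDR:

# Enable the private endpoint and restrict the public endpoint to a single office IP
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true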
You can extend the functionality of Amazon EKS networking by using add-ons that provide more functionality, depending on what is required by your application workloads. The following are some of the networking add-ons that you can install on your Amazon EKS cluster.
The Amazon VPC CNI (container network interface) plugin provides networking for pods. This Kubernetes plugin is deployed on each node in the cluster as a daemon set. It creates elastic network interfaces that are attached to each Amazon EC2 worker node and assigns a private VPC IP address to each pod in the cluster. This plugin is helpful if you require individual VPC IP addresses assigned to each pod.
This plugin is installed by default for new clusters in Amazon EKS. If your EKS cluster does not have it installed, you can create the Amazon EKS add-on using the AWS CLI command below. Replace the fields with the appropriate values.
aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni \
  --addon-version v1.12.5-eksbuild.2 \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKSVPCCNIRole

You can confirm that the add-on is applied on the EKS cluster using the following command:

aws eks describe-addon --cluster-name my-cluster --addon-name vpc-cni \
  --query addon.addonVersion --output text
As we design highly available systems, the pods deployed on the Kubernetes cluster are spread across multiple nodes. To distribute traffic to those pods, we can use an Application Load Balancer. The AWS Load Balancer Controller handles the provisioning of ALBs as required by the applications deployed on the Amazon EKS cluster.
The AWS Load Balancer Controller creates an ALB when you create a Kubernetes Ingress object, while an NLB is created when you use a Service of type LoadBalancer.
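For example, an Ingress manifest along these lines would cause the controller to provision an internet-facing ALB; the names, annotations shown, and port are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # Tell the AWS Load Balancer Controller to provision an internet-facing ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80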
When installing this add-on, you need to create an IAM policy and role that have permission to create ALBs on your behalf.
You can verify that the add-on is installed using the following kubectl command:

kubectl get deployment -n kube-system aws-load-balancer-controller
The CoreDNS Amazon EKS add-on is an extensible DNS server used to provide more flexible name resolution for all pods in the EKS cluster. If you don't want to be limited by the functionality of Amazon Route 53, or if you need a fast DNS service inside the Kubernetes cluster that can resolve internal and external DNS queries, CoreDNS provides features for these specific requirements. It also provides service discovery to automatically add or remove DNS entries for new or deleted pods and services.
Once deployed, this add-on creates two replicas of CoreDNS in the cluster, regardless of how many nodes are active. Amazon EKS automatically installs CoreDNS as a self-managed add-on for every cluster.
If your cluster does not have it installed, you can create the add-on and apply it on the EKS cluster using the
following commands:
aws eks create-addon --cluster-name my-cluster --addon-name coredns \
  --addon-version v1.9.3-eksbuild.2
Replace the version with the specific version needed for your cluster.
Once the add-on is installed, you can check it using the following command:
aws eks describe-addon --cluster-name my-cluster --addon-name coredns \
  --query addon.addonVersion --output text
kube-proxy is the default networking add-on for Kubernetes. It is deployed on every Amazon EC2 compute node in your EKS cluster. This plugin is not deployed on EKS Fargate clusters.
kube-proxy runs as a daemon set in the Kubernetes cluster, and it maintains network rules on the nodes, allowing communication to your pods from inside and outside the cluster.
AWS no longer recommends running kube-proxy as a self-managed add-on, so we won't discuss it in further detail. AWS recommends using the Amazon EKS add-on type in your cluster instead, as the managed add-ons are more robust and provide extended functionality.
With Calico network policy enforcement, you can implement network segmentation and tenant isolation. This is useful in multi-tenant environments where you must isolate tenants from each other, or when you want to create separate environments for development, staging, and production.
EKS has built-in support for Calico, providing a robust implementation of the full Kubernetes Network Policy
API. EKS users wanting to go beyond Kubernetes network policy capabilities can make full use of the Calico
Network Policy API.
You can also use Calico for networking on EKS in place of the default AWS VPC networking, without the need to use IP addresses from the underlying VPC. Calico is not supported when using Fargate with Amazon EKS.
Install the Tigera Calico operator and custom resource definitions using its Helm chart, then confirm that all of the pods are running. Example commands are shown below.
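A sketch of those two steps; the chart version shown is illustrative:

# Add the Tigera (Calico) Helm repository and install the operator
helm repo add projectcalico https://docs.tigera.io/calico/charts
helm repo update
helm install calico projectcalico/tigera-operator \
  --version v3.25.0 \
  --namespace tigera-operator \
  --create-namespace

# Confirm that all of the Calico pods are running
kubectl get pods -n calico-system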
Sources:
https://docs.aws.amazon.com/eks/latest/userguide/eks-networking.html
https://aws.amazon.com/blogs/containers/de-mystifying-cluster-networking-for-amazon-eks-worker-nodes/
https://docs.aws.amazon.com/eks/latest/userguide/vpc-interface-endpoints.html
https://docs.aws.amazon.com/eks/latest/userguide/eks-networking-add-ons.html
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
AWS Systems Manager offers several tools that help automate and standardize applying OS patches across all your environments.
To configure patching for on-premises servers in a hybrid environment, the first step is to enroll the servers in AWS Systems Manager. Follow these steps to enroll servers in AWS SSM.
1. Create an IAM service role for the hybrid environment, which is required for the servers to communicate with the AWS Systems Manager service. AWS already provides a managed IAM policy for this; just attach the AmazonSSMManagedInstanceCore policy to an IAM role with the trusted entity type set to Systems Manager.
2. On the Systems Manager console, create a Hybrid Activation and specify the role created in the first step.
3. This will give you an Activation Code and Activation ID that you will use in the next step.
4. Install the SSM Agent on the on-premises servers and register them using the Activation Code and Activation ID. For example, to install the SSM Agent on Ubuntu:

sudo snap install amazon-ssm-agent --classic
sudo systemctl stop snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo /snap/amazon-ssm-agent/current/amazon-ssm-agent -register -code "activation-code" -id "activation-id" -region "region"
sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
More detailed instructions for installing the agent on Linux and Windows servers can be found in the AWS Systems Manager documentation.
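As an alternative to the console, the hybrid activation from step 2 can also be created with the AWS CLI; a sketch, with a hypothetical role name, instance name, and registration limit:

aws ssm create-activation \
  --default-instance-name "on-prem-web-servers" \
  --iam-role "SSMServiceRole" \
  --registration-limit 10 \
  --region us-east-1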
Managed instances from a hybrid activation will have an ID prefix of mi-xxxxx under Fleet Manager on the Systems Manager page.
To configure standardized patching, you can configure Maintenance Windows under the AWS Systems Manager page. A maintenance window lets you define a schedule for when to perform potentially disruptive actions on your nodes, such as patching an operating system, updating drivers, or installing software or patches. It also allows you to register an Automation task that will run during the specified maintenance window.
To create an automated patching task under a specific maintenance window, follow the steps below.
1. Create a maintenance window on the AWS Systems Manager page.
2. Specify the time for your maintenance window, which is preferably outside business hours, then click Create maintenance window.
3. Select the created maintenance window and click Actions > Register targets to set which instances will be associated with this maintenance window. You can choose to specify targets using tags, resource groups, or by selecting instances manually.
4. Select the created maintenance window and click Actions > Register Automation task to set an OS patching task for this window.
This Automation task will apply the OS patches during the specified maintenance window.
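The same setup can also be scripted; a sketch with the AWS CLI, where the window name, schedule, window ID, and tag values are placeholders:

# Create a weekly maintenance window (Sunday 02:00 UTC, 3-hour duration, 1-hour cutoff)
aws ssm create-maintenance-window \
  --name "weekly-os-patching" \
  --schedule "cron(0 2 ? * SUN *)" \
  --duration 3 \
  --cutoff 1 \
  --allow-unassociated-targets

# Register the instances to patch, selected by tag
aws ssm register-target-with-maintenance-window \
  --window-id mw-0123456789abcdef0 \
  --resource-type INSTANCE \
  --targets "Key=tag:PatchGroup,Values=web-servers"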
Sources:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-managed-instance-activation.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html
Most organizations that deploy virtual machines create a golden image used to host different applications in their data center. The golden image acts as a base template for a virtual machine and generally contains the latest OS patches, proper security configurations, and the monitoring or automation agents that the company requires. Creating and updating these golden images can be time-consuming, especially for large organizations where the golden image must be shared across multiple AWS accounts.
AWS provides an automated way to create, manage, customize, and deploy EC2 server images using the EC2 Image Builder service. With EC2 Image Builder, you can configure pipelines to automate the installation of patches and updates, as well as configure specific settings on the EC2 instance, before generating an Amazon Machine Image (AMI). You can distribute this AMI to your AWS Regions, or you can authorize other AWS accounts, organizations, and OUs to launch it from your account.
The following procedure outlines how to create an AMI using EC2 Image Builder and share the AMI with AWS accounts under your AWS Organizations using AWS Resource Access Manager (RAM).
1. Go to the EC2 Image Builder page in the AWS Management Console.
2. Click the Create image pipeline button and specify the details of your pipeline.
3. You can set a schedule to run the pipeline at specified intervals, depending on your requirements.
4. Choose a recipe to generate the AMI or Docker image you want to create.
6. Under the Components section, choose which additional packages you want to install.
7. Click Next to select the infrastructure where you want EC2 Image Builder to run your pipeline. This is an optional step.
8. Click Next to configure the Distribution settings in order to share the AMI with other AWS accounts or Regions. Specify the Regions and the AWS account IDs with which you want to share your AMI.
9. Once populated, click Next to review and create the pipeline.
Another way to automate sharing of the AMI within your AWS Organization is by using AWS Resource Access Manager (RAM). RAM lets you share the AWS resources that you create in one AWS account with all of the roles and users in that same account, or with other AWS accounts. If you manage your accounts using AWS Organizations, you can share resources with all the other accounts in the organization, or only with accounts contained in one or more specified organizational units (OUs).
Follow the steps below to share the AMI with other accounts in the organization.
1. Go to Resource Access Manager in the AWS Management Console and click Create resource share.
2. Specify the details of the share and select the Image Builder images under the resources that you want to share.
3. Click Next to proceed and associate permissions with your AWS accounts within your organization.
4. Click Next to review the details and click Create resource share to save the changes.
The newly created Amazon Machine Image (AMI) from EC2 Image Builder should now be available for use by other AWS accounts within your organization.
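As a rough sketch, the same resource share can also be created with boto3; the image ARN and organization ARN below are placeholders:

import boto3

ram = boto3.client('ram')

# Share an EC2 Image Builder image with the entire organization (placeholder ARNs)
response = ram.create_resource_share(
    name='td-golden-ami-share',
    resourceArns=['arn:aws:imagebuilder:us-east-1:111122223333:image/td-golden-ami/1.0.0/1'],
    principals=['arn:aws:organizations::111122223333:organization/o-examplerootid'],
    allowExternalPrincipals=False  # restrict the share to accounts within the organization
)
print(response['resourceShare']['resourceShareArn'])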
Sources:
https://docs.aws.amazon.com/imagebuilder/latest/userguide/getting-started-image-builder.html
https://docs.aws.amazon.com/imagebuilder/latest/userguide/start-build-image-pipeline.html
https://docs.aws.amazon.com/imagebuilder/latest/userguide/what-is-image-builder.html
https://docs.aws.amazon.com/ram/latest/userguide/what-is.html
Disaster recovery strategies generally involve copying data to another Region and a way to continuously update the data on those backups. The backups can either be stored in cold storage or be asynchronously updated, depending on how fast the recovery requirements are in the event of a disaster.
Amazon Aurora MySQL has a built-in solution to create Cross-Region Read Replicas, which asynchronously replicate data from the RDS instance in the primary Region. In the event of a disaster in the primary Region, a Read Replica in the secondary Region can be promoted to a primary instance. This strategy allows for a swift recovery of your RDS instances, which helps meet business continuity goals or compliance requirements. Promoting an RDS Read Replica to a primary database requires manual intervention. However, this can be automated by using AWS Lambda functions invoked by AWS Step Functions.
In the diagram, the AWS Elastic Disaster Recovery Console is used to replicate the Amazon EC2 instances to the secondary Region. An Amazon RDS primary instance is being replicated to the secondary Region by creating a Cross-Region Read Replica.
In the event of a disaster in the primary Region, the AWS Step Functions state machine can be executed to invoke AWS Lambda functions that promote the Read Replica instance to the primary instance, check if the promotion to primary is successful, and deploy Amazon EC2 instances in the secondary Region. Amazon EventBridge can be used to check for events when the Cross-Region Read Replica sends notifications to an Amazon SNS topic.
1. Navigate to the AWS Lambda console and click Create a function.
2. Choose Author from scratch with the Python 3.9 runtime.
3. Click Create function to edit the function. Add the snippet below to your Python code to promote the Cross-Region Read Replica to a primary instance. Click Deploy to save the changes.
import boto3

rds = boto3.client('rds')
secondary = "rds-drs-crrr-cluster-1"

def lambda_handler(event, context):
    # Promote the Cross-Region Read Replica cluster to a standalone primary cluster.
    # (This assumes an Aurora cluster replica; for a non-Aurora RDS instance replica,
    # promote_read_replica would be used instead.)
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=secondary)
5. Repeat Steps 1 and 2 to create another function named drs_failover. This function will trigger the creation of Amazon EC2 instances leveraging the setup from the AWS Elastic Disaster Recovery console.
# ...
# Make a list of all source server IDs
serverItems = []
for i in response_iterator:
    serverItems += i.get('items')

serverList = []
for i in serverItems:
    serverList.append(i['sourceServerID'])
6. Create an AWS Step Functions state machine to invoke the Lambda functions.
By configuring the state machine, you can orchestrate the disaster recovery flow with a single action. Upon execution, the first Lambda function is invoked, and the Cross-Region Read Replica instance is promoted to a primary instance. An Amazon EventBridge rule will detect the RDS event and send a message to an Amazon SNS topic to notify the subscribers about the disaster recovery event. The second Lambda function will check the RDS promotion status, and if it is successful, the third Lambda function will be invoked to provision the Amazon EC2 instances in the secondary Region and prepare them to accept traffic.
Sources:
https://lifesciences-resources.awscloud.com/aws-storage-blog/automating-disaster-recovery-of-amazon-rds-and-amazon-ec2-instances
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html
https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-creating-lambda-state-machine.html
Amazon Lookout for Metrics uses machine learning (ML) to continuously monitor your data, find anomalies, determine root causes, and take action based on these anomalies. For example, it can detect anomalies in important business metrics, such as unexpected dips in revenue, transaction failures, subscriptions, conversion rates, churn rates, or cost spending. This can be implemented even without experience in machine learning. Amazon Lookout for Metrics allows you to define alerts based on anomalies it detects and then send notifications to an Amazon SNS topic or take action by triggering an AWS Lambda function.
It is important to remember the following concepts to understand how Amazon Lookout for Metrics works:
Detector – Uses machine learning to monitor the dataset and identify any anomalies. It tries to find patterns in the data to distinguish expected values from possible anomalies. You can control the interval for how often it updates the dataset and looks for anomalies.
Datasource – Time-series-based data that is analyzed by the detector. Amazon Lookout for Metrics supports a variety of sources, such as Amazon S3, Amazon Redshift, Amazon CloudWatch, and third-party integrations such as Salesforce, Zendesk, and Marketo.
Dataset – At the configured detector interval, the metrics and dimensions from the datasource are copied to the dataset for analysis. This continuous copy of data is used to detect anomalies, while the historical data is used to further improve the machine learning algorithm.
Metrics – Fields that are used to measure the dataset. Metrics are a combination of measures and dimensions. Measures are numerical fields that the detector monitors, while dimensions are categorical fields that create subgroups of measures based on their values.
Alert – When the detector finds an anomaly, you can create an alert to send notifications using Amazon SNS or invoke an AWS Lambda function. You create an alert by defining a severity score threshold, which indicates how far an anomaly is outside the expected range.
The following steps outline the creation of a detector, datasource, metrics, and alerts using Amazon Lookout for Metrics.
1. On the Amazon Lookout for Metrics management console, go to Detectors and click Create detector.
2. Input the detector name and the interval between each analysis. Click Create.
3. Click Add a dataset to choose the datasource where your data can be found.
4. Add a name and the timezone for your dataset and then select the datasource – in this case, Amazon S3.
5. After setting the datasource, the next step is to set the mapping of fields for the measures and dimensions.
6. Click Next to review and save the created dataset for the detector. Go to the Detector details page and click Activate detector.
7. The last step would be to create alerts when anomalies are identified. Click Add alerts and provide details for your alert.
8. You can select an SNS topic as a target to send a notification for the identified anomaly. You can also use an alert to trigger automation using AWS Lambda functions that can take action based on the detected anomaly.
You may find that the metrics and alerting features of Amazon Lookout for Metrics look similar to Amazon CloudWatch Metrics. However, there is a significant difference between the two. With CloudWatch Metrics, you can collect data, create graphs, and set thresholds for alarms, but these aggregated metrics are not automatically analyzed to make sense of any patterns, outliers, or anomalies in the data. With Amazon Lookout for Metrics, the data is continuously analyzed and compared to historical data to detect if any anomalies are happening. You can extend the functionality of CloudWatch Metrics by using it as a datasource for Lookout for Metrics. This way, the metrics collected by CloudWatch can be sent to Lookout for Metrics for analysis.
Sources:
https://docs.aws.amazon.com/lookoutmetrics/latest/dev/lookoutmetrics-welcome.html
https://aws.amazon.com/blogs/machine-learning/introducing-amazon-lookout-for-metrics-an-anomaly-detection-service-to-proactively-monitor-the-health-of-your-business/
https://docs.aws.amazon.com/lookoutmetrics/latest/dev/gettingstarted-concepts.html
https://aws.amazon.com/blogs/machine-learning/build-an-air-quality-anomaly-detector-using-amazon-lookout-for-metrics/
Amazon Simple Storage Service (Amazon S3) provides object storage with a high level of availability, scalability, durability, and performance. It is designed to handle large amounts of data for data lakes, web applications, backups, big data analytics, etc. It can also be used to securely store data and control whether you want to share data publicly or only with specific users. Tags play an important role in managing and identifying resources in the AWS cloud. Amazon S3 supports tagging for both the bucket itself and the objects contained in it. Tags not only allow you to identify resources but can also be used to control who can access them.
For example, by adding object tags, you can control which IAM users can access specific objects in an Amazon S3 bucket. You can create an object tag called DataClassification that can have a value of either confidential, private, or public. If you add an Owner tag to each object to identify who owns that object, then you can use the S3 bucket policy to enforce policies that only allow access to the owner of an object. This is helpful if you have an object that is classified as “confidential” and you want only the owner to have access to it.
The following steps outline how to implement an S3 bucket policy and IAM user policy to demonstrate the above example:
1. On the Amazon S3 management console, create an S3 bucket and upload files to it.
2. Click the object you want to tag and, under the Tags section, add the proper DataClassification and Owner tags to the objects. Ensure proper tagging in order to enforce access control in the next steps.
3. Once the objects are tagged, go to the S3 bucket Permissions tab and edit the Bucket policy. Add the bucket policy below to ensure that the s3:GetObject action is allowed only when the object has aws:ResourceTag/DataClassification equal to confidential and s3:ExistingObjectTag/Owner equal to ${aws:userid}.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessTags",
            "Principal": "*",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::td-files-shared-2we9ijo/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/DataClassification": "confidential",
                    "s3:ExistingObjectTag/Owner": "${aws:userid}"
                }
            }
        }
    ]
}
4. Once the bucket policy is saved, you must add permissions to all IAM users that will need access to the objects in the S3 bucket. Create an IAM policy and attach it to each IAM user or to an IAM group.
5. Go to the IAM management console > Policies and create the following IAM policy, which allows the s3:GetObject permission.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3BucketAccess",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::td-files-shared-2we9ijo/*"
        }
    ]
}
6. Attach the policy to the IAM user – tduser1.
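Tagging every object by hand can be tedious, so the tags from step 2 may also be applied programmatically. This is a small sketch using boto3 with a placeholder object key and owner ID:

import boto3

s3 = boto3.client('s3')

s3.put_object_tagging(
    Bucket='td-files-shared-2we9ijo',
    Key='reports/payroll-2023.xlsx',  # hypothetical object key
    Tagging={
        'TagSet': [
            {'Key': 'DataClassification', 'Value': 'confidential'},
            # The Owner value must match the aws:userid of the IAM user that should retain access
            {'Key': 'Owner', 'Value': 'AIDAEXAMPLEUSERID'}
        ]
    }
)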
Upon applying the above Amazon S3 bucket policy and attaching the IAM policy to the IAM user, the “confidential” objects in the S3 bucket can only be accessed by their respective “owner”.
Sources:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-with-s3-actions.html#using-with-s3-actions-related-to-objects
https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-resourcetag
AWS Storage Gateway is a service that allows an appliance to run on your on-premises network and connect to the storage infrastructure of AWS. This is designed for a hybrid setup with seamless and secure integration of on-premises storage solutions with AWS. You can extend your storage solutions to the cloud, create file shares, or use the cloud as a backup solution.
AWS Storage Gateway provides four types of gateways for different use cases:
● Amazon S3 File Gateway
● Amazon FSx File Gateway
● Tape Gateway
● Volume Gateway
Amazon S3 File Gateway – designed to be deployed for storage solutions that require a file system share using the Network File System (NFS) and Server Message Block (SMB) protocols for data center applications. The objects stored on the file shares retain ownership, permissions, and timestamps. These file shares directly store the objects in your configured Amazon S3 bucket, and they can be managed as native S3 objects. You can also use this to store database backups, log files, or metrics data that you want to use for machine learning or big data analytics.
Amazon FSx File Gateway – designed for Windows-based infrastructures. It provides low-latency, scalable file shares using the SMB protocol. Amazon FSx for Windows File Server offers a scalable shared file system that integrates with your existing environment. With the HDD option, Amazon FSx can present file storage with full Windows-native compatibility, such as NTFS support and ACLs.
Tape Gateway – designed for data backup and archival requirements that use iSCSI-based virtual tape libraries (VTL). As most on-premises data centers use cheap tape backups for data, Tape Gateway offers a fast and durable cloud-based solution. Backups are automatically stored in Amazon S3. It can save costs for long-term archival and reduces the maintenance overhead needed for off-site media storage requirements.
Volume Gateway – designed for creating iSCSI-based block storage devices that can be used by on-premises applications to store data. The data stored in these volumes is asynchronously backed up to Amazon S3. Volume Gateway has two configurations – stored volumes and cached volumes. Stored volumes provide low-latency access by storing all your data locally, which is then asynchronously backed up to Amazon S3. Cached volumes only store frequently accessed data locally, while all data is stored in Amazon S3.
When a lot of new objects are uploaded directly to the S3 bucket, the contents of the inventory cache may get outdated and not show these new objects. S3 File Gateway gives you an automated way to schedule this inventory cache refresh. See the following overview steps on how to enable this.
1. First, we need to create a gateway. On the Storage Gateway management console, go to Gateways, and click Create gateway.
2. Give a name to the gateway, select a gateway time zone, and select Amazon S3 File Gateway.
3. Under the platform options, choose the platform on which you would host the storage gateway appliance. Click Download image and deploy it to your on-premises infrastructure. Once the appliance is deployed, click Next.
4. Input the IP address or the activation key from the storage gateway appliance.
5. Click Next to review and activate the gateway.
6. After activation of the storage gateway, go to the File shares section to create a file share.
7. Choose the gateway you created and select NFS or SMB for the share type. Select the destination S3 bucket that will store all objects from the file share. Click Create file share.
8. Once the file share is created, you can configure automated cache refresh. Click the file share you just created, and click Actions > Edit file share settings.
9. For Automated cache refresh from S3 after, select the check box and set the time in days, hours, and minutes to refresh the file share's cache using a Time To Live (TTL). Once the TTL has elapsed, the file gateway will automatically run RefreshCache to re-populate its inventory.
This procedure can also be done by scheduling an AWS Lambda function using Amazon EventBridge. You can write a Lambda function that will call the RefreshCache API for the S3 File Gateway. You can then create an Amazon EventBridge rule that is scheduled to run at regular intervals and set the target to trigger the Lambda function that will call the RefreshCache API.
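A minimal sketch of such a Lambda function is shown below; the file share ARN is a placeholder, and the function is assumed to be invoked by a scheduled EventBridge rule:

import boto3

storagegateway = boto3.client('storagegateway')

FILE_SHARE_ARN = 'arn:aws:storagegateway:us-east-1:111122223333:share/share-EXAMPLE'

def lambda_handler(event, context):
    # Re-populate the file gateway's inventory cache for the whole share
    storagegateway.refresh_cache(
        FileShareARN=FILE_SHARE_ARN,
        FolderList=['/'],
        Recursive=True
    )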
Sources:
https://aws.amazon.com/storagegateway/features/
https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_RefreshCache.html
https://docs.aws.amazon.com/filegateway/latest/files3/refresh-cache.html
https://docs.aws.amazon.com/filegateway/latest/files3/file-gateway-concepts.html
Amazon CodeGuru uses machine learning to provide developers with tools that scan their code for potential security issues, provide recommendations to improve code quality, and identify possible inefficiencies in the source code.
Amazon CodeGuru Reviewer has the ability to connect to your source code repositories and perform automated code reviews to flag potential defects or bugs in the code. It can also provide suggestions on how to improve your code. Amazon CodeGuru Reviewer supports the Java and Python languages. CodeGuru Reviewer has been trained with a large data set so it can provide actionable recommendations with a low rate of false positives, and you can improve its ability to analyze code by providing user feedback. It continuously scans your repositories, and you can view the recommendations from the analysis and code reviews directly in the CodeGuru Reviewer console or as pull request comments in your repository.
Amazon CodeGuru Reviewer comes with Secrets Detector, which can scan your code to find secrets in source code such as passwords, Git keys, access keys, API keys, SSH keys, and access tokens. It can integrate with AWS Secrets Manager to protect your unprotected secrets. Once CodeGuru generates and displays the code review recommendation, you can quickly protect the secrets by clicking “Protect your credentials” in the code review, going to Secrets Manager, and creating new secret values.
The following steps demonstrate how Amazon CodeGuru integrates with source code repositories, generates code recommendations, scans for secrets, and helps protect the secrets with AWS Secrets Manager.
1. There should be an existing Java or Python repository in a source provider supported by CodeGuru Reviewer – AWS CodeCommit, GitHub, Bitbucket, or Amazon S3.
2. Navigate to the CodeGuru management console and, under Reviewer, click Repositories. Click Associate repository and run analysis.
3. Provide the details of the repository you want to associate. Select the provider and the repository location you want to associate with.
4. Under the source branch, select the repository branch you want to scan and the code review name. Click Associate repository and run analysis to start the scanning process.
5. Go to Reviewer > Code reviews to view the generated recommendations and security analysis.
6. In this sample repository, you can see the recommendation to protect secret keys that are hardcoded in the source code. Click Protect your credentials to open AWS Secrets Manager.
7. In AWS Secrets Manager, create a new secret to protect the AWS credentials from the code.
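The repository association from step 2 can also be done programmatically. The sketch below shows the equivalent boto3 call for a CodeCommit repository, assuming a hypothetical repository name:

import boto3

codeguru = boto3.client('codeguru-reviewer')

# Associate a CodeCommit repository so CodeGuru Reviewer can analyze its pull requests
response = codeguru.associate_repository(
    Repository={
        'CodeCommit': {
            'Name': 'td-sample-java-app'
        }
    }
)
print(response['RepositoryAssociation']['AssociationArn'])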
Sources:
https://aws.amazon.com/blogs/aws/codeguru-reviewer-secrets-detector-identify-hardcoded-secrets/
https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html
https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/how-codeguru-reviewer-works.html
https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/recommendations.html#secrets-detection
Amazon CodeGuru Profiler is another tool that uses machine learning to create a performance profile of your live applications. It continuously collects runtime performance data to identify expensive lines of code, and it can generate suggestions to improve the performance and efficiency of your source code. It supports applications written in all Java Virtual Machine (JVM) languages and in Python, and it supports profiling applications running on AWS Lambda and other AWS compute platforms for both languages.
Once enabled on the application code, Amazon CodeGuru Profiler creates a dashboard of profiling data where you can visualize the performance of your application. You can view which specific calls or routines in the code are consuming the most resources so you can analyze and reduce bottlenecks in your source code.
Amazon CodeGuru supports profiling AWS Lambda functions; the only requirements to start the agent are the profiling group name and the @with_lambda_profiler() decorator on your handler function. The following steps demonstrate how you can enable Amazon CodeGuru Profiler on an AWS Lambda function by adding the @with_lambda_profiler() decorator to your handler function.
1. To enable Amazon CodeGuru Profiler, go to the AWS Lambda console and open your Lambda function.
2. Click the Configuration tab, and click Monitoring and operations tools. Click Edit.
3. Enable Amazon CodeGuru Profiler and click Save. This creates a profiling group when a profile is available to submit.
4. You can also manually add CodeGuru Profiler to your Lambda source code by decorating your handler function with @with_lambda_profiler(). See the example code below:

# The decorator is provided by the codeguru_profiler_agent package, which must be bundled with the function
from codeguru_profiler_agent import with_lambda_profiler

@with_lambda_profiler(profiling_group_name="MyGroupName")
def handler_name(event, context):
    return "Hello World"
It is important to note that the decorator should only be added in the handler function and not in other internal
functions. You can pass the profiling group name directly in the decorator or with environment variables.
You can view the heap size, analyze it, and take actions to improve the efficiency of your application.
Sources:
https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html
https://docs.aws.amazon.com/codeguru/latest/profiler-ug/python-lambda.html
https://docs.aws.amazon.com/codeguru/latest/profiler-ug/python-lambda-command-line.html
https://docs.aws.amazon.com/codeguru/latest/profiler-ug/setting-up-short.html
Overview
The fourth exam domain of the AWS Certified DevOps Engineer Professional test is all about the process of monitoring and logging resources in AWS. You must learn how to set up advanced monitoring configurations using Amazon CloudWatch, AWS Systems Manager, AWS CloudTrail, and other related services. The task of aggregating logs and metrics across multiple AWS accounts and Regions is also covered. Roughly 15% of the questions in the actual DevOps exam revolve around these topics.
In this chapter, we will cover all of the related topics for monitoring and logging in AWS that will likely show up
in your DevOps exam.
AWS Config enables you to monitor, audit, and evaluate the configurations of your AWS resources. It allows you to track and review the configuration changes of your resources as well as determine your overall compliance against your internal IT policies. You can also use AWS Systems Manager Automation to automatically remediate noncompliant resources that were evaluated by your AWS Config rules.
This service is a great monitoring and automatic remediation tool for your AWS resources. However, there are certain limitations that you should know about AWS Config. The scope of this service is regional, which means that it can only monitor the AWS resources in a specific Region. It is usually enabled on a per-Region basis in your AWS account. This poses a problem if your organization is using multiple AWS Regions and accounts.
You can use the multi-account, multi-Region data aggregation capability of AWS Config if you want to centralize the auditing and governance of your widely distributed cloud resources. This functionality reduces the time and overhead required to collect an enterprise-wide view of your compliance status. It provides you with a single, aggregated view of your AWS resources across Regions, accounts, and even your AWS Organizations. To do this, you have to create an Aggregator first and specify the Regions where you want to collect data from.
An Aggregator, as its name suggests, is a resource of AWS Config that collects or groups data together. It replicates data from the specified source accounts into the aggregator AWS account where the aggregated view will be used. The aggregator account has access to the resource configuration and compliance details for multiple accounts and Regions. This type of AWS Config resource gathers AWS Config configuration and compliance data from the following:
● A single AWS account with resources from multiple AWS Regions.
● Multiple AWS accounts where each account uses multiple AWS Regions.
● The master and member accounts of an organization entity in AWS Organizations.
You can use an aggregator to view the resource configuration and compliance data recorded in AWS Config.
AWS Config is a regional resource. If you want to implement a centralized monitoring system of your resources across various AWS Regions and AWS accounts, you have to set up data aggregation using an Aggregator in AWS Config.
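An aggregator can be created through the console or the API. The following is a rough boto3 sketch that aggregates data from two placeholder source accounts across all Regions:

import boto3

config = boto3.client('config')

config.put_configuration_aggregator(
    ConfigurationAggregatorName='td-org-aggregator',
    AccountAggregationSources=[
        {
            'AccountIds': ['111122223333', '444455556666'],  # placeholder source accounts
            'AllAwsRegions': True
        }
    ]
)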
Remember that AWS AppConfig and AWS Config are two different services! The former is a capability of AWS Systems Manager that assists you in managing, storing, and deploying your application configurations to your Amazon EC2 instances at runtime. The latter is a configuration management service that helps you monitor, audit, and evaluate the configurations of your AWS resources.
VPC Flow Logs is a feature in AWS that allows you to capture information about the incoming and outgoing IP traffic of the network interfaces in your Amazon VPC. Flow logs can assist you in properly monitoring and logging the activities in your VPC. They can help you diagnose overly restrictive security groups or network ACL rules, monitor the incoming traffic to your EC2 instances, and determine the flow of traffic to and from the network interfaces. After you've created a flow log, you can retrieve and view its data in the chosen destination.
Large companies often have multiple AWS accounts and use multiple VPCs in their cloud architecture. Monitoring the IP traffic flow could be difficult for a complex and extensive network architecture since the scope of the flow logs is only within a single VPC. You can enable flow logs for the VPCs that are peered with your VPC as long as the peer VPC is in your AWS account. However, VPC peering is still not enough to build centralized logging for multi-account environments with different types of network configurations.
Storing all the log data in Amazon S3 is a strategy that you can adopt to consolidate every flow log from across all the VPCs that you own. The flow logs of your VPC can be published to an Amazon S3 bucket that you specify. The collected flow log records for all of the monitored network interfaces are sent to a series of log file objects stored in the S3 bucket. In this way, all of your logs are in one place, which lessens the management overhead.
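Creating a flow log that publishes directly to a centralized bucket can be sketched with boto3 as follows; the VPC ID and bucket ARN are placeholders:

import boto3

ec2 = boto3.client('ec2')

ec2.create_flow_logs(
    ResourceIds=['vpc-0123456789abcdef0'],
    ResourceType='VPC',
    TrafficType='ALL',
    LogDestinationType='s3',
    # Centralized S3 bucket (this bucket can live in another AWS account)
    LogDestination='arn:aws:s3:::td-central-flow-logs/vpc-flow-logs/'
)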
The buckets and objects in Amazon S3 are private by default, and only the bucket owner can access the data stored in them. You must grant the proper access and modify the bucket policy to allow the delivery.logs.amazonaws.com service to send and store the logs in the centralized S3 bucket.
You can also use Amazon Athena to easily query the flow log records in the log files stored in the centralized Amazon S3 bucket. Amazon Athena is an interactive query service that simplifies the process of analyzing data in Amazon S3 using standard SQL. The built-in Amazon S3 Select capability can also be used to fetch the logs based on a simple structured query language (SQL) statement, but this is quite limited and can only query a subset of your data. Therefore, Amazon Athena is the preferred service to analyze the unified data instead of Amazon S3 Select.
Remember that VPC Flow Logs do not capture all IP traffic. The following items are not logged:
● Traffic to and from the instance metadata (169.254.169.254)
● Traffic to and from the Amazon Time Sync Service (169.254.169.123)
● Dynamic Host Configuration Protocol (DHCP) traffic.
● For the default VPC router, the traffic to the Reserved IP address is not logged.
● Traffic between a Network Load Balancer (ELB) network interface and an endpoint network interface
(ENI).
● Traffic generated by an Amazon EC2 Windows instance for the Windows license activation.
● Traffic generated by the Amazon EC2 instances when they connect to the Amazon DNS server.
However, if you use your own BIND DNS server, all traffic to that DNS server is logged by the VPC Flow
Logs.
AWS CloudTrail is the primary service used for auditing your AWS resources. It provides an event history of all of your AWS account activity, including the actions taken through the AWS SDKs, AWS CLI, AWS Management Console, and other AWS services. However, it can only track the API calls made on a single AWS account. If your company has multiple AWS accounts, you can consolidate them into an organization using AWS Organizations. With this, you can create a trail that will collect all events for all AWS accounts in that organization. This is often referred to as an “organization trail” in AWS CloudTrail. The organization trail logs events for the master account and all member accounts in the organization.
However, some companies have complex cloud architectures that hinder them from using AWS Organizations. Businesses may have two or more external AWS accounts that belong to their subsidiaries or partners. To support this use case, you can configure AWS CloudTrail to send the log files from multiple AWS accounts into a single Amazon S3 bucket for centralized logging.
For example, your company owns four AWS accounts that you want to monitor effectively: the Manila, New York, Singapore, and London accounts. These AWS accounts are separate business units that handle their own billing. Using AWS Organizations to consolidate billing and trail logs is not applicable due to organizational constraints. Alternatively, you can configure CloudTrail to deliver log files from all four accounts to an S3 bucket that belongs to a central AWS account that you specify.
1. Enable AWS CloudTrail in the AWS account where the destination bucket will belong (e.g., the tutorialsdojo-trail S3 bucket in the Manila AWS account). You may refer to this as your “central” or top-level account. Make sure that CloudTrail is disabled on the other accounts.
2. Modify the S3 bucket policy on the destination bucket to grant cross-account permissions to AWS CloudTrail.
3. Enable AWS CloudTrail in the other accounts that you want to include. Configure CloudTrail in these AWS accounts to use the same S3 bucket, which belongs to the AWS account that you specified in the first step.
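As a rough sketch of step 2, the cross-account bucket policy on the central bucket could be applied with boto3 as shown below. The account IDs are placeholders, and the statements follow the commonly documented pattern of letting CloudTrail check the bucket ACL and write objects under each account's AWSLogs prefix:

import json
import boto3

s3 = boto3.client('s3')
bucket = 'tutorialsdojo-trail'
accounts = ['111111111111', '222222222222', '333333333333', '444444444444']  # placeholder account IDs

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}"
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            # One resource entry per source account so CloudTrail can write its logs
            "Resource": [f"arn:aws:s3:::{bucket}/AWSLogs/{account}/*" for account in accounts],
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
        }
    ]
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))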
Sources:
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-s3.html
https://aws.amazon.com/blogs/architecture/stream-amazon-cloudwatch-logs-to-a-centralized-account-for-audit-and-analysis/
The trail logs produced by AWS CloudTrail are invaluable for conducting security and compliance audits, including forensic investigations. They can attest that a particular user credential performed a specific API activity. For IT audit activities, the trail logs in CloudTrail can be used as proof that your AWS infrastructure complies with the specified set of operational guidelines. But what if these logs were modified or deleted? How can you secure and validate the integrity of your trail logs?
To protect your trail data, you can enable the log file integrity validation feature in CloudTrail via the AWS Management Console, the CloudTrail API, or the AWS CLI. This feature verifies whether a trail log file was modified, removed, or kept unchanged after CloudTrail sent it to the S3 bucket. AWS CloudTrail tracks the changes in each trail log using a separate digest file, which is also stored in the S3 bucket. This digest file contains the digital signatures and hashes used to validate the integrity of the log files.
AWS CloudTrail uses SHA-256 for hashing and SHA-256 with RSA for digital signing for log file integrity validation. These industry-standard algorithms make it computationally infeasible to modify and delete the CloudTrail log files without detection. The digest file is signed by AWS CloudTrail using the private key of a public and private key pair. The public key can be used to validate the digest file for a specific AWS Region.
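Enabling the feature on an existing trail is a single API call. A minimal sketch with boto3, assuming a trail named tutorialsdojo-trail:

import boto3

cloudtrail = boto3.client('cloudtrail')

# Turn on log file integrity validation so CloudTrail starts delivering signed digest files
cloudtrail.update_trail(
    Name='tutorialsdojo-trail',
    EnableLogFileValidation=True
)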
Fetching Application Logs from Amazon EC2, ECS and On-premises Servers
Application logs are vital for monitoring, troubleshooting, and regulatory compliance of every enterprise system. Without them, your team will waste a lot of time trying to find the root cause of an issue that can be easily detected by simply checking the logs. These files often live inside the server or a container. Usually, you have to connect to the application server via SSH or RDP before you can view the logs. This manual process is inefficient, especially for high-performance organizations with hybrid network architectures.
Using the Amazon CloudWatch Logs agent, you can collect system metrics and logs from your Amazon EC2 instances and on-premises application servers. Gone are the days of spending several minutes connecting to your server and manually retrieving the application logs. For Linux servers, you don't need to issue a tail -f command anymore since you can view the logs on the CloudWatch dashboard in your browser in near real time. The agent also collects both system-level and custom metrics from your EC2 instances and on-premises servers, making your monitoring tasks a breeze.
You have to manually download and install the Amazon CloudWatch Logs agent on your EC2 instances or on-premises servers using the command line. Alternatively, you can use AWS Systems Manager to automate the installation process. For your EC2 instances, it is preferable to attach an IAM role to allow the application to send data to CloudWatch. For your on-premises servers, you have to create a separate IAM user to integrate your server into CloudWatch. Of course, you should first establish a connection between your on-premises data center and VPC using a VPN or a Direct Connect connection. You have to use a named profile in your local server that contains the credentials of the IAM user that you created.
If you are running your containerized application in Amazon Elastic Container Service (ECS), you can view the different logs from your containers in one convenient location by integrating Amazon CloudWatch. You can configure your Docker containers' tasks to send log data to CloudWatch Logs by using the awslogs log driver. If your ECS task uses the Fargate launch type, you can enable the awslogs log driver and add the required logConfiguration parameters to your task definition. For the EC2 launch type, you have to ensure that your Amazon ECS container instances have an attached IAM role that contains the logs:CreateLogStream and logs:PutLogEvents permissions. Storing the log files in Amazon CloudWatch prevents your application logs from taking up disk space on your container instances.
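The awslogs configuration lives in the container definition of the task definition. Below is a small sketch of that fragment expressed as a Python dictionary; the log group, Region, and stream prefix are assumptions:

# logConfiguration fragment of an ECS container definition
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/td-web-app",      # CloudWatch log group that receives the container logs
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "web"           # prefix used for the per-container log streams
    }
}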
Sources:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/UseCloudWatchUnifiedAgent.html
https://docs.aws.amazon.com/AmazonECS/latest/userguide/using_awslogs.html
You can install the CloudWatch agent on your on-premises servers or EC2 instances to allow them to send detailed metrics to CloudWatch or application logs to CloudWatch Logs. The logs will be sent to your configured CloudWatch log group for viewing and searching.
Additionally, you can use CloudWatch Logs subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems. This way, you can perform near real-time analysis of the logs, further log processing using AWS Lambda, or advanced searching using Elasticsearch.
To begin subscribing to log events, create the receiving source, such as a Kinesis stream or Lambda function, where the events will be delivered.
A subscription filter defines the filter pattern used to sort out which log events get delivered to your AWS resource, as well as information about where to send the matching log events.
1. Create a receiving source, such as a Kinesis stream, Elasticsearch cluster, or Lambda function.
2. Install the CloudWatch unified agent on the EC2 instance (Linux or Windows) and configure it to send application logs to a CloudWatch log group.
3. Create the CloudWatch log group that will receive the logs.
https://portal.tutorialsdojo.com/ 194
Tutorials Dojo Study Guide andCheat Sheets - AWS Certified DevOps Engineer Professional
by Jon Bonso and Kenneth Samonte
4. Create a subscription filter for the log group and select the Lambda function or Elasticsearch cluster that you created. If you need to set a Data Firehose stream as the subscription filter destination, you will need to use the AWS CLI, as the web console does not support it yet.
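Step 4 can be sketched with boto3 as shown below, assuming the receiving Lambda function already grants CloudWatch Logs permission to invoke it; the names and ARN are placeholders:

import boto3

logs = boto3.client('logs')

logs.put_subscription_filter(
    logGroupName='/td/app/production',
    filterName='error-events-to-lambda',
    filterPattern='ERROR',  # only forward log events containing the word ERROR
    destinationArn='arn:aws:lambda:us-east-1:111122223333:function:process-error-logs'
)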
Sources:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
With Trusted Advisor's Service Limits dashboard, you can view, refresh, and export utilization and limit data on a per-limit basis. These metrics are published to Amazon CloudWatch, where you can create custom alarms based on the percentage of service utilization against limits, understand the number of resources for each check, or view time-aggregated views of check results across service categories.
You need to understand that service limits are important when managing your AWS resources. In the exam, you can be given a scenario in which you have several Auto Scaling groups in your AWS account and you need to make sure that you are not reaching the service limit when you perform blue/green deployments for your application. You can track service limits with Trusted Advisor and CloudWatch alarms. The ServiceLimitUsage metric in CloudWatch is only visible to Business and Enterprise Support customers.
Here's how you can create a CloudWatch alarm to detect if you are nearing your Auto Scaling service limit and send a notification so you can request a service limit increase from AWS Support.
1. First, head over to AWS Trusted Advisor > Service Limits and click the refresh button. This will refresh the service limit status for your account.
2. Go to CloudWatch > Alarms. Make sure that you are in the North Virginia (us-east-1) Region. Click the “Create Alarm” button and click “Select Metric”.
3. In the “All metrics” tab, click the “Trusted Advisor” category and you will see “Service Limits by Region”.
4. Search for Auto Scaling groups in your desired Region and click Select Metric.
5. Set the condition for this alarm so that when your Auto Scaling group usage reaches 80 for that particular Region, the alarm is triggered.
6. You can then configure an SNS topic to receive a notification for this alarm.
7. Click Next to Preview the alarm and click “Create Alarm” to create the alarm.
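The same alarm can be created programmatically. The sketch below assumes the Trusted Advisor metrics are published under the AWS/TrustedAdvisor namespace with the ServiceLimitUsage metric and ServiceName, ServiceLimit, and Region dimensions; the dimension values and SNS topic are placeholders:

import boto3

# Trusted Advisor metrics are published in us-east-1 (N. Virginia)
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='asg-service-limit-80-percent',
    Namespace='AWS/TrustedAdvisor',
    MetricName='ServiceLimitUsage',
    Dimensions=[
        {'Name': 'ServiceName', 'Value': 'AutoScaling'},
        {'Name': 'ServiceLimit', 'Value': 'Auto Scaling groups'},
        {'Name': 'Region', 'Value': 'us-west-2'}
    ],
    Statistic='Maximum',
    Period=300,
    EvaluationPeriods=1,
    Threshold=0.8,  # 80% of the service limit
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:service-limit-alerts']
)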
Sources:
https://aws.amazon.com/about-aws/whats-new/2017/11/aws-trusted-advisor-adds-service-limit-dashboard-and-cloudwatch-metrics/
https://aws.amazon.com/blogs/mt/monitoring-service-limits-with-trusted-advisor-and-amazon-cloudwatch/
Overview
The fifth exam domain of the AWS Certified DevOps Engineer Professional test focuses on incident and event management in the AWS infrastructure. It has the smallest share among all the exam domains at only 14% of the overall test coverage, so plan the time you spend on this domain accordingly.
This exam domain will challenge your knowledge and understanding of various topics on incident management and event response. Make sure that you are familiar with the different AWS services that generate, capture, and process events, such as AWS Health, EventBridge, and CloudTrail, to name a few. Event-driven architectures, namely fan-out, event streaming, and queuing, are also included. Fleet management services, including Systems Manager and AWS Auto Scaling, must be reviewed as well. Ensure that you have experience in using the configuration management services in the AWS platform, especially the AWS Config service, plus other AWS metrics and logging services (e.g., CloudWatch, X-Ray). The various AWS service health services are also covered, like AWS Health, CloudWatch, and Systems Manager OpsCenter. Knowledge of doing root cause analysis after a production incident is also helpful.
● Integrating AWS event sources using AWS Health, Amazon EventBridge (Amazon CloudWatch Events),
and AWS CloudTrail.
● Building event processing workflows via Amazon Simple Queue Service (Amazon SQS), Amazon
Kinesis, Amazon SNS, Lambda, and Step Functions
● Applying configuration changes to systems
● Modifying infrastructure configurations in response to events
● Remediating a non-desired system state
● Analyzing failed deployments using AWS CodePipeline, CodeBuild, CodeDeploy, CloudFormation, and
CloudWatch synthetic monitoring
● Analyzing incidents regarding failed processes
AWS provides a myriad of services and features to automate manual and repetitive IT tasks. Gone are the days of receiving outraged emails, calls, or tweets from your customers because your production server was down and you weren't even aware of it. By using Amazon EventBridge (Amazon CloudWatch Events) and CloudWatch alarms, your teams can immediately be notified of any system events or breaches of a specified threshold. Deployment issues can quickly be resolved or prevented through deployment monitoring and automatic rollbacks using AWS CodeDeploy and Amazon EventBridge (Amazon CloudWatch Events). S3 Events enables you to continuously monitor unauthorized actions in your S3 buckets, and RDS Events keeps you in the know for any failover, configuration change, or backup-related events that affect your database tier. Amazon EventBridge can also track all the changes in your AWS services, your custom applications, and external Software-as-a-Service (SaaS) applications in real time. These AWS features and services complement your existing security information and event management (SIEM) solutions to manage your entire cloud infrastructure properly.
An event indicates a change in a resource and is routed by an 'event bus' to its associated rules. A custom event bus can receive events from AWS services, custom applications, and SaaS partner services. Amazon EventBridge is the ideal service to manage your events.
The process of auditing your applications, systems, and infrastructure services in AWS is also simplified as all events and activities are appropriately tracked. Within minutes, you can identify the root cause of a recent production incident by checking the event history in AWS CloudTrail. Real-time log feeds from CloudWatch Logs can be delivered to an Amazon Kinesis stream, an Amazon Data Firehose stream, or AWS Lambda for processing, analysis, or integration with other systems through CloudWatch subscription filters. Security incidents can be remediated immediately by setting up custom responses to Amazon GuardDuty findings using Amazon EventBridge (Amazon CloudWatch Events). In this way, any security vulnerability in your AWS resources, such as an SSH brute force attack on one of your EC2 instances, can immediately be identified.
The Amazon S3 Event Notification feature enables teams to be notified when specific events happen in their S3 buckets. You can choose the particular S3 events that you want to monitor, and Amazon S3 will publish the notifications to your desired destinations. This feature enables you to have more visibility of your data and promptly remediate any potential data leaks.
The S3 event notifications are usually transmitted within seconds and are designed to be delivered at least once. You can enable object versioning on your S3 bucket to ensure that an event notification is always sent for every successful object upload. With versioning, every successful write operation will produce a new version of your S3 object and send the corresponding event notification. Versioning averts the event notification issue where only one notification is sent when multiple operations are executed on a non-versioned object.
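A notification configuration can also be set with boto3. The sketch below publishes all object-created events to an SNS topic; the bucket name and topic ARN are placeholders:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_notification_configuration(
    Bucket='td-data-bucket',
    NotificationConfiguration={
        'TopicConfigurations': [
            {
                'TopicArn': 'arn:aws:sns:us-east-1:111122223333:s3-object-events',
                'Events': ['s3:ObjectCreated:*']  # notify on every successful object upload
            }
        ]
    }
)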
Source:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-event-notifications.html
Amazon RDS produces numerous events in specific categories that you can subscribe to using various tools such as the AWS CLI, the Amazon RDS console, or the RDS API. Each event category can refer to the parameter group, snapshot, or security group of your DB instance. Moreover, you can automatically process your RDS event notifications by using an AWS Lambda function or set an alarm threshold that tracks specific metrics by creating a CloudWatch alarm.
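For instance, a subscription for failover and failure events on DB instances can be sketched with boto3 as follows (the SNS topic ARN is a placeholder):

import boto3

rds = boto3.client('rds')

rds.create_event_subscription(
    SubscriptionName='td-db-failover-events',
    SnsTopicArn='arn:aws:sns:us-east-1:111122223333:rds-events',
    SourceType='db-instance',
    EventCategories=['failover', 'failure'],
    Enabled=True
)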
Source:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Events.html
AWS_RISK_CREDENTIALS_EXPOSED Event
Step Functions makes it easy to develop and orchestrate components of operational response automation using visual workflows, and it can be integrated with an Amazon EventBridge (Amazon CloudWatch Events) rule.
For example, your development team is storing code on GitHub, and the developers sometimes forget to remove their personal IAM keys from their code before pushing it to the repository. Since an exposed IAM key is a security issue, you want to make sure that you are notified of this event and that the security issue is automatically remediated.
AWS can monitor popular code repository sites for IAM access keys that have been publicly exposed. Upon detection of an exposed IAM access key, AWS Health generates an AWS_RISK_CREDENTIALS_EXPOSED event in the AWS account related to the exposed key.
1. Create a Step Functions state machine that does the following:
a. Deletes the exposed IAM access key (to ensure that the exposed key can no longer be used)
b. Summarizes the recent API activity for the exposed key from AWS CloudTrail (to know what changes were made using the exposed key)
c. Sends a summary message to an Amazon SNS topic to notify the subscribers.
2. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for detecting the AWS_RISK_CREDENTIALS_EXPOSED event from the AWS Health (Personal Health Dashboard) service.
3. Set the Step Functions state machine as the target for this Amazon EventBridge (Amazon CloudWatch Events) rule (a minimal sketch of this rule and target is shown below).
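The following is a hedged sketch of steps 2 and 3 using boto3; the rule name, state machine ARN, and IAM role ARN are placeholders, and the role must allow events.amazonaws.com to start executions of the state machine:

import json
import boto3

events = boto3.client("events")

# Match the AWS Health event raised when an access key is publicly exposed.
pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {"eventTypeCode": ["AWS_RISK_CREDENTIALS_EXPOSED"]},
}

events.put_rule(
    Name="detect-exposed-iam-keys",
    EventPattern=json.dumps(pattern),
    State="ENABLED",
)

# Point the rule at the remediation state machine.
events.put_targets(
    Rule="detect-exposed-iam-keys",
    Targets=[
        {
            "Id": "remediation-state-machine",
            "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:RemediateExposedKey",
            "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeInvokeStepFunctions",
        }
    ],
)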
Source:
https://aws.amazon.com/blogs/compute/automate-your-it-operations-using-aws-step-functions-and-amazon-cloudwatch-events/
AWS scheduled maintenance events are listed on the AWS Health Dashboard. For example, if AWS needs to perform maintenance on the underlying host of your EC2 instance, which usually requires the instance to shut down, you will see an event for it on your AWS Health Dashboard. You can use Amazon EventBridge (Amazon CloudWatch Events) to detect these changes and trigger notifications so you will be notified of these events. You can then perform the needed actions based on the events.
You can choose the following types of targets when using Amazon EventBridge (Amazon CloudWatch Events) as part of your AWS Health workflow:
● AWS Lambda functions
● Amazon Kinesis Data Streams
● Amazon Simple Queue Service (Amazon SQS) queues
● Built-in targets (CloudWatch alarm actions)
● Amazon Simple Notification Service (Amazon SNS) topics
For example, you can use a Lambda function to pass a notification to a Slack channel when an AWS Health event occurs. Here are the steps to do this:
1. Create a Lambda function that will send a message to your Slack channel. A Node.js or Python script will suffice. The function will call an API URL (webhook) for the Slack channel, passing along the message (a minimal sketch of such a function is shown after these steps).
2. Go to Amazon EventBridge (Amazon CloudWatch Events) and create a rule.
3. Set a rule for the AWS Health service and the EC2 service event type.
4. Add a target for this event to run the Lambda function, and save the EventBridge (CloudWatch Events) rule.
Sources:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-notification-scheduled-events/
https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html
Using AWS Health API and Amazon EventBridge (Amazon CloudWatch Events) for Monitoring AWS-Scheduled Deployments/Changes
The AWS Personal Health Dashboard provides you with alerts and remediation guidance when AWS is experiencing events that may impact your resources. These events may be scheduled or unscheduled. For example, scheduled maintenance on your underlying EC2 hosts may shut down or terminate your instances, or scheduled Amazon RDS upgrades may reboot your RDS instance.
You can monitor these AWS Health events using Amazon EventBridge. You can then set a target to an SNS topic to inform you of the event, or trigger a Lambda function to perform a custom action based on the event.
1. Go to EventBridge > Rules: Navigate to the EventBridge service in the AWS Management Console and click on "Rules" to create a new rule.
2. Create a Rule for AWS Health Events: Click on "Create rule" and define your rule pattern. You can specify
the event source as "aws.health" to detect AWS Health Events. Optionally, you can filter events based on
services by specifying the service names in the event pattern.
3. Define a target: After setting up the rule pattern, define a target for the events. You can choose an SNS topic to receive notifications or a Lambda function to perform custom actions. Select "SNS Topic" or "Lambda function" as the target based on your requirements.
4. Review rule details: Finally, review the details of the rule, such as its name, description, event pattern, target, and tags. Once reviewed, create and activate the EventBridge rule. (A minimal event pattern for such a rule is sketched below.)
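As a short, hedged illustration of the pattern from step 2 (the service name is just an example), the rule's event pattern could be expressed like this before being passed to put_rule as shown in the earlier sketch:

# Match AWS Health events for the EC2 service only; attach this pattern to an
# EventBridge rule whose target is an SNS topic or a Lambda function.
health_ec2_pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {"service": ["EC2"]},
}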
You can use Amazon EventBridge to track Auto Scaling events and run a corresponding custom action using AWS Lambda. Amazon EventBridge integrates internal and external services. It allows you to track the changes of your Auto Scaling group in near real-time, including your custom applications, Software-as-a-Service (SaaS) partner apps, and other AWS services.
Amazon EventBridge (Amazon CloudWatch Events) can be used to send events to the specified target when the following events occur:
● EC2 Instance-launch Lifecycle Action
● EC2 Instance Launch Successful
● EC2 Instance Launch Unsuccessful
● EC2 Instance-terminate Lifecycle Action
● EC2 Instance Terminate Successful
● EC2 Instance Terminate Unsuccessful
The EC2 Instance-launch Lifecycle Action is a scale-out event in which Amazon EC2 Auto Scaling moved an EC2 instance to a Pending:Wait state due to a lifecycle hook. Conversely, the EC2 Instance-terminate Lifecycle Action is a scale-in event in which EC2 Auto Scaling updates an instance to a Terminating:Wait state.
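A minimal, hedged example of an event pattern matching the launch lifecycle action for one group (the Auto Scaling group name is hypothetical):

# Match lifecycle-hook launch events from a specific Auto Scaling group and
# attach the pattern to an EventBridge rule with a Lambda function as target.
asg_launch_pattern = {
    "source": ["aws.autoscaling"],
    "detail-type": ["EC2 Instance-launch Lifecycle Action"],
    "detail": {"AutoScalingGroupName": ["example-asg"]},
}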
Source:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/cloud-watch-events.html
Every data event or log entry in CloudTrail contains essential information about who generated the request to your S3 buckets. This capability allows you to determine whether the S3 request was made by another AWS service, including the IAM user and temporary security credentials used. Amazon S3 data events duly record all S3 object changes and updates in your production S3 buckets.
Source:
https://docs.aws.amazon.com/AmazonS3/latest/dev/cloudtrail-logging.html
You can detect and respond to certain changes in your pipeline state in AWS CodePipeline using Amazon EventBridge (Amazon CloudWatch Events). You can use AWS CodePipeline (aws.codepipeline) as the event source of your Amazon EventBridge (CloudWatch Events) rule and then associate an Amazon SNS topic to send a notification or a Lambda function to execute a custom action. An Amazon EventBridge (CloudWatch Events) rule can automatically detect the state changes of your pipelines, stages, or actions in AWS CodePipeline, which improves incident and event management of your CI/CD processes.
You can specify both the state and type of CodePipeline execution that you want to monitor. An execution can be in a STARTED, SUCCEEDED, RESUMED, FAILED, CANCELED, or SUPERSEDED state. Refer to the sample event patterns below: the first tracks failed Deploy and Build actions, while the second tracks failed manual approval actions.
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Action Execution State Change"
],
"detail": {
"state": [
"FAILED"
],
"type": {
"category": ["Deploy", "Build"]
}
}
}
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Action Execution State Change"
],
"detail": {
"state": [
"FAILED"
],
"type": {
"category": ["Approval"]
}
}
}
The following sample event pattern tracks all the events from the specified pipelines (TD-Pipeline-Manila, TD-Pipeline-Frankfurt, and TD-Pipeline-New-York):
{
"source": [
"aws.codepipeline"
],
"detail-type": [
"CodePipeline Pipeline Execution State Change",
"CodePipeline Action Execution State Change",
"CodePipeline Stage Execution State Change"
],
"detail": {
"pipeline": ["TD-Pipeline-Manila",
"TD-Pipeline-Frankfurt",
"TD-Pipeline-New-York"]
}
}
Source:
https://docs.aws.amazon.com/codepipeline/latest/userguide/detect-state-changes-cloudwatch-events.html
Amazon EventBridge (Amazon CloudWatch Events) can help you detect any changes in the state of an instance or deployment in AWS CodeDeploy. You can also send notifications, collect state information, rectify issues, initiate events, or execute other actions using CloudWatch alarms. This type of monitoring is useful if you want to be notified via Slack (or other channels) whenever your deployments fail, or if you want to push deployment data to a Kinesis stream for real-time status monitoring. If you integrate Amazon EventBridge (Amazon CloudWatch Events) into your AWS CodeDeploy operations, you can specify the following as targets to monitor your deployments:
● AWS Lambda functions
● Kinesis streams
● Amazon SQS queues
● Built-in targets (CloudWatch alarm actions)
● Amazon SNS topics
Integrating AWS CodeDeploy and CloudWatch alarms provides you with an automated way to roll back your release when your deployment fails or if certain thresholds are not met. You can easily track the minimum number of healthy instances (MinimumHealthyHosts) that should be available at any time during the deployment. The HOST_COUNT or FLEET_PERCENT deployment configuration types can be used to specify either the absolute number or the relative percentage of healthy hosts, respectively.
Source:
https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-create-alarms.html
In AWS CodePipeline, you can control your pipeline's actions and stages to optimize the performance of your CI/CD processes. The runOrder and PollForSourceChanges parameters can assist you in orchestrating the various activities in your pipeline. For example, you have a serverless application with independent AWS Lambda functions, and you want to expedite the pipeline execution time. You can modify your CodePipeline configuration to execute the actions for each Lambda function in parallel by specifying the same runOrder value. The PollForSourceChanges parameter controls whether CodePipeline automatically starts a pipeline execution when it detects source changes in CodeCommit, Amazon S3, or GitHub. In this way, you will have more control over the various stages of your pipeline.
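The following is a hedged fragment of a pipeline stage (as it would be passed to CodePipeline's create_pipeline or update_pipeline APIs); the action and function names are hypothetical. Both Lambda invoke actions share runOrder 1, so they run in parallel within the stage:

deploy_stage = {
    "name": "Deploy",
    "actions": [
        {
            "name": "DeployFunctionA",
            "actionTypeId": {
                "category": "Invoke",
                "owner": "AWS",
                "provider": "Lambda",
                "version": "1",
            },
            "configuration": {"FunctionName": "deploy-function-a"},
            "runOrder": 1,
        },
        {
            "name": "DeployFunctionB",
            "actionTypeId": {
                "category": "Invoke",
                "owner": "AWS",
                "provider": "Lambda",
                "version": "1",
            },
            "configuration": {"FunctionName": "deploy-function-b"},
            "runOrder": 1,
        },
    ],
}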
You can also add a manual approval action in CodePipeline before any change is deployed to your production environment. This provides a final checkpoint for your release process after all your unit and integration tests have completed successfully.
AWS CodePipeline offers a way to publish approval notifications for your release managers and other authorized staff. A pipeline can be configured with a manual approval action that pushes a message to an Amazon SNS topic when the approval action is invoked. Amazon SNS delivers the message to every endpoint subscribed to the SNS topic that you specified. Amazon SNS lets the approvers know, via email or SMS, that a new update is ready to be deployed. You can also forward these notifications to SQS queues or HTTP/HTTPS endpoints and execute a custom action using a Lambda function.
Sources:
https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements
https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals.html
Overview
The sixth exam domain of the AWS Certified DevOps Engineer Professional exam deals with the automation of policies and standards in your architecture to enforce logging, metrics, monitoring, testing, and security. Since it covers policies and governance, the related AWS services that you have to review are AWS Organizations, AWS Control Tower, AWS Config, AWS Security Hub, AWS Service Catalog, and AWS Identity and Access Management (IAM).
● Designing policies to enforce least privilege access
● Implementing role-based and attribute-based access control patterns
● Automating credential rotation for machine identities via AWS Secrets Manager
● Managing permissions to control access to human and machine identities, such as enabling multi-factor authentication (MFA), AWS Security Token Service (AWS STS), IAM profiles, and the like
● Automating the application of security controls in multi-account and multi-Region environments using
AWS Security Hub, AWS Organizations, AWS Control Tower, and the various AWS Systems Manager
modules.
● Combining security controls to apply defense in depth with the help of AWS Certificate Manager (ACM),
AWS WAF, AWS Config, AWS Config rules, Security Hub, GuardDuty, security groups, network ACLs,
Amazon Detective, Network Firewall and other security services.
● Automating the discovery of sensitive data at scale using Amazon Macie.
● Encrypting data in transit and data at rest on AWS KMS, AWS CloudHSM, and AWS ACM.
● Implementing robust security auditing
● Configuring alerting based on unexpected or anomalous security events
● Configuring service and application logging through AWS CloudTrail and Amazon CloudWatch Logs
● Analyzing logs, metrics, and security findings
AWS provides various management and governance services to help you provision, manage, govern, and operate your cloud environments more effectively. You can use AWS Organizations, AWS Config, AWS Service Catalog, AWS Systems Manager, and other services to enforce operational standards across all your AWS resources and comply with your corporate IT policies.
The AWS Organizations service enables you to govern your AWS accounts and resources centrally. It provides consolidated billing, access control, compliance, and security, as well as the ability to share resources across your AWS accounts. You can use Service Control Policies (SCPs) to ensure that only authorized users can execute actions that meet your policy requirements. Central logging can be implemented to monitor all activities performed across your organization using AWS CloudTrail. You can also aggregate data from all your AWS Config rules to quickly audit your environment for compliance.
AWS Service Catalog empowers you to set up and centrally manage catalogs of approved IT services that you specify on AWS. You can manage various IT services, referred to as "products" in Service Catalog, and group them into a portfolio. In AWS Service Catalog, a product could be a machine image, application server, program, tool, database, or other service that you use for your cloud architecture. AWS Service Catalog assists you in meeting your compliance requirements and enforces granular access control that allows the deployment of only approved IT services to your AWS Cloud.
AWS Config automates the compliance assessment of your internal policies and regulatory standards by giving you visibility into the existing configurations of your various AWS and third-party resources. It continuously assesses changes in your resource configuration and compares them against your specified criteria. You can create rules that detect an EC2 instance running on an unapproved AMI, publicly accessible S3 buckets, and many more. The evaluation can either be triggered periodically or by an actual configuration change of your AWS resource (e.g., CloudTrail was disabled in one of your accounts).
You can integrate AWS Config with Amazon EventBridge and AWS Lambda to keep you updated on any resource changes in near real-time and to execute custom actions. Remediating noncompliant AWS resources can be automated by creating AWS Config rules and AWS Systems Manager Automation documents.
AWS Systems Manager (SSM) is a suite of services that provides you visibility and control of your cloud and on-premises infrastructure. SSM has several features that you can leverage, such as Session Manager, State Manager, Patch Manager, Automation, Maintenance Windows, Run Command, Parameter Store, and many other sub-modules.
Through the SSM Agent, the AWS Systems Manager service can manage both Amazon EC2 instances and on-premises servers, in which the former are prefixed with "i-" and the latter with "mi-" in your AWS Management Console. Patch Manager automates the process of patching the OS of your EC2 instances and on-premises servers using predefined and custom patch baselines. You can set a scheduled maintenance window to execute the patching activities to reduce any operational impact. With State Manager, you can control the configuration details or the "state" of your resources, such as server configurations, virtualized hardware, and firewall settings. You can even associate Ansible playbooks, Chef recipes, PowerShell modules, and other SSM documents with your resources. The Systems Manager Parameter Store provides centralized storage and management of your "parameters" such as passwords, database strings, Amazon Machine Image (AMI) IDs, license codes, environment variables, et cetera. Store a parameter as a SecureString data type to instruct SSM to automatically encrypt it using a customer master key (CMK) in AWS KMS.
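As a minimal sketch (parameter name and value are placeholders; omitting KeyId uses the default AWS-managed KMS key for Parameter Store), storing and retrieving a SecureString parameter could look like this:

import boto3

ssm = boto3.client("ssm")

# Store a database password as an encrypted SecureString parameter.
ssm.put_parameter(
    Name="/prod/app/db-password",
    Value="example-password-value",
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt it later (e.g., from an application or build job).
value = ssm.get_parameter(Name="/prod/app/db-password", WithDecryption=True)
print(value["Parameter"]["Value"])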
Amazon Inspector is an automated security assessment service that allows you to identify security issues and enforce standards in your AWS environment. You can install the Amazon Inspector agent using the Systems Manager Run Command and run security vulnerability assessments throughout your EC2 instances. Agentless network reachability assessments are also possible. You can run Common Vulnerabilities and Exposures (CVE), Center for Internet Security (CIS) operating system configuration benchmarks, Network Reachability, and other assessments. It can also assess programs in your instances installed via apt, yum, or Microsoft Installer.
Amazon GuardDuty fortifies the security of your AWS architecture with intelligent threat detection and continuous monitoring across your AWS accounts. It aggregates and analyzes the data collected from your AWS CloudTrail, VPC Flow Logs, and DNS logs to detect various threats such as intra-VPC port scanning, cryptocurrency mining, malware, backdoor command and control (C&C) activities, and many other vulnerabilities. You can consolidate your security findings by setting up a master account for GuardDuty and associating other AWS accounts as member accounts. This integration can be done via AWS Organizations or by manually sending an invitation to the target member account.
Amazon Macie helps your organization comply with the Health Insurance Portability and Accountability Act (HIPAA) and General Data Protection Regulation (GDPR) by detecting personally identifiable information (PII) in your Amazon S3 buckets. It also comes with native multi-account support to efficiently confirm your data security posture across multiple AWS accounts from a single Macie administrator account.
AWS Trusted Advisor provides a set of best practice checks and recommendations for your AWS infrastructure covering cost optimization, security, fault tolerance, performance, and service limits. You can also fetch the various Trusted Advisor checks programmatically via web service using the AWS Support API. You can even use an EventBridge rule to monitor your Trusted Advisor checks or create a CloudWatch alarm to notify you of any status changes in your resources. These integrations with other AWS services make it easy to track underutilized Amazon EC2 instances in your account or detect any exposed IAM access keys on public code repositories such as GitHub.
Sources:
https://aws.amazon.com/blogs/mt/monitor-changes-and-auto-enable-logging-in-aws-cloudtrail/
https://docs.aws.amazon.com/config/latest/developerguide/remediation.html
https://docs.aws.amazon.com/awssupport/latest/user/trustedadvisor.html
A build specification file (buildspec.yaml) contains the commands and related settings that instruct AWS CodeBuild on how to build your source code. Adding IAM access keys and database passwords in plaintext in your specification file is discouraged as these could easily be seen by unauthorized personnel. A better approach is to create an IAM role and attach a permissions policy that grants permissions on your resources. You can store passwords and other sensitive credentials in AWS Systems Manager Parameter Store or AWS Secrets Manager and retrieve them at build time (a minimal retrieval sketch follows below). You can also leverage AWS Systems Manager Run Command instead of using scp and ssh commands that could potentially expose the SSH keys, IP addresses, and root access of your production servers.
For example, a buildspec file would be considered insecure if:
● The AWS access and secret keys are stored in the buildspec.yaml file.
● The database credential (DATABASE_PASSWORD) is stored as an environment variable in plaintext.
● It contains embedded scp and ssh commands that expose the SSH keys and server IP addresses.
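Instead of hard-coding the credential, the build can fetch it at run time. A hedged sketch (the secret name is hypothetical, and the CodeBuild service role must be allowed to read it):

import boto3

# Fetch the database password from Secrets Manager during the build instead
# of storing DATABASE_PASSWORD in plaintext in buildspec.yaml.
secrets = boto3.client("secretsmanager")
db_password = secrets.get_secret_value(SecretId="prod/td-app/db")["SecretString"]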
Source:
https://docs.aws.amazon.com/codebuild/latest/userguide/data-protection.html
AWS provides several AWS CodeCommit managed policies that you can use to provision access to your source code repositories. You can attach AWSCodeCommitFullAccess to your administrators, AWSCodeCommitPowerUser to managers or developers, and AWSCodeCommitReadOnly to auditors or external parties.
The AWSCodeCommitPowerUser policy grants users access to all of the functionality of CodeCommit and repository-related resources. However, it does not allow them to delete CodeCommit repositories or create or delete repository-related resources in other AWS services. Developers who have this policy can directly push their code to the master branch of any CodeCommit repository without raising a proper pull request. It falls short of the principle of granting least privilege, as developers could circumvent the standard Git workflow or your standard development process.
Remember that you can't modify these AWS managed policies. However, you can attach an additional policy with a Deny rule to an IAM role (or user/group) to block specific capabilities included in these policies and further customize the permissions. Notice the following example IAM policy (a hedged reconstruction is sketched below):
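This is a hedged reconstruction of such a deny policy, following the scenario described in the text (repository TutorialsDojoManila, master branch); the action list, account ID, region, and group name are illustrative and should be adapted to your environment:

import json
import boto3

iam = boto3.client("iam")

# Deny pushes, merges, file changes, and branch deletion on refs/heads/master
# of the TutorialsDojoManila repository only.
deny_master_changes = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "codecommit:GitPush",
                "codecommit:DeleteBranch",
                "codecommit:PutFile",
                "codecommit:MergePullRequestByFastForward",
            ],
            "Resource": "arn:aws:codecommit:us-east-1:123456789012:TutorialsDojoManila",
            "Condition": {
                "StringEqualsIfExists": {
                    "codecommit:References": ["refs/heads/master"]
                },
                "Null": {"codecommit:References": "false"},
            },
        }
    ],
}

# Attach it alongside AWSCodeCommitPowerUser, e.g. to a developer group.
iam.put_group_policy(
    GroupName="Developers",
    PolicyName="DenyChangesToMaster",
    PolicyDocument=json.dumps(deny_master_changes),
)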
The AWSCodeCommitPowerUser managed policy can be overridden by another policy that explicitly denies several actions, such as pushing code to the master branch. Such a policy will prevent the developer from pushing, deleting, and merging code to the master branch of the TutorialsDojoManila CodeCommit repository, as shown above.
Source:
https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control-iam-identity-based-access-control.html
By default, Amazon S3 allows both HTTP and HTTPS requests. As part of security compliance, you may be required to ensure that all data in transit to and from your S3 buckets is encrypted using the HTTPS protocol. If you have a lot of S3 buckets, you can enable the AWS Config rule "s3-bucket-ssl-requests-only", which checks whether S3 buckets have policies that require requests to use Secure Socket Layer (SSL).
To be compliant with the AWS Config rule, your S3 bucket will need a bucket policy that explicitly denies HTTP requests and only allows HTTPS requests. To determine whether a request used HTTP or HTTPS in a bucket policy, use a condition that checks the key "aws:SecureTransport". You can set the action to "Deny" any request if the condition "aws:SecureTransport" is false.
Here's an example S3 bucket policy to deny HTTP requests on the bucket, thus forcing all connections to be HTTPS-only:
{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::exampletutorialsdojobucket",
        "arn:aws:s3:::exampletutorialsdojobucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
Sources:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/
https://docs.aws.amazon.com/config/latest/developerguide/s3-bucket-ssl-requests-only.html
AWS Secrets Manager and Systems Manager Parameter Store offer similar functionalities that allow you to centrally manage and secure your secret information, which can then be retrieved by your applications and resources in AWS.
Both services offer similar web interfaces on which you can declare key-value pairs for your parameters and secrets, so it is important to know the similarities and differences they have in order to choose the right service for a given situation in the exam.
Here’s a summary of features of SSM Parameter Store and AWS Secrets Manager:
In the exam, the usual scenario is that you need to store a database password. Choose SSM Parameter Store if you don't have to automatically rotate the secret regularly; standard parameters also don't cost anything.
As an additional note, Parameter Store is now integrated with Secrets Manager so that you can retrieve Secrets Manager secrets when using other AWS services that already support references to Parameter Store parameters. This is helpful if your application is configured to use Parameter Store APIs, but you want your secrets to be stored in Secrets Manager.
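A minimal sketch of this integration (the secret name is hypothetical): the reserved /aws/reference/secretsmanager/ prefix lets the Parameter Store API return a Secrets Manager secret.

import boto3

ssm = boto3.client("ssm")

# Read a Secrets Manager secret through the Parameter Store GetParameter API.
response = ssm.get_parameter(
    Name="/aws/reference/secretsmanager/prod/td-app/db",
    WithDecryption=True,
)
print(response["Parameter"]["Value"])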
Sources:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html
https://aws.amazon.com/about-aws/whats-new/2018/07/aws-systems-manager-parameter-store-integrates-with-aws-secrets-manager-and-adds-parameter-version-labeling/
AWS provides you with three types of policies that you can attach when managing permissions for your IAM users, groups, or roles: AWS managed policies, customer managed policies, and inline policies.
An AWS managed policy is a standalone policy that is created and administered by AWS, which can be used to provide permissions for many common use cases or specific job functions. You cannot change the permissions defined in AWS managed policies; AWS occasionally updates them. For example, arn:aws:iam::aws:policy/IAMReadOnlyAccess is an AWS managed policy, as are AmazonDynamoDBFullAccess, IAMFullAccess, and AmazonEC2ReadOnlyAccess.
Customer managed policies are standalone policies that you manage in your own AWS account. You can attach these policies to users, groups, or roles the same way as AWS managed policies. You can copy an AWS managed policy and modify its contents to apply it as a customer managed policy. This gives you much better control of the permissions you grant to your IAM entities.
An inline policy is a policy that's embedded in an IAM identity (a user, group, or role). When you create an identity in IAM, you can directly attach an inline policy. An inline policy is a strict one-to-one relationship between a policy and the identity that it's applied to; it can't be re-used or attached to other IAM identities. Use it, for example, when you want to be sure that the permissions in a policy are not inadvertently assigned to an identity other than the one they're intended for.
Managed policies are more flexible than inline policies. Managed policies are recommended for the following reasons:
● Reusability – can be reused by attaching them to other identities.
● Central change management – one policy change can be applied to all attached identities.
● Versioning and rollback – you can create multiple versions and roll back changes if needed.
● Delegating permissions management – users can attach/detach policies on their own, while you control the permissions on those policies.
● Automatic updates for AWS managed policies – AWS updates managed policies when necessary.
Source:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html
Systems Manager Automation allows you to perform maintenance and deployment tasks for EC2 instances. There are several predefined AWS Automation documents provided for common use cases, but you can also upload your own Automation documents or share documents with other AWS accounts. Some use cases for SSM Automation are the following:
● Build Automation workflows to configure and manage instances and AWS resources.
● Create custom workflows or use pre-defined workflows maintained by AWS.
● Receive notifications about Automation tasks and workflows by using Amazon EventBridge.
● Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager console.
Among the automation workflows is the ability to create updated AMIs. This is helpful if, for example, you want to apply the latest system patches to your EC2 instances and create an updated AMI so that all new instances that you create will have the latest patches applied.
On AWS Systems Manager Automation, select "Execute", and choose the Automation document AWS-UpdateLinuxAMI.
The next page will present you with an option to fill out the input parameters for this document. The important part here is the SourceAmiId. This value should be your current AMI ID that you want to use as a base and
where the patches will be applied. It's also important to have an AutomationAssumeRole present in order to allow SSM Automation to perform the needed actions.
Upon clicking the Execute button, SSM Automation will take the following actions based on the automation document:
● Take the AMI and create a new EC2 instance with it
● Execute the document runbook that applies patches to the OS
● Shut down the instance
● Create an AMI of the EC2 instance
● Terminate the EC2 instance
● Output the new AMI ID
The output AMI ID can now be stored in SSM Parameter Store or be used by a launch template. You can also register this Automation task on the SSM Maintenance Windows console page to set a schedule for this automation to run regularly. (A minimal sketch of starting this automation programmatically is shown below.)
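A hedged sketch of kicking off the same workflow with boto3; the AMI ID and role ARN are placeholders, and the document name is the AWS-published AWS-UpdateLinuxAmi document referenced above:

import boto3

ssm = boto3.client("ssm")

# Start the predefined AMI-update automation against a base AMI.
execution = ssm.start_automation_execution(
    DocumentName="AWS-UpdateLinuxAmi",
    Parameters={
        "SourceAmiId": ["ami-0123456789abcdef0"],
        "AutomationAssumeRole": [
            "arn:aws:iam::123456789012:role/AutomationServiceRole"
        ],
    },
)
print(execution["AutomationExecutionId"])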
Sources:
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-systems-manager-ami-automation/
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html
AWS Systems Manager includes the Session Manager service, which allows you to manage your EC2 instances, on-premises instances, and virtual machines (VMs) through an interactive one-click browser-based shell or through the AWS CLI. With Session Manager, you don't need to open inbound ports or use bastion hosts when you want to have shell access or RDP access to your EC2 instances.
Session Manager also makes it easy to comply with corporate policies that require controlled access to instances, strict security practices, and fully auditable logs with instance access details. For example, you can view all sessions performed by users on the Session Manager History page, or you can send the whole session log to an S3 bucket or CloudWatch Logs for recording purposes.
Remember that SSM Session Manager provides a centralized location where users can SSH to EC2 instances or on-premises servers configured with the SSM Agent. For example, you are required by the company to provide a solution for developers to SSH to both on-premises instances and EC2 instances, and to save all sessions to an S3 bucket (or CloudWatch Logs) which will be available for the security team to perform an audit. Once you install and configure the SSM Agent on your instances, they will show up on the Session Manager page.
Here’s how you can use Session Manager and send your session logs to an S3 bucket.
1. Be sure that your instance has the SSM Agent installed and is registered in the Managed Instances section of Systems Manager.
2. Go to Systems Manager > Session Manager and click the "Start session" button. Select your EC2 instance and click "Start session".
3. You will have a browser tab that provides shell access to your instance. You can click the "Terminate" button on the upper-right side once you are done with your session.
4. You can view this session on the Session Manager history page.
5. On the Preferences page, you can configure Session Manager to send logs to an S3 bucket or CloudWatch Logs. Click the "Edit" button on the Preferences page. (A small audit sketch using the API follows below.)
Sources:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html
AWS SSM Inventory provides visibility into your Amazon EC2 instances and on-premises servers. You can use SSM Inventory to collect metadata from your managed instances such as installed applications, OS versions, Windows updates, network configuration, running services, etc. You can store this metadata in a central Amazon S3 bucket, and then use built-in tools to query the data and quickly determine which instances are running the software and configurations required by your software policy, and which instances need to be updated.
AWS SSM Inventory is helpful if you want to make sure that your managed instances have the correct application versions and patches installed. You can even filter which instances have outdated system files. For the exam, remember that SSM Inventory is also helpful if you want to identify details of the on-premises instances that you want to migrate to AWS. It will help you view and gather details of those instances and sync the collected data to an S3 bucket. This helps ensure that your new instances on AWS will have configurations similar to those in your current on-premises environment.
1. Ensure that your instance is registered as a Managed Instance on Systems Manager.
2. Go to Systems Manager > Inventory and click “Setup Inventory”. Set a name for this inventory association.
3. Select targets for your inventory. You can choose to select all instances, use a specific tag identifier, or manually select your desired instances.
4. You can set the schedule for how often the inventory is updated, as well as the parameters it will collect from the instances.
5. You can also specify an S3 bucket on which the inventory will sync the data it collects.
6. After clicking the "Setup Inventory" button, the details of your instances will be shown on the SSM Inventory page. (A resource data sync sketch follows below.)
Sources:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-inventory.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-configuring.html
AWS SSM Maintenance Windows allows you to define schedules for when to perform potentially disruptive actions on your instances, such as patching an operating system or installing software updates. Each maintenance window has a schedule, a duration, a set of registered targets, and a set of registered tasks. When you create a Patch Manager task, it will be shown on the maintenance window's Tasks page.
For the exam, remember that Patch Manager allows you to define custom patch baselines. This is helpful if your company has a custom repository of software updates that need to be installed on your managed instances. You can schedule the application of the patches by creating a maintenance window and assigning your patching task to that window.
Below are the steps to set up Maintenance Windows and Patch Manager for your managed instances.
1. Ensure that your managed instances are registered on Systems Manager.
2. Go to Systems Manager > Maintenance Windows and click “Create Maintenance Window”.
4. Provide a schedule for this maintenance window. We'll set this to run every Sunday morning at 1:00 AM with a 3-hour window. One hour before the window closes, SSM will stop initiating new tasks.
5. Click “Create maintenance window”. You will then see your created maintenance window.
6. Now go to Patch Manager and click "Configure patching". Select your instances to patch. You can select by instance tags, patch groups, or by selecting the instances manually.
7. Select your Patching schedule then select the previously created maintenance window.
8. Click "Configure patching" to create this patching task. You should see "AWS-RedHatDefaultPatchBaseline" as the default baseline for your instance.
9. Go to the Maintenance Windows page again and select your maintenance window. Click the Tasks section and you should see your configured patching task.
10. SSM Maintenance Windows will now run this task during the specified window. (A sketch of creating such a window programmatically follows below.)
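A minimal sketch of creating the weekly window described above with boto3 (the window name is hypothetical; the cron expression uses the SSM maintenance window format):

import boto3

ssm = boto3.client("ssm")

# Every Sunday at 1:00 AM UTC, 3 hours long, stop starting new tasks 1 hour
# before the window closes.
window = ssm.create_maintenance_window(
    Name="td-weekly-patching",
    Schedule="cron(0 1 ? * SUN *)",
    Duration=3,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)
print(window["WindowId"])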
Sources:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html
AWS IAM Identity Center allows you to centrally manage access to multiple AWS accounts and applications using identity federation. Some ways you can implement identity federation using IAM Identity Center are:
1. C onnect your existing identity provider, like Active Directory, to IAM Identity Center. This allows you to
continue managing your users in AD and sync the user identities to the IAM Identity Center.
2. Define policies in the IAM Identity Center to control what AWS accounts and resources individual users
or groups can access.
3. Create IAM roles in the AWS accounts and attach the required permissions policies to allow federated
users to assume roles.
4. Users can then access AWS accounts and applications using a single set of credentials through the
IAM Identity Center user portal after signing in using their AD credentials.
5. You can also federate user identities from external identity providers like Okta, PingFederate, etc., using SAML 2.0 or OpenID Connect.
6. IAM Identity Center also supports just-in-time access to AWS accounts by allowing users to directly assume roles without needing long-term credentials.
Sources:
https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html
https://repost.aws/knowledge-center/iam-identity-center-federation
The Amazon Elastic MapReduce (EMR) service allows you to run different types of big data frameworks in AWS. This is a managed big data platform for processing vast amounts of data using open-source tools such as Apache Hadoop, Flink, HBase, HCatalog, Hive, Hudi, Hue, Jupyter, Livy, MXNet, Oozie, Phoenix, Pig, Presto, Spark, Sqoop, TensorFlow, Tez, Zeppelin, ZooKeeper, and many more. It is quite obvious that you can run a lot of open-source data analytics workloads with Amazon EMR.
Technically, Amazon EMR runs your big data framework on Amazon EC2 instances, Amazon Elastic Kubernetes Service clusters, or in your on-premises EMR cluster via AWS Outposts. These compute resources are deployed in your VPC and then grouped as an Amazon EMR cluster. You can directly access and control the underlying EC2 instances of your EMR cluster. Take note that this service is not serverless since it is using Amazon EC2 and EKS; the Amazon EMR service just automates the server provisioning and management process for you. The data in your EMR cluster can also interact with other AWS data stores such as Amazon S3 and Amazon DynamoDB.
This service also has a more cost-efficient option called Amazon EMR Serverless, a serverless option in Amazon EMR that simplifies running open-source big data analytics frameworks for data analysts and engineers without configuring, managing, or scaling clusters or any virtual servers. Amazon EMR Serverless provides all the features and benefits of standard Amazon EMR but without the management overhead of planning, managing, and maintaining any computing clusters.
Amazon QuickSight
Amazon QuickSight is a serverless business intelligence (BI) service that lets you create and publish interactive dashboards and visualizations from your data in AWS and other sources.
Amazon Kinesis Data Streams is a powerful AWS service designed for real-time processing and analysis of large data streams. It allows you to easily collect, process, and analyze streaming data, enabling applications to respond quickly to new information.
Features
● Amazon Kinesis Data Streams is ideal for real-time applications like log and event data processing,
real-time analytics, and complex stream processing.
● The fundamental data unit stored in Kinesis Data Streams consists of a sequence number, partition
key, and data blob, which can be up to 1MB.
● Shards are the basic throughput unit of Kinesis Data Streams; each shard can handle up to 1 MB/s or 1,000 records/s for writes and up to 2 MB/s for reads (a producer sketch follows the use cases below).
● Offers two modes, on-demand and provisioned, to manage and scale the service according to your
needs.
● Data records are stored for a default of 24 hours, with the option to extend storage to 365 days for
long-term analysis (additional charges may apply).
● Producers input data into streams, while consumers process and analyze the data in real-time.
Use Cases
● Ideal for collecting and analyzing infrastructure performance and operational data in real-time.
● Perform analytics on data as it arrives, such as monitoring application activity or user interactions on a
website.
● Support for advanced data processing models, including multi-stage processing and parallel processing
of streams.
Amazon AppFlow
Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications and your AWS services. It supports different SaaS apps such as Salesforce, Marketo, Slack, ServiceNow, and many more. You can also integrate other AWS services like Amazon S3 and Amazon Redshift in just a few clicks. With AppFlow, you can run your data flows on-demand or by setting up a schedule. You can also run them in response to a business event. Amazon AppFlow provides you with powerful data transformation capabilities like filtering and validation to easily generate rich, ready-to-use data for your custom applications.
Amazon EventBridge
Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications, Software-as-a-Service (SaaS) applications, and other AWS services. Under the hood, Amazon EventBridge is based on CloudWatch Events with more advanced features. It actually uses the same service API, endpoint, and underlying service infrastructure as CloudWatch Events. However, Amazon EventBridge is meant to be used for your own applications, SaaS apps, and other external sources to complement the data provided by AWS services. It is used for building event-driven applications and takes care of event ingestion and delivery, security, authorization, and error handling for the user.
AWS App Runner allows you to easily run your web application, API services, backend web services, and websites on AWS without any infrastructure or container orchestration required. This service allows you to go directly from your existing container image, container registry, source code repository, or existing CI/CD
workflow to a fully running containerized web application on the AWS platform in a matter of minutes.
It seamlessly integrates with your development and CI/CD workflow to provide the appropriate level of automation to launch your code, application, or container image faster. AWS App Runner automates all the required dependencies, which eliminates the need for you to understand, provision, scale, or manage any AWS compute resources. In addition, this service also manages all the related networking and routing resources for your application. App Runner empowers you to run thousands of applications that automatically scale while providing security and compliance best practices.
AWS App2Container
AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on-premises, or in AWS. You just have to choose the application you want to containerize, and A2C will package the application artifact and dependencies into container images. It also configures the network ports and generates the ECS task and Kubernetes pod definitions.
AWS Copilot
AWS Copilot is also a command-line interface (CLI) that enables you to quickly launch and easily manage containerized applications on AWS. Copilot automates each step in the deployment lifecycle of your containers, from pushing the images to a container registry to creating a task definition and creating a container cluster.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS. It's like Amazon ECS, but instead of Docker containers, this service is used for running Kubernetes clusters. Amazon EKS automates the installation, operation, and maintenance of your own Kubernetes control plane, pods, and nodes.
You can deploy your Kubernetes cluster in various ways in AWS and can include additional networking add-ons to improve your containerized architecture. A Kubernetes cluster can be deployed via:
● An Amazon EKS cluster in your AWS account
● Amazon EKS on AWS Outposts
● Amazon EKS Anywhere
● Amazon EKS Distro
The first option allows you to launch a Kubernetes cluster using managed or self-managed Amazon EC2 nodes that you can customize and control. You can also choose to deploy your Kubernetes pods on AWS Fargate to make the cluster serverless and extremely cost-effective.
Amazon EKS on AWS Outposts is a deployment option that uses a physical AWS Outpost rack on your on-premises network to run your Kubernetes workloads. The data plane is also located on-premises, so you can have more control compared with running it exclusively in AWS. Using Amazon EKS Anywhere is another way to deploy your containers on-premises. It works like Amazon ECS Anywhere, which allows you to run your Kubernetes cluster entirely on your own. This means that the hardware, app deployment location, control plane, and data plane are all controlled on your own physical network. This gives you extensive control over all the components of your containerized application suite while maintaining official support from AWS.
The other deployment option you can choose is Amazon EKS Distro. The word "distro" simply refers to the distribution of the same open-source Kubernetes software deployed by Amazon EKS in the AWS cloud. Amazon EKS Distro follows the same Kubernetes version release cycle as Amazon EKS and is provided to you as an open-source project that you can deploy on your own computer or on-site environment. It's similar to the Amazon EKS Anywhere option, except that it does not include support services offered by AWS.
Red Hat OpenShift Service on AWS (ROSA) is a service in the AWS Cloud that is operated by Red Hat and jointly supported with AWS. It provides a fully managed Red Hat OpenShift platform on a pay-as-you-go billing option. This allows enterprise developers who are familiar with deploying their applications with OpenShift on-premises to quickly build and deploy applications. ROSA is also integrated with AWS Security Token Service (STS) support for a more integrated experience. This makes deploying cloud applications with dependencies on AWS cloud-native services much easier.
WS Database Migration Service or AWS DMS helps you migrate your databases to AWS quickly and securely.
A
The source database remains fully operational during the migration, minimizing downtime to applications that
rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely
used commercial and open-source databases, and it can also be used for continuous data replication with high
availability. You can use AWS Schema Conversion Tool or AWS SCT to make heterogeneous database
migration. You can transform the schema of your source database to a different one. You can convert
PostgreSQL database to MySQL, Oracle to Amazon Aurora, Apache Cassandra to DynamoDB, and many more.
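As a hedged sketch of how such a migration might be started programmatically (the endpoint and replication instance ARNs below are hypothetical placeholders), a full-load-plus-CDC task can be created with boto3:

import boto3

# Hypothetical ARNs for endpoints and a replication instance created beforehand
# (via the console, CLI, or infrastructure as code).
dms = boto3.client("dms")

response = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-db-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    # full-load-and-cdc = migrate existing data, then keep replicating changes
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules": [{"rule-type": "selection", "rule-id": "1", '
                  '"rule-name": "1", "object-locator": {"schema-name": "%", '
                  '"table-name": "%"}, "rule-action": "include"}]}',
)
print(response["ReplicationTask"]["Status"])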
Amazon DocumentDB is a fast, scalable, highly available MongoDB-compatible database service. MongoDB is
a document-oriented database program that is cross-platform and is also a type of NoSQL database. In a
MongoDB database, a table is called a "collection", a row a "document", and a column a "field". Each document
contains fields and values in JSON format with no rigid schema enforced, unlike in traditional SQL databases.
This is also the same concept in Amazon DocumentDB – it stores, queries, and indexes JSON documents.
That's how DocumentDB got its name!
As the name suggests, Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database
service in the AWS cloud. It delivers ultra-fast cache performance with microsecond read latency, single-digit
millisecond write latency, high throughput, and Multi-AZ durability for modern applications. Amazon MemoryDB
for Redis is perfect for microservices architectures that require distributed caching and for web applications
that need low latency, high scalability, and flexible data structures. It also has APIs to make your development
experience more agile and simplified.
Amazon MemoryDB for Redis stores your entire dataset in memory for faster access compared to storing it on
disk drives. It leverages a distributed transactional log to provide both in-memory speed and data durability,
consistency, and recoverability in the event of system outages.
Amazon ElastiCache
Let’s now discuss in-memory databases in AWS that are primarily used for caching.
As its name implies, Amazon ElastiCache is a caching service that allows you to set up, run, and scale
open-source in-memory databases like Memcached or Redis. By storing data in-memory, they can read data
more quickly than disk-based databases. If your application experiences performance slowdowns due to
frequent calls that return identical datasets, then you can apply database caching to remove this bottleneck.
You can refactor your application to use Amazon ElastiCache and fetch the data in-memory, instead of fetching
the same exact datasets again and again. Aside from caching, you can use this service for real-time analytics,
distributed session management, geospatial services, and many more. In ElastiCache, there are two engines
that you can launch: Amazon ElastiCache for Memcached or Amazon ElastiCache for Redis. Both of these
engines provide sub-millisecond latency and data partitioning, and require a minimal amount of code to
integrate into your application.
Amazon ElastiCache for Memcached is based on the open-source Memcached in-memory data store. This is
suitable for building a simple, scalable caching layer for your data-intensive apps. Memcached is
multithreaded, which means it can utilize multiple processing cores. It lets you handle more operations by
scaling up compute capacity. The downside of using Memcached is its lack of data replication capability,
which can affect the availability of your application.
Amazon ElastiCache for Redis is based on the open-source Redis in-memory data store. It provides advanced
data structures, pub/sub messaging, geospatial, and point-in-time snapshot support. In addition, it also has a
replication feature that is not available in Memcached. So if you need an in-memory database storage solution
that provides high availability using data replication, then this type is the one to use. You just have to enable
"Cluster Mode" in Redis to have multiple primary nodes and replicas across two or more Availability Zones.
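A minimal, hypothetical boto3 sketch of creating a cluster-mode-enabled Redis replication group with replicas spread across Availability Zones (the IDs, node type, and subnet group name are assumptions) might look like this:

import boto3

elasticache = boto3.client("elasticache")

# Hypothetical group ID and subnet group; a cluster-mode-enabled group has
# multiple node groups (shards), each with its own primary and replicas.
elasticache.create_replication_group(
    ReplicationGroupId="td-sessions",
    ReplicationGroupDescription="Session cache with replication",
    Engine="redis",
    CacheNodeType="cache.t3.small",
    NumNodeGroups=2,               # shards (primary nodes)
    ReplicasPerNodeGroup=1,        # one replica per shard
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
    CacheSubnetGroupName="my-cache-subnet-group",
)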
AWS Command Line Interface (AWS CLI) is a unified tool for managing AWS services. It enables users to
manage multiple AWS services via the command line and automate them using scripts. The latest version,
AWS CLI v2, introduces several enhancements, including improved installers, new configuration options like
AWS IAM Identity Center (successor to AWS SSO), and various interactive features. It is designed to simplify
the management and deployment of AWS resources, offering commands for a wide range of AWS services.
Users can perform actions such as launching and managing EC2 instances, publishing messages to SNS
topics, or syncing files to S3 buckets directly from the command line. The AWS CLI also supports file
commands for efficiently managing Amazon S3 objects, providing commands for listing, uploading, and
syncing files. The AWS CLI makes it easier for developers and system administrators to interact with AWS
services through scripting and direct command line access.
The AWS Cloud Development Kit, or AWS CDK for short, is an open-source software development kit for
Amazon Web Services. You can use this to programmatically model your AWS infrastructure using TypeScript,
Python, Java, .NET, Go, or other supported programming languages.
The AWS CDK Command Line Interface (CLI) can be used to interact with your CDK applications in AWS. The
CDK CLI is capable of listing the stacks defined in your CDK app, synthesizing the stacks into AWS
CloudFormation templates, determining the differences between running stack instances and the stacks
defined in your AWS CDK code, and deploying stacks to any public AWS Region that you choose.
The CDK framework is primarily used to author AWS CDK projects that are executed to generate AWS
CloudFormation templates. Various projects made using AWS CDK can be executed using the AWS CDK
command line or in a continuous delivery system that you own.
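For illustration, a minimal CDK v2 app in Python that models a single stack containing one versioned S3 bucket (the stack and bucket construct names are arbitrary) could look like the following; running the cdk synth or cdk deploy CLI commands against it produces and deploys the corresponding CloudFormation template:

from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct

class ArtifactStack(Stack):
    """Models one CloudFormation stack with a single versioned S3 bucket."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "ArtifactBucket", versioned=True)

app = App()
ArtifactStack(app, "ArtifactStack")
app.synth()  # emits the CloudFormation template into cdk.out/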
AWS CloudShell
AWS CloudShell is a browser-based shell that is available in the AWS Management Console. This service
makes it easier to securely manage, explore, and interact with your AWS resources via a cloud-based command
line interface. This is pre-authenticated with your IAM user or AWS Management Console credentials.
Various common development and operations tools are pre-installed on CloudShell, so there's no need to
install or set up any software on your local machine. You can easily run scripts with the AWS Command Line
Interface (AWS CLI), experiment with AWS service APIs using the AWS SDKs, or use a plethora of other tools to
fast-track your production process.
AWS CloudShell empowers you to automate tasks, manage infrastructure, and interact with various AWS
resources. This service can be used to clone repositories containing commonly used scripts, make edits to
those scripts, and store them for future reference. Common CLIs such as the Amazon Elastic Container Service
(Amazon ECS) CLI and the AWS Serverless Application Model (AWS SAM) CLI also come pre-installed, and you
can use the AWS SDKs within CloudShell to develop applications and manage your AWS resources.
AWS CodeArtifact
Basically, AWS CodeArtifact is a fully managed artifact repository service that can securely store, publish, and
distribute software packages. This is beneficial for companies in simplifying their software development
process and application deployment. The AWS CodeArtifact service works with commonly used package
managers and build tools like Maven and Gradle, Node Package Manager (NPM), yarn, pip, twine, NuGet, and
others.
AWS CodeArtifact can be integrated with AWS CodeBuild to improve your CI/CD workflow. The CodeArtifact
repositories can be specified as a source/target for consuming and publishing packages in your AWS
CodeBuild project configuration. The CodeBuild images have client tools for all the package types, which
makes the integration with CodeArtifact faster.
You can provision the build's IAM role in AWS CodeBuild and configure the build tool or package manager to
use the target repository and fetch a CodeArtifact authorization token at the start of the build. Once the build
completes, the artifacts can be published to your AWS CodeArtifact repository. The AWS CodeBuild builds can
also be triggered using Amazon EventBridge events emitted by a CodeArtifact repository when one of its
packages changes.
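As a rough sketch (the domain and repository names are hypothetical), a build script could fetch a CodeArtifact authorization token and repository endpoint with boto3 before configuring the package manager:

import boto3

codeartifact = boto3.client("codeartifact")

# Hypothetical domain and repository. The token is typically exported as an
# environment variable so npm, pip, or Maven can authenticate to the repo.
token = codeartifact.get_authorization_token(
    domain="my-company-domain",
    durationSeconds=3600,
)["authorizationToken"]

endpoint = codeartifact.get_repository_endpoint(
    domain="my-company-domain",
    repository="shared-packages",
    format="npm",
)["repositoryEndpoint"]

print(endpoint)  # point the package manager here and pass the token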
Amazon CodeGuru
Amazon CodeGuru is a suite of development services in AWS. It contains different tools and features such as
Amazon CodeGuru Reviewer, Amazon CodeGuru Profiler, BugBust, and many more. The primary function of
Amazon CodeGuru Reviewer is to provide intelligent recommendations for improving your application
performance, efficiency, and code quality. It can scan your code and detect a plethora of code defects like bad
exception handling, insecure CORS policy, path traversal, hardcoded credentials, and many more. You can also
integrate this with your CI/CD workflow so you can run code reviews and recommendations to improve your
codebase.
The other module for this service is called the Amazon CodeGuru Profiler. A profiler is basically a component
that collects CPU data and analyzes the runtime performance data from your live applications. This is helpful
in identifying expensive lines of code that inefficiently use the CPU, which causes CPU bottlenecks.
AWS Fault Injection Simulator is a managed service that provides you the capability to perform fault injection
experiments or simulations on your AWS workloads. Fault injection is based on the principles of chaos
engineering, which is basically the process of testing a distributed computing system to verify that it can
withstand unexpected disruptions or faults. In AWS Fault Injection Simulator, you run "experiments" on your
AWS workloads to stress the applications or the underlying resources by creating disruptive events, which
allows you to observe how your enterprise application responds. The information you gather here can help
improve the performance and resiliency of your applications. These experiments help you create the real-world
scenarios needed to uncover rare application issues that can be quite difficult to see or spot. AWS FIS provides
templates that generate disruptions as well as the needed controls and guardrails to run experiments in
production. It also provides an option to automatically roll back or stop an experiment if certain conditions are
met.
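A hedged example of kicking off an experiment from an existing experiment template (the template ID below is a placeholder) with boto3; the template itself defines the targets, actions, and stop conditions:

import boto3
import uuid

fis = boto3.client("fis")

# Hypothetical experiment template ID created earlier in AWS FIS.
response = fis.start_experiment(
    clientToken=str(uuid.uuid4()),      # idempotency token
    experimentTemplateId="EXT1a2b3c4d5e6f7",
    tags={"Environment": "staging"},
)
print(response["experiment"]["state"]["status"])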
AWS Control Tower is a service that helps you set up and govern a secure multi-account AWS environment. It
automates the setup of your multi-account AWS environment with just a few clicks. The setup uses blueprints
that follow AWS best practices for security and management. Control Tower provides mandatory high-level
rules, called guardrails, that help enforce your policies using service control policies or detect policy violations
using AWS Config rules.
You can use the AWS Control Tower service to automate the manual process of setting up a new landing zone.
Each landing zone that is launched by AWS Control Tower includes all the relevant best practices, identity
blueprints, federated access, and account structure. The blueprints implemented on AWS Control Tower
include, but are not limited to, the following:
● A multi-account environment via AWS Organizations
● Cross-account security audits using AWS IAM and AWS IAM Identity Center
Particular aspects of your AWS Control Tower landing zone are configurable in the AWS Management Console,
such as the selection of Regions and optional controls, while other modifications may be made outside the
console through automation. You can create more extensive customizations of your landing zone with the
Customizations for AWS Control Tower capability, which is a GitOps-style customization framework that works
with AWS Control Tower lifecycle events as well as AWS CloudFormation templates.
The Customizations for AWS Control Tower (CfCT) feature helps you customize your AWS Control Tower
landing zone while maintaining compliance with AWS best practices. The different customizations are
implemented via AWS CloudFormation templates and service control policies (SCPs) for a more automated
and granular approach. The AWS Control Tower lifecycle events are fully integrated with CfCT, which allows
your resource deployments to be kept synchronized with your landing zone.
When a new AWS account is created via Account Factory, all cloud resources that you configured and
attached to the account are deployed automatically. Custom templates and policies can also be deployed to
individual AWS accounts and organizational units (OUs) within your organization.
Amazon Lookout for Metrics is one of the services of the Amazon Lookout family for detecting anomalies in
your business metrics. An anomaly can be a sudden nosedive in your sales revenue or an unexpected drop in
your customer acquisition rates. It can identify unusual variances in your business metrics and alert you
immediately so you can take the proper course of action.
AWS Compute Optimizer is a service that optimizes your computing capacity in AWS by helping you right-size
your Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block Store (EBS) volume
configurations, and AWS Lambda function memory sizes. It uses machine learning to identify the optimal AWS
resource configurations by analyzing historical utilization metrics. AWS Compute Optimizer comes with a set
of APIs and an intuitive console experience to reduce your OPEX and efficiently increase workload
performance by recommending the appropriate AWS resources for your AWS workloads.
For example, AWS Compute Optimizer can recommend that you resize your current Amazon EC2 instance
to a smaller instance type if the CPU utilization of your virtual machine is constantly at 20%, even during peak
hours. Conversely, it can optimize your workloads by suggesting that you vertically scale up your instance if it
keeps breaching its burst capacity.
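A small boto3 sketch that lists the right-sizing finding and the top recommended instance type for each EC2 instance in the current account and Region (field names follow the Compute Optimizer API):

import boto3

optimizer = boto3.client("compute-optimizer")

# Lists findings such as OVER_PROVISIONED or UNDER_PROVISIONED along with
# the first (highest-ranked) recommended instance type for each instance.
recs = optimizer.get_ec2_instance_recommendations()
for rec in recs["instanceRecommendations"]:
    top_option = rec["recommendationOptions"][0]
    print(rec["instanceArn"], rec["finding"], "->", top_option["instanceType"])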
Amazon Managed Grafana is a fully managed service for Grafana. Grafana is an open-source analytics
platform that is commonly used to query, visualize, observe, and make use of your system data that are
gathered from multiple sources. When you hear the phrase "Amazon Managed", that means AWS is managing
the underlying infrastructure required to run an open-source program or a particular tool. For Amazon Managed
Grafana, AWS is the one that provisions and manages the required resources to run your Grafana dashboards,
along with its other dependencies.
Amazon Managed Grafana can collect system metrics from multiple data sources in your observability stack,
such as Amazon Managed Service for Prometheus, Amazon CloudWatch, and Amazon OpenSearch Service.
System alerts can also be automated by using different notification services in AWS. You can also integrate
this with third-party vendors like Datadog, Splunk, et cetera. In addition, you can set up your own self-managed
data source like InfluxDB and integrate it with your Grafana workspace. You also don't have to worry about the
infrastructure required to run your Grafana visualizations and dashboards since the necessary resources are all
provisioned and managed by AWS itself.
Amazon Managed Service for Prometheus is a fully managed service for the open-source monitoring tool
called Prometheus. This is commonly used for monitoring modern cloud-native applications and Kubernetes
clusters. Prometheus can enable you to securely ingest, store, and query metrics from different container
environments. In Amazon Managed Service for Prometheus, the resources required to run the open-source
Prometheus tool are all provisioned and managed by the AWS team themselves. Scaling the underlying
architecture is also handled by Amazon. You have the option to collect system metrics from your container
clusters running in AWS, running on-premises, or even both.
Amazon Managed Service for Prometheus allows you to use the open-source Prometheus query language, or
PromQL. This query language helps you monitor the performance of your containerized workloads that are
running on the AWS Cloud or on-site. It can also scale automatically as your workloads increase or shrink and
uses AWS security services to enable fast and secure access to data.
Amazon FSx
● Amazon FSx for Lustre
● Amazon FSx for Windows File Server
● Amazon FSx for NetApp ONTAP
● Amazon FSx for OpenZFS
Amazon FSx for Lustre is quite similar to Amazon EFS. It is also a POSIX-compliant shared file system that only
supports Linux servers. The word "Lustre" actually refers to the open-source file system that this service is
using. Lustre is a parallel file system used for large-scale cluster computing. The name Lustre is basically a
combination of the words Linux and cluster. FSx for Lustre is primarily used for High-Performance Computing
(HPC), machine learning, and other applications that need high-performance parallel storage for frequently
accessed 'hot' data. This service can provide a throughput of hundreds of gigabytes per second and millions of
IOPS to support your demanding workloads. You can mount an Amazon FSx for Lustre file share to your EC2
instances or your containers. You can use the Container Storage Interface (CSI) driver to connect it to your
Amazon EKS cluster.
Amazon FSx for Windows File Server is essentially a fully managed Microsoft Windows file server. Unlike
Lustre, which is Linux-based, this service is backed by a fully native Windows file system. There are a lot of
Microsoft-based technologies that you can integrate with this service. You can access this file share using the
Server Message Block (SMB) protocol, which is commonly used by Windows servers. You can also integrate
your existing Microsoft Active Directory to provision file system access to your users. Amazon FSx for
Windows File Server can be used as shared file storage for your Microsoft SharePoint, Microsoft SQL Server
database, Windows containers, or any other Windows-based applications.
Amazon FSx for NetApp ONTAP is a storage service in the AWS Cloud that enables customers to launch and
run fully managed ONTAP file systems in the cloud. Basically, ONTAP is NetApp's file system technology that
provides a widely adopted set of data access and data management capabilities for various organizations.
Amazon FSx for NetApp ONTAP provides the same features, performance, and APIs of on-premises NetApp
file systems with the added agility, scalability, and simplicity of a fully managed AWS service.
Amazon FSx for OpenZFS is a fully managed file storage service that enables customers to easily launch, run,
and scale fully managed file systems built on the open-source OpenZFS file system. This service helps you
migrate your on-premises file servers without changing your enterprise applications or how you manage data.
You can use FSx for OpenZFS to build new high-performance, data-intensive enterprise applications on the
AWS cloud.
AWS Backup
The function of the AWS Backup service is quite obvious based on its name. AWS Backup is a fully managed
backup service that makes it easy to automate your server and database backups. By default, the automated
backups of your RDS databases only have a 7-day retention period, and the maximum retention period is 35
days. By using AWS Backup, you can perform a daily snapshot of the database with the retention set to 90
days, a year, or even longer! You can also create a lifecycle policy to automatically move your backups to cold
storage.
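As a hedged illustration of the longer retention described above (the vault name, schedule, and retention values are assumptions), a backup plan with a daily rule that moves snapshots to cold storage and keeps them for a year could be created with boto3 like this:

import boto3

backup = boto3.client("backup")

# Hypothetical plan: daily snapshots at 05:00 UTC, moved to cold storage
# after 30 days and deleted after 365 days (well beyond the 35-day RDS limit).
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-rds-backups",
        "Rules": [
            {
                "RuleName": "daily-snapshot",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {
                    "MoveToColdStorageAfterDays": 30,
                    "DeleteAfterDays": 365,
                },
            }
        ],
    }
)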
The AWS Elastic Disaster Recovery (AWS DRS) service helps companies and organizations minimize downtime
and data loss with fast, reliable recovery of on-premises and cloud-based applications. It uses affordable
storage, minimal compute, and point-in-time recovery to meet your RTO/RPO in a cost-effective manner.
The resiliency of your AWS workloads will further be improved when you use AWS Elastic Disaster Recovery to
replicate on-premises or cloud-based applications running on supported operating systems to various AZs and
Regions. The AWS Management Console can be used to configure replication and launch settings, monitor
data replication, and launch instances for Business Continuity Process (BCP) drills or even for recovery
procedures. AWS Elastic Disaster Recovery can be configured on your source servers to initiate secure data
replication to the targets you specify. Your data is replicated to a staging area subnet of your Amazon VPC that
is located in the AWS Region and AWS account you select. Replicating the data into a staging area reduces
costs through the use of affordable cloud storage options and minimal compute resources to maintain the
ongoing replication.
AWS Proton
AWS Proton is a service that automates container and serverless deployments in AWS. It empowers your
platform teams and developers to have consistent development standards and best practices. This service is
very useful if you have a large number of developers in your organization. AWS Proton enables your developers
to deploy container and serverless applications using pre-approved stacks that your platform team manages. It
balances control and flexibility in your organization by allowing developers to innovate within the set guardrails
that you implement.
It also offers a self-service portal for your developers, which contains AWS Proton templates that they can use
and deploy. A Proton template contains all the information required to deploy your custom environments and
services. You can create an AWS Proton Component as well, which provides flexibility to your service
templates. These components in AWS Proton provide platform teams with a way to extend core infrastructure
patterns and define guardrails for your developers.
AWS CloudHSM
AWS CloudHSM is a fully managed, cloud-based hardware security module (HSM) service that enables you to
easily generate and use your own encryption keys. These encryption keys can be 128-bit or 256-bit, and they
are used to encrypt your custom data or other encryption keys. An HSM is just a physical hardware device that
performs cryptographic operations and securely stores cryptographic key material. This key material is
essentially random data, often represented as a Base64 or hexadecimal string, that is used by your encryption
key.
In CloudHSM, the cluster can be accessed or managed using CloudHSM clients, which are installed and hosted
on your Amazon EC2 instances. The CloudHSM cluster is deployed in your Amazon VPC. Your clients can
communicate with your HSM cluster using the elastic network interfaces of your HSMs. Since all of these
resources are in your Amazon VPC and under your control, the CloudHSM cluster only has one user or tenant –
which is you.
This is what single-tenant access means in CloudHSM. This service can be used to offload SSL processing for
your web servers, enable Transparent Data Encryption (TDE) for Oracle databases, and protect the private keys
for an issuing certificate authority (CA). You can also integrate CloudHSM with AWS KMS to create a custom
key store.
AWS Network Firewall is a managed network firewall service for your Amazon Virtual Private Clouds. This
network security service comes with intrusion prevention and detection capabilities. The AWS Network
Firewall service allows you to filter traffic within the perimeter of your Amazon VPCs. This service is commonly
used in various network security use cases such as inspecting VPC-to-VPC traffic, filtering outbound traffic,
securing both AWS Direct Connect and VPN traffic, as well as filtering Internet traffic. AWS Network Firewall
also offers fine-grained network security controls for interconnected VPCs via the AWS Transit Gateway.
You can also use this to filter your outbound traffic to prevent unwanted data loss, block malware, and satisfy
your strict network security compliance requirements. A single AWS Network Firewall can be configured with
thousands of rules that can filter out network traffic routed to known bad IP addresses or suspicious domain
names. It can also protect the AWS Direct Connect or VPN traffic that originates from client devices and your
on-premises environments. AWS Network Firewall can ensure that only authorized sources of traffic are
granted access to your mission-critical VPC resources. It is also capable of performing the same activities as
your Intrusion Detection Systems and Intrusion Prevention Systems (IDS/IPS). This is achieved by inspecting
all inbound Internet traffic using features such as ACL rules, stateful inspection, protocol detection, intrusion
prevention, et cetera.
Amazon Detective
Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security
issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources.
What it basically does is collect logs from AWS CloudTrail, Amazon VPC Flow Logs, Amazon GuardDuty
findings, and other AWS services, then use machine learning to analyze and conduct security investigations.
Amazon Cognito
Amazon Cognito is a security service for managing user identities, allowing developers to easily add user
sign-in, sign-up, and access control to their web and mobile apps. The service consists of two main
components: User Pools and Identity Pools.
User Pools function as a user directory managing sign-up and sign-in processes for app users. They enable
users to sign in directly with a user pool or through various third-party identity providers (IdPs) like Google,
Facebook, Amazon, and Apple. Amazon Cognito User Pools comply with standards such as OpenID Connect
(OIDC) for authentication and OAuth 2.0 for authorization, offering a secure and scalable user directory that
can be easily integrated into your apps.
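A minimal, hypothetical sketch of that user pool flow with boto3, assuming an existing user pool app client without a client secret and with the USER_PASSWORD_AUTH flow enabled (the client ID, username, and password are placeholders):

import boto3

cognito = boto3.client("cognito-idp")

# Hypothetical app client ID from an existing user pool (no client secret).
CLIENT_ID = "1234567890abcdefghij"

# Register a new user in the user pool.
cognito.sign_up(
    ClientId=CLIENT_ID,
    Username="jon@example.com",
    Password="S3curePassw0rd!",
    UserAttributes=[{"Name": "email", "Value": "jon@example.com"}],
)

# After the user is confirmed, authenticate and receive JWT tokens.
auth = cognito.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "jon@example.com", "PASSWORD": "S3curePassw0rd!"},
)
print(auth["AuthenticationResult"]["IdToken"][:25], "...")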
Identity Pools (Federated Identities) enable developers to grant authenticated users temporary access to AWS
services. By integrating Identity Pools into your application, you can authenticate users through an external IdP
and then provide them with credentials to access AWS resources like S3 buckets or DynamoDB tables,
adhering to fine-grained access controls based on their identity.
Key Features:
● Enhance security by enabling MFA, including SMS-based verification and Time-based One-Time
Passwords (TOTPs).
● Customize user sign-up and sign-in flows with AWS Lambda triggers and adjust authentication flows
according to your business needs.
● Identity providers supported by Amazon Cognito include:
○ Social identity providers such as Google, Facebook, and Amazon
○ Enterprise identity providers like Microsoft Active Directory, LDAP, and SAML 2.0-compatible
providers
○ Custom authentication solutions using OpenID Connect (OIDC) or OAuth 2.0 protocols
● Automatically handle millions of users without requiring upfront infrastructure investment or
management overhead
● Create precise access control policies, assigning roles and permissions based on user attributes,
groups, or custom conditions, ensuring secure resource access for authorized users.
AWS Virtual Private Network, or AWS VPN, is a service that enables you to securely connect your on-premises
network to AWS. This is basically just a regular VPN, which is an encrypted connection that passes through the
public Internet. It uses the IPsec protocol to authenticate and encrypt your data in transit.
It comprises two services: AWS Site-to-Site VPN and AWS Client VPN. The Site-to-Site VPN creates encrypted
tunnels between your network and your Amazon VPCs or AWS Transit Gateways. On the other hand, AWS
Client VPN is software that allows your users to connect to AWS or to on-premises resources. Both of these
types have a corresponding endpoint in your VPC. You can create a site-to-site VPN endpoint or a client VPN
endpoint.
AWS PrivateLink
AWS PrivateLink enables companies and customers to privately access services hosted on AWS in a highly
available and scalable manner while keeping all the network traffic within the internal AWS network. Customers
can access services powered by PrivateLink over a private connection from their Amazon Virtual Private Cloud
(VPC) or their on-premises networks, without using public IPs or requiring traffic to traverse the public Internet.
Services fronted by a Network Load Balancer can also be shared with other AWS customers through
PrivateLink.
You have to create Interface-type VPC endpoints for certain AWS services to take advantage of the PrivateLink
feature. These service endpoints will be shown as Elastic Network Interfaces (ENIs) with private IPs in your
Amazon VPCs. Any traffic destined for these IPs will get privately routed to the corresponding AWS services
internally, without traversing the public Internet.
AWS License Manager simplifies the process of managing your software licenses from various vendors such
as IBM, SAP, Oracle, and Microsoft across your AWS Cloud and on-premises environments. This service lets
administrators craft customized licensing rules that mirror the terms of their licensing agreements. System
administrators can use these custom rules to avoid licensing violations like using more licenses than their
service level agreement stipulates. These rules in AWS License Manager prevent companies from inadvertently
having a licensing breach by stopping the instance from launching or by sending a notification to system
administrators about a potential license infringement. You can gain control and visibility of all your software
licenses with the AWS License Manager dashboard, reducing the risk of non-compliance, misreporting, and
unnecessary costs due to licensing overages.
The AWS Service Health Dashboard is basically just a public dashboard that shows the status of all AWS
services across various regions. The AWS Service Health Dashboard contains the most up-to-the-minute
information on the service availability of each and every AWS service. You can be notified of any service
interruptions by subscribing to an RSS feed.
AWS Health API provides programmatic access to the AWS Health information that appears in your AWS
Personal Health Dashboard. The AWS Health API is basically a RESTful web service that you can access via
HTTPS and that returns responses in JSON format. This service is not available by default; you must have a
Business or Enterprise support plan in order to use it.
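A short boto3 sketch of that programmatic access, listing open and upcoming events; it assumes the account has the required support plan, and the Health API is served from a global endpoint (us-east-1 in the standard partition):

import boto3

health = boto3.client("health", region_name="us-east-1")

events = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]}
)
for event in events["events"]:
    # Each event describes an AWS service issue or scheduled change.
    print(event["service"], event["eventTypeCode"], event["statusCode"])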
AWS Resilience Hub can assist companies and organizations in proactively preparing for and protecting their
AWS applications from unexpected service disruptions. This service offers resiliency assessment and
validation that integrate into your software development lifecycle (SDLC) to discover the weak points of your
cloud architecture. AWS Resilience Hub provides the capability to estimate whether or not the recovery time
objective (RTO) and recovery point objective (RPO) for your applications and cloud solution can be met. It also
helps resolve resiliency issues even before they are released into your production environment. You can
continue to use AWS Resilience Hub even after you deploy your solutions into production to track the resiliency
posture
of your enterprise application. AWS Resilience Hub will send a notification to your team to launch the
associated recovery process in the event of a service outage.
Amazon Elastic Compute Cloud (Amazon EC2)
Features
● Server environments called instances.
● Package OS and additional installations in a reusable template called Amazon Machine Images.
● Secure login information for your instances using key pairs.
● Storage volumes for temporary data that are deleted when you STOP or TERMINATE your instance,
known as instance store volumes.
● Persistent storage volumes for your data using Elastic Block Store volumes (see AWS storage services).
● Multiple physical locations for deploying your resources, such as instances and EBS volumes, known as
regions and Availability Zones (see AWS overview).
● A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your
instances using security groups (see AWS networking and content delivery).
● Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses (see AWS
networking and content delivery).
● Metadata, known as tags, that you can create and assign to your EC2 resources.
● Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you
can optionally connect to your own network, known as virtual private clouds or VPCs (see AWS
networking and content delivery).
● Add a script that will be run on instance boot called user-data.
● Host Recovery for Amazon EC2 automatically restarts your instances on a new host in the event of an
unexpected hardware failure on a Dedicated Host.
Instance states
● The root device volume contains the image used to boot the instance.
● Instance Store-backed Instances
○ Any data on the instance store volumes is deleted when the instance is terminated (instance
store-backed instances do not support the Stop action) or if it fails (such as if an underlying
drive has issues).
● Amazon EBS-backed Instances
○ An Amazon EBS-backed instance can be stopped and later restarted without affecting data
stored in the attached volumes.
○ By default, the root device volume for an AMI backed by Amazon EBS is deleted when the
instance terminates.
AMI
Pricing
● On-Demand - pay for the instances that you use by the second, with no long-term commitments or
upfront payments.
● Reserved - make a low, one-time, up-front payment for an instance, reserve it for a one- or three-year
term, and pay a significantly lower hourly rate for these instances.
● Spot - request unused EC2 instances, which can lower your costs significantly. Spot Instances are
available at up to a 90% discount compared to On-Demand prices.
Security
● Use IAM to control access to your instances (see AWS Security and Identity Service).
○ IAM policies
○ IAM roles
● Restrict access by only allowing trusted hosts or networks to access ports on your instance.
● A security group acts as a virtual firewall that controls the traffic for one or more instances.
○ Evaluates all the rules from all the security groups that are associated with an instance to
decide whether to allow traffic or not.
○ By default, security groups allow all outbound traffic.
○ Security group rules are always permissive; you can't create rules that deny access.
○ Security groups are stateful.
● You can replicate the network traffic from an EC2 instance within your Amazon VPC and forward that
traffic to security and monitoring appliances for content inspection, threat monitoring, and
troubleshooting.
Networking
● An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. With it, you can
mask the failure of an instance or software by rapidly remapping the address to another instance in
your account.
● If you have not enabled auto-assign public IP address for your instance, you need to associate an
Elastic IP address with your instance to enable communication with the internet.
● An elastic network interface is a logical networking component in a VPC that represents a virtual
network card, which directs traffic to your instance.
● Scale with EC2 Auto Scaling groups and distribute traffic among instances using Elastic Load Balancing.
Monitoring
● Monitor your EC2 instances with CloudWatch. By default, EC2 sends metric data to CloudWatch in
5-minute periods.
● You can also enable detailed monitoring to collect data in 1-minute periods.
● Instance metadata is data about your instance that you can use to configure or manage the running
instance.
● View all categories of instance metadata from within a running instance at
http://169.254.169.254/latest/meta-data/
● Retrieve user data from within a running instance at http://169.254.169.254/latest/user-data (see the
IMDSv2 sketch after this list).
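A minimal sketch of retrieving metadata with IMDSv2 (token-based) requests from inside a running instance, using only the Python standard library:

import urllib.request

# IMDSv2: request a short-lived session token first, then pass it on every
# metadata request. Works only from inside an EC2 instance (link-local address).
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

meta_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(meta_req).read().decode())  # e.g. i-0abc12345...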
Amazon Elastic Container Registry (Amazon ECR)
● A managed AWS Docker registry service. Stores your Docker images that you can use to deploy on your
EC2, ECS, or Fargate deployments.
Features
● ECR supports the Docker Registry HTTP API V2, allowing you to use Docker CLI commands or your
preferred Docker tools in maintaining your existing development workflow.
● You can transfer your container images to and from Amazon ECR via HTTPS.
Components
● Registry
○ A registry is provided to each AWS account; you can create image repositories in your registry
and store images in them.
○ The URL for your default registry is https://aws_account_id.dkr.ecr.region.amazonaws.com.
● Authorization token
○ Your Docker client needs to authenticate to ECR registries as an AWS user before it can push
and pull images. The AWS CLI get-login-password command (get-login in AWS CLI v1) provides
you with authentication credentials to pass to Docker (see the sketch after this list).
● Repository
○ An image repository contains your Docker images.
○ ECR lifecycle policies enable you to specify the lifecycle management of images in a repository.
● Repository policy
○ You can control access to your repositories and the images within them with repository policies.
● Image
○ You can push and pull Docker images to your repositories. You can use these images locally on
your development system, or you can use them in ECS task definitions.
○ You can replicate images in your private repositories across AWS regions.
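As a rough sketch, the registry credentials normally obtained through get-login-password can also be fetched with boto3 and decoded before being handed to docker login:

import base64
import boto3

ecr = boto3.client("ecr")

# Returns a temporary token (valid for 12 hours) that decodes to "AWS:<password>".
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"]  # e.g. https://111122223333.dkr.ecr.us-east-1.amazonaws.com

# The username/password pair is then passed to `docker login <registry>`
# before pushing or pulling images.
print(registry, username)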
Amazon Elastic Container Service (Amazon ECS)
● A container management service to run, stop, and manage Docker containers on a cluster.
● ECS can be used to create a consistent deployment and build experience, manage and scale batch and
Extract-Transform-Load (ETL) workloads, and build sophisticated application architectures on a
microservices model.
Features
● You can create ECS clusters within a new or existing VPC.
● After a cluster is up and running, you can define task definitions and services that specify which Docker
container images to run across your clusters.
Components
● Clusters
○ When using the Fargate launch type with tasks within your cluster, ECS manages your cluster
resources.
○ Enabling managed Amazon ECS cluster auto-scaling allows ECS to manage the scale-in and
scale-out actions of the Auto Scaling group.
● Services
○ ECS allows you to run and maintain a specified number of instances of a task definition
simultaneously in a cluster.
○ In addition to maintaining the desired count of tasks in your service, you can optionally run your
service behind a load balancer.
○ There are two deployment strategies in ECS:
■ Rolling Update
■ This involves the service scheduler replacing the current running version of the
container with the latest version.
■ Blue/Green Deployment with AWS CodeDeploy
■ This deployment type allows you to verify a new deployment of a service before
sending production traffic to it.
■ The service must be configured to use either an Application Load Balancer or
Network Load Balancer.
● Container Agent (AWS ECS Agent)
○ The container agent runs on each infrastructure resource within an ECS cluster.
○ It sends information about the resource's current running tasks and resource utilization to ECS,
and starts and stops tasks whenever it receives a request from ECS.
○ Container agent is only supported on Amazon EC2 instances.
AWS Fargate
● You can use Fargate with ECS to run containers without having to manage servers or clusters of EC2
instances.
● You no longer have to provision, configure, or scale clusters of virtual machines to run containers.
● Fargate only supports container images hosted on Elastic Container Registry (ECR) or Docker Hub.
● Fargate task definitions require that the network mode is set to awsvpc. The awsvpc network mode
provides each task with its own elastic network interface (see the run_task sketch after this list).
● Fargate task definitions only support the awslogs log driver for the log configuration. This configures
your Fargate tasks to send log information to Amazon CloudWatch Logs.
● Task storage is ephemeral. After a Fargate task stops, the storage is deleted.
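A hedged boto3 sketch of launching a Fargate task with the awsvpc network configuration (the cluster name, task definition, subnet, and security group IDs are placeholders):

import boto3

ecs = boto3.client("ecs")

# awsvpc networking gives the task its own elastic network interface.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-app:3",        # family:revision
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)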
Monitoring
● You can configure your container instances to send log information to CloudWatch Logs. This enables
you to view different logs from your container instances in one convenient location.
● With CloudWatch Alarms, watch a single metric over a time period that you specify, and perform one or
more actions based on the value of the metric relative to a given threshold over a number of time
periods.
● Share log files between accounts, and monitor CloudTrail log files in real-time by sending them to
CloudWatch Logs.
AWS Elastic Beanstalk
● Allows you to quickly deploy and manage applications in the AWS Cloud without worrying about the
infrastructure that runs those applications.
● Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling,
and application health monitoring for your applications.
● Elastic Beanstalk supports Docker containers.
● Elastic Beanstalk Workflow: create an application, upload an application version, launch an environment,
and then manage the environment, uploading and deploying new application versions as your
application changes.
AWS Lambda
● A serverless compute service. Function-as-a-Service.
● Lambda executes your code only when needed and scales automatically.
● Lambda functions are stateless - no affinity to the underlying infrastructure.
● You choose the amount of memory you want to allocate to your functions, and AWS Lambda allocates
proportional CPU power, network bandwidth, and disk I/O.
● Function – a script or program that runs in Lambda. Lambda passes invocation events to your function.
The function processes an event and returns a response.
● Runtimes – Lambda runtimes allow functions in different languages to run in the same base execution
environment. The runtime sits in between the Lambda service and your function code, relaying
invocation events, context information, and responses between the two.
● Layers – Lambda layers are a distribution mechanism for libraries, custom runtimes, and other function
dependencies. Layers let you manage your in-development function code independently from the
unchanging code and resources that it uses.
● Event source – an AWS service or a custom service that triggers your function and executes its logic.
● Downstream resources – an AWS service that your Lambda function calls once it is triggered.
● Log streams – While Lambda automatically monitors your function invocations and reports metrics to
CloudWatch, you can annotate your function code with custom logging statements that allow you to
analyze the execution flow and performance of your Lambda function.
● AWS Serverless Application Model
Lambda Functions
● You upload your application code in the form of one or more Lambda functions. Lambda stores code in
Amazon S3 and encrypts it at rest.
● To create a Lambda function, you first package your code and dependencies in a deployment package.
Then, you upload the deployment package to create your Lambda function (a minimal handler sketch
follows this list).
● After your Lambda function is in production, Lambda automatically monitors functions on your behalf,
reporting metrics through Amazon CloudWatch.
● Configure basic function settings, including the description, memory usage, execution timeout, and role
that the function will use to execute your code.
● Environment variables are always encrypted at rest and can be encrypted in transit as well.
● Versions and aliases are secondary resources that you can create to manage function deployment and
invocation.
● A layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Use layers to
manage your function's dependencies independently and keep your deployment package small.
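For reference, a minimal Python handler of the kind you would place in such a deployment package (the event shape here is purely hypothetical):

import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; 'event' carries the trigger's payload."""
    name = event.get("name", "DevOps Engineer")
    # print() output ends up in the function's CloudWatch Logs log stream.
    print(f"Received event: {json.dumps(event)}")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }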
AWS Serverless Application Model (AWS SAM)
● An open-source framework for building serverless applications.
● It provides shorthand syntax to express functions, APIs, databases, and event source mappings.
● You create a JSON or YAML configuration template to model your applications.
● During deployment, SAM transforms and expands the SAM syntax into AWS CloudFormation syntax.
Any resource that you can declare in an AWS CloudFormation template, you can also declare in an AWS
SAM template.
● The SAM CLI provides a Lambda-like execution environment that lets you locally build, test, and debug
applications defined by SAM templates. You can also use the SAM CLI to deploy your applications to
AWS.
● You can use AWS SAM to build serverless applications that use any runtime supported by AWS
Lambda. You can also use SAM CLI to locally debug Lambda functions written in Node.js, Java, Python,
and Go.
● Commonly used SAM CLI commands
○ The sam init command generates pre-configured AWS SAM templates.
○ The sam local command supports local invocation and testing of your Lambda functions and
SAM-based serverless applications by executing your function code locally in a Lambda-like
execution environment.
○ The sam package and sam deploy commands let you bundle your application code and
dependencies into a "deployment package" and then deploy your serverless application to the
AWS Cloud.
○ The sam logs command enables you to fetch, tail, and filter logs for Lambda functions.
○ The output of the sam publish command includes a link to your application in the AWS
Serverless Application Repository.
○ Use sam validate to validate your SAM template.
AWS Serverless Application Repository
● The AWS Serverless Application Repository is a central location where users can easily discover, deploy,
and publish serverless applications in the AWS Cloud.
● Deeply integrated with the AWS Lambda console for easy serverless computing integration.
Publishing Applications
● Publishing applications in the AWS Serverless Application Repository allows developers to share their
serverless applications with the broader community.
Key steps:
○ Users define serverless applications using an AWS Serverless Application Model (AWS SAM)
template. This template describes the application, its resources, and permissions.
○ Users can publish applications using the AWS Management Console, AWS SAM CLI, or an AWS
SDK, which will upload the user's code and SAM template.
○ Users' applications are initially private, only visible to their AWS accounts. They can choose to
share them privately with specific accounts or publicly with all users.
○ When public, the application is available in all AWS regions. The repository copies deployment
artifacts like code to S3 buckets in other regions for easy deployment globally.
○ Applications can be licensed openly under common licenses.
○ Nested applications containing multiple services can also be published like standalone apps.
Deploying Applications
● Deploying applications from the AWS Serverless Application Repository is straightforward. Users can
browse, search, and filter applications. Once they find an application, they can configure any parameters
and deploy it with a few clicks from the AWS Lambda console.
○ Applications are tested and reviewed by AWS and partners to ensure they work as described.
Some may have a verified author badge linking to the publisher profile.
○ Before deploying, check the application documentation and permissions needed. Make sure it
meets your use case and security requirements.
○ Deploying provisions the necessary AWS resources like Lambda functions, APIs, etc., and
handles all the plumbing. Users don't need serverless expertise.
○ Standard AWS pricing applies to the underlying services. There is no additional cost for
deploying applications from the repository.
○ Post deployment, you can manage and monitor the application like any other AWS resources
using the management console or AWS CLI/SDKs.
References:
https://aws.amazon.com/serverless/serverlessrepo/faqs/
https://docs.aws.amazon.com/serverlessrepo/latest/devguide/what-is-serverlessrepo.html
https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverlessrepo-consuming-applications.html
https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverlessrepo-publishing-applications.html
Amazon EBS
● Block level storage volumes for use with EC2 instances.
● Well-suited for use as the primary storage for file systems, databases, or for any applications that
require fine granular updates and access to raw, unformatted, block-level storage.
● Well-suited to both database-style applications (random reads and writes), and to throughput-intensive
applications (long, continuous reads and writes).
Amazon EFS
● A fully managed file storage service that makes it easy to set up and scale file storage in the Amazon
Cloud.
Features
● The service manages all the file storage infrastructure for you, avoiding the complexity of deploying,
patching, and maintaining complex file system configurations.
● EFS supports the Network File System version 4 protocol.
● Multiple Amazon EC2 instances can access an EFS file system at the same time, providing a common
data source for workloads and applications running on more than one instance or server.
● Moving your EFS file data can be managed simply with AWS DataSync - a managed data transfer
service that makes it faster and simpler to move data between on-premises storage and Amazon EFS.
Amazon S3
● S3 stores data as objects within buckets.
● An object consists of a file and optionally any metadata that describes that file.
● A key is the unique identifier for an object within a bucket.
● Storage capacity is virtually unlimited.
● Good for storing static web content or media. Can be used to host static websites.
Buckets
○ After you create the bucket, you cannot change the name.
○ The bucket name is visible in the URL that points to the objects that you're going to put in your
bucket.
● You can host static websites by configuring your bucket for website hosting (a minimal sketch of this
configuration follows below).
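A minimal boto3 sketch of enabling website hosting on an existing bucket (the bucket name is a placeholder; public read access or a CDN in front of the bucket is assumed to be handled separately):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="tutorialsdojo-sample-site",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)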
Security
A VPC endpoint is what you use to privately connect your VPC to supported AWS services, such as Amazon S3.
It adds a gateway entry in your VPC's route table so that communication between your AWS resources, such as
Amazon EC2 instances, and your S3 bucket passes through the gateway instead of the public internet. As a
result, the VPC endpoint is a regional service. You should create the endpoint in the same region as the VPC
you want to link it to.
VPC endpoints are best used when you have compliance requirements or sensitive information stored in S3
that should not leave the Amazon network. A VPC endpoint is also a better option for private network
connections in AWS, as compared to using a VPN solution or a NAT solution, since it is easier to set up and
offers you more network bandwidth at your disposal.
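As a hedged sketch (the VPC, Region, and route table IDs are placeholders), a Gateway endpoint for S3 can be created with boto3 as follows:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A Gateway endpoint adds a route so S3 traffic from the VPC stays on the
# AWS network instead of traversing the public internet.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],
)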
Amazon S3 Glacier
Amazon S3 Glacier is a durable, highly secure, cost-effective cloud storage service created for data archiving
and long-term backup. It is renowned for its exceptional durability of 99.999999999%, and it offers
comprehensive security and compliance features to meet stringent regulatory requirements. Additionally, it
enables query-in-place functionality, allowing you to run analytics directly on your archived data.
Storage Classes
S3 Glacier offers three main storage classes to cater to different retrieval needs and cost efficiencies:
● S3 Glacier Instant Retrieval is ideal for archiving data that is accessed no more than once per quarter
and requires rapid retrieval in milliseconds. It offers significant savings over S3 Standard-Infrequent
Access (S3 Standard-IA), albeit with higher data access costs.
● S3 Glacier Flexible Retrieval is suited for archives where data might occasionally need to be accessed.
It offers expedited retrievals in minutes. Objects have a minimum storage duration of 90 days. This
class allows for free bulk retrievals that can be completed in 5-12 hours.
● S3 Glacier Deep Archive is the most cost-effective solution for long-term archiving of data that rarely
needs to be accessed. Its minimum storage duration is 180 days, and the default retrieval time is 12
hours.
Key Features
● Amazon S3 Glacier offers 99.999999999% durability and provides comprehensive security and compliance capabilities to meet stringent regulatory requirements.
● It provides flexible retrieval options from a few minutes to several hours, catering to different needs and
cost considerations.
● Amazon S3 Glacier is designed to be an economical choice for data archiving. Pricing varies based on
the storage class, data retrieval time, and amount of data stored. Data transfer into Amazon S3 is free,
while data transfer out is priced by region.
● Archives can be managed easily through the AWS Management Console, AWS CLI, and AWS SDKs. The service supports backup and restore automation and includes features like S3 Object Lock for additional data protection.
Usage Tips
● When using the S3 Glacier storage classes, remember that your objects remain in Amazon S3; you cannot access them directly through the separate Amazon S3 Glacier vault service.
● To avoid unexpected costs, consider the minimum storage duration charges before deleting or
transitioning objects to a different storage class.
● Use Amazon S3 Lifecycle policies to automate the transition of S3 objects to the Glacier storage classes, helping manage lifecycle and costs efficiently (see the sketch after this list).
● Consider aggregating files into a single archive for bulk data or numerous small files to minimize
storage overhead and costs.
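As a sketch of the lifecycle tip above (the bucket name and prefix are hypothetical), a single rule can transition objects to Glacier Flexible Retrieval after 90 days and to Deep Archive after a year:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="tutorialsdojo-archive",  # hypothetical bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 90, "StorageClass": "GLACIER"},        # Flexible Retrieval
                        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                    ],
                }
            ]
        },
    )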
Pricing
Amazon S3 Glacier's pricing is tailored for cost-effectiveness in data archiving and long-term backup, and is influenced by storage, retrieval, and data transfer costs. Storage pricing varies by class: S3 Glacier Instant Retrieval suits data that needs quick access but costs more to store, S3 Glacier Flexible Retrieval is designed for infrequent access with a minimum 90-day storage duration, and S3 Glacier Deep Archive is the lowest-cost option for rarely accessed data with a minimum 180-day storage duration. Retrieval costs depend on the storage class: Instant Retrieval provides the quickest access at higher cost, Flexible Retrieval balances speed and cost, and Deep Archive is the most economical for infrequent access. Data transfer into S3 Glacier is free, but transferring data out incurs costs based on the AWS region and the amount of data. Additionally, operations such as PUT, COPY, and POST requests and data retrievals contribute to the overall cost, along with minimum storage duration charges for early deletions or transitions and minimum object size charges in some storage classes.
References:
https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/amazon-s3-glacier.html
https://aws.amazon.com/s3/storage-classes/glacier/
https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/amazon-s3-glacier.html
AWS Storage Gateway
Security
AWS Storage Gateway ensures secure data transfer and storage by using SSL encryption for data in transit between your gateway appliance and AWS. It also supports FIPS 140-2 compliant endpoints in specific regions and integrates with AWS IAM for access management. Data at rest can be encrypted through AWS KMS integration with either default or custom keys. The service is designed to operate within AWS's high-security standards, following the shared responsibility model, which divides security responsibilities between AWS and the customer.
Pricing
AWS Storage Gateway uses a pay-as-you-go pricing model. The costs depend on the specific services used
(file, tape, or volume gateway), data storage, transfer volumes, and any additional features such as snapshots
or tape storage. AWS provides a Free Tier for new users, allowing them to ingest up to 100 GB of data through
the Storage Gateway service. This enables businesses to try out the service at no initial cost.
References:
https://aws.amazon.com/storagegateway/features/
https://docs.aws.amazon.com/storagegateway/latest/vgw/security.html
Amazon Aurora
● A fully managed relational database engine that's compatible with MySQL and PostgreSQL.
● With some workloads, Aurora can deliver up to five times the throughput of MySQL and up to three
times the throughput of PostgreSQL.
● Aurora includes a high-performance storage subsystem. The underlying storage grows automatically as
needed, up to 64 terabytes. The minimum storage is 10GB.
● DB Clusters
○ An Aurora DB cluster consists of one or more DB instances and a cluster volume that manages
the data for those DB instances.
○ An Aurora cluster volume is a virtual database storage volume that spans multiple AZs, with
each AZ having a copy of the DB cluster data.
○ Cluster Types:
■ Primary DB instance – Supports read and write operations, and performs all of the data
modifications to the cluster volume. Each Aurora DB cluster has one primary DB
instance.
■ Aurora Replica – Connects to the same storage volume as the primary DB instance and
supports only read operations. Each Aurora DB cluster can have up to 15 Aurora Replicas
in addition to the primary DB instance. Aurora automatically fails over to an Aurora
Replica in case the primary DB instance becomes unavailable. You can specify the
failover priority for Aurora Replicas. Aurora Replicas can also offload read workloads
from the primary DB instance.
● Monitoring
○ Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB
cluster, DB cluster snapshot, DB parameter group, or DB security group.
○ Database log files
○ RDS Enhanced Monitoring — view operating system metrics for your DB instance in real time.
○ RDS Performance Insights monitors your Amazon RDS DB instance load so that you can analyze
and troubleshoot your database performance.
○ Use CloudWatch Metrics, Alarms, and Logs
● Aurora Serverless v2 is an auto-scaling configuration for Amazon Aurora that allows databases to automatically scale capacity up or down based on real-time usage (see the provisioning sketch after the use cases below).
● It supports all features of provisioned Aurora including read replicas, multi-AZ configuration, Global
Database, RDS proxy and Performance Insights.
● Aurora Serverless v2 scales capacity incrementally in small units of 0.5 Aurora Capacity Units (ACUs),
allowing the capacity to closely match the application's needs.
● With reader instances, it separates read and write workloads, improving performance and allowing more flexible scaling of each.
● Some of the advantages of Aurora Serverless v2:
○ Allows all types of database workloads from simple development/test to mission critical
applications.
○ Provides high scalability even during long transactions or table locks.
○ Almost all configuration parameters can be modified as with provisioned clusters.
○ It provides simpler capacity management than provisioned Aurora as it automatically scales
capacity up or down based on usage, reducing the effort needed for manual resizing.
○ It allows faster and easier scaling during periods of high activity with no disruption to the
database.
○ It is more cost-effective during periods of low activity as you only pay for the resources
consumed.
● Some common use cases of Aurora Serverless v2:
○ Development and test environments that have intermittent usage.
○ Applications with unpredictable traffic patterns like websites facing sudden spikes.
○ Mission critical databases that require high availability and scale on demand to handle traffic.
○ Data warehousing workloads where analysis or reporting jobs are run on the database during
certain periods.
○ Serverless applications where the underlying database also needs to scale automatically based
on incoming requests to the application.
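A minimal boto3 sketch of provisioning a Serverless v2 cluster (identifiers, credentials, and the ACU range are hypothetical; engine defaults are left to the service):

    import boto3

    rds = boto3.client("rds")

    # Create an Aurora MySQL cluster with a Serverless v2 capacity range (in ACUs).
    rds.create_db_cluster(
        DBClusterIdentifier="td-serverless-cluster",   # hypothetical
        Engine="aurora-mysql",
        MasterUsername="admin",
        MasterUserPassword="ReplaceWithASecret1!",
        ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 8},
    )

    # Instances in the cluster use the special "db.serverless" instance class.
    rds.create_db_instance(
        DBInstanceIdentifier="td-serverless-writer",
        DBClusterIdentifier="td-serverless-cluster",
        DBInstanceClass="db.serverless",
        Engine="aurora-mysql",
    )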
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html
Amazon DynamoDB
● NoSQL database service that provides fast and predictable performance with seamless scalability.
● Offers encryption at rest.
● You can create database tables that can store and retrieve any amount of data, and serve any level of
request traffic.
● You can scale up or scale down your tables' throughput capacity without downtime or performance
degradation, and use the AWS Management Console to monitor resource utilization and performance
metrics.
Core Components
● DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for DynamoDB (a provisioning sketch follows this list).
● DAX delivers microsecond response times for accessing eventually consistent data.
● It requires only minimal functional changes to use DAX with an existing application since it is
API-compatible with DynamoDB.
● For read-heavy or bursty workloads, DAX provides increased throughput and potential cost savings by
reducing the need to overprovision read capacity units.
● DAX lets you scale on demand.
● DAX is fully managed. You no longer need to do hardware or software provisioning, setup, and
configuration, software patching, operating a reliable, distributed cache cluster, or replicating data over
multiple instances as you scale.
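A rough boto3 sketch of provisioning a small DAX cluster; the cluster name, node type, and IAM role ARN are hypothetical, and the role must allow DAX to access your DynamoDB tables:

    import boto3

    dax = boto3.client("dax")

    dax.create_cluster(
        ClusterName="td-dax-cluster",   # hypothetical
        NodeType="dax.t3.small",
        ReplicationFactor=3,            # one primary node plus two read replicas
        IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",  # hypothetical role
    )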
● After you enable DynamoDB Streams on a table, associate the table with a Lambda function. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records (see the sketch after this list).
● Configure the StreamSpecification you want for your DynamoDB Streams:
○ StreamEnabled (Boolean) - indicates whether DynamoDB Streams is enabled (true) or disabled
(false) on the table.
○ StreamViewType (string) - when an item in the table is modified, StreamViewType determines
what information is written to the stream for this table. Valid values for StreamViewType are:
■ KEYS_ONLY - Only the key attributes of the modified items are written to the stream.
■ NEW_IMAGE - The entire item, as it appears after it was modified, is written to the
stream.
■ OLD_IMAGE - The entire item, as it appeared before it was modified, is written to the
stream.
■ NEW_AND_OLD_IMAGES - Both the new and the old item images of the items are written
to the stream.
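As a sketch of the two steps above (the table and function names are hypothetical), you enable the stream on the table and then create an event source mapping so Lambda polls it:

    import boto3

    dynamodb = boto3.client("dynamodb")
    lambda_client = boto3.client("lambda")

    # 1. Enable DynamoDB Streams on an existing table with both item images.
    response = dynamodb.update_table(
        TableName="Orders",  # hypothetical table
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": "NEW_AND_OLD_IMAGES",
        },
    )
    stream_arn = response["TableDescription"]["LatestStreamArn"]

    # 2. Point a Lambda function at the stream; Lambda polls it and invokes the function.
    lambda_client.create_event_source_mapping(
        EventSourceArn=stream_arn,
        FunctionName="process-order-changes",  # hypothetical function
        StartingPosition="LATEST",
        BatchSize=100,
    )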
Sources:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_StreamSpecification.html
Amazon RDS
● Supports Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server.
● You can get high availability with a primary instance and a synchronous secondary instance that you
can fail over to when problems occur. You can also use MySQL, MariaDB, or PostgreSQL Read Replicas
to increase read scaling.
● You can select the computation and memory capacity of a DB instance, determined by its DB instance class. If your needs change over time, you can change the DB instance class.
● Each DB instance has minimum and maximum storage requirements depending on the storage type
and the database engine it supports.
● You can run your DB instance in several AZs, an option called a Multi-AZ deployment. Amazon RDS automatically provisions and maintains a synchronous standby DB instance in a different AZ. Your primary DB instance is synchronously replicated across AZs to the standby instance to provide data redundancy and failover support, eliminate I/O freezes, and minimize latency spikes during system backups (see the sketch below).
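A short boto3 sketch of launching a Multi-AZ MySQL instance; the identifier, instance class, and credentials are hypothetical placeholders:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="td-mysql-prod",      # hypothetical
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,                      # GiB
        MasterUsername="admin",
        MasterUserPassword="ReplaceWithASecret1!",
        MultiAZ=True,                              # synchronous standby in another AZ
    )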
Security
● A resource owner is the AWS account that created a resource. That is, the resource owner is the AWS account of the principal entity (the root account, an IAM user, or an IAM role) that authenticates the request that creates the resource.
● A permissions policy describes who has access to what. Policies attached to an IAM identity are identity-based policies (IAM policies), and policies attached to a resource are resource-based policies. Amazon RDS supports only identity-based policies (IAM policies).
● MySQL and PostgreSQL both support IAM database authentication.
Read Replicas
● Updates made to the source DB instance are asynchronously copied to the Read Replica (see the sketch after this list).
● You can reduce the load on your source DB instance by routing read queries from your applications to
the Read Replica.
● You can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy
database workloads.
● You can create a Read Replica that has a different storage type from the source DB instance.
Backups and Snapshots
● Your DB instance must be in the ACTIVE state for automated backups to occur. Automated backups and automated snapshots don't occur while a copy is executing in the same region for the same DB instance.
● The first snapshot of a DB instance contains the data for the full DB instance. Subsequent snapshots of
the same DB instance are incremental.
● The default backup retention period is one day if you create the DB instance using the RDS API or the
AWS CLI, or seven days if you use the AWS Console.
● Manual snapshots are limited to 100 per region.
● You can copy a snapshot within the same AWS Region, you can copy a snapshot across AWS Regions,
and you can copy a snapshot across AWS accounts.
● When you restore a DB instance to a point in time, the default DB parameter group and default DB security group are applied to the new DB instance.
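To tie the Read Replica and snapshot bullets together, here is a hedged boto3 sketch; all identifiers and the destination Region are hypothetical:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create a Read Replica from an existing source instance (asynchronous replication).
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="td-mysql-replica",
        SourceDBInstanceIdentifier="td-mysql-prod",
        DBInstanceClass="db.m5.large",
    )

    # Copy a manual snapshot to another Region for disaster recovery.
    rds_west = boto3.client("rds", region_name="us-west-2")
    rds_west.copy_db_snapshot(
        SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:123456789012:snapshot:td-mysql-snap",
        TargetDBSnapshotIdentifier="td-mysql-snap-copy",
        SourceRegion="us-east-1",
    )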
Amazon Redshift
● Amazon Redshift is a fully managed AWS data warehouse service that makes it easy and affordable to analyze data using SQL and various business intelligence tools.
● With Amazon Redshift, you don't need to worry about provisioning, patching, backup or monitoring the
underlying infrastructure. It offers automatic scaling of storage and compute capacity.
● Some key features of Amazon Redshift include Redshift Advisor, which provides recommendations for optimizing queries and cluster configuration; integration with other AWS services such as S3 and Glue for data loading and management; and security features like encryption and IAM roles.
● Amazon Redshift enables you to analyze petabytes of structured and semi-structured data using
existing SQL skills, with support for popular business intelligence tools through high-performance
ODBC/JDBC drivers and SQL queries.
● It provides fast performance for both analytical and operational workloads through massively parallel
processing architecture and columnar data storage. Pricing is on a per-second usage basis so you only
pay for what you use.
● Cluster - This is the fundamental unit of compute and storage in Amazon Redshift. A cluster consists of one or more nodes that work together to handle queries and store data.
● Nodes - Each cluster contains leader and compute nodes. The leader node coordinates queries and manages the cluster, while compute nodes store data and perform query processing.
● Databases - Redshift organizes data into databases, which can contain multiple schemas and tables.
● Tables - Data is stored in columns and rows within tables in a database. Redshift uses columnar storage, which improves query performance.
● Storage - Each node contains disks that store columnar data and can independently scale storage capacity and throughput.
● Query engine - Redshift uses a massively parallel processing (MPP) architecture and a distributed query engine to process queries across all nodes very quickly.
● Security - Authentication is managed through IAM roles and policies. Data is encrypted at rest and in transit using KMS and SSL. Network access is governed by security groups and VPC endpoints.
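A minimal boto3 sketch of creating a small cluster that reflects the concepts above; the identifier, node type, and credentials are hypothetical:

    import boto3

    redshift = boto3.client("redshift")

    redshift.create_cluster(
        ClusterIdentifier="td-analytics",          # hypothetical
        NodeType="ra3.xlplus",
        NumberOfNodes=2,                           # compute nodes; the leader node is added automatically
        MasterUsername="awsuser",
        MasterUserPassword="ReplaceWithASecret1!",
        Encrypted=True,                            # encrypt data at rest with KMS
    )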
References:
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
Amazon API Gateway
○ You can set up a cache with customizable keys and time-to-live in seconds for your API data to avoid hitting your backend services for each request.
○ API Gateway lets you run multiple versions of the same API simultaneously with API Lifecycle.
○ After you build, test, and deploy your APIs, you can package them in an API Gateway usage
plan and sell the plan as a Software as a Service (SaaS) product through AWS Marketplace.
○ API Gateway offers the ability to create, update, and delete documentation associated with each
portion of your API, such as methods and resources.
○ Amazon API Gateway offers general availability of HTTP APIs, which gives you the ability to route requests to private ELBs, AWS AppConfig, Amazon EventBridge, Amazon Kinesis Data Streams, Amazon SQS, AWS Step Functions, and IP-based services registered in AWS Cloud Map, such as ECS tasks. Previously, HTTP APIs enabled customers to only build APIs for their serverless applications or to proxy requests to HTTP endpoints (see the sketch after this list).
○ You can create data mapping definitions from an HTTP API’s method request data (e.g. path
parameters, query string, and headers) to the corresponding integration request parameters and
from the integration response data (e.g. headers) to the HTTP API method response
parameters.
○ Use wildcard custom domain names (*.example.com) to create multiple URLs that route to one
API Gateway HTTP API.
○ You can configure your custom domain name to route requests to different APIs. Using
multi-level base path mappings, you can implement path-based API versioning and migrate API
traffic between APIs according to request paths with many segments.
● All of the APIs created expose HTTPS endpoints only. API Gateway does not support unencrypted (HTTP) endpoints.
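As a hedged illustration of an HTTP API, the following boto3 sketch uses the quick-create option to proxy all routes to a single Lambda function; the API name and function ARN are hypothetical:

    import boto3

    apigw = boto3.client("apigatewayv2")

    # Quick-create an HTTP API that proxies requests to one Lambda integration.
    api = apigw.create_api(
        Name="td-http-api",  # hypothetical
        ProtocolType="HTTP",
        Target="arn:aws:lambda:us-east-1:123456789012:function:orders-handler",
    )
    print("Invoke URL:", api["ApiEndpoint"])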
Amazon Route 53
● A highly available and scalable Domain Name System (DNS) web service used for domain registration, DNS routing, and health checking.
● Use the Route 53 console to register a domain name and configure Route 53 to route internet traffic to your website or web application.
● After you register your domain name, Route 53 automatically creates a public hosted zone that has the same name as the domain.
● To route traffic to your resources, you create records, also known as resource record sets, in your hosted zone.
● You can create special Route 53 records, called alias records, that route traffic to S3 buckets, CloudFront distributions, and other AWS resources.
● Each record includes information about how you want to route traffic for your domain, such as:
○ Name - the name of the record corresponds with the domain name or subdomain name that you
want Route 53 to route traffic for.
○ Type - determines the type of resource that you want traffic to be routed to.
○ Value
○ Create a health check and specify values that define how you want the health check to work, such as:
■ The IP address or domain name of the endpoint that you want Route 53 to monitor.
■ The protocol that you want Route 53 to use to perform the check: HTTP, HTTPS, or TCP.
■ The request interval at which you want Route 53 to send requests to the endpoint.
■ How many consecutive times the endpoint must fail to respond to requests before Route 53 considers it unhealthy. This is the failure threshold.
○ You can configure a health check to check the health of one or more other health checks.
○ You can configure a health check to check the status of a CloudWatch alarm so that you can be
notified on the basis of a broad range of criteria.
● Domain Registration Concepts - domain name, domain registrar, domain registry, domain reseller, top-level domain
● DNS Concepts
○ Alias record - a type of record that you can create to route traffic to AWS resources.
○ Hosted zone - a container for records, which includes information about how to route traffic for a domain and all of its subdomains.
○ Name servers - servers in the DNS that help to translate domain names into the IP addresses that computers use to communicate with one another.
○ Record (DNS record) - an object in a hosted zone that you use to define how you want to route traffic for the domain or a subdomain.
○ Routing policy - determines how Route 53 responds to DNS queries and routes users to your resources.
○ Subdomain - a name below the zone apex. Example: portal.tutorialsdojo.com
○ Time to live (TTL) - the time that the DNS record is cached by querying servers.
● Health Checking Concepts
○ DNS failover - a method for routing traffic away from unhealthy resources and to healthy resources.
○ Endpoint - the URL or endpoint on which the health check will be performed.
○ Health check - the metric on which to determine if an endpoint is healthy or not.
Records
● Create records in a hosted zone. Records define where you want to route traffic for each domain name or subdomain name. The name of each record in a hosted zone must end with the name of the hosted zone.
● Alias Records
○ Route 53 alias records provide a Route 53–specific extension to DNS functionality. Alias records let you route traffic to selected AWS resources. They also let you route traffic from one record in a hosted zone to another record.
○ You can create an alias record at the top node of a DNS namespace, also known as the zone
apex.
● CNAME Record
○ You cannot create a CNAME record at the top node (zone apex) of a DNS namespace. Use an alias record to route traffic at the zone apex (see the sketch after this list).
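A minimal boto3 sketch of creating an alias record at the zone apex that points to a CloudFront distribution; the hosted zone ID, domain, and distribution domain are hypothetical, while Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront alias targets use:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",  # hypothetical hosted zone
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "example.com.",          # zone apex
                        "Type": "A",
                        "AliasTarget": {
                            "HostedZoneId": "Z2FDTNDATAQYW2",            # CloudFront alias zone ID
                            "DNSName": "d111111abcdef8.cloudfront.net.",  # hypothetical distribution
                            "EvaluateTargetHealth": False,
                        },
                    },
                }
            ]
        },
    )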
Amazon CloudFront
Amazon CloudFront is a content delivery network (CDN) service that accelerates the distribution of dynamic and static web content, including HTML, CSS, JavaScript, and image files, to users worldwide. It achieves this by using a global network of data centers known as edge locations. CloudFront ensures that user requests are served from the nearest location, reducing latency and improving content delivery speed. When a user requests content served with CloudFront, the service directs the request to the edge location with the lowest latency. If the content is available at that location, it's delivered immediately; if not, CloudFront retrieves it from the specified origin, such as an Amazon S3 bucket or a web server, and delivers it to the user. This process ensures an efficient and rapid content delivery experience.
Key features:
● C loudFront's infrastructure is strategically located worldwide, including regional edge caches within
AWS regions, over 600 Points of Presence (PoPs) in more than 100 cities across over 50 countries, and
additional embedded PoPs within ISP networks. This extensive network ensures high-performance and
low-latency content delivery to end users globally.
● CloudFront provides strong security features to protect content and applications. It seamlessly
integrates with AWS Shield for DDoS protection, AWS WAF for application layer defense, and offers
SSL/TLS encryption for secure and protected content delivery. Access control mechanisms such as
signed URLs and cookies, token authentication, and geo-restriction capabilities further ensure that
content delivery is secure and compliant with global standards.
● Amazon CloudFront offers customizable content delivery and network response through CloudFront
Functions and AWS Lambda@Edge. This includes manipulation of HTTP headers, URL rewrites,
cache-key normalizations, and more directly at the edge locations. These features support high-scale,
latency-sensitive operations, enabling instant scalability and minimal latency for millions of requests
per second.
● Integration with Amazon CloudWatch provides real-time metrics and logging, offering insights into
distributions' operation. CloudFront supports standard and real-time logging for detailed analysis and
content delivery performance monitoring.
● CloudFront provides flexible pricing options, including pay-as-you-go and the CloudFront Security Savings Bundle, which offers discounts in exchange for a monthly spend commitment. This makes it a cost-effective solution for companies of all sizes looking to deliver content efficiently and at scale.
Reference:
https://aws.amazon.com/cloudfront/features/?nc=sn&loc=2&whats-new-cloudfront.sort-by=item.additionalFie
lds.postDateTime&whats-new-cloudfront.sort-order=desc
Elastic Load Balancing
● Distributes incoming application or network traffic across multiple targets, such as EC2 instances, containers (ECS), Lambda functions, and IP addresses, in multiple Availability Zones.
● When you create a load balancer, you must specify one public subnet from at least two Availability
Zones. You can specify only one public subnet per Availability Zone.
General features
● Accepts incoming traffic from clients and routes requests to its registered targets.
● Monitors the health of its registered targets and routes traffic only to healthy targets.
● Cross-Zone Load Balancing - when enabled, each load balancer node distributes traffic across the registered targets in all enabled AZs.
Types of Load Balancers
● Application Load Balancer
● Network Load Balancer
● Gateway Load Balancer
Features
● Slow Start Mode gives targets time to warm up before the load balancer sends them a full share of requests.
● Sticky sessions route requests to the same target in a target group. You enable sticky sessions at the target group level. You can also set the duration for the stickiness of the load balancer-generated cookie in seconds. Useful if you have stateful applications (see the sketch after the table below).
● Health checks verify the status of your targets. The statuses for a registered target are:
VALUE - DESCRIPTION
initial - The load balancer is in the process of registering the target or performing the initial health checks on the target.
healthy - The target is healthy.
unhealthy - The target did not respond to a health check or failed the health check.
unused - The target is not registered with a target group, the target group is not used in a listener rule, the target is in an Availability Zone that is not enabled, or the target is in the stopped or terminated state.
draining - The target is deregistering and connection draining is in process.
unavailable - Target health is unavailable.
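To make the health check and sticky session settings concrete, here is a hedged boto3 sketch; the target group name and VPC ID are hypothetical:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Target group with HTTP health checks and healthy/unhealthy thresholds.
    tg = elbv2.create_target_group(
        Name="td-web-tg",                 # hypothetical
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",    # hypothetical VPC
        HealthCheckProtocol="HTTP",
        HealthCheckPath="/health",
        HealthyThresholdCount=3,
        UnhealthyThresholdCount=2,
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Enable sticky sessions (load balancer-generated cookie) for one hour.
    elbv2.modify_target_group_attributes(
        TargetGroupArn=tg_arn,
        Attributes=[
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
        ],
    )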
AWS Transit Gateway
● A networking service that uses a hub-and-spoke model to enable customers to connect their on-premises data centers and their Amazon Virtual Private Clouds (VPCs) to a single gateway.
● With this service, customers only have to create and manage a single connection from the central
gateway into each on-premises data center, remote office, or VPC across your network.
● When a new VPC is connected to the Transit Gateway, it is automatically available to every other network that is also connected to the Transit Gateway (see the sketch below).
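A hedged boto3 sketch of creating a transit gateway and attaching a VPC to it; all IDs are hypothetical, and additional VPC or VPN attachments follow the same pattern:

    import boto3

    ec2 = boto3.client("ec2")

    tgw = ec2.create_transit_gateway(Description="Hub for shared-services routing")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach a VPC (one subnet per AZ) to the transit gateway hub.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",                     # hypothetical VPC
        SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # hypothetical subnets
    )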
Reference:
https://aws.amazon.com/transit-gateway/
Amazon GuardDuty
● A n intelligent threat detection service. It analyzes billions of events across your AWS accounts from
AWS CloudTrail (AWS user and API activity in your accounts), Amazon VPC Flow Logs (network traffic
data), and DNS Logs (name query patterns). Take note that GuardDuty is a regional service.
● Threat detection categories
○ Reconnaissance - activity suggesting reconnaissance by an attacker, such as unusual API activity, intra-VPC port scanning, unusual patterns of failed login requests, or unblocked port probing from a known bad IP.
○ Instance compromise - activity indicating an instance compromise, such as cryptocurrency mining, backdoor command and control activity, malware using domain generation algorithms, outbound denial of service activity, unusually high volume of network traffic, unusual network protocols, outbound instance communication with a known malicious IP, temporary Amazon EC2 credentials used by an external IP address, and data exfiltration using DNS.
○ Account compromise - common patterns indicative of account compromise include API calls from an unusual geolocation or anonymizing proxy, attempts to disable AWS CloudTrail logging, changes that weaken the account password policy, unusual instance or infrastructure launches, infrastructure deployments in an unusual region, and API calls from known malicious IP addresses.
● CloudTrail Event Source
○ Currently, GuardDuty only analyzes CloudTrail management events. (Read about types of
CloudTrail trails for more information)
○ GuardDuty processes all CloudTrail events that come into a region, including global events that
CloudTrail sends to all regions, such as AWS IAM, AWS STS, Amazon CloudFront, and Route 53.
● VPC Flow Logs Event Source
○ VPC Flow Logs capture information about the IP traffic going to and from Amazon EC2 network
interfaces in your VPC.
● DNS Logs Event Source
○ If you use AWS DNS resolvers for your EC2 instances (the default setting), then GuardDuty can access and process your request and response DNS logs through the internal AWS DNS resolvers. Using other DNS resolvers will not give GuardDuty access to your DNS logs.
● GuardDuty vs Macie
○ Amazon GuardDuty provides broad protection of your AWS accounts, workloads, and data by
helping to identify threats such as attacker reconnaissance, instance compromise, and account
compromise. Amazon Macie helps you protect your data in Amazon S3 by helping you classify
what data you have, the value that data has to the business, and the behavior associated with
access to that data.
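Because GuardDuty is a regional service, you enable a detector per Region; a minimal boto3 sketch (the finding publishing frequency shown is one of the allowed values):

    import boto3

    # Enable GuardDuty in a single Region; repeat per Region as needed.
    guardduty = boto3.client("guardduty", region_name="us-east-1")

    detector = guardduty.create_detector(
        Enable=True,
        FindingPublishingFrequency="FIFTEEN_MINUTES",
    )
    print("Detector ID:", detector["DetectorId"])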
Amazon Inspector
● A n automated security assessment service that helps you test the network accessibility of your EC2
instances and the security state of your applications running on the instances.
● Inspector uses IAM service-linked roles.
Features
● Inspector provides an engine that analyzes system and resource configuration and monitors activity to
determine what an assessment target looks like, how it behaves, and its dependent components. The
combination of this telemetry provides a complete picture of the assessment target and its potential
security or compliance issues.
● Inspector incorporates a built-in library of rules and reports. These include checks against best
practices, common compliance standards and vulnerabilities.
● Automate security vulnerability assessments throughout your development and deployment pipeline or
against static production systems.
● Inspector is an API-driven service that uses an optional agent, making it easy to deploy, manage, and automate.
Concepts
● Inspector Agent - A software agent that you can install on all EC2 instances that are included in the assessment target, the security of which you want to evaluate with Inspector.
● Assessment run - The process of discovering potential security issues through the analysis of your assessment target's configuration and behavior against specified rules packages.
● Assessment target - A collection of AWS resources that work together as a unit to help you accomplish your business goals. Inspector assessment targets can consist only of EC2 instances.
● Assessment template - A configuration that is used during your assessment run, which includes:
○ Rules packages against which you want Inspector to evaluate your assessment target,
○ The duration of the assessment run,
○ Amazon SNS topics to which you want Inspector to send notifications about assessment run states and findings,
○ Inspector-specific attributes (key-value pairs) that you can assign to findings generated by the assessment run that uses this assessment template.
○ After you create an assessment template, you can't modify it.
● Finding - A potential security issue discovered during the assessment run of the specified target.
● Rule - A security check performed during an assessment run. When a rule detects a potential security issue, Inspector generates a finding that describes the issue.
● Rules package - A collection of rules that corresponds to a security goal that you might have.
● Telemetry - EC2 instance data collected by Inspector during an assessment run and passed to the Inspector service for analysis.
● The telemetry data generated by the Inspector Agent during assessment runs is formatted in JSON files and delivered in near-real-time over TLS to Inspector, where it is encrypted with a per-assessment-run, ephemeral KMS-derived key and securely stored in an S3 bucket dedicated to the service.
Assessment Reports
● A document that details what is tested in the assessment run and the results of the assessment.
● You can view the following types of assessment reports:
○ Findings report - this report contains the following information:
■ Executive summary of the assessment
■ EC2 instances evaluated during the assessment run
■ Rules packages included in the assessment run
■ Detailed information about each finding, including all EC2 instances that had the finding
○ Full report - this report contains all the information that is included in a findings report, and additionally provides the list of rules that passed on all instances in the assessment target.
Amazon Macie
● A security service that uses machine learning to automatically discover, classify, and protect sensitive
data in AWS. Macie recognizes sensitive data such as personally identifiable information (PII) or
intellectual property.
● Amazon Macie allows you to achieve the following:
○ Identify and protect various data types, including PII, PHI, regulatory documents, API keys, and
secret keys
○ Verify compliance with automated logs that allow for instant auditing
○ Identify changes to policies and access control lists
○ Observe changes in user behavior and receive actionable alerts
○ Receive notifications when data and account credentials leave protected zones
○ Detect when large quantities of business-critical documents are shared internally and externally
● Concepts
○ An Alert is a notification about a potential security issue that Macie discovers. Alerts appear on the Macie console and provide a comprehensive narrative about all activity that occurred over the last 24 hours.
■ Basic alerts – Alerts that are generated by the security checks that Macie performs.
There are two types of basic alerts in Macie:
■ Managed (curated by Macie) basic alerts that you can't modify. You can only
enable or disable the existing managed basic alerts.
■ Custom basic alerts that you can create and modify to your exact specifications.
■ Predictive alerts – Automatic alerts based on activity in your AWS infrastructure that
deviates from the established normal activity baseline. More specifically, Macie
continuously monitors IAM user and role activity in your AWS infrastructure and builds a
model of normal behavior. It then looks for deviations from that normal baseline, and
when it detects such activity, it generates automatic predictive alerts.
○ A Data source is the origin or location of a set of data.
■ AWS CloudTrail event logs and errors, including Amazon S3 object-level API activity. You
can't modify existing or add new CloudTrail events to the list that Macie manages. You
can enable or disable the supported CloudTrail events, thus instructing Macie to either
include or exclude them in its data security process.
■ Amazon S3 objects. You can integrate Macie with your S3 buckets and/or specify S3
prefixes
○ A User, in the context of Macie, is the AWS Identity and Access Management (IAM) identity that makes the request.
AWS Key Management Service (AWS KMS)
● A managed service that enables you to easily encrypt your data. KMS provides a highly available key storage, management, and auditing solution for you to encrypt data within your own applications and control the encryption of stored data across AWS services.
● It works almost like CloudHSM since, under the hood, KMS also uses hardware security modules that
make it easy for you to create and control your encryption keys. But unlike CloudHSM, this service has
multi-tenant access, which means you share the HSM with other tenants or AWS customers. You also
cannot launch an HSM to Amazon VPC or EC2 instances that you own. The HSM is fully managed by
the Amazon Web Services team themselves. AWS KMS can be integrated with other AWS services to
help you protect the data you store with these services. For example, encrypting volumes or snapshots
in Amazon EBS is powered by AWS KMS as well as Server-Side encryption (SSE-KMS) in Amazon S3
and database encryption in Amazon RDS.
● AWS KMS uses envelope encryption, which is the practice of encrypting your plaintext data with a data
key, and then encrypting that data key under another key, called the master key. The primary resources in KMS are called customer master keys, or CMKs. A CMK is basically a representation of the master key that encrypts your data key. With AWS KMS, you can store your CMKs and automatically rotate them to
meet your encryption requirements. You can also create a custom key store in AWS KMS with
CloudHSM. This custom key store provides complete control over your encryption key lifecycle
management and allows you to remove the key material of your encryption keys. You can also audit key
usage independently of AWS CloudTrail or KMS itself.
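A small boto3 sketch of envelope encryption with a data key; the key alias is hypothetical. The plaintext data key is used locally and then discarded, while the encrypted copy is stored alongside the ciphertext:

    import boto3

    kms = boto3.client("kms")

    # Ask KMS for a data key under a CMK (returned in plaintext and encrypted forms).
    data_key = kms.generate_data_key(
        KeyId="alias/td-app-key",   # hypothetical CMK alias
        KeySpec="AES_256",
    )
    plaintext_key = data_key["Plaintext"]       # use locally to encrypt data, then discard
    encrypted_key = data_key["CiphertextBlob"]  # store next to the encrypted data

    # Later, recover the plaintext data key by asking KMS to decrypt the stored copy.
    restored = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
    assert restored == plaintext_key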
Features
● KMS is integrated with CloudTrail, which provides you the ability to audit who used which keys, on which resources, and when.
● Customer master keys (CMKs) are used to control access to data encryption keys that encrypt and
decrypt your data.
● You can choose to have KMS automatically rotate master keys created within KMS once per year
without the need to re-encrypt data that has already been encrypted with your master key.
Concepts
● Customer Master Keys (CMKs) - You can use a CMK to encrypt and decrypt up to 4 KB of data. Typically, you use CMKs to generate, encrypt, and decrypt the data keys that you use outside of KMS to encrypt your data. Master keys are 256 bits in length.
● There are three types of CMKs:
Type of CMK | Can view | Can manage | Used only for my AWS account
Customer managed CMK | Yes | Yes | Yes
AWS managed CMK | Yes | No | Yes
AWS owned CMK | No | No | No
○ Customer managed CMKs are CMKs that you create, own, and manage. You have full control over these CMKs, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the CMK, and scheduling the CMKs for deletion.
○ AWS managed CMKs are CMKs in your account that are created, managed, and used on your behalf by an AWS service that integrates with KMS. You can view the AWS managed CMKs in your account, view their key policies, and audit their use in CloudTrail logs. However, you cannot manage these CMKs or change their permissions. And you cannot use AWS managed CMKs in cryptographic operations directly; the service that creates them uses them on your behalf.
○ AWS owned CMKs are not in your AWS account. They are part of a collection of CMKs that AWS owns and manages for use in multiple AWS accounts. AWS services can use AWS owned CMKs to protect your data. You cannot view, manage, or use AWS owned CMKs, or audit their use.
AWS Certificate Manager (ACM)
● A service that lets you easily provision, manage, and deploy public and private SSL/TLS certificates for
use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure
network communications and establish the identity of websites over the Internet as well as resources
on private networks.
● ACM is integrated with the following services:
○ Elastic Load Balancing
○ Amazon CloudFront - To use an ACM certificate with CloudFront, you must request or import the
certificate in the US East (N. Virginia) region.
○ AWS Elastic Beanstalk
○ Amazon API Gateway
○ AWS CloudFormation
● AWS Certificate Manager manages the renewal process for the certificates managed in ACM and used
with ACM-integrated services.
● You can import your own certificates into ACM, however, you have to renew these yourself.
● Concepts
○ ACM Certificates are X.509 version 3 certificates. Each is valid for 13 months.
○ When you request an ACM certificate, you must validate that you own or control all of the
domains that you specify in your request.
○ Each ACM Certificate must include at least one fully qualified domain name (FQDN). You can
add additional names if you want to.
○ You can create an ACM Certificate containing a wildcard name (*.example.com) that can
protect several sites in the same domain (subdomains).
○ You cannot download the private key for an ACM Certificate.
○ The first time you request or import a certificate in an AWS region, ACM creates an
AWS-managed customer master key (KMS key) in AWS KMS with the alias aws/acm. This KMS
key is unique in each AWS account and each AWS region. ACM uses this KMS key to encrypt the
certificate's private key.
○ You cannot add or remove domain names from an existing ACM Certificate. Instead, you must
request a new certificate with the revised list of domain names.
○ You cannot delete an ACM Certificate that is being used by another AWS service. To delete a
certificate that is in use, you must first remove the certificate association.
○ Applications and browsers trust public certificates automatically by default, whereas an
administrator must explicitly configure applications to trust private certificates.
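A minimal boto3 sketch of requesting a public certificate with DNS validation for a domain plus a wildcard alternative name; the domain names are hypothetical:

    import boto3

    # For use with CloudFront, the certificate must be requested in us-east-1.
    acm = boto3.client("acm", region_name="us-east-1")

    cert = acm.request_certificate(
        DomainName="example.com",                   # hypothetical FQDN
        SubjectAlternativeNames=["*.example.com"],  # wildcard covers subdomains
        ValidationMethod="DNS",
    )
    print("Certificate ARN:", cert["CertificateArn"])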
● Types of Certificates For Use With ACM
○ Public certificates
■ ACM manages the renewal and deployment of public certificates used with
ACM-integrated services.
■ You cannot install public ACM certificates directly on your website or application, only
for integrated services.
References:
https://aws.amazon.com/certificate-manager/
https://aws.amazon.com/certificate-manager/faqs/
https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html
https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaWelcome.html
AWS Directory Service for Microsoft Active Directory
● Also known as AWS Managed Microsoft AD, the service enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.
● The service is built on the actual Microsoft Active Directory and powered by Windows Server 2012 R2.
● AWS Managed Microsoft AD is your best choice if you need actual Active Directory features to support
AWS applications or Windows workloads, including Amazon RDS for Microsoft SQL Server. It's also best
if you want a standalone AD in the Cloud that supports Office 365 or you need an LDAP directory to
support your Linux applications.
● Concepts
○ AWS Managed Microsoft AD provides multiple directory choices for customers who want to use
existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)–aware applications in
the cloud.
○ When you create a directory, AWS Directory Service creates two domain controllers and adds
the DNS service on your behalf. The domain controllers are created in different subnets in a VPC
○ When creating a directory, you need to provide some basic information such as a fully qualified
domain name (FQDN) for your directory, the Administrator account name and password, and the
VPC you want the directory to be attached to.
○ AWS does not provide Windows PowerShell access to directory instances, and it restricts
access to directory objects, roles, and groups that require elevated privileges.
○ AWS Managed Microsoft AD does not allow direct host access to domain controllers via Telnet,
Secure Shell (SSH), or Windows Remote Desktop Connection.
○ When you create an AWS Managed Microsoft AD directory, you are assigned an organizational
unit (OU) and an administrative account with delegated administrative rights for the OU.
○ AWS Managed Microsoft AD directories are deployed across two Availability Zones in a region by default and connected to your Amazon VPC.
○ You cannot configure the storage, CPU, or memory parameters of your AWS Managed Microsoft
AD directory.
● Active Directory Schema
○ A schema is the definition of attributes and classes that are part of a distributed directory and is similar to fields and tables in a database. Schemas include a set of rules which determine the type and format of data that can be added or included in the database.
○ Attributes, classes, and objects are the basic elements that are used to build object definitions
in the schema.
■ Each schema attribute, which is similar to a field in a database, has several properties
that define the characteristics of the attribute.
■ The classes are analogous to tables in a database and also have several properties to be
defined.
■ Each class and attribute must have an Object ID that is unique for all of your objects. Software vendors must obtain their own Object ID to ensure uniqueness.
■ Some attributes are linked between two classes with forward and backlinks, such as
groups. A group shows you the members of the group, while a member shows what
groups it belongs to.
● Features
○ AWS Managed Microsoft AD is deployed in HA and across multiple Availability Zones. You can
also scale out your directory by deploying additional domain controllers.
○ AWS Managed Microsoft AD runs on AWS-managed infrastructure with monitoring that
automatically detects and replaces domain controllers that fail.
○ Data replication and automated daily snapshots are configured for you.
○ You can integrate AWS Managed Microsoft AD easily with your existing Active Directory by using Active Directory trust relationships.
○ Allows seamless domain join for new and existing Amazon EC2 for Windows Server instances.
○ AWS Managed Microsoft AD can also provide a single directory for all kinds of workloads (EC2,
RDS, WorkSpaces, etc).
○ The service supports schema extensions that you submit to the service in the form of an LDAP Data Interchange Format (LDIF) file.
○ You can configure Amazon SNS to receive email and text messages when the status of your
AWS Directory Service changes.
○ You can configure SAML 2.0–based authentication with cloud applications using AWS Directory
Service.
○ You can use AWS Managed Microsoft AD as a resource forest that contains primarily
computers and groups with trust relationships to your on-premises directory. This enables your
users to access AWS applications and resources with their on-premises AD credentials.
● Microsoft AD Prerequisites
○ A VPC with at least two subnets. Each of the subnets must be in a different Availability Zone.
○ The necessary ports for the domain controllers that AWS Directory Service creates for you
should be open to allow them to communicate with each other.
○ The VPC must have default hardware tenancy.
○ AWS Directory Service does not support using NAT with Active Directory.
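Once the VPC prerequisites above are in place, a managed directory can be created with a hedged boto3 sketch like this; the domain name, password, subnet IDs, and edition are hypothetical:

    import boto3

    ds = boto3.client("ds")

    ds.create_microsoft_ad(
        Name="corp.example.com",                 # hypothetical FQDN for the directory
        Password="ReplaceWithASecret1!",         # Admin account password
        Edition="Standard",
        VpcSettings={
            "VpcId": "vpc-0123456789abcdef0",    # hypothetical VPC
            "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # two different AZs
        },
    )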
● Two Editions of AWS Managed Microsoft AD
○ Both Standard Edition and Enterprise Edition can be used as your organization’s primary
directory to manage users, devices, and computers.
○ You can also use both editions to create resource forests and extend your on-premises AD to the AWS Cloud. Resource forests use a trust relationship with your on-premises AD to enable you to access AWS applications and resources with your on-premises AD credentials.
○ Both editions also support the creation of additional domain controllers to improve the
redundancy and performance of your managed directory.
○ Unique to Standard Edition
■ Optimized to be a primary directory for small and midsize businesses with up to 5,000 employees.
■ Provides you with enough storage capacity to support up to approximately 30,000
directory objects, such as users, groups, and computers.
○ Unique to Enterprise Edition
■ Designed to support enterprise organizations with up to approximately 500,000 directory
objects.
● Seamless Domain Joins
○ Seamless domain join is a feature that allows you to join your Amazon EC2 for Windows Server instances seamlessly to a domain, at the time of launch and from the AWS Management Console. You can join instances to AWS Managed Microsoft AD directories that you launch in the AWS Cloud.
○ You cannot use the seamless domain join feature from the AWS Management Console for existing EC2 for Windows Server instances, but you can join existing instances to a domain using the EC2 API or by using PowerShell on the instance.
● Security and Monitoring
○ AWS Managed Microsoft AD is both HIPAA and PCI DSS compliant.
○ Manage users and devices by using native Active Directory Group Policy objects (GPOs).
○ AWS Managed Microsoft AD uses the same Kerberos-based authentication as Active Directory
to deliver IAM Identity Center.
○ AWS Managed Microsoft AD supports federation access for users and groups to the AWS
Management Console.
○ Amazon EBS volumes used in the directory service are encrypted.
● Pricing
○ You pay only for the type and size of the managed directory that you use.
○ AWS Managed Microsoft AD allows you to use a directory in one account and share it with
multiple accounts and VPCs. There is an hourly sharing charge for each additional account to
which you share a directory.
AD Connector
● A proxy service that provides an easy way to connect compatible AWS applications, such as Amazon WorkSpaces, Amazon QuickSight, and Amazon EC2 for Windows Server instances, to your existing on-premises Microsoft Active Directory.
● AD Connector is your best choice when you want to use your existing on-premises directory with
compatible AWS services.
● Features
○ When users log in to the AWS applications, AD Connector forwards sign-in requests to your
on-premises Active Directory domain controllers for authentication.
○ Y ou can also join your EC2 Windows instances to your on-premises Active Directory domain
through AD Connector using seamless domain join.
○ AD Connector is NOT compatible with RDS SQL Server.
○ AD Connector comes in two sizes, small and large.
○ You can spread application loads across multiple AD Connectors to scale to your performance
needs. There are no enforced user or connection limits.
● AD Connector Prerequisites
○ You need to have a VPC with at least two subnets. Each of the subnets must be in a different
Availability Zone.
○ The VPC must be connected to your existing network through a virtual private network (VPN)
connection or AWS Direct Connect.
○ The VPC must have default hardware tenancy.
○ Your user accounts must have Kerberos pre-authentication enabled.
Simple AD
● A standalone Microsoft Active Directory–compatible directory from AWS Directory Service that is powered by Samba 4.
● You can use Simple AD as a standalone directory in the cloud to support Windows workloads that need
basic AD features, compatible AWS applications, or to support Linux workloads that need LDAP service.
● Features
○ Simple AD supports basic Active Directory features such as user accounts, group memberships,
joining a Linux domain or Windows-based EC2 instances, Kerberos-based SSO, and group
policies.
○ AWS provides monitoring, daily snapshots, and recovery as part of the service.
○ Simple AD is compatible with the following AWS applications: Amazon WorkSpaces, Amazon
WorkDocs, Amazon QuickSight, and Amazon WorkMail.
○ You can also sign in to the AWS Management Console with Simple AD user accounts.
○ Simple AD does NOT support multi-factor authentication, trust relationships, DNS dynamic
update, schema extensions, communication over LDAPS, PowerShell AD cmdlets, or FSMO role
transfer.
○ Simple AD is NOT compatible with RDS SQL Server.
○ Simple AD is available in two sizes:
■ Small - Supports up to 500 users
■ Large - Supports up to 5,000 users
● Simple AD Prerequisites
○ Your VPC should have at least two subnets. For Simple AD to install correctly, you must install your two domain controllers in separate subnets, each in a different Availability Zone. In addition, the subnets must be in the same Classless Inter-Domain Routing (CIDR) range.
○ The necessary ports for the domain controllers that AWS Directory Service creates for you
should be open to allow them to communicate with each other.
○ The VPC must have default hardware tenancy.
● When you create a directory with Simple AD, AWS Directory Service performs the following tasks on
your behalf:
○ Sets up a Samba-based directory within the VPC.
○ Creates a directory administrator account with the user name ‘Administrator’ and the specified
password. You use this account to manage your directory.
○ Creates a security group for the directory controllers.
○ Creates an account that has domain admin privileges.
● Simple AD forwards DNS requests to the IP address of the Amazon-provided DNS servers for your VPC. These DNS servers will resolve names configured in your Route 53 private hosted zones.
Amazon Cloud Directory
● A cloud-native directory that can store hundreds of millions of application-specific objects with multiple relationships and schemas. Use Amazon Cloud Directory if you need a highly scalable directory store for your application's hierarchical data.
● You can organize directory objects into multiple hierarchies to support many organizational pivots and
relationships across directory information.
● Concepts
○ A schema is a collection of facets that define what objects can be created in a directory and
how they are organized.
○ A schema also enforces data integrity and interoperability.
○ A single schema can be applied to more than one directory at a time.
○ Amazon Cloud Directory supports uploading a compliant JSON file for schema creation.
○ A directory is a schema-based data store that contains specific types of objects organized in a
multi-hierarchical structure.
○ Before you can create a directory in Amazon Cloud Directory, AWS Directory Service requires
that you first apply a schema to it. A directory cannot be created without a schema and typically
has one schema applied to it.
References:
https://aws.amazon.com/directoryservice/features/?nc=sn&loc=2
https://docs.aws.amazon.com/clouddirectory/latest/developerguide/what_is_cloud_directory.html
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html
AWS Resource Access Manager (RAM)
● A service that enables you to easily and securely share AWS resources with any AWS account or, if you are part of AWS Organizations, with Organizational Units (OUs) or your entire Organization. If you share resources with accounts that are outside of your Organization, then those accounts will receive an invitation to the Resource Share and can start using the shared resources upon accepting the invitation.
○ Only the management account can enable sharing with AWS Organizations.
○ The organization must be enabled for all features.
● RAM eliminates the need to create duplicate resources in multiple accounts. You can create resources
centrally in a multi-account environment and use RAM to share those resources across accounts in
three simple steps:
1. Create a Resource Share
2. Specify resources
3. Specify accounts
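These three steps map onto a single AWS RAM API call. Below is a minimal boto3 sketch under assumed values; the share name, subnet ARN, and account ID are placeholders, and the parameter names follow the RAM CreateResourceShare API:

import boto3

ram = boto3.client("ram")

# Steps 1-3 in one call: create the share, attach the resources,
# and specify the principals (accounts, OUs, or an organization ARN).
share = ram.create_resource_share(
    name="shared-subnets",  # placeholder share name
    resourceArns=["arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0abc1234"],  # placeholder
    principals=["222222222222"],  # placeholder account ID
    allowExternalPrincipals=False,  # restrict sharing to accounts in the organization
)

# Stop sharing later by deleting the resource share.
ram.delete_resource_share(
    resourceShareArn=share["resourceShare"]["resourceShareArn"]
)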
● You can stop sharing a resource by deleting the share in AWS RAM.
● Services you can share with AWS RAM include resources such as VPC subnets, Transit Gateways, Route 53 Resolver rules, and License Manager configurations, among many other resource types.
● Security
○ Use IAM policies to secure who can access resources that you shared or received from another
account.
● Pricing
○ There is no additional charge for using AWS RAM.
References:
https://aws.amazon.com/ram/
https://aws.amazon.com/ram/faqs/
AWS Security Hub
● AWS Security Hub provides a comprehensive view of your security state within AWS and your compliance with security industry standards and best practices.
● Features
○ You now have a single place that aggregates, organizes, and prioritizes your security alerts, or
findings, across multiple accounts, AWS partner tools, and AWS services such as Amazon
GuardDuty, Amazon Inspector, Amazon Macie, AWS IAM Access Analyzer, AWS Firewall
Manager, and AWS Audit Manager.
○ AWS Security Hub works with AWS Organizations to simplify security posture management
across all of your existing and future AWS accounts in an organization.
○ You can run automated, continuous account-level configuration and compliance checks based
on industry standards and best practices, such as the Center for Internet Security (CIS) AWS
Foundations Benchmark. These checks provide a compliance score and identify specific
accounts and resources that require attention.
○ AWS Security Hub compliance checks also leverage configuration items recorded by AWS
Config.
○ Integrated dashboards consolidate your security findings across accounts to show you their
current security and compliance status.
○ You can send security findings to ticketing, chat, email, or automated remediation systems
through integration with Amazon EventBridge (Amazon CloudWatch Events).
○ All findings are stored for at least 90 days within AWS Security Hub.
● Security Hub receives and processes only those findings from the same Region where you enabled
Security Hub in your account.
● Concepts
○ AWS Security Finding Format - A standardized format for the contents of findings that Security
Hub aggregates or generates.
○ Control - A safeguard or countermeasure prescribed for an information system or an
organization designed to protect the confidentiality, integrity, and availability of its information
and to meet a set of defined security requirements. A security standard consists of controls.
○ Custom action - A Security Hub mechanism for sending selected findings to Amazon EventBridge (Amazon CloudWatch Events).
○ Finding - The observable record of a compliance check or security-related detection.
○ Insight - A collection of related findings defined by an aggregation statement and optional
filters. An insight identifies a security area that requires attention and intervention.
○ Compliance standards - Sets of controls that are based on regulatory requirements or best
practices.
○ You can disable specific compliance controls that are not relevant to your workloads.
● Compliance standard vs. Control vs. Compliance check
○ A compliance standard is a collection of controls based on regulatory frameworks or industry best practices. Security Hub conducts automated compliance checks against controls. Each compliance check consists of an evaluation of a rule against a single resource. A single control may involve multiple resources, and a compliance check is performed against each resource.
○ AWS Security Hub uses a service-linked role that includes the permissions and trust policy that Security Hub requires to detect and aggregate findings and to configure the requisite AWS Config infrastructure needed to run compliance checks. In order for Security Hub to run compliance checks in an account, you must have AWS Config enabled in that account.
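As a hedged illustration, the boto3 sketch below enables Security Hub with the default standards and then disables one control that is assumed to be irrelevant to the workload; the control ARN is a placeholder that you would look up with describe_standards_controls in your own account and Region:

import boto3

securityhub = boto3.client("securityhub")

# Enable Security Hub in this account and Region, subscribing to the default
# standards (e.g., the CIS AWS Foundations Benchmark). AWS Config must be enabled.
securityhub.enable_security_hub(EnableDefaultStandards=True)

# Disable a specific control that does not apply to this workload.
securityhub.update_standards_control(
    StandardsControlArn=(
        "arn:aws:securityhub:us-east-1:111111111111:control/"
        "cis-aws-foundations-benchmark/v/1.2.0/1.14"  # placeholder control ARN
    ),
    ControlStatus="DISABLED",
    DisabledReason="Handled by a separate compliance process",
)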
● Pricing
○ AWS Security Hub is priced based on the quantity of compliance checks and the quantity of finding ingestion events.
○ Pricing is on a monthly per-account, per-region basis.
References:
https://aws.amazon.com/security-hub/
https://aws.amazon.com/security-hub/faqs/
AWS Security Token Service (AWS STS)
Key Features:
● AWS STS provides short-term credentials that can last from a few minutes to several hours, expiring
automatically after the set duration. This mitigates risks associated with long-term credential exposure.
● By default, STS operates globally with a central endpoint. However, AWS recommends utilizing regional
endpoints to reduce latency, enhance redundancy, and potentially increase session token validity.
● AWS STS is compatible with AWS CloudTrail, allowing for detailed logging of STS API calls. This feature
aids in auditing and monitoring the use of temporary credentials across AWS resources.
● AWS STS supports identity federation, allowing users from external systems to access AWS resources
without AWS-specific credentials. It also supports cross-account roles, enabling resource access
across different AWS accounts without direct identity provisioning within each account.
● You can specify the desired validity period for the temporary credentials, tailored to the needs of
specific tasks or operations within AWS environments.
● assume-role-with-web-identity: To assume a role with web identity federation.
● decode-authorization-message: To decode additional information about authorization status.
● get-access-key-info: For retrieving information about the access key used in a request.
● get-caller-identity: To retrieve details about the entity making the call.
● get-federation-token: To get a federation token for a federated user.
● get-session-token: For getting a session token for MFA or in cases where none is required.
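For example, assuming a cross-account role with temporary credentials can look like the following boto3 sketch; the role ARN and session name are placeholders:

import boto3

sts = boto3.client("sts")

# Request temporary credentials for a role in another account (placeholder ARN).
resp = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/DeploymentRole",  # placeholder
    RoleSessionName="devops-session",
    DurationSeconds=3600,  # credentials expire automatically after one hour
)
creds = resp["Credentials"]

# Use the short-lived credentials for subsequent calls in the target account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])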
References:
https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html
https://awscli.amazonaws.com/v2/documentation/api/2.0.33/reference/sts/index.html
AWS Shield
● A managed Distributed Denial of Service (DDoS) protection service that safeguards applications
running on AWS.
● Standard
○ All AWS customers benefit from the automatic protections of Shield Standard.
○ Shield Standard provides always-on network flow monitoring, which inspects incoming traffic to
AWS and detects malicious traffic in real-time.
○ Uses several techniques like deterministic packet filtering and priority-based traffic shaping to
automatically mitigate attacks without impact to your applications.
○ When you use Shield Standard with CloudFront and Route 53, you receive comprehensive
availability protection against all known infrastructure attacks.
○ You can also view all the events detected and mitigated by AWS Shield in your account.
● Advanced
○ Shield Advanced provides enhanced detection, inspecting network flows, and also monitoring
application layer traffic to your Elastic IP address, Elastic Load Balancing, CloudFront, or Route
53 resources.
○ It handles the majority of DDoS protection and mitigation responsibilities for layer 3, layer 4, and layer 7 attacks.
○ You have 24x7 access to the AWS DDoS Response Team. To contact the DDoS Response Team,
customers will need the Enterprise or Business Support levels of AWS Premium Support.
○ It automatically provides additional mitigation capacity to protect against larger DDoS attacks.
The DDoS Response Team also applies manual mitigations for more complex and sophisticated
DDoS attacks.
○ It gives you complete visibility into DDoS attacks with near real-time notification via CloudWatch
and detailed diagnostics on the “AWS WAF and AWS Shield” Management Console.
○ Shield Advanced comes with “DDoS cost protection”, a safeguard from scaling charges as a
result of a DDoS attack that causes usage spikes on your AWS services. It does so by providing
service credits for charges due to usage spikes.
○ It is available globally on all CloudFront and Route 53 edge locations.
○ With Shield Advanced, you will be able to see the history of all incidents in the trailing 13
months.
Pricing
● Shield Standard is available to all AWS customers at no additional cost.
● Shield Advanced, however, is a paid service. It requires a 1-year subscription commitment and charges a monthly fee, plus a usage fee based on data transfer out from CloudFront, ELB, EC2, and AWS Global Accelerator.
References:
https://aws.amazon.com/shield/features/
https://aws.amazon.com/shield/pricing/
https://aws.amazon.com/shield/faqs/
AWS WAF
● A web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define.
● These conditions include:
○ IP addresses
○ HTTP headers
○ HTTP body
○ URI strings
○ SQL injection
○ cross-site scripting.
Features
● WAF lets you create rules to filter web traffic based on conditions that include IP addresses, HTTP headers, and body, or custom URIs.
● You can also create rules that block common web exploits like SQL injection and cross-site scripting.
● For application layer attacks, you can use WAF to respond to incidents. You can set up proactive rules like Rate-Based Blacklisting to automatically block bad traffic or respond immediately to incidents as they happen.
● WAF provides real-time metrics and captures raw requests that include details about IP addresses,
geo-locations, URIs, User-Agent, and Referers.
● AWS WAF Security Automations is a solution that automatically deploys a single web access control
list (web ACL) with a set of AWS WAF rules designed to filter common web-based attacks. The solution
supports log analysis using Amazon Athena and AWS WAF full logs.
● You define your conditions, combine your conditions into rules, and combine the rules into a web ACL.
● Conditions define the basic characteristics that you want WAF to watch for in web requests.
● You combine conditions into rules to precisely target the requests that you want to allow, block, or count. WAF provides two types of rules:
○ Regular rules - use only conditions to target specific requests.
○ Rate-based rules - are similar to regular rules, with a rate limit. Rate-based rules count the requests that arrive from a specified IP address every five minutes. The rule can trigger an action if the number of requests exceeds the rate limit.
● WAF Managed Rules are an easy way to deploy pre-configured rules to protect your applications from common threats like application vulnerabilities. All Managed Rules are automatically updated by AWS Marketplace security sellers.
● After you combine your conditions into rules, you combine the rules into a web ACL. This is where you define an action for each rule (allow, block, or count) and a default action, which determines whether to allow or block a request that doesn't match all the conditions in any of the rules in the web ACL.
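The description above follows the classic AWS WAF model of conditions, rules, and web ACLs. As a hedged sketch, the boto3 example below expresses the same idea with the current WAFv2 API, creating a regional web ACL with one rate-based rule; the names and the 2,000-request limit are assumptions:

import boto3

wafv2 = boto3.client("wafv2")

# Web ACL with a default Allow action and one rate-based rule that blocks any
# source IP exceeding 2,000 requests in a 5-minute window.
wafv2.create_web_acl(
    Name="td-demo-web-acl",  # placeholder name
    Scope="REGIONAL",        # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 1,
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIP",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "TdDemoWebAcl",
    },
)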
Pricing
● WAF charges are based on the number of web access control lists (web ACLs) that you create, the number of rules that you add per web ACL, and the number of web requests that you receive.
References:
https://aws.amazon.com/waf/features/
https://aws.amazon.com/waf/pricing/
https://aws.amazon.com/waf/faqs/
AWS Organizations
Features
● With Organizations, you can create groups of accounts and then apply policies to those groups.
● Organizations provide you with a policy framework for multiple AWS accounts. You can apply policies
to a group of accounts or all the accounts in your organization.
● AWS Organizations enables you to set up a single payment method for all the AWS accounts in your
organization throughconsolidated billing. With consolidatedbilling, you can see a combined view of
charges incurred by all your accounts, as well as take advantage of pricing benefits from aggregated
usage, such as volume discounts for EC2 and S3.
● AWS Organizations, like many other AWS services, is eventually consistent. It achieves high availability
by replicating data across multiple servers in AWS data centers within its region.
● Create an AWS account and add it to your organization, or add an existing AWS account to your
organization.
● Organize your AWS accounts into groups called organizational units (OUs).
● Organize your OUs into a hierarchy that reflects your company’s structure.
● Centrally manage and attach policies to the entire organization, OUs, or individual AWS accounts.
Concepts
● An organization is a collection of AWS accounts that you can organize into a hierarchy and manage centrally.
● A management account is the AWS account you use to create your organization. You cannot change which account in your organization is the management account.
○ From the management account, you can create other accounts in your organization, invite and
manage invitations for other accounts to join your organization, and remove accounts from your
organization.
○ You can also attach policies to entities such as administrative roots, organizational units (OUs),
or accounts within your organization.
○ The management account has the role of a payer account and is responsible for paying all
charges accrued by the accounts in its organization.
● A member account is an AWS account, other than the management account, that is part of an organization. A member account can belong to only one organization at a time. The management account has the responsibilities of a payer account and is responsible for paying all charges that are accrued by the member accounts.
● An administrative root is the starting point for organizing your AWS accounts. The administrative root is the topmost container in your organization's hierarchy. Under this root, you can create OUs to logically
group your accounts and organize these OUs into a hierarchy that best matches your business needs.
● An organizational unit (OU) is a group of AWS accounts within an organization. An OU can also contain other OUs, enabling you to create a hierarchy.
● A policy is a "document" with one or more statements that define the controls that you want to apply to a group of AWS accounts.
○ Service control policy (SCP) is a policy that specifies the services and actions that users and roles can use in the accounts that the SCP affects. SCPs are similar to IAM permission policies except that they don't grant any permissions. Instead, SCPs are filters that allow only the specified services and actions to be used in affected accounts.
● AWS Organizations has two available feature sets:
○ All organizations support consolidated billing, which provides basic management tools that you can use to centrally manage the accounts in your organization.
○ If you enable all features, you continue to get all the consolidated billing features plus a set of advanced features such as service control policies.
● You can remove an AWS account from an organization and make it into a standalone account.
● Organization Hierarchy
○ Including root and AWS accounts created in the lowest OUs, your hierarchy can be five levels
deep.
○ Policies are inherited through hierarchical connections in an organization.
○ Policies can be assigned at different points in the hierarchy.
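To make SCPs concrete, the boto3 sketch below creates a small deny-list SCP and attaches it to an OU; the OU ID and the denied actions are placeholder choices, and the organization must have all features enabled:

import boto3
import json

org = boto3.client("organizations")

# A deny-list SCP: member accounts keep their IAM permissions, except that
# leaving the organization and stopping CloudTrail are explicitly denied.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="BaselineGuardrails",
    Description="Deny risky actions in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an OU so it applies to every account under that OU.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # placeholder OU ID
)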
Pricing
● AWS Organizations is offered to all AWS customers at no additional charge.
References:
https://docs.aws.amazon.com/organizations/latest/userguide/
https://aws.amazon.com/organizations/features/
https://aws.amazon.com/organizations/faqs/
Amazon CloudWatch
● Monitoring tool for your AWS resources and applications.
● Display metrics and create alarms that watch the metrics and send notifications or automatically make
changes to the resources you are monitoring when a threshold is breached.
● CloudWatch is basically a metrics repository. An AWS service, such as Amazon EC2, puts metrics into
the repository and you retrieve statistics based on those metrics. If you put your own custom metrics
into the repository, you can retrieve statistics on these metrics as well.
● CloudWatch does not aggregate data across regions. Therefore, metrics are completely separate
between regions.
● CloudWatch Concepts
○ Namespaces - a container for CloudWatch metrics.
■ There is no default namespace.
■ The AWS namespaces use the following naming convention: AWS/service.
○ Metrics - represents a time-ordered set of data points that are published to CloudWatch.
■ Exists only in the region in which they are created.
■ By default, several services provide free metrics for resources. You can also enable detailed monitoring, or publish your own application metrics.
■ Metric math enables you to query multiple CloudWatch metrics and use math expressions to create new time series based on these metrics.
■ Important note for EC2 metrics: CloudWatch does not collect memory utilization and disk space usage metrics right from the get-go. You need to install the CloudWatch Agent on your instances first to retrieve these metrics.
○ Dimensions - a name/value pair that uniquely identifies a metric.
■ You can assign up to 10 dimensions to a metric.
■ Whenever you add a unique dimension to one of your metrics, you are creating a new
variation of that metric.
○ Statistics - metric data aggregations over specified periods of time.
■ Each statistic has a unit of measure. Metric data points that specify a unit of measure
are aggregated separately.
■ You can specify a unit when you create a custom metric. If you do not specify a unit, CloudWatch uses None as the unit.
■ A period is the length of time associated with a specific CloudWatch statistic. The default value is 60 seconds.
■ CloudWatch aggregates statistics according to the period length that you specify when retrieving statistics.
■ For large datasets, you can insert a pre-aggregated dataset called a statistic set.
● Alarms - watches a single metric over a specified time period, and performs one or more specified actions based on the value of the metric relative to a threshold over time.
○ You can create an alarm for monitoring CPU usage and load balancer latency, for managing instances, and for billing alarms.
○ When an alarm is on a dashboard, it turns red when it is in the ALARM state.
○ Alarms invoke actions for sustained state changes only.
○ Alarm States
■ OK - The metric or expression is within the defined threshold.
■ ALARM - The metric or expression is outside of the defined threshold.
■ INSUFFICIENT_DATA - The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
○ You can also monitor your estimated AWS charges by using Amazon CloudWatch Alarms.
However, take note that you can only track the estimated AWS charges in CloudWatch and not
the actual utilization of your resources. Remember that you can only set coverage targets for
your reserved EC2 instances in AWS Budgets or Cost Explorer, but not in CloudWatch.
○ When you create an alarm, you specify three settings:
■ Period is the length of time to evaluate the metric or expression to create each individual data point for an alarm. It is expressed in seconds.
■ Evaluation Period is the number of the most recent periods or data points to evaluate when determining alarm state.
■ Datapoints to Alarm is the number of data points within the evaluation period that must be breached to cause the alarm to go to the ALARM state. The breaching data points do not have to be consecutive; they just must all be within the last number of data points equal to the Evaluation Period.
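These three settings correspond to the Period, EvaluationPeriods, and DatapointsToAlarm parameters of the PutMetricAlarm API. A minimal boto3 sketch follows; the instance ID and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when CPUUtilization breaches 80% in at least 3 of the last 5
# one-minute data points (an "M out of N" alarm).
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=60,              # seconds per data point
    EvaluationPeriods=5,    # evaluate the 5 most recent data points
    DatapointsToAlarm=3,    # 3 breaching points (not necessarily consecutive) trigger ALARM
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:ops-alerts"],  # placeholder topic
)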
CloudWatch Dashboard
○ Customizable home pages in the CloudWatch console that you can use to monitor your
resources in a single view, even those spread across different regions.
○ There is no limit on the number of CloudWatch dashboards you can create.
○ All dashboards are global, not region-specific.
○ You can add, remove, resize, move, edit, or rename a graph. You can also add metrics to a graph manually.
CloudWatch Logs
○ Features
■ Monitor logs from EC2 instances in real-time
■ Monitor CloudTrail logged events
■ By default, logs are kept indefinitely and never expire
■ Archive log data
■ Log Route 53 DNS queries
CloudWatch Agent
○ Collect more logs and system-level metrics from EC2 instances and your on-premises servers.
○ Needs to be installed.
Amazon EC2 Auto Scaling
● Groups - Your EC2 instances are organized into groups so that they are treated as a logical unit for scaling and management. When you create a group, you can specify its minimum, maximum, and desired number of EC2 instances.
● Launch templates - Your group uses a launch template for its EC2 instances. When you create a launch template, you can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.
○ You can specify your launch template with multiple Auto Scaling groups.
○ You can only specify one launch template for an Auto Scaling group at a time, and you can't
modify a launch template after you've created it.
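As a hedged sketch of how groups and launch templates fit together, the boto3 example below creates a group from an existing launch template; the template name, subnet IDs, and target group ARN are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Create a group that launches instances from an existing launch template
# and uses the ELB health check in addition to the EC2 status checks.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},  # placeholder
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # placeholder subnets in two AZs
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web/abc123"  # placeholder
    ],
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)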
● Monitoring
○ Health checks - identifies any instances that are unhealthy
■ Amazon EC2 status checks (default)
■ Elastic Load Balancing health checks
■ Custom health checks.
○ Auto Scaling does not perform health checks on instances in the standby state. Standby state
can be used for performing updates/changes/troubleshooting without health checks being
performed or replacement instances being launched.
○ CloudWatch metrics - enables you to retrieve statistics about Auto Scaling-published data points
as an ordered set of time-series data, known as metrics. You can use these metrics to verify that
your system is performing as expected.
○ Amazon EventBridge (Amazon CloudWatch Events) - Auto Scaling can submit events to Amazon
EventBridge (Amazon CloudWatch Events) when your Auto Scaling groups launch or terminate
instances or when a lifecycle action occurs.
○ SNS notifications - Auto Scaling can send Amazon SNS notifications when your Auto Scaling
groups launch or terminate instances.
○ CloudTrail logs - enables you to keep track of the calls made to the Auto Scaling API by or on
behalf of your AWS account and stores the information in log files in an S3 bucket that you
specify.
AWS CloudFormation
● A service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion.
Features
● CloudFormation allows you to model your entire infrastructure in a text file called a template. You can use JSON or YAML to describe what AWS resources you want to create and configure. If you want to design visually, you can use AWS CloudFormation Designer.
● CloudFormation automates the provisioning and updating of your infrastructure in a safe and controlled
manner. You can use Rollback Triggers to specify the CloudWatch alarm that CloudFormation should
monitor during the stack creation and update process. If any of the alarms are breached,
CloudFormation rolls back the entire stack operation to a previously deployed state.
● CloudFormation enables you to build custom extensions to your stack template using AWS Lambda.
Concepts
● Templates
○ A JSON or YAML formatted text file.
○ CloudFormation uses these templates as blueprints for building your AWS resources.
● Stacks
○ Manage related resources as a single unit.
○ All the resources in a stack are defined by the stack's CloudFormation template.
● Change Sets
○ Before updating your stack and making changes to your resources, you can generate a change
set, which is a summary of your proposed changes.
○ Change sets allow you to see how your changes might impact your running resources,
especially critical resources, before implementing them.
● With AWS CloudFormation and AWS CodePipeline, you can use continuous delivery to automatically
build and test changes to your CloudFormation templates before promoting them to production stacks.
● CloudFormation artifacts can include a stack template file, a template configuration file, or both. AWS CodePipeline uses these artifacts to work with CloudFormation stacks and change sets.
○ Stack Template File - defines the resources that CloudFormation provisions and configures. You can use YAML or JSON-formatted templates.
○ Template Configuration File - a JSON-formatted text file that can specify template parameter values, a stack policy, and tags. Use these configuration files to specify parameter values or a stack policy for a stack.
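A minimal boto3 sketch of the change set workflow used by such a pipeline follows; the stack name, change set name, and template file are assumptions:

import boto3

cfn = boto3.client("cloudformation")

with open("template.yaml") as f:  # placeholder template file
    template_body = f.read()

# Create a change set that summarizes the proposed changes without applying them.
cfn.create_change_set(
    StackName="my-app-stack",       # placeholder stack name
    ChangeSetName="update-2024-01",
    TemplateBody=template_body,
    ChangeSetType="UPDATE",
)
cfn.get_waiter("change_set_create_complete").wait(
    StackName="my-app-stack", ChangeSetName="update-2024-01"
)

# Review the proposed changes, then execute the change set to apply them.
changes = cfn.describe_change_set(StackName="my-app-stack", ChangeSetName="update-2024-01")
print([c["ResourceChange"]["Action"] for c in changes["Changes"]])

cfn.execute_change_set(StackName="my-app-stack", ChangeSetName="update-2024-01")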
Stacks
● If a resource cannot be created, CloudFormation rolls the stack back and automatically deletes any
resources that were created. If a resource cannot be deleted, any remaining resources are retained until
the stack can be successfully deleted.
● Stack update methods
○ Direct update
○ Creating and executing change sets
● Drift detection enables you to detect whether a stack's actual configuration differs, or has drifted, from its expected configuration. Use CloudFormation to detect drift on an entire stack or on individual resources within the stack.
○ A resource is considered to have drifted if its actual property values differ from the expected property values.
○ A stack is considered to have drifted if one or more of its resources have drifted.
● To share information between stacks, export a stack's output values. Other stacks that are in the same
AWS account and region can import the exported values.
● You can nest stacks.
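Drift detection can also be triggered and inspected programmatically, as in the hedged boto3 sketch below (the stack name is a placeholder):

import boto3
import time

cfn = boto3.client("cloudformation")

# Start a drift detection operation for the whole stack.
detection_id = cfn.detect_stack_drift(StackName="my-app-stack")["StackDriftDetectionId"]

# Poll until the detection operation finishes.
while True:
    status = cfn.describe_stack_drift_detection_status(StackDriftDetectionId=detection_id)
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print("Stack drift status:", status.get("StackDriftStatus"))  # e.g., IN_SYNC or DRIFTED

# List the individual resources that have drifted.
drifts = cfn.describe_stack_resource_drifts(
    StackName="my-app-stack",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])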
Templates
● Templates include several major sections. The Resources section is the only required section.
● CloudFormation Designer is a graphic tool for creating, viewing, and modifying CloudFormation templates. You can diagram your template resources using a drag-and-drop interface and then edit their details using the integrated JSON and YAML editor.
● Custom resources enable you to write custom provisioning logic in templates that CloudFormation runs
anytime you create, update (if you change the custom resource), or delete stacks.
● Template macros enable you to perform custom processing on templates, from simple actions like
find-and-replace operations to extensive transformations of entire templates.
StackSets
● CloudFormation StackSets allow you to roll out CloudFormation stacks over multiple AWS accounts
and in multiple Regions with just a couple of clicks. StackSets is commonly used together with AWS
Organizations to centrally deploy and manage services in different accounts.
● Administrator and target accounts - An administrator account is the AWS account in which you create stack sets. A stack set is managed by signing in to the AWS administrator account in which it was created. A target account is an account into which you create, update, or delete one or more stacks in your stack set.
● In addition to the organization’s management account, you can delegate other administrator accounts
in your AWS Organization that can create and manage stack sets with service-managed permissions
for the organization.
● Stack sets - A stack set lets you create stacks in AWS accounts across regions by using a single CloudFormation template. All the resources included in each stack are defined by the stack set's CloudFormation template. A stack set is a regional resource.
● Stack instances - A stack instance is a reference to a stack in a target account within a region. A stack instance can exist without a stack and can be associated with only one stack set.
● Stack set operations - Create stack set, update stack set, delete stacks, and delete stack set.
● Tags - You can add tags during stack set creation and update operations by specifying key and value
pairs.
● Drift detection identifies unmanaged changes or changes made to stacks outside of CloudFormation.
When CloudFormation performs drift detection on a stack set, it performs drift detection on the stack
associated with each stack instance in the stack set. If the current state of a resource varies from its
expected state, that resource is considered to have drifted.
● If one or more resources in a stack have drifted, then the stack itself is considered to have drifted, and
the stack instances that the stack is associated with are considered to have drifted as well.
● If one or more stack instances in a stack set have drifted, the stack set itself is considered to have
drifted.
AWS CloudTrail
● Actions taken by a user, role, or an AWS service in the AWS Management Console, AWS Command Line
Interface, and AWS SDKs and APIs are recorded as events.
● CloudTrail is enabled on your AWS account when you create it.
● CloudTrail focuses on auditing API activity.
Trails
○ Create a CloudTrail trail to archive, analyze, and respond to changes in your AWS resources.
○ Types
■ A trail that applies to all regions - CloudTrail records events in each region and delivers
the CloudTrail event log files to an S3 bucket that you specify. This is the default option
when you create a trail in the CloudTrail console.
■ A trail that applies to one region - CloudTrail records the events in the region that you
specify only. This is the default option when you create a trail using the AWS CLI or the
CloudTrail API.
○ You can create an organization trail that will log all events for all AWS accounts in an
organization created by AWS Organizations. Organization trails must be created in the
management account.
○ By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption. You
can also choose to encrypt your log files with an AWS Key Management Service key.
○ You can store your log files in your S3 bucket for as long as you want and also define S3
lifecycle rules to archive or delete log files automatically. If you want notifications about log file
delivery and validation, you can set up Amazon SNS notifications. CloudTrail publishes log files
about every five minutes.
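A minimal boto3 sketch of creating a trail that applies to all regions follows; the trail and bucket names are placeholders, and the bucket must already have a policy that allows CloudTrail to write to it:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create a multi-region trail that also records global service events (e.g., IAM)
# and enables log file integrity validation.
cloudtrail.create_trail(
    Name="org-audit-trail",             # placeholder trail name
    S3BucketName="my-cloudtrail-logs",  # placeholder bucket with a CloudTrail bucket policy
    IsMultiRegionTrail=True,
    IncludeGlobalServiceEvents=True,
    EnableLogFileValidation=True,
)

# Trails do not record events until logging is started.
cloudtrail.start_logging(Name="org-audit-trail")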
Events
○ The record of activity in an AWS account. This activity can be an action taken by a user, role, or
service that is monitorable by CloudTrail.
○ Types
■ Management events
■ Logged by default
■ Management events provide insight into management operations performed on
resources in your AWS account, also known as control plane operations.
■ Data events
■ Not logged by default
■ Data events provide insight into the resource operations performed on or in a
resource, also known as data plane operations.
■ Data events are often high-volume activities.
AWS Config
Configuration Recorder
○ By default, the configuration recorder records all supported resources in the region where Config is running. You can create a customized configuration recorder that records only the resource types that you specify.
○ You can also have Config record supported types of global resources, which are IAM users, groups, roles, and customer-managed policies.
Configuration Item
○ The configuration of a resource at a given point in time. A CI consists of 5 sections:
■ Basic information about the resource that is common across different resource types.
■ Configuration data specific to the resource.
■ Map of relationships with other resources.
■ CloudTrail event IDs that are related to this state.
■ Metadata that helps you identify information about the CI, such as the version of this CI,
and when this CI was captured.
Resource Relationship
○ Config discovers AWS resources in your account and then creates a map of relationships
between AWS resources.
Config rule
○ Represents your desired configuration settings for specific AWS resources or for an entire AWS
account.
○ Provides customizable, predefined rules. If a resource violates a rule, Config flags the resource
and the rule as noncompliant and notifies you through Amazon SNS.
○ Evaluates your resources either in response to configuration changes or periodically.
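For example, an AWS managed rule can be deployed with a single boto3 call, as in the sketch below; the rule name is arbitrary, and S3_BUCKET_VERSIONING_ENABLED is one of the AWS managed rule identifiers:

import boto3

config = boto3.client("config")

# Deploy an AWS managed rule that flags S3 buckets without versioning enabled.
# The rule is evaluated whenever the configuration of an S3 bucket changes.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-versioning-enabled",
        "Description": "Checks that versioning is enabled on all S3 buckets",
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED"},
    }
)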
● Multi-Account Multi-Region Data Aggregation
○ An aggregator collects configuration and compliance data from the following:
■ Multiple accounts and multiple regions.
■ Single account and multiple regions.
■ An organization in AWS Organizations and all the accounts in that organization.
Monitoring
● Use Amazon SNS to send you notifications every time a supported AWS resource is created, updated, or otherwise modified as a result of user API activity.
● Use Amazon EventBridge (Amazon CloudWatch Events) to detect and react to changes in the status of AWS Config events.
● Use AWS CloudTrail to capture API calls to Config.
AWS Health
● Provides ongoing visibility into the state of your AWS resources, services, and accounts.
● The service delivers alerts and notifications triggered by changes in the health of AWS resources.
● The Personal Health Dashboard, powered by the AWS Health API, is available to all customers. The
dashboard requires no setup, and it is ready to use for authenticated AWS users. The Personal Health
Dashboard organizes issues in three groups:
○ Open issues - restricted to issues whose start time is within the last seven days.
○ Scheduled changes - contain items that are ongoing or upcoming.
○ Other notifications - restricted to issues whose start time is within the last seven days.
● You can centrally aggregate your AWS Health events from all accounts in your AWS Organization. The
AWS Health Organizational View provides centralized and real-time access to all AWS Health events
posted to individual accounts in your organization, including operational issues, scheduled
maintenance, and account notifications.
AWS Systems Manager
● Allows you to centralize operational data from multiple AWS services and automate tasks across your AWS resources.
Features
● Create logical groups of resources such as applications, different layers of an application stack, or
production versus development environments.
● You can select a resource group and view its recent API activity, resource configuration changes,
related notifications, operational alerts, software inventory, and patch compliance status.
● Collects information about your instances and the software installed on them.
● Allows you to safely automate common and repetitive IT operations and management tasks across
AWS resources.
● Provides a browser-based interactive shell and CLI for managing Windows and Linux EC2 instances
without the need to open inbound ports, manage SSH keys, or use bastion hosts. Administrators can
grant and revoke access to instances through a central location by using IAM policies.
● Helps ensure that your software is up-to-date and meets your compliance policies.
● Lets you schedule windows of time to run administrative and maintenance tasks across your instances.
SSM Agent is the tool that processes Systems Manager requests and configures your machine as specified in the request. SSM Agent must be installed on each instance you want to use with Systems Manager. On newer AMIs and instance types, SSM Agent is installed by default. On older versions, you must install it manually.
Capabilities
● Automation
○ Allows you to safely automate common and repetitive IT operations and management tasks
across AWS resources
○ A step is defined as an initiated action performed in the Automation execution on a per-target basis. You can execute the entire Systems Manager automation document in one action or choose to execute one step at a time.
○ Concepts
■ Automation document - defines the Automation workflow.
■ Automation action - the Automation workflow includes one or more steps. Each step is associated with a particular action or plugin. The action determines the inputs, behavior, and outputs of the step.
■ Automation queue - if you attempt to run more than 25 Automations simultaneously, Systems Manager adds the additional executions to a queue and displays a status of Pending. When an Automation reaches a terminal state, the first execution in the queue starts.
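A minimal boto3 sketch of starting an Automation execution with an AWS-owned runbook follows; the instance ID is a placeholder:

import boto3

ssm = boto3.client("ssm")

# Run the AWS-owned automation document that gracefully restarts an EC2 instance.
execution = ssm.start_automation_execution(
    DocumentName="AWS-RestartEC2Instance",
    Parameters={"InstanceId": ["i-0123456789abcdef0"]},  # placeholder instance ID
)

# Check the status of the execution (Pending, InProgress, Success, Failed, ...).
status = ssm.get_automation_execution(
    AutomationExecutionId=execution["AutomationExecutionId"]
)
print(status["AutomationExecution"]["AutomationExecutionStatus"])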
■ One time by using Systems Manager Run Command.
■ On a schedule by using Systems Manager State Manager.
● Patch Manager
○ Automate the process of patching your managed instances.
○ Enables you to scan instances for missing patches and apply missing patches individually or to
large groups of instances by using EC2 instance tags.
○ For security patches, Patch Manager uses patch baselines that include rules for auto-approving
patches within days of their release, as well as a list of approved and rejected patches.
○ You can use AWS Systems Manager Patch Manager to select and apply Microsoft application
patches automatically across your Amazon EC2 or on-premises instances.
○ AWS Systems Manager Patch Manager includes common vulnerability identifiers (CVE ID). CVE
IDs can help you identify security vulnerabilities within your fleet and recommend patches.
○ You can configure actions to be performed on a managed instance before and after installing
patches.
● Maintenance Window
○ Set up recurring schedules for managed instances to execute administrative tasks like installing
patches and updates without interrupting business-critical operations.
○ Supports running four types of tasks:
■ Systems Manager Run Command commands
■ Systems Manager Automation workflows
■ AWS Lambda functions
■ AWS Step Functions tasks
● Systems Manager Document (SSM)
○ Defines the actions that Systems Manager performs.
○ Types of SSM Documents
■ Command document - Used with Run Command and State Manager. Run Command uses command documents to execute commands. State Manager uses command documents to apply a configuration. These actions can be run on one or more targets at any point during the lifecycle of an instance.
■ Policy document - Used with State Manager. Policy documents enforce a policy on your targets. If the policy document is removed, the policy action no longer happens.
○ Can be in JSON or YAML.
○ You can create and save different versions of documents. You can then specify a default version
for each document.
○ If you want to customize the steps and actions in a document, you can create your own.
○ You can tag your documents to help you quickly identify one or more documents based on the
tags you've assigned to them.
State Manager
○ A service that automates the process of keeping your EC2 and hybrid infrastructure in a state
that you define.
○ A State Manager association is a configuration that is assigned to your managed instances. The
configuration defines the state that you want to maintain on your instances. The association
also specifies actions to take when applying the configuration.
Parameter Store
○ Provides secure, hierarchical storage for configuration data and secrets management.
○ You can store values as plain text or encrypted data with SecureString.
○ Parameters work with Systems Manager capabilities such as Run Command, State Manager,
and Automation.
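A minimal boto3 sketch of writing and reading a SecureString parameter follows; the parameter name and value are placeholders, and encryption defaults to the AWS managed key for Systems Manager:

import boto3

ssm = boto3.client("ssm")

# Store a secret as an encrypted SecureString parameter in a hierarchical path.
ssm.put_parameter(
    Name="/prod/app/db-password",  # placeholder parameter name
    Value="example-password",      # placeholder value
    Type="SecureString",
    Overwrite=True,
)

# Retrieve and decrypt the value at run time.
param = ssm.get_parameter(Name="/prod/app/db-password", WithDecryption=True)
print(param["Parameter"]["Value"])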
OpsCenter
○ OpsCenter helps you view, investigate, and resolve operational issues related to your
environment from a central location.
○ OpsCenter complements existing case management systems by enabling integrations via
Amazon Simple Notification Service (SNS) and public AWS SDKs. By aggregating information
from AWS Config, AWS CloudTrail logs, resource descriptions, and Amazon EventBridge
(Amazon CloudWatch Events), OpsCenter helps you reduce the mean time to resolution (MTTR)
of incidents, alarms, and operational tasks.
AWS Trusted Advisor
● Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories:
○ Cost Optimization
○ Performance
○ Security
○ Fault Tolerance
○ Service Limits
● Access to the seven core Trusted Advisor checks is available to all AWS users.
● Access to the full set of Trusted Advisor checks is available to Business and Enterprise Support plans.
AWS Trusted Advisor does this by having multiple checks that scan for underutilized (e.g., idle instances) and unoptimized (e.g., oversized instances) resources that are running in your account. The number of Trusted Advisor checks that will be available to you will depend on your support plan. Nevertheless, you should often review your AWS Trusted Advisor to ensure all your resources are well-utilized and right-sized.
AWS Service Catalog
● Allows you to create, manage, and distribute catalogs of approved products to end-users, who can then access the products they need in a personalized portal.
● Administrators can control which users have access to each product to enforce compliance with
organizational business policies. Administrators can also set up adopted roles, so that end users only
require IAM access to AWS Service Catalog in order to deploy approved resources.
● This is a regional service.
Features
● tandardization of assets
S
● Self-service discovery and launch
● Fine-grain access control
● Extensibility and version control
Concepts
● Users
○ Catalog administrators – Manage a catalog of products, organizing them into portfolios and
granting access to end users. Catalog administrators prepare AWS CloudFormation templates,
configure constraints, and manage IAM roles that are assigned to products to provide for
advanced resource management.
○ End users – Use AWS Service Catalog to launch products to which they have been granted
access.
● Products
○ Can comprise one or more AWS resources, such as EC2 instances, storage volumes, databases,
monitoring configurations, and networking components, or packaged AWS Marketplace
products.
○ You create your products by importing AWS CloudFormation templates. The templates define
the AWS resources required for the product, the relationships between resources, and the
parameters for launching the product to configure security groups, create key pairs, and perform
other customizations.
○ You can see the products that you are using and their health state in the AWS Service Catalog
console.
● Portfolio
○ A collection of products, together with configuration information. Portfolios help manage
product configuration and determine who can use specific products and how they can use them.
○ When you add a new version of a product to a portfolio, that version is automatically available to
all current users of that portfolio.
○ You can also share your portfolios with other AWS accounts and allow the administrator of
those accounts to distribute your portfolios with additional constraints.
○ When you add tags to your portfolio, the tags are applied to all instances of resources
provisioned from products in the portfolio.
● Versioning
○ Service Catalog allows you to manage multiple versions of the products in your catalog.
○ A version can have one of three statuses:
■ Active - An active version appears in the version list and allows users to launch it.
■ Inactive - An inactive version is hidden from the version list. Existing provisioned
products launched from this version will not be affected.
■ Deleted - If a version is deleted, it is removed from the version list. Deleting a version
can't be undone.
● Access control
○ You apply AWS IAM permissions to control who can view and modify your products and
portfolios.
○ By assigning an IAM role to each product, you can avoid giving users permission to perform
unapproved operations and enable them to provision resources using the catalog.
● Constraints
○ You use constraints to apply limits to products for governance or cost control.
○ Types of constraints:
■ Template constraints restrict the configuration parameters that are available for the user
when launching the product. Template constraints allow you to reuse generic AWS
CloudFormation templates for products and apply restrictions to the templates on a
per-product or per-portfolio basis.
■ Launch constraints allow you to specify a role for a product in a portfolio. This role is
used to provision the resources at launch, so you can restrict user permissions without
impacting users’ ability to provision products from the catalog.
■ Notification constraints specify an Amazon SNS topic to receive notifications about
stack events.
■ Tag update constraints allow administrators to allow or disallow end users to update
tags on resources associated with an AWS Service Catalog provisioned product.
● Stack
○ Every AWS Service Catalog product is launched as an AWS CloudFormation stack.
○ You can use CloudFormation StackSets to launch Service Catalog products across multiple
regions and accounts. You can specify the order in which products deploy sequentially within
regions. Across accounts, products are deployed in parallel.
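From the end user's perspective, launching a product is a single API call, as in the hedged boto3 sketch below; the product ID, provisioning artifact (version) ID, and parameters are placeholders that you would look up with search_products and describe_product:

import boto3

servicecatalog = boto3.client("servicecatalog")

# Launch (provision) an approved product; Service Catalog creates the underlying
# CloudFormation stack using the role defined in the portfolio's launch constraint.
servicecatalog.provision_product(
    ProductId="prod-abcd1234efgh",             # placeholder product ID
    ProvisioningArtifactId="pa-abcd1234efgh",  # placeholder version ID
    ProvisionedProductName="team-a-dev-environment",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "t3.micro"},  # assumed template parameter
    ],
)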
Security
● Service Catalog uses Amazon S3 buckets and Amazon DynamoDB databases that are encrypted at rest
using Amazon-managed keys.
● Service Catalog uses TLS and client-side encryption of information in transit between the caller and
AWS.
● Service Catalog integrates with AWS CloudTrail and Amazon SNS.
Pricing
● The AWS Service Catalog free tier includes 1,000 API calls per month.
● You are charged based on the number of API calls made to the Service Catalog beyond the free tier.
References:
https://aws.amazon.com/servicecatalog/
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html
https://docs.aws.amazon.com/servicecatalog/latest/userguide/end-user-console.html
https://aws.amazon.com/servicecatalog/pricing/
https://aws.amazon.com/servicecatalog/faqs/
Amazon OpenSearch Service
● Amazon OpenSearch lets you search, analyze, and visualize your data in real-time. This service
manages the capacity, scaling, patching, and administration of your Elasticsearch clusters for you while
still giving you direct access to the Elasticsearch APIs.
● The service offers open-source Elasticsearch APIs, managed Kibana, and integrations with Logstash
and other AWS services. This combination is often referred to as the ELK Stack.
● Concepts
○ An Amazon OpenSearch domain is synonymous with an Elasticsearch cluster. Domains are
clusters with the settings, instance types, instance counts, and storage resources that you
specify.
○ You can create multiple Elasticsearch indices within the same domain. Elasticsearch
automatically distributes the indices and any associated replicas between the instances
allocated to the domain.
○ Amazon OpenSearch uses a blue/green deployment process when updating domains.
Blue/green typically refers to the practice of running two production environments, one live and
one idle, and switching the two as you make software changes.
● Data Ingestion
○ Easily ingest structured and unstructured data into your Amazon Elasticsearch domain with
Logstash, an open-source data pipeline that helps you process logs and other event data.
○ You can also ingest data into your Amazon Elasticsearch domain using Amazon Kinesis
Firehose, AWS IoT, or Amazon CloudWatch Logs.
○ You can get faster and better insights into your data using Kibana, an open-source analytics and
visualization platform. Kibana is automatically deployed with your Amazon OpenSearch Service
domain.
○ You can load streaming data from the following sources using AWS Lambda event handlers:
■ Amazon S3
■ Amazon Kinesis Data Streams and Data Firehose
■ Amazon DynamoDB
■ Amazon CloudWatch
■ AWS IoT
○ Amazon OpenSearch exposes three Elasticsearch logs through CloudWatch Logs:
■ error logs
■ search slow logs - These logs help fine-tune the performance of any kind of search
operation on Elasticsearch.
■ index slow logs - These logs provide insights into the indexing process and can be used
to fine-tune the index setup.
○ Kibana and Logstash
■ Kibana is a popular open-source visualization tool designed to work with Elasticsearch.
■ The Kibana URL is elasticsearch-domain-endpoint/_plugin/kibana/.
■ You can configure your own Kibana instance instead of using the default Kibana that is
provided.
■ Amazon OpenSearch uses Amazon Cognito to offer username and password protection
for Kibana. (Optional feature)
■ Logstash provides a convenient way to use the bulk API to upload data into your Amazon
OpenSearch domain with the S3 plugin. The service also supports all other standard
Logstash input plugins that are provided by Elasticsearch.
■ Amazon OpenSearch also supports two Logstash output plugins:
■ standard Elasticsearch plugin
■ the logstash-output-amazon-es plugin, which signs and exports Logstash events to
Amazon OpenSearch.
Amazon Kinesis
● Makes it easy to collect, process, and analyze real-time streaming data.
● Kinesis can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT
telemetry data for machine learning, analytics, and other applications.
● A massively scalable, highly durable data ingestion and processing service optimized for streaming
data. You can configure hundreds of thousands of data producers to continuously put data into a
Kinesis data stream.
● Concepts
○ Data Producer - An application that typically emits data records as they are generated to a
Kinesis data stream. Data producers assign partition keys to records. Partition keys ultimately
determine which shard ingests the data record for a data stream.
○ Data Consumer - A distributed Kinesis application or AWS service retrieving data from all shards
in a stream as it is generated. Most data consumers are retrieving the most recent data in a
shard, enabling real-time analytics or handling of data.
○ Data Stream - A logical grouping of shards. There are no bounds on the number of shards within
a data stream. A data stream will retain data for 24 hours, or up to 7 days when extended
retention is enabled.
○ Shard - The base throughput unit of a Kinesis data stream.
■ A shard is an append-only log and a unit of streaming capability. A shard contains an
ordered sequence of records ordered by arrival time.
■ Add or remove shards from your stream dynamically as your data throughput changes.
■ One shard can ingest up to 1000 data records per second, or 1MB/sec. Add more shards
to increase your ingestion capability.
■ When consumers use enhanced fan-out, one shard provides 1MB/sec of data input and
2MB/sec of data output for each data consumer registered to use enhanced fan-out.
■ When consumers do not use enhanced fan-out, a shard provides 1MB/sec of input and
2MB/sec of data output, and this output is shared with any consumer not using
enhanced fan-out.
○ Data Record
■ A record is the unit of data stored in a Kinesis stream. A record is composed of a
sequence number, partition key, and data blob.
■ A data blob is the data of interest your data producer adds to a stream. The maximum
size of a data blob is 1 MB.
○ Partition Key
■ A partition key is typically a meaningful identifier, such as a user ID or timestamp. It is
specified by your data producer while putting data into a Kinesis data stream, and useful
for consumers as they can use the partition key to replay or build a history associated
with the partition key.
■ The partition key is also used to segregate and route data records to different shards of
a stream.
○ Sequence Number
■ A sequence number is a unique identifier for each data record. Sequence number is
assigned by Kinesis Data Streams when a data producer calls the PutRecord or PutRecords
API to add data to a Kinesis data stream.
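As a quick illustration of how partition keys and sequence numbers fit together, here is a minimal producer sketch using boto3. The stream name and the use of a user ID as the partition key are assumptions for the example.

import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical stream name; records with the same partition key land on the same shard,
# so they stay ordered relative to each other.
response = kinesis.put_record(
    StreamName="clickstream-events",
    Data=json.dumps({"user": "user-42", "action": "checkout"}).encode("utf-8"),
    PartitionKey="user-42",
)

# Kinesis Data Streams assigns the sequence number and reports which shard ingested the record.
print(response["ShardId"], response["SequenceNumber"])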
Data Firehose
● The easiest way to load streaming data into data stores and analytics tools.
● It is a fully managed service that automatically scales to match the throughput of your data.
● It can also batch, compress, and encrypt the data before loading it.
● Features
○ It can capture, transform, and load streaming data into S3, Redshift, OpenSearch Service, and
Splunk, enabling near real-time analytics with existing business intelligence tools and
dashboards being used today.
○ Once launched, your delivery streams automatically scale up and down to handle gigabytes per
second or more of input data rate and maintain data latency at levels you specify for the stream.
○ Data Firehose can convert the format of incoming data from JSON to Parquet or ORC formats
before storing the data in S3.
○ You can configure Data Firehose to prepare your streaming data before it is loaded to data
stores. Data Firehose provides pre-built Lambda blueprints for converting common data
sources, such as Apache logs and system logs, to JSON and CSV formats. You can use these
pre-built blueprints without any change, customize them further, or write your own custom
functions.
● Concepts
○ Data Firehose Delivery Stream - The underlying entity of Data Firehose. You use Data Firehose
by creating a Data Firehose delivery stream and then sending data to it.
○ Record - The data of interest that your data producer sends to a Data Firehose delivery stream.
A record can be as large as 1,000 KB.
○ Data Producer - Producers send records to Data Firehose delivery streams.
○ Buffer Size and Buffer Interval - Data Firehose buffers incoming streaming data to a certain size
or for a certain period of time before delivering it to destinations. Buffer Size is in MBs, and
Buffer Interval is in seconds.
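A minimal producer sketch for a delivery stream with boto3 follows; the delivery stream name is a placeholder, and buffering to the destination is handled by the service according to the buffer size and interval described above.

import json
import boto3

firehose = boto3.client("firehose")

# Hypothetical delivery stream; Data Firehose buffers these records and delivers them
# to the configured destination (for example, an S3 bucket) once the buffer size or
# buffer interval is reached.
firehose.put_record(
    DeliveryStreamName="apache-logs-to-s3",
    Record={"Data": json.dumps({"status": 200, "path": "/index.html"}).encode("utf-8") + b"\n"},
)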
● Stream Sources
○ You can send data to your Data Firehose Delivery stream using different types of sources:
■ a Kinesis data stream,
■ the Kinesis Agent,
■ or the Data Firehose API using the AWS SDK.
○ You can also use CloudWatch Logs, Amazon EventBridge (Amazon CloudWatch Events), or AWS
IoT as your data source.
○ Some AWS services can only send messages and events to a Data Firehose delivery stream that
is in the same Region.
● Data Delivery and Transformation
○ Data Firehose can invoke your Lambda function to transform incoming source data and deliver
the transformed data to destinations.
○ Data Firehose buffers incoming data up to 3 MB by default.
○ If your Lambda function invocation fails because of a network timeout or because you've
reached the Lambda invocation limit, Data Firehose retries the invocation three times by default.
○ Data Firehose can convert the format of your input data from JSON to Apache Parquet or
Apache ORC before storing the data in S3. Parquet and ORC are columnar data formats that
save space and enable faster queries compared to row-oriented formats like JSON.
○ Data delivery format:
■ For data delivery to S3, Data Firehose concatenates multiple incoming records based on
the buffering configuration of your delivery stream. It then delivers the records to S3 as
an S3 object.
■ For data delivery to Redshift, Data Firehose first delivers incoming data to your S3
bucket in the format described earlier. Data Firehose then issues a Redshift COPY
command to load the data from your S3 bucket to your Redshift cluster.
■ For data delivery to Elasticsearch, Data Firehose buffers incoming records based on the
buffering configuration of your delivery stream. It then generates an Elasticsearch bulk
request to index multiple records to your Elasticsearch cluster.
■ For data delivery to Splunk, Data Firehose concatenates the bytes that you send.
Amazon Managed Service for Apache Flink
● Analyze streaming data, gain actionable insights, and respond to your business and customer needs in
real-time. You can quickly build SQL queries and Java applications using built-in templates and
operators for common processing functions to organize, transform, aggregate, and analyze data at any
scale.
● General Features
○ Managed Service for Apache Flink is serverless and takes care of everything required to
continuously run your application.
○ Managed Service for Apache Flink elastically scales applications to keep up with any volume of
data in the incoming data stream.
○ Managed Service for Apache Flink delivers sub-second processing latencies so you can
generate real-time alerts, dashboards, and actionable insights.
● An application is the primary resource in Managed Service for Apache Flink. Managed Service for
Apache Flink applications continuously read and process streaming data in real-time.
○ You write application code using SQL to process the incoming streaming data and produce
output. Then, Managed Service for Apache Flink writes the output to a configured destination.
○ You can also process and analyze streaming data using Java.
● Components
○ Input is the streaming source for your application. In the input configuration, you map the
streaming source to an in-application data stream(s).
○ Application code is a series of SQL statements that process input and produce output.
○ You can create one or more in-application streams to store the output. You can then optionally
configure an application output to persist data from specific in-application streams to an
external destination.
● An in-application data stream is an entity that continuously stores data in your application for you to
perform processing.
Amazon Athena
● An interactive query service that makes it easy to analyze data directly in Amazon S3 and other data
sources using SQL.
Features
● Athena is serverless.
● Has a built-in query editor.
● Uses Presto, an open source, distributed SQL query engine optimized for low latency, ad hoc analysis of
data.
● Athena supports a wide variety of data formats such as CSV, JSON, ORC, Avro, or Parquet.
● Athena automatically executes queries in parallel so that you get query results in seconds, even on
large datasets.
● Athena uses Amazon S3 as its underlying data store, making your data highly available and durable.
● Athena integrates with Amazon QuickSight for easy data visualization.
● Athena integrates out-of-the-box with AWS Glue.
● Athena uses a managed Data Catalog to store information and schemas about the databases and
tables that you create for your data stored in S3.
Queries
● You can query geospatial data.
● You can query different kinds of logs as your datasets.
● Athena stores query results in S3.
● Athena retains query history for 45 days.
● Athena does not support user-defined functions, INSERT INTO statements, and stored procedures.
● Athena supports both simple data types, such as INTEGER, DOUBLE, VARCHAR and complex data
types, such as MAPS, ARRAY, and STRUCT.
● Athena supports querying data in Amazon S3 Requester Pays buckets.
● Athena allows you to query data sources other than S3 buckets using a data connector.
● A data connector is implemented in a Lambda function that uses Athena Query Federation SDK.
● There are pre-built connectors available for some popular data sources, such as:
● MySQL, PostgreSQL, Oracle, SQL Server databases
● Amazon DynamoDB
● Amazon Managed Streaming for Apache Kafka (MSK)
● Amazon RedShift
● Amazon OpenSearch
● Amazon CloudWatch Logs and CloudWatch metrics
● Amazon DocumentDB
● Apache Kafka
● You can write your own data connector using the Athena Query Federation SDK if your data source is
not natively supported by Athena.
● You may also customize the pre-built connectors to fit your use case.
Optimizing Query Performance
● Data partitioning. For instance, partitioning data based on column values such as date, country, and
region makes it possible to limit the amount of data that needs to be scanned by a query.
● Converting data format into columnar formats such as Parquet and ORC
● Compressing files
● Making files splittable. Athena can read a splittable file in parallel; thus, the time it takes for a query to
complete is faster.
● AVRO, Parquet, and ORC are splittable files regardless of the compression codec used.
● Only text files (TSV, CSV, JSON, and custom SerDes for text) compressed with BZIP2 and LZO are
splittable.
Cost controls
● You can create workgroups to isolate queries for teams, applications, or different workloads and
enforce cost controls.
● There are two types of cost controls available in a workgroup:
● Per-query limit – specifies a threshold for the total amount of data scanned per query. Any query
running in a workgroup is canceled once it exceeds the specified limit. Only one per-query limit can be
created in a workgroup.
● Per-workgroup limit – this limits the total amount of data scanned by all queries running within a
specific time frame. You can establish multiple limits based on hourly or daily data scan totals for
queries within the workgroup.
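As a rough sketch of the per-query cost control above, the boto3 call below creates a workgroup with a data-scan cutoff. The workgroup name, result location, and the 1 GB limit are illustrative values.

import boto3

athena = boto3.client("athena")

# Hypothetical workgroup; any query in it that scans more than ~1 GB is cancelled.
athena.create_work_group(
    Name="analytics-team",
    Configuration={
        "ResultConfiguration": {"OutputLocation": "s3://example-athena-results/analytics-team/"},
        "BytesScannedCutoffPerQuery": 1024 * 1024 * 1024,  # per-query limit, in bytes
        "EnforceWorkGroupConfiguration": True,
    },
    Description="Workgroup with a per-query data scan limit for cost control",
)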
AWS CodeBuild
● A fully managed continuous integration service that compiles source code, runs tests, and produces
software packages that are ready to deploy.
● Concepts
○ A build project defines how CodeBuild will run a build. It includes information such as where to
get the source code, which build environment to use, the build commands to run, and where to
store the build output.
○ A build environment is the combination of operating system, programming language runtime,
and tools used by CodeBuild to run a build.
○ The build specification is a YAML file that lets you choose the commands to run at each phase
of the build and other settings. Without a build spec, CodeBuild cannot successfully convert
your build input into build output or locate the build output artifact in the build environment to
upload to your output bucket.
■ If you include a build spec as part of the source code, by default, the build spec file must
be named buildspec.yml and placed in the root of your source directory.
○ A collection of input files is called build input artifacts or build input, and a deployable version
of the source code is called a build output artifact or build output.
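To tie the build project and build spec concepts together, here is a hedged boto3 sketch that creates a project with an inline buildspec. The repository URL, bucket, image, and role ARN are placeholders, and a buildspec.yml committed to the source root works just as well.

import boto3

codebuild = boto3.client("codebuild")

# Inline buildspec: build phases and the artifacts to collect. Names and paths are illustrative.
BUILDSPEC = """
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 18
  build:
    commands:
      - npm ci
      - npm test
artifacts:
  files:
    - '**/*'
"""

codebuild.create_project(
    name="sample-node-app-build",
    source={"type": "CODECOMMIT",
            "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/sample-node-app",
            "buildspec": BUILDSPEC},
    artifacts={"type": "S3", "location": "example-build-artifacts-bucket"},
    environment={"type": "LINUX_CONTAINER",
                 "image": "aws/codebuild/standard:7.0",
                 "computeType": "BUILD_GENERAL1_SMALL"},
    serviceRole="arn:aws:iam::123456789012:role/CodeBuildServiceRole",
)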
● Features
○ AWS CodeBuild runs your builds in preconfigured build environments that contain the operating
system, programming language runtime, and build tools (such as Apache Maven, Gradle, and
npm) required to complete the task. You just specify your source code’s location and select
settings for your build, such as the build environment to use and the build commands to run
during a build.
○ AWS CodeBuild builds your code and stores the artifacts in an Amazon S3 bucket, or you can
use a build command to upload them to an artifact repository.
○ AWS CodeBuild provides build environments for
■ Java
■ Python
■ Node.js
■ Ruby
■ Go
■ Android
■ .NET Core for Linux
■ Docker
○ You can define the specific commands that you want AWS CodeBuild to perform, such as
installing build tool packages, running unit tests, and packaging your code.
○ You can integrate CodeBuild into existing CI/CD workflows using its source integrations, build
commands, or Jenkins integration.
○ CodeBuild can connect to AWS CodeCommit, S3, GitHub, GitHub Enterprise, and Bitbucket
to pull source code for builds.
○ CodeBuild allows you to use Docker images stored in another AWS account as your build
environment by granting resource-level permissions.
○ It now allows you to access Docker images from any private registry as the build environment.
Previously, you could only use Docker images from public DockerHub or Amazon ECR in
CodeBuild.
○ You can access your past build results through the console, CloudWatch, or the API. The results
include outcome (success or failure), build duration, output artifact location, and log location.
○ You can automate your release process by using AWS CodePipeline to test your code and run
your builds with CodeBuild.
● Steps in a Build Process
○ CodeBuild will create a temporary compute container of the class defined in the build project
○ CodeBuild loads it with the specified runtime environment
○ CodeBuild downloads the source code
○ CodeBuild executes the commands configured in the project
○ CodeBuild uploads the generated artifact to an S3 bucket
○ Then it destroys the compute container
● Build duration is calculated in minutes, from the time you submit your build until your build is
terminated, rounded up to the nearest minute.
● You can save time when your project builds by using a cache. A build project can use one of two types
of caching:
○ Amazon S3 - stores the cache in an Amazon S3 bucket that is available across multiple build
hosts. This is a good option for small intermediate-build artifacts that are more expensive to
build than to download. Not the best option for large build artifacts because they can take a
long time to transfer over your network, which can affect build performance.
○ Local - stores a cache locally on a build host that is available to that build host only. This is a
good option for large intermediate build artifacts because the cache is immediately available on
the build host. Build performance is not impacted by network transfer time.
○ If you use a local cache, you must choose one or more of three cache modes:
■ source cache
■ Docker layer cache
■ custom cache
AWS CodeCommit
● A fully managed source control service that hosts secure Git-based repositories, similar to GitHub.
● You can create your own code repository and use Git commands to interact with your own repository
and other repositories.
● You can store and version any kind of file, including application assets such as images and libraries
alongside your code.
● The AWS CodeCommit Console lets you visualize your code, pull requests, commits, branches, tags,
and other settings.
Concepts
○ An active user is any unique AWS identity (IAM user/role, federated user, or root account) that
accesses AWS CodeCommit repositories during the month. AWS identities that are created
through your use of other AWS Services, such as AWS CodeBuild and AWS CodePipeline, as well
as servers accessing CodeCommit using a unique AWS identity, count as active users.
○ A repository is the fundamental version control object in CodeCommit. It's where you securely
store code and files for your project. It also stores your project history, from the first commit
through the latest changes.
○ A file is a version-controlled, self-contained piece of information available to you and other
users of the repository and branch where the file is stored.
○ A pull request allows you and other repository users to review, comment on, and merge code
changes from one branch to another.
○ An approval rule is used to designate a number of users who will approve a pull request before
it is merged into your branch.
○ A commit is a snapshot of the contents and changes to the contents of your repository. This
includes information like who committed the change, the date and time of the commit, and the
changes made as part of the commit.
○ In Git, branches are simply pointers or references to a commit. You can use branches to
separate work on a new or different version of files without impacting work in other branches.
You can use branches to develop new features, store a specific version of your project from a
particular commit, etc.
Repository Features
○ You can share your repository with other users.
○ If you add AWS tags to repositories, you can set up notifications so that repository users receive
emails about events, such as another user commenting on code.
○ You can create triggers for your repository so that code pushes or other events trigger actions,
such as emails or code functions.
○ To copy a remote repository to your local computer, use the command ‘git clone’
○ To connect to the repository after the name is changed, users must use the ‘git remote set-url’
command and specify the new URL to use.
○ To push changes from the local repo to the CodeCommit repository, run ‘git push remote-name
branch-name’.
○ To pull changes to the local repo from the CodeCommit repository, run ‘git pull remote-name
branch-name’.
○ You can create up to 10 triggers for Amazon SNS or AWS Lambda for each CodeCommit
repository.
○ You can push your files to two different repositories at the same time.
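A minimal sketch of wiring one of those repository triggers to an SNS topic with boto3 follows; the repository and topic names are placeholders.

import boto3

codecommit = boto3.client("codecommit")

# Hypothetical repository and SNS topic; notify subscribers whenever the main branch is updated.
codecommit.put_repository_triggers(
    repositoryName="sample-node-app",
    triggers=[
        {
            "name": "notify-on-main-push",
            "destinationArn": "arn:aws:sns:us-east-1:123456789012:repo-activity",
            "branches": ["main"],          # an empty list would mean all branches
            "events": ["updateReference"], # push/update events; "all" covers every event type
        }
    ],
)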
Pull Requests
○ Pull requests require two branches: a source branch that contains the code you want to be
reviewed, and a destination branch, where you merge the reviewed code.
○ Create pull requests to let other users see and review your code changes before you merge
them into another branch.
○ Create approval rules for your pull requests to ensure the quality of your code by requiring users
to approve the pull request before the code can be merged into the destination branch. You can
specify the number of users who must approve a pull request. You can also specify an approval
pool of users for the rule.
○ To review the changes on files included in a pull request and resolve merge conflicts, you use
the CodeCommit console, the ‘git diff’ command, or a diff tool.
○ After the changes have been reviewed and all approval rules on the pull request have been
satisfied, you can merge a pull request using the AWS Console, AWS CLI or with the ‘git merge’
command.
○ You can close a pull request without merging it with your code.
○ You can migrate a Git repository to a CodeCommit repository in a number of ways: by cloning it,
mirroring it, or migrating all or just some of the branches.
○ You can also migrate a local repository on your machine to CodeCommit.
Monitoring
○ CodeCommit uses AWS IAM to control and monitor who can access your data as well as how,
when, and where they can access it.
○ CodeCommit helps you monitor your repositories via AWS CloudTrail and Amazon CloudWatch.
○ You can use Amazon SNS to receive notifications for events impacting your repositories. Each
notification will include a status message as well as a link to the resources whose event
generated that notification.
AWS CodeDeploy
● A fully managed deployment service that automates software deployments to a variety of compute
services such as Amazon EC2, AWS Fargate, AWS Lambda, and your on-premises servers.
● Concepts
○ An Application is a name that uniquely identifies the application you want to deploy. CodeDeploy
uses this name, which functions as a container, to ensure the correct combination of revision,
deployment configuration, and deployment group are referenced during a deployment.
○ Compute platform is the platform on which CodeDeploy deploys an application (EC2, ECS,
Lambda, On-premises servers).
○ Deployment configuration is a set of deployment rules and deployment success and failure
conditions used by CodeDeploy during a deployment.
○ Deployment group contains individually tagged instances, Amazon EC2 instances in Amazon
EC2 Auto Scaling groups, or both.
1. In an Amazon ECS deployment, a deployment group specifies the Amazon ECS service,
load balancer, optional test listener, and two target groups. It also specifies when to
reroute traffic to the replacement task set and when to terminate the original task set
and ECS application after a successful deployment.
2. In an AWS Lambda deployment, a deployment group defines a set of CodeDeploy
configurations for future deployments of an AWS Lambda function.
3. In an EC2/On-Premises deployment, a deployment group is a set of individual instances
targeted for deployment.
■ In an in-place deployment, the instances in the deployment group are updated
with the latest application revision.
■ In a blue/green deployment, traffic is rerouted from one set of instances to
another by deregistering the original instances from a load balancer and
registering a replacement set of instances that typically has the latest application
revision already installed.
○ A deployment goes through a set of predefined phases called deployment lifecycle events. A
deployment lifecycle event gives you an opportunity to run code as part of the deployment.
1. ApplicationStop
2. DownloadBundle
3. BeforeInstall
4. Install
5. AfterInstall
6. ApplicationStart
7. ValidateService
○ Features
■ CodeDeploy protects your application from downtime during deployments through
rolling updates and deployment health tracking.
■ AWS CodeDeploy tracks and stores the recent history of your deployments.
■ CodeDeploy is platform and language agnostic.
■ CodeDeploy uses a file and command-based install model, which enables it to deploy
any application and reuse existing setup code. The same setup code can be used to
consistently deploy and test updates across your environment release stages for your
servers or containers.
■ CodeDeploy integrates with Amazon Auto Scaling, which allows you to scale EC2
capacity according to conditions you define such as traffic spikes. Notifications are then
sent to AWS CodeDeploy to initiate an application deployment onto new instances
before they are placed behind an Elastic Load Balancing load balancer.
■ When using AWS CodeDeploy with on-premises servers, make sure that they can
connect to AWS public endpoints.
■ AWS CodeDeploy offers two types of deployments:
■ With in-place deployments, the application on each instance in the deployment
group is stopped, the latest application revision is installed, and the new version
of the application is started and validated. Only deployments that use the
EC2/On-Premises compute platform can use in-place deployments.
■ With blue/green deployments, once the new version of your application is tested
and declared ready, CodeDeploy can shift the traffic from your old version (blue)
to your new version (green) according to your specifications.
■ Deployment groups are used to match configurations to specific environments, such as
a staging or production environments. An application can be deployed to multiple
deployment groups.
■ You can integrate AWS CodeDeploy with your continuous integration and deployment
systems by calling the public APIs using the AWS CLI or AWS SDKs.
○ Application Specification Files
■ The AppSpec file is a YAML-formatted or JSON-formatted file that is used to manage
each deployment as a series of lifecycle event hooks.
■ For ECS Compute platform, the file specifies
■ The name of the ECS service and the container name and port used to direct
traffic to the new task set.
■ The functions to be used as validation tests.
■ For Lambda compute platform, the file specifies
■ The AWS Lambda function version to deploy.
■ The functions to be used as validation tests.
■ For EC2/On-Premises compute platform, the file is always written in YAML and is used to
■ Map the source files in your application revision to their destinations on the
instance.
■ Specify custom permissions for deployed files.
■ Specify scripts to be run on each instance at various stages of the deployment
process.
○ Deployments
■ You can use the CodeDeploy console or the create-deployment command to deploy the
function revision specified in the AppSpec file to the deployment group.
■ You can use the CodeDeploy console or the stop-deployment command to stop a
deployment. When you attempt to stop the deployment, one of three things happens:
■ The deployment stops, and the operation returns a status of SUCCEEDED.
■ The deployment does not immediately stop, and the operation returns a status of
pending. After the pending operation is complete, subsequent calls to stop the
deployment return a status of SUCCEEDED.
■ The deployment cannot stop, and the operation returns an error.
■ With Lambda functions and EC2 instances, CodeDeploy implements rollbacks by
redeploying, as a new deployment, a previously deployed revision.
■ With ECS services, CodeDeploy implements rollbacks by rerouting traffic from the
replacement task set to the original task set.
■ The CodeDeploy agent is a software package that, when installed and configured on an
EC2/on-premises instance, makes it possible for that instance to be used in CodeDeploy
deployments. The agent is not required for deployments that use the Amazon ECS or
AWS Lambda compute platforms.
■ CodeDeploy monitors the health status of the instances in a deployment group. For the
overall deployment to succeed, CodeDeploy must be able to deploy to each instance in
the deployment and deployment to at least one instance must succeed.
■ You can specify a minimum number of healthy instances as a number of instances or as
a percentage of the total number of instances required for the deployment to be
successful.
■ CodeDeploy assigns two health status values to each instance:
■ Revision health - based on the application revision currently installed on the
instance. Values include Current, Old and Unknown.
■ Instance health - based on whether deployments to an instance have been
successful. Values include Healthy and Unhealthy.
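The deployment flow above can also be started programmatically. The following boto3 sketch deploys a revision stored in S3 to an existing application and deployment group; all names are placeholders.

import boto3

codedeploy = boto3.client("codedeploy")

# Hypothetical application, deployment group, and S3 revision bundle.
response = codedeploy.create_deployment(
    applicationName="sample-web-app",
    deploymentGroupName="staging-fleet",
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-deployment-bundles",
            "key": "sample-web-app/release-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    description="Deploy release 1.2.3 to the staging fleet",
)
print(response["deploymentId"])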
○ Blue/Green Deployments
■ EC2/On-Premises compute platform
■ You must have one or more Amazon EC2 instances with identifying Amazon EC2
tags or an Amazon EC2 Auto Scaling group.
■ Each Amazon EC2 instance must have the correct IAM instance profile attached.
■ The CodeDeploy agent must be installed and running on each instance.
■ During replacement, you can either
■ use the Amazon EC2 Auto Scaling group you specify as a template for the
replacement environment; or
■ specify the instances to be counted as your replacement using EC2
instance tags, EC2 Auto Scaling group names, or both.
■ AWS Lambda compute platform
■ You must choose one of the following deployment configuration types to specify
how traffic is shifted from the original Lambda function version to the new
version:
■ Canary: Traffic is shifted in two increments. You can choose from
predefined canary options that specify the percentage of traffic shifted to
your updated Lambda function version in the first increment and the
interval, in minutes, before the remaining traffic is shifted in the second
increment.
■ Linear: Traffic is shifted in equal increments with an equal number of
minutes between each increment. You can choose from predefined linear
options that specify the percentage of traffic shifted in each increment
and the number of minutes between each increment.
■ All-at-once: All traffic is shifted from the original Lambda function to the
updated Lambda function version all at once.
■ With Amazon ECS, production traffic shifts from your ECS service's original task set to a
replacement task set all at once.
○ Advantages of using Blue/Green Deployments vs In-Place Deployments
■ An application can be installed and tested in the new replacement environment and
deployed to production simply by rerouting traffic.
■ If you're using the EC2/On-Premises compute platform, switching back to the most
recent version of an application is faster and more reliable. Traffic can just be routed
back to the original instances as long as they have not been terminated. With an in-place
deployment, versions must be rolled back by redeploying the previous version of the
application.
■ If you're using the EC2/On-Premises compute platform, new instances are provisioned
and contain the most up-to-date server configurations.
■ If you're using the AWS Lambda compute platform, you control how traffic is shifted
from your original AWS Lambda function version to your new AWS Lambda function
version.
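As a sketch of how the canary and linear options above are selected in practice, the predefined deployment configuration is simply named on the deployment group. The application, group, and role names below are placeholders.

import boto3

codedeploy = boto3.client("codedeploy")

# Hypothetical Lambda application: shift 10% of traffic first, then the rest after 5 minutes.
codedeploy.create_deployment_group(
    applicationName="orders-api-lambda",
    deploymentGroupName="prod-canary",
    serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
)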
● With AWS CodeDeploy, you can also deploy your applications to your on-premises data centers. Your
on-premises instances will have an instance ID prefix of “mi-xxxxxxxxx”.
AWS CodePipeline
● A fully managed continuous delivery service that helps you automate your release pipelines for
application and infrastructure updates.
● You can easily integrate AWS CodePipeline with third-party services such as GitHub or with your own
custom plugin.
● Concepts
○ A pipeline defines your release process workflow, and describes how a new code change
progresses through your release process.
○ A pipeline comprises a series of stages (e.g., build, test, and deploy), which act as logical
divisions in your workflow. Each stage is made up of a sequence of actions, which are tasks
such as building code or deploying to test environments.
■ Pipelines must have at least two stages. The first stage of a pipeline is required to be a
source stage, and the pipeline is required to additionally have at least one other stage
that is a build or deployment stage.
○ Define your pipeline structure through a declarative JSON document that specifies your release
workflow and its stages and actions. These documents enable you to update existing pipelines
as well as provide starting templates for creating new pipelines.
○ A revision is a change made to the source location defined for your pipeline. It can include
source code, build output, configuration, or data. A pipeline can have multiple revisions flowing
through it at the same time.
○ A stage is a group of one or more actions. A pipeline can have two or more stages.
○ An action is a task performed on a revision. Pipeline actions occur in a specified order, in serial
or in parallel, as determined in the configuration of the stage.
■ You can add actions to your pipeline that are in an AWS Region different from your
pipeline.
■ There are six types of actions
■ Source
■ Build
■ Test
■ Deploy
■ Approval
■ Invoke
○ When an action runs, it acts upon a file or set of files called artifacts. These artifacts can be
worked upon by later actions in the pipeline. You have an artifact store which is an S3 bucket in
the same AWS Region as the pipeline to store items for all pipelines in that Region associated
with your account.
○ The stages in a pipeline are connected by transitions. Transitions can be disabled or enabled
between stages. If all transitions are enabled, the pipeline runs continuously.
○ An approval action prevents a pipeline from transitioning to the next action until permission is
granted. This is useful when you are performing code reviews before the code is deployed to the
next stage.
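For reference, the declarative pipeline structure described above looks roughly like the following boto3 sketch with a source and a build stage; the repository, bucket, project, and role names are placeholders.

import boto3

codepipeline = boto3.client("codepipeline")

codepipeline.create_pipeline(
    pipeline={
        "name": "sample-node-app-pipeline",
        "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
        # Artifact store: an S3 bucket in the same Region as the pipeline.
        "artifactStore": {"type": "S3", "location": "example-pipeline-artifacts"},
        "stages": [
            {
                "name": "Source",
                "actions": [{
                    "name": "CodeCommitSource",
                    "actionTypeId": {"category": "Source", "owner": "AWS",
                                     "provider": "CodeCommit", "version": "1"},
                    "configuration": {"RepositoryName": "sample-node-app", "BranchName": "main"},
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
            {
                "name": "Build",
                "actions": [{
                    "name": "CodeBuild",
                    "actionTypeId": {"category": "Build", "owner": "AWS",
                                     "provider": "CodeBuild", "version": "1"},
                    "configuration": {"ProjectName": "sample-node-app-build"},
                    "inputArtifacts": [{"name": "SourceOutput"}],
                    "outputArtifacts": [{"name": "BuildOutput"}],
                }],
            },
        ],
    }
)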
● Features
○ AWS CodePipeline provides you with a graphical user interface to create, configure, and manage
your pipeline and its various stages and actions.
○ A pipeline starts automatically (default) when a change is made in the source location or when
you manually start the pipeline. You can also set up a rule in CloudWatch to automatically start
a pipeline when events you specify occur.
○ You can model your build, test, and deployment actions to run in parallel in order to increase
your workflow speeds.
○ AWS CodePipeline can pull source code for your pipeline directly from AWS CodeCommit,
GitHub, Amazon ECR, or Amazon S3.
○ It can run builds and unit tests in AWS CodeBuild.
○ It can deploy your changes using AWS CodeDeploy, AWS Elastic Beanstalk, Amazon ECS, AWS
Fargate, Amazon S3, AWS Service Catalog, and/or AWS CloudFormation.
○ You can use the CodePipeline Jenkins plugin to easily register your existing build servers as a
custom action.
○ When you use the console to create or edit a pipeline that has a GitHub source, CodePipeline
creates a webhook. A webhook is an HTTP notification that detects events in another tool, such
as a GitHub repository and connects those external events to a pipeline. CodePipeline deletes
your webhook when you delete your pipeline.
● As a best practice, when you use a Jenkins build provider for your pipeline’s build or test action, install
Jenkins on an Amazon EC2 instance and configure a separate EC2 instance profile. Make sure the
instance profile grants Jenkins only the AWS permissions required to perform tasks for your project,
such as retrieving files from Amazon S3.
AWS X-Ray
● AWS X-Ray analyzes and debugs production, distributed applications, such as those built using a
microservices architecture. With X-Ray, you can identify performance bottlenecks, edge case errors, and
other hard-to-detect issues.
● Features
○ AWS X-Ray can be used with applications running on Amazon EC2, Amazon ECS, AWS Lambda,
and AWS Elastic Beanstalk. You just integrate the X-Ray SDK with your application and install
the X-Ray agent.
○ AWS X-Ray provides an end-to-end, cross-service, application-centric view of requests flowing
through your application by aggregating the data gathered from individual services in your
application into a single unit called atrace.
○ You can set thetrace sampling ratethat is best suitedfor your production applications or
applications in development. X-Ray continually traces requests made to your application and
stores a sampling of the requests for your analysis.
○ AWS X-Ray creates a map of services used by your application with trace data. This provides a
view of connections between services in your application and aggregated data for each service,
including average latency and failure rates. You can create dependency trees, perform
cross-availability zone or region call detections, and more.
○ AWS X-Ray lets you add annotations to data emitted from specific components or services in
your application.
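A hedged sketch of instrumenting Python code with the X-Ray SDK follows; the segment and annotation names are made up for the example, and in Lambda or Elastic Beanstalk most of the segment management shown here is handled for you.

from aws_xray_sdk.core import patch_all, xray_recorder

# Patch supported libraries (e.g., boto3, requests) so downstream calls become subsegments.
patch_all()

# Outside Lambda you open the segment yourself; inside Lambda the service creates it for you.
segment = xray_recorder.begin_segment("checkout-service")
try:
    with xray_recorder.in_subsegment("charge-card") as subsegment:
        # Annotations are indexed, so traces can later be filtered by them.
        subsegment.put_annotation("customer_tier", "premium")
        # ... call the payment provider here ...
finally:
    xray_recorder.end_segment()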
AWS SDKs and Tools
● AWS SDKs and Tools provide programming interfaces and development tools that allow developers to
easily build applications on AWS.
● AWS SDKs are available for popular programming languages like Java, Python, Ruby, .NET, JavaScript,
etc. The SDKs allow developers to access AWS services programmatically from their code. They handle
tasks like authentication, request signing, and error handling.
● The Amazon Web Services CLI (Command Line Interface) is a comprehensive tool for managing
multiple AWS services using commands. For example, the aws ec2 run-instances command launches EC2
instances.
● AWS provides plugins and extensions for popular IDEs like Eclipse, IntelliJ, VS Code, etc. These plugins
help in developing, debugging and deploying AWS applications from within the IDEs.
● AWS X-Ray helps developers analyze and debug distributed applications on AWS. It provides service
maps and traces to help identify issues.
● AWS Amplify is a framework for frontend web and mobile applications that can help with user
authentication, analytics, storage etc.
References:
https://docs.aws.amazon.com/sdk-for-net/v3/developer-guide/sdks-and-tools-ref.html
https://docs.aws.amazon.com/transcribe/latest/dg/getting-started-sdk.html
https://aws.amazon.com/developer/tools/
Amazon SNS
● A web service that makes it easy to set up, operate, and send notifications from the cloud. SNS follows
the “publish-subscribe” (pub-sub) messaging paradigm, with notifications being delivered to clients
using a “push” mechanism rather than requiring clients to periodically check or “poll” for new
information and updates.
Features
● SNS is an event-driven computing hub that has native integration with a wide variety of AWS event
sources (including EC2, S3, and RDS) and AWS event destinations (including SQS and Lambda).
○ Event-driven computing is a model in which subscriber services automatically perform work in
response to events triggered by publisher services. It can automate workflows while decoupling
the services that collectively and independently work to fulfill these workflows.
● Message filtering allows a subscriber to create a filter policy, so that it only gets the notifications it is
interested in.
● Message fanout occurs when a message is sent to a topic and then replicated and pushed to multiple
endpoints. Fanout provides asynchronous event notifications, which in turn allows for parallel
processing.
● SNS mobile notifications allow you to fan out mobile push notifications to iOS, Android, Fire OS,
Windows, and Baidu-based devices. You can also use SNS to fan out text messages (SMS) to 200+
countries and fan out email messages (SMTP).
● Application and system alerts are notifications triggered by predefined thresholds, sent to specified
users by SMS and/or email.
● Push email and text messaging are two ways to transmit messages to individuals or groups via email
and/or SMS.
● Publishers communicate asynchronously with subscribers by producing and sending a message to a
topic, which is a logical access point and communication channel.
● Subscribers consume or receive the message or notification over one of the supported protocols when
they are subscribed to the topic.
● Publishers create topics to send messages, while subscribers subscribe to topics to receive messages.
● SNS FIFO topics support the forwarding of messages to SQS FIFO queues. You can also use SNS to
forward messages to standard queues.
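To make the message filtering and fanout ideas concrete, here is a minimal boto3 sketch; the topic, queue ARN, and attribute names are placeholders.

import json
import boto3

sns = boto3.client("sns")

topic_arn = "arn:aws:sns:us-east-1:123456789012:order-events"        # hypothetical topic
queue_arn = "arn:aws:sqs:us-east-1:123456789012:shipping-service"    # hypothetical queue

# Subscribe the queue with a filter policy so it only receives "order_placed" events.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={"FilterPolicy": json.dumps({"event_type": ["order_placed"]})},
)

# Publish with a matching message attribute; non-matching messages are filtered out.
sns.publish(
    TopicArn=topic_arn,
    Message=json.dumps({"order_id": "1234"}),
    MessageAttributes={"event_type": {"DataType": "String", "StringValue": "order_placed"}},
)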
SNS Topics
● Instead of including a specific destination address in each message, a publisher sends a message to a
topic. SNS matches the topic to a list of subscribers who have subscribed to that topic and delivers the
message to each of those subscribers.
● Each topic has a unique name that identifies the SNS endpoint for publishers to post messages and
subscribers to register for notifications.
● A topic can support subscriptions and notification deliveries over multiple transports.
● When a message is published to an SNS topic that has a Lambda function subscribed to it, the Lambda
function is invoked with the payload of the published message. The Lambda function receives the
message payload as an input parameter and can manipulate the information in the message, publish
the message to other SNS topics, or send the message to other AWS services.
● When you subscribe an SQS queue to an SNS topic, you can publish a message to the topic, and SNS
sends an SQS message to the subscribed queue. The SQS message contains the subject and message
that were published to the topic, along with metadata about the message in a JSON document.
● When you subscribe an HTTP/S endpoint to a topic, you can publish a notification to the topic, and
SNS sends an HTTP POST request delivering the contents of the notification to the subscribed
endpoint. When you subscribe to the endpoint, you select whether SNS uses HTTP or HTTPS to send
the POST request to the endpoint.
AWS Step Functions
● AWS Step Functions is a web service that provides serverless orchestration for modern applications. It
enables you to coordinate the components of distributed applications and microservices using visual
workflows.
● Concepts
○ Step Functions is based on the concepts of tasks and state machines.
■ A task performs work by using an activity or an AWS Lambda function or by passing
parameters to the API actions of other services.
■ A finite state machine can express an algorithm as a number of states, their
relationships, and their input and output.
○ You define state machines using the JSON-based Amazon States Language.
○ A state is referred to by its name, which can be any string, but must be unique within the scope
of the entire state machine. An instance of a state exists until the end of its execution.
■ There are different types of states in AWS Step Functions
■ Task state - Do some work in your state machine. AWS Step Functions can
invoke Lambda functions directly from a task state.
■ Choice state – Make a choice between branches of execution
■ Fail state – Stops execution and marks it as failure
■ Succeed state – Stops execution and marks it as a success
■ Pass state – Simply pass its input to its output or inject some fixed data
■ Wait state – Provide a delay for a certain amount of time or until a specified
time/date
■ Parallel state – Begin parallel branches of execution
■ Map state – Adds a for-each loop condition
■ Common features between states
■ Each state must have a Type field indicating what type of state it is.
■ Each state can have an optional Comment field to hold a human-readable
comment about, or description of, the state.
■ Each state (except a Succeed or Fail state) requires a Next field or, alternatively,
can become a terminal state by specifying an End field.
○ Activities enable you to place a task in your state machine where the work is performed by an
activity worker that can be hosted on Amazon EC2, Amazon ECS, or mobile devices.
○ Activity tasks let you assign a specific step in your workflow to code running in an activity
worker. Service tasks let you connect a step in your workflow to a supported AWS service.
○ With Transitions, after executing a state, AWS Step Functions uses the value of the Next field to
determine the next state to advance to. States can have multiple incoming transitions from
other states.
○ Individual states receive JSON as input and usually pass JSON as output to the next state.
○ A state machine execution occurs when a state machine runs and performs its tasks. Each Step
Functions state machine can have multiple simultaneous executions.
○ State machine updates in AWS Step Functions are eventually consistent.
○ By default, when a state reports an error, AWS Step Functions causes the execution to fail
entirely.
■ Task and Parallel states can have fields named Retry and Catch to retry an execution or
to have a fallback state.
○ The Step Functions console displays a graphical view of your state machine's structure, which
provides a way to visually check a state machine's logic and monitor executions.
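A small Amazon States Language definition, registered via boto3, is sketched below to show a Task state with Retry/Catch and a terminal Fail state; the Lambda ARN, role, and state names are placeholders.

import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "Comment": "Minimal example: one Task state with retry and a fallback state",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2, "IntervalSeconds": 5}],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "HandleFailure"}],
            "End": True,
        },
        "HandleFailure": {"Type": "Fail", "Error": "OrderProcessingFailed"},
    },
}

sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)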
CloudWatch vs CloudTrail
● CloudWatch is a monitoring service for AWS resources and applications. CloudTrail is a web service
that records API activity in your AWS account. They are both useful monitoring tools in AWS.
● By default, CloudWatch offers free basic monitoring for your resources, such as EC2 instances, EBS
volumes, and RDS DB instances. CloudTrail is also enabled by default when you create your AWS
account.
● With CloudWatch, you can collect and track metrics, collect and monitor log files, and set alarms.
CloudTrail, on the other hand, logs information on who made a request, the services used, the actions
performed, parameters for the actions, and the response elements returned by the AWS service.
CloudTrail Logs are then stored in an S3 bucket or a CloudWatch Logs log group that you specify.
● You can enable detailed monitoring from your AWS resources to send metric data to CloudWatch more
frequently, with an additional cost.
● CloudTrail delivers one free copy of management event logs for each AWS region. Management events
include management operations performed on resources in your AWS account, such as when a user
logs in to your account. Logging data events are charged. Data events include resource operations
performed on or within the resource itself, such as S3 object-level API activity or Lambda function
execution activity.
● CloudTrail helps you ensure compliance and regulatory standards.
● CloudWatch Logs reports on application logs, while CloudTrail Logs provide you with specific
information on what occurred in your AWS account.
● Amazon EventBridge (Amazon CloudWatch Events) is a near-real-time stream of system events
describing changes to your AWS resources. CloudTrail focuses more on AWS API calls made in your
AWS account.
● Typically, CloudTrail delivers an event within 15 minutes of the API call. CloudWatch delivers metric
data in 5-minute periods for basic monitoring and 1-minute periods for detailed monitoring. The
CloudWatch Logs Agent will send log data every five seconds by default.
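The split in responsibilities can also be seen from the API side. The boto3 sketch below reads a metric from CloudWatch and looks up recent API activity in CloudTrail; the instance ID and event name are illustrative.

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudtrail = boto3.client("cloudtrail")

# CloudWatch: performance metrics, e.g. average CPU of a (hypothetical) instance over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

# CloudTrail: who called which API, e.g. recent RunInstances calls in the account.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "RunInstances"}],
    MaxResults=5,
)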
● via Command Line
● via SSM Agent
● via AWS CloudFormation
Amazon ECS vs AWS Lambda
Amazon ECS
● Amazon ECS is a highly scalable, high performance container management service that supports Docker
containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS
eliminates the need for you to install, operate, and scale your own cluster management infrastructure.
● With ECS, deploying containerized applications is easily accomplished. This service fits well in running
batch jobs or in a microservice architecture.
● You have a central repository called Amazon ECR where you can upload your Docker images for
safekeeping.
● Applications in ECS can be written in a stateful or stateless manner.
● The Amazon ECS CLI supports Docker Compose, which allows you to simplify your local development
experience as well as easily set up and run your containers on Amazon ECS.
● Since your application still runs on EC2 instances, server management is your responsibility. This gives
you more granular control over your system.
AWS Lambda
● AWS Lambda is a function-as-a-service offering that runs your code in response to events and
automatically manages the compute resources for you, since Lambda is a serverless compute service.
With Lambda, you do not have to worry about managing servers and can focus directly on your
application code.
● Lambda automatically scales your function to meet demand. It is noteworthy, however, that Lambda has
a maximum execution duration of 900 seconds or 15 minutes.
● To allow your Lambda function to access other services such as CloudWatch Logs, you need to create an
execution role that has the necessary permissions to do so.
● You can easily integrate your function with different services such as API Gateway, DynamoDB,
CloudFront, etc. using the Lambda console.
● You can test your function code in the Lambda console before launching it into production. Currently,
Lambda supports only a number of programming languages such as Java, Go, PowerShell, Node.js, C#,
Python, and Ruby. In contrast, ECS is not limited by programming languages, since it focuses primarily
on containerization with Docker.
EC2 Instance Health Check vs ELB Health Check vs Auto Scaling and Custom Health Check
EC2 Instance Health Check
● System Status Checks - These checks detect underlying problems with your instance that require AWS involvement to repair. When a system status check fails, you can choose to wait for AWS to fix the issue, or you can resolve it yourself.
● Instance Status Checks - These monitor the software and network configuration of your individual instance. Amazon EC2 checks the health of an instance by sending an Address Resolution Protocol (ARP) request to the ENI. These checks detect problems that require your involvement to repair.

Elastic Load Balancer (ELB) Health Check
● The load balancer checks the health of the registered instances using either:
  ○ the default health check configuration provided by Elastic Load Balancing, or
  ○ a health check configuration that you configure yourself (for example, for Auto Scaling or custom health checks).
● An HTTP/HTTPS health check succeeds if the instance returns a 200 response code at the configured ping path within the health check interval.
● An SSL health check succeeds if the SSL handshake succeeds.
● ELB health checks do not support WebSockets.
● The load balancer routes requests only to the healthy instances. When an instance becomes impaired, the load balancer resumes routing requests to that instance only after it has been restored to a healthy state.

Auto Scaling and Custom Health Checks
● If you attached a load balancer or target group to your Auto Scaling group, Amazon EC2 Auto Scaling determines the health status of the instances by checking both the EC2 status checks and the Elastic Load Balancing health checks.
● Amazon EC2 Auto Scaling waits until the health check grace period ends before checking the health status of the instance. Ensure that the health check grace period covers the expected startup time for your application.
● The health check grace period does not start until lifecycle hook actions are completed and the instance enters the InService state.
● With custom health checks, you can send an instance's health information directly from your own system to Amazon EC2 Auto Scaling.
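The custom health check path in the last column can be scripted: if your own monitoring decides an instance is unhealthy, you can report that directly to Amazon EC2 Auto Scaling, as in this minimal boto3 sketch (the instance ID is a placeholder).

import boto3

autoscaling = boto3.client("autoscaling")

# Mark an instance as Unhealthy based on your own application-level check.
# Auto Scaling will then replace it according to the group's policies.
autoscaling.set_instance_health(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    HealthStatus="Unhealthy",
    ShouldRespectGracePeriod=True,  # honor the health check grace period described above
)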
Elastic Beanstalk vs CloudFormation vs CodeDeploy

Elastic Beanstalk
● This platform-as-a-service solution is typically for those who want to deploy and manage their applications within minutes in the AWS Cloud without worrying about the underlying infrastructure.

AWS CloudFormation
● The main difference between CloudFormation and Elastic Beanstalk is that CloudFormation deals more with the AWS infrastructure rather than with applications. AWS CloudFormation introduces two concepts:
  ○ The template, a JSON- or YAML-formatted, text-based file that describes all the AWS resources and configurations you need to deploy to run your application.
  ○ The stack, which is the set of AWS resources that are created and managed as a single unit when AWS CloudFormation instantiates a template.
AWS CodeDeploy
● Unlike Elastic Beanstalk, CodeDeploy does not automatically handle capacity provisioning, scaling, and monitoring.
● AWS CodeDeploy is a recommended adjunct to CloudFormation for managing application deployments and updates.
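To make the template/stack distinction concrete, here is a minimal sketch that defines a tiny template inline and creates a stack from it with boto3; the stack name and the S3 bucket resource are illustrative placeholders, not part of this guide.

import json
import boto3

# The template describes the resources you want (here, a single S3 bucket).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

cloudformation = boto3.client("cloudformation")

# The stack is the running set of resources CloudFormation creates from the template.
cloudformation.create_stack(
    StackName="sample-artifact-stack",  # placeholder name
    TemplateBody=json.dumps(template),
)

# Block until the resources have been provisioned.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="sample-artifact-stack")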
Service Control Policies vs IAM Policies

Service Control Policies (SCP)
● An SCP takes precedence over IAM policies.

IAM Policies
● An IAM policy can allow or deny actions. An explicit allow overrides an implicit deny, and an explicit deny overrides an explicit allow.
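A short sketch of the evaluation rule in the IAM column: the hypothetical policy below allows all S3 actions but explicitly denies object deletion, so s3:DeleteObject is rejected even though the broader statement allows it; the policy name is a placeholder.

import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        # Broad allow: every S3 action on every bucket and object.
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
        # Explicit deny: overrides the allow above for object deletion.
        {"Effect": "Deny", "Action": "s3:DeleteObject", "Resource": "*"},
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="deny-s3-deletes-example",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)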
We also recommend that before taking the actual DOP-C02 exam, you allocate some time to check your readiness by taking our AWS practice test course in the Tutorials Dojo Portal. You can also try the free sampler version of our full practice test course here. This will help you identify the topics that you need to improve on and help reinforce the concepts that you need to fully understand in order to pass this certification exam. It also has different training modes that you can choose from, such as Timed mode, Review mode, Section-Based tests, and a Final test plus bonus flashcards. In addition, you can read the technical discussions in our forums or post your queries if you have any. If you have any issues, concerns, or constructive feedback on our eBook, feel free to contact us at [email protected].

On behalf of the Tutorials Dojo team, we wish you all the best on your upcoming AWS Certified DevOps Engineer Professional exam. May it help advance your career, as well as increase your earning potential.

With the right strategy, hard work, and unrelenting persistence, you can definitely make your dreams a reality! You can make it!

Sincerely,
Jon Bonso, Kenneth Samonte, and the Tutorials Dojo Team