AWS Stephane Maarek Practice Test1 From Udemy June
Associate - Results
Attempt 1
Question 1: Correct
An organization wants to delegate access to a set of users from the development
environment so that they can access some resources in the production environment
which is managed under another AWS account.
Both IAM roles and IAM users can be used interchangeably for cross-account
access
Create a new IAM role with the required permissions to access the resources
in the production environment. The users can then assume this IAM role while
accessing the resources from the production environment
(Correct)
Create new IAM user credentials for the production environment and share
these credentials with the set of users from the development environment
Explanation
Correct option:
Create a new IAM role with the required permissions to access the resources in the
production environment. The users can then assume this IAM role while accessing the
resources from the production environment
IAM roles allow you to delegate access to users or services that normally don't have
access to your organization's AWS resources. IAM users or AWS services can assume a
role to obtain temporary security credentials that can be used to make AWS API calls.
Consequently, you don't have to share long-term credentials for access to a resource.
Using IAM roles, it is possible to access cross-account resources.
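As an illustration of how the delegated users obtain temporary credentials, here is a minimal boto3 sketch of calling STS AssumeRole; the role ARN, session name, and bucket name are hypothetical placeholders, not values from the question.

    import boto3

    # Assume the cross-account role defined in the production account
    # (role ARN and session name are hypothetical placeholders).
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ProdResourceAccessRole",
        RoleSessionName="dev-user-session",
    )
    creds = response["Credentials"]

    # Use the temporary credentials to access a resource in the production account.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(s3.list_objects_v2(Bucket="example-production-bucket").get("KeyCount"))

The temporary credentials expire automatically, which is why no long-term credentials ever need to be shared with the development-environment users.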
Incorrect options:
Create new IAM user credentials for the production environment and share these
credentials with the set of users from the development environment - There is no need
to create new IAM user credentials for the production environment, as you can use IAM
roles to access cross-account resources.
It is not possible to access cross-account resources - You can use IAM roles to access
cross-account resources.
Both IAM roles and IAM users can be used interchangeably for cross-account access -
IAM roles and IAM users are separate IAM entities and should not be mixed. Only IAM
roles can be used to access cross-account resources.
Reference:
https://aws.amazon.com/iam/features/manage-roles/
Question 2: Incorrect
A research group needs a fleet of EC2 instances for a specialized task that must deliver
high random I/O performance. Each instance in the fleet would have access to a
dataset that is replicated across the instances. Because of the resilient application
architecture, the specialized task would continue to be processed even if any instance
goes down, as the underlying application architecture would ensure the replacement
instance has access to the required dataset.
Which of the following options is the MOST cost-optimal and resource-efficient solution
to build this fleet of EC2 instances?
(Incorrect)
(Correct)
An instance store provides temporary block-level storage for your instance. This storage
is located on disks that are physically attached to the host instance. Instance store is
ideal for the temporary storage of information that changes frequently such as buffers,
caches, scratch data, and other temporary content, or for data that is replicated across
a fleet of instances, such as a load-balanced pool of web servers. Instance store
volumes are included as part of the instance's usage cost.
Because instance store volumes provide high random I/O performance at low cost (the storage is included in the instance's usage cost) and the resilient architecture can adjust for the loss of any instance, you should use instance store based EC2 instances for this use-case.
Incorrect options:
Use EBS based EC2 instances - EBS based volumes would need to use Provisioned
IOPS (io1) as the storage type and that would incur additional costs. As we are looking
for the most cost-optimal solution, this option is ruled out.
Use EC2 instances with EFS mount points - Using EFS implies that extra resources
would have to be provisioned (compared to using instance store where the storage is
located on disks that are physically attached to the host instance itself). As we are
looking for the most resource-efficient solution, this option is also ruled out.
Use EC2 instances with access to S3 based storage - Using EC2 instances with access
to S3 based storage does not deliver high random I/O performance, this option is just
added as a distractor.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
Question 3: Incorrect
A retail company uses Amazon EC2 instances, API Gateway, Amazon RDS, Elastic Load
Balancer and CloudFront services. To improve the security of these services, the Risk
Advisory group has suggested a feasibility check for using the Amazon GuardDuty
service.
Which of the following would you identify as data sources supported by GuardDuty?
(Correct)
(Incorrect)
VPC Flow Logs, DNS logs, CloudTrail events - Amazon GuardDuty is a threat detection
service that continuously monitors for malicious activity and unauthorized behavior to
protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud,
the collection and aggregation of account and network activities is simplified, but it can
be time-consuming for security teams to continuously analyze event log data for
potential threats. With GuardDuty, you now have an intelligent and cost-effective option
for continuous threat detection in AWS. The service uses machine learning, anomaly
detection, and integrated threat intelligence to identify and prioritize potential threats.
GuardDuty analyzes tens of billions of events across multiple AWS data sources, such
as AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no
software or hardware to deploy or maintain. By integrating with Amazon EventBridge
Events, GuardDuty alerts are actionable, easy to aggregate across multiple accounts,
and straightforward to push into existing event management and workflow systems.
How GuardDuty works: via - https://aws.amazon.com/guardduty/
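For reference, enabling GuardDuty programmatically is essentially one API call; the boto3 sketch below (no account-specific values assumed) creates a detector that then consumes CloudTrail events, VPC Flow Logs, and DNS logs for that account and Region.

    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")

    # Creating a detector turns on GuardDuty for this account/Region; it then
    # automatically analyzes CloudTrail events, VPC Flow Logs, and DNS logs.
    detector = guardduty.create_detector(Enable=True)
    print("Detector ID:", detector["DetectorId"])

    # List any finding IDs the detector has produced so far.
    findings = guardduty.list_findings(DetectorId=detector["DetectorId"])
    print(findings["FindingIds"])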
Incorrect options:
These three options contradict the explanation provided above, so these options are
incorrect.
Reference:
https://aws.amazon.com/guardduty/
Question 4: Correct
A company runs a data processing workflow that takes about 60 minutes to complete.
The workflow can withstand disruptions and it can be started and stopped multiple
times.
Which is the most cost-effective solution for running the workflow?
(Correct)
EC2 instance types: via - https://aws.amazon.com/ec2/pricing/
Amazon EC2 Spot instances allow you to request spare Amazon EC2 computing
capacity for up to 90% off the On-Demand price.
Spot instances are recommended for:
Applications that have flexible start and end times
Applications that are feasible only at very low compute prices
Users with urgent computing needs for large amounts of additional capacity
For the given use case, spot instances offer the most cost-effective solution as the
workflow can withstand disruptions and can be started and stopped multiple times.
For example, considering a process that runs for an hour and needs about 1024 MB of
memory, spot instance pricing for a t2.micro instance (having 1024 MB of RAM) is
$0.0035 per hour.
Spot instance pricing: via - https://aws.amazon.com/ec2/spot/pricing/
Contrast this with the pricing of a Lambda function (having 1024 MB of allocated
memory), which comes out to $0.0000000167 per 1ms or $0.06 per hour
($0.0000000167 * 1000 * 60 * 60 per hour).
Lambda function pricing: via - https://aws.amazon.com/lambda/pricing/
Thus, a spot instance turns out to be about 20 times more cost-effective than a Lambda function for meeting the requirements of the given use case.
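The comparison can be reproduced with simple arithmetic; the sketch below uses the illustrative prices quoted above (spot t2.micro at $0.0035 per hour, Lambda at $0.0000000167 per ms for 1024 MB), which vary by Region and over time.

    # Illustrative prices from the explanation above (Region- and time-dependent).
    spot_price_per_hour = 0.0035            # t2.micro spot, 1024 MB RAM
    lambda_price_per_ms = 0.0000000167      # Lambda, 1024 MB allocated memory

    lambda_price_per_hour = lambda_price_per_ms * 1000 * 60 * 60  # ~0.06 USD
    ratio = lambda_price_per_hour / spot_price_per_hour           # ~17x

    print(f"Lambda per hour: ${lambda_price_per_hour:.4f}")
    print(f"Lambda is roughly {ratio:.0f}x the spot price per hour")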
Incorrect options:
Use AWS Lambda function to run the workflow processes - As mentioned in the
explanation above, a Lambda function turns out to be 20 times more expensive than a
spot instance to meet the workflow requirements of the given use case, so this option is
incorrect. You should also note that the maximum execution time of a Lambda function
is 15 minutes, so the workflow process would be disrupted for sure. On the other hand,
it is certainly possible that the workflow process can be completed in a single run on the
spot instance (the average frequency of Spot Instance interruption across all Regions and instance types is <10%).
You should note that both on-demand and reserved instances are more expensive than
spot instances. In addition, reserved instances have a term of 1 year or 3 years, so they
are not suited for the given workflow. Therefore, both these options are incorrect.
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/ec2/spot/pricing/
https://aws.amazon.com/lambda/pricing/
https://aws.amazon.com/ec2/spot/instance-advisor/
Question 5: Incorrect
A retail company has developed a REST API which is deployed in an Auto Scaling group
behind an Application Load Balancer. The API stores the user data in DynamoDB and
any static content, such as images, are served via S3. On analyzing the usage trends, it
is found that 90% of the read requests are for commonly accessed data across all
users.
As a Solutions Architect, which of the following would you suggest as the MOST
efficient solution to improve the application performance?
(Incorrect)
Enable DAX for DynamoDB and ElastiCache Memcached for S3
(Correct)
Explanation
Correct option:
DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for
Amazon DynamoDB that delivers up to a 10 times performance improvement—from
milliseconds to microseconds—even at millions of requests per second.
DAX is tightly integrated with DynamoDB—you simply provision a DAX cluster, use the
DAX client SDK to point your existing DynamoDB API calls at the DAX cluster, and let
DAX handle the rest. Because DAX is API-compatible with DynamoDB, you don't have to
make any functional application code changes. DAX is used to natively cache
DynamoDB reads.
CloudFront is a content delivery network (CDN) service that delivers static and dynamic
web content, video streams, and APIs around the world, securely and at scale. By
design, delivering data out of CloudFront can be more cost-effective than delivering it
from S3 directly to your users.
When a user requests content that you serve with CloudFront, their request is routed to
a nearby Edge Location. If CloudFront has a cached copy of the requested file,
CloudFront delivers it to the user, providing a fast (low-latency) response. If the file
they’ve requested isn’t yet cached, CloudFront retrieves it from your origin – for
example, the S3 bucket where you’ve stored your content.
So, you can use CloudFront to improve application performance to serve static content
from S3.
Incorrect options:
via - https://aws.amazon.com/elasticache/redis/
Although you can integrate Redis with DynamoDB, it's much more involved than using
DAX which is a much better fit.
ElastiCache Memcached cannot be used as a cache to serve static content from S3, so
both these options are incorrect.
References:
https://aws.amazon.com/dynamodb/dax/
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/
https://aws.amazon.com/elasticache/redis/
Question 6: Incorrect
The engineering team at an in-home fitness company is evaluating multiple in-memory
data stores with the ability to power its on-demand, live leaderboard. The company's
leaderboard requires high availability, low latency, and real-time processing to deliver
customizable user data for the community of users working out together virtually from
the comfort of their home.
Power the on-demand, live leaderboard using DynamoDB as it meets the in-
memory, high availability, low latency requirements
(Incorrect)
(Correct)
Power the on-demand, live leaderboard using RDS Aurora as it meets the in-
memory, high availability, low latency requirements
Power the on-demand, live leaderboard using AWS Neptune as it meets the in-
memory, high availability, low latency requirements
(Correct)
Explanation
Correct options:
Power the on-demand, live leaderboard using ElastiCache Redis as it meets the in-
memory, high availability, low latency requirements
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-
millisecond latency to power internet-scale real-time applications. Amazon ElastiCache
for Redis is a great choice for real-time transactional and analytical processing use
cases such as caching, chat/messaging, gaming leaderboards, geospatial, machine
learning, media streaming, queues, real-time analytics, and session store. ElastiCache
for Redis can be used to power the live leaderboard, so this option is correct.
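To make the leaderboard use case concrete, here is a minimal sketch using the redis-py client against an ElastiCache for Redis endpoint; the endpoint, key name, and player names are hypothetical. Redis sorted sets give low-latency score updates and ranked reads.

    import redis

    # Hypothetical ElastiCache for Redis primary endpoint.
    r = redis.Redis(host="my-leaderboard.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

    # Record workout scores in a sorted set keyed by player name.
    r.zadd("leaderboard", {"alice": 1520, "bob": 1340, "carol": 1760})

    # Read the top 3 players, highest score first.
    for rank, (player, score) in enumerate(r.zrevrange("leaderboard", 0, 2, withscores=True), start=1):
        print(rank, player.decode(), int(score))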
Power the on-demand, live leaderboard using DynamoDB with DynamoDB Accelerator
(DAX) as it meets the in-memory, high availability, low latency requirements
Incorrect options:
Power the on-demand, live leaderboard using AWS Neptune as it meets the in-
memory, high availability, low latency requirements - Amazon Neptune is a fast,
reliable, fully-managed graph database service that makes it easy to build and run
applications that work with highly connected datasets. Neptune is not an in-memory
database, so this option is not correct.
Power the on-demand, live leaderboard using DynamoDB as it meets the in-memory,
high availability, low latency requirements - DynamoDB is not an in-memory database,
so this option is not correct.
Power the on-demand, live leaderboard using RDS Aurora as it meets the in-memory,
high availability, low latency requirements - Amazon Aurora is a MySQL and
PostgreSQL-compatible relational database built for the cloud, that combines the
performance and availability of traditional enterprise databases with the simplicity and
cost-effectiveness of open source databases. Amazon Aurora features a distributed,
fault-tolerant, self-healing storage system that auto-scales up to 128TB per database
instance. Aurora is not an in-memory database, so this option is not correct.
References:
https://aws.amazon.com/elasticache/
https://aws.amazon.com/elasticache/redis/
https://aws.amazon.com/dynamodb/dax/
Question 7: Incorrect
A gaming company uses Amazon Aurora as its primary database service. The company
has now deployed 5 multi-AZ read replicas to increase read throughput and to serve as failover targets. The replicas have been assigned the following failover priority tiers, with the corresponding instance sizes given in parentheses: tier-1 (16TB), tier-1 (32TB), tier-10 (16TB), tier-15 (16TB), tier-15 (32TB).
In the event of a failover, Amazon Aurora will promote which of the following read
replicas?
Tier-1 (32TB)
(Correct)
Tier-15 (32TB)
Tier-1 (16TB)
(Incorrect)
Tier-10 (16TB)
Explanation
Correct option:
Tier-1 (32TB)
For Amazon Aurora, each Read Replica is associated with a priority tier (0-15). In the
event of a failover, Amazon Aurora will promote the Read Replica that has the highest
priority (the lowest numbered tier). If two or more Aurora Replicas share the same
priority, then Amazon RDS promotes the replica that is largest in size. If two or more
Aurora Replicas share the same priority and size, then Amazon Aurora promotes an
arbitrary replica in the same promotion tier.
Therefore, for this problem statement, the Tier-1 (32TB) replica will be promoted.
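The promotion tier is a per-instance setting; a hedged boto3 sketch of setting it (and of triggering a manual failover for testing) is shown below, with made-up instance and cluster identifiers.

    import boto3

    rds = boto3.client("rds")

    # Assign failover priority tier 1 to a hypothetical Aurora replica.
    rds.modify_db_instance(
        DBInstanceIdentifier="aurora-replica-1",
        PromotionTier=1,
        ApplyImmediately=True,
    )

    # Optionally trigger a manual failover of the cluster to test promotion behavior.
    rds.failover_db_cluster(DBClusterIdentifier="aurora-cluster-1")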
Incorrect options:
Tier-15 (32TB)
Tier-1 (16TB)
Tier-10 (16TB)
Given the failover rules discussed earlier in the explanation, these three options are
incorrect.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html#Aurora.Managing.FaultTolerance
Question 8: Correct
A technology blogger wants to write a review on the comparative pricing for various
storage types available on AWS Cloud. The blogger has created a test file of size 1GB
with some random data. Next he copies this test file into AWS S3 Standard storage
class, provisions an EBS volume (General Purpose SSD (gp2)) with 100GB of
provisioned storage and copies the test file into the EBS volume, and lastly copies the
test file into an EFS Standard Storage filesystem. At the end of the month, he analyses
the bill for costs incurred on the respective storage types for the test file.
What is the correct order of the storage charges incurred for the test file on these three
storage types?
Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost
of test file storage on EBS
Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost
of test file storage on EBS
(Correct)
Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost
of test file storage on EFS
Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost
of test file storage on EFS
Explanation
Correct option:
Cost of test file storage on S3 Standard < Cost of test file storage on EFS < Cost of test
file storage on EBS
With Amazon EFS, you pay only for the resources that you use. The EFS Standard
Storage pricing is $0.30 per GB per month. Therefore the cost for storing the test file on
EFS is $0.30 for the month.
For EBS General Purpose SSD (gp2) volumes, the charges are $0.10 per GB-month of
provisioned storage. Therefore, for a provisioned storage of 100GB for this use-case,
the monthly cost on EBS is $0.10*100 = $10. This cost is irrespective of how much
storage is actually consumed by the test file.
For S3 Standard storage, the pricing is $0.023 per GB per month. Therefore, the monthly
storage cost on S3 for the test file is $0.023.
Therefore this is the correct option.
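The monthly charges can be checked with a few lines of arithmetic using the per-GB prices quoted above (actual prices vary by Region and over time):

    # Per-GB-month prices quoted in the explanation (Region-dependent, subject to change).
    file_size_gb = 1
    provisioned_ebs_gb = 100

    s3_standard = 0.023 * file_size_gb       # $0.023  (billed on actual usage)
    efs_standard = 0.30 * file_size_gb       # $0.30   (billed on actual usage)
    ebs_gp2 = 0.10 * provisioned_ebs_gb      # $10.00  (billed on provisioned size)

    costs = [("S3 Standard", s3_standard), ("EFS", efs_standard), ("EBS gp2", ebs_gp2)]
    print(sorted(costs, key=lambda pair: pair[1]))  # cheapest to most expensive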
Incorrect options:
Cost of test file storage on S3 Standard < Cost of test file storage on EBS < Cost of test
file storage on EFS
Cost of test file storage on EFS < Cost of test file storage on S3 Standard < Cost of test
file storage on EBS
Cost of test file storage on EBS < Cost of test file storage on S3 Standard < Cost of test
file storage on EFS
Following the computations shown earlier in the explanation, these three options are
incorrect.
References:
https://aws.amazon.com/ebs/pricing/
https://aws.amazon.com/s3/pricing/
https://aws.amazon.com/efs/pricing/
Question 9: Correct
A company has a web application that runs 24*7 in the production environment. The
development team at the company runs a clone of the same application in the dev
environment for up to 8 hours every day. The company wants to build the MOST cost-
optimal solution by deploying these applications using the best-fit pricing options for
EC2 instances.
Use on-demand EC2 instances for the production application and spot
instances for the dev application
Use reserved EC2 instances for the production application and spot instances
for the dev application
Use reserved EC2 instances for the production application and spot block
instances for the dev application
Use reserved EC2 instances for the production application and on-demand
instances for the dev application
(Correct)
Explanation
Correct option:
Use reserved EC2 instances for the production application and on-demand instances
for the dev application
There are multiple pricing options for EC2 instances, such as On-Demand, Savings
Plans, Reserved Instances, and Spot Instances.
via - https://aws.amazon.com/ec2/pricing/
Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone. You have the flexibility to change families, OS types, and tenancies while benefitting from RI pricing when you use Convertible RIs.
via - https://aws.amazon.com/ec2/pricing/
For the given use case, you can use reserved EC2 instances for the production
application as it runs 24*7. This way you can get up to a 72% discount if you opt for a 3-year
term. You can use on-demand instances for the dev application since it is only used for
up to 8 hours per day. On-demand offers the flexibility to only pay for the EC2 instance
when it is being used (0 to 8 hours for the given use case).
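A rough cost comparison for the two environments can be sketched as below; the hourly price is an illustrative assumption (not a published AWS price), and the 72% figure is the maximum RI discount mentioned above.

    # Illustrative, assumed numbers for a single instance (not actual AWS prices).
    on_demand_per_hour = 0.10
    ri_discount = 0.72                      # up to 72% off with a 3-year RI

    hours_per_month = 730
    prod_hours = hours_per_month            # production runs 24*7
    dev_hours = 8 * 30                      # dev runs up to 8 hours/day

    prod_reserved = prod_hours * on_demand_per_hour * (1 - ri_discount)
    prod_on_demand = prod_hours * on_demand_per_hour
    dev_on_demand = dev_hours * on_demand_per_hour

    print(f"Prod on RI: ${prod_reserved:.2f} vs prod on-demand: ${prod_on_demand:.2f}")
    print(f"Dev on-demand (pay only while running): ${dev_on_demand:.2f}")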
Incorrect options:
Use reserved EC2 instances for the production application and spot block instances
for the dev application - Spot blocks can only be used for a span of up to 6 hours, so
this option does not meet the requirements of the given use case where the dev
application can be up and running up to 8 hours. You should also note that AWS has
stopped offering Spot blocks to new customers.
Use reserved EC2 instances for the production application and spot instances for the
dev application
Use on-demand EC2 instances for the production application and spot instances for
the dev application
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS
cloud. Spot Instances are available at up to a 90% discount compared to On-Demand
prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible
applications.
via - https://aws.amazon.com/ec2/spot/
Spot instances can be taken back by AWS with two minutes of notice, so spot instances
cannot be reliably used for running the dev application (which can be up and running for
up to 8 hours). So both these options are incorrect.
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/blogs/aws/new-ec2-spot-blocks-for-defined-duration-workloads/
https://aws.amazon.com/ec2/spot/
Question 10: Correct
One of the biggest football leagues in Europe has granted the distribution rights for live
streaming its matches in the US to a Silicon Valley based streaming services company.
As per the terms of distribution, the company must make sure that only users from the
US are able to live stream the matches on their platform. Users from other countries in
the world must be denied access to these live-streamed matches.
Which of the following options would allow the company to enforce these streaming
restrictions? (Select two)
(Correct)
(Correct)
You can use georestriction, also known as geo-blocking, to prevent users in specific
geographic locations from accessing content that you're distributing through a
CloudFront web distribution. When a user requests your content, CloudFront typically
serves the requested content regardless of where the user is located. If you need to
prevent users in specific countries from accessing your content, you can use the
CloudFront geo restriction feature to do one of the following: Allow your users to access
your content only if they're in one of the countries on a whitelist of approved countries.
Prevent your users from accessing your content if they're in one of the countries on a
blacklist of banned countries. So this option is also correct.
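In the CloudFront API, geo restriction is part of the distribution configuration; the fragment below shows, as a Python dict, the Restrictions block you would include in the DistributionConfig passed to UpdateDistribution, whitelisting only the US (a sketch only; the full config and ETag handling are omitted).

    # Fragment of a CloudFront DistributionConfig: allow viewers from the US only.
    # This dict would be merged into the full config retrieved via get_distribution_config
    # and sent back with update_distribution (ETag/IfMatch handling omitted here).
    geo_restriction = {
        "Restrictions": {
            "GeoRestriction": {
                "RestrictionType": "whitelist",  # or "blacklist" to ban specific countries
                "Quantity": 1,
                "Items": ["US"],
            }
        }
    }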
Incorrect options:
Use Route 53 based latency routing policy to restrict distribution of content to only the
locations in which you have distribution rights - Use latency based routing when you
have resources in multiple AWS Regions and you want to route traffic to the region that
provides the lowest latency. To use latency-based routing, you create latency records
for your resources in multiple AWS Regions. When Route 53 receives a DNS query for
your domain or subdomain (example.com or acme.example.com), it determines which
AWS Regions you've created latency records for, determines which region gives the user
the lowest latency, and then selects a latency record for that region. Route 53 responds
with the value from the selected record, such as the IP address for a web server.
Use Route 53 based weighted routing policy to restrict distribution of content to only
the locations in which you have distribution rights - Weighted routing lets you associate
multiple resources with a single domain name (example.com) or subdomain name
(acme.example.com) and choose how much traffic is routed to each resource. This can
be useful for a variety of purposes, including load balancing and testing new versions of
the software.
Use Route 53 based failover routing policy to restrict distribution of content to only the
locations in which you have distribution rights - Failover routing lets you route traffic to
a resource when the resource is healthy or to a different resource when the first
resource is unhealthy. The primary and secondary records can route traffic to anything
from an Amazon S3 bucket that is configured as a website to a complex tree of records.
Weighted routing or failover routing or latency routing cannot be used to restrict the
distribution of content to only the locations in which you have distribution rights. So all
three options above are incorrect.
Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo
As a solutions architect, which of the following would you suggest as the BEST possible
solution to this issue?
The engineering team needs to provision more servers running the SNS
service
Amazon SNS message deliveries to AWS Lambda have crossed the account
concurrency quota for Lambda, so the team needs to contact AWS support to
raise the account limit
(Correct)
Amazon SNS has hit a scalability limit, so the team needs to contact AWS
support to raise the account limit
The engineering team needs to provision more servers running the Lambda
service
Explanation
Correct option:
Amazon SNS message deliveries to AWS Lambda have crossed the account
concurrency quota for Lambda, so the team needs to contact AWS support to raise the
account limit
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully
managed pub/sub messaging service that enables you to decouple microservices,
distributed systems, and serverless applications.
How SNS works: via - https://aws.amazon.com/sns/
With AWS Lambda, you can run code without provisioning or managing servers. You pay
only for the compute time that you consume—there’s no charge when your code isn’t
running.
AWS Lambda currently supports 1000 concurrent executions per AWS account per
region. If your Amazon SNS message deliveries to AWS Lambda contribute to crossing
these concurrency quotas, your Amazon SNS message deliveries will be throttled. You
need to contact AWS support to raise the account limit. Therefore this option is correct.
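You can inspect the account-level concurrency quota programmatically; a small boto3 sketch is shown below (the function name used for reserved concurrency is hypothetical).

    import boto3

    lam = boto3.client("lambda")

    # Account-wide concurrency quota and unreserved capacity for this Region.
    settings = lam.get_account_settings()
    print("Concurrent executions limit:", settings["AccountLimit"]["ConcurrentExecutions"])
    print("Unreserved concurrency:", settings["AccountLimit"]["UnreservedConcurrentExecutions"])

    # Optionally reserve concurrency for the SNS-triggered function so other
    # functions cannot starve it (function name is a placeholder).
    lam.put_function_concurrency(
        FunctionName="sns-subscriber-function",
        ReservedConcurrentExecutions=200,
    )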
Incorrect options:
Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to
raise the account limit - Amazon SNS leverages the proven AWS cloud to dynamically
scale with your application. You don't need to contact AWS support, as SNS is a fully
managed service, taking care of the heavy lifting related to capacity planning,
provisioning, monitoring, and patching. Therefore, this option is incorrect.
The engineering team needs to provision more servers running the SNS service
The engineering team needs to provision more servers running the Lambda service
As both Lambda and SNS are serverless and fully managed services, the engineering
team cannot provision more servers. Both of these options are incorrect.
References:
https://aws.amazon.com/sns/
https://aws.amazon.com/sns/faqs/
Push score updates to an SNS topic, subscribe a Lambda function to this SNS
topic to process the updates and then store these processed updates in a SQL
database running on Amazon EC2
Push score updates to Kinesis Data Streams which uses a Lambda function to
process these updates and then store these processed updates in DynamoDB
(Correct)
Push score updates to an SQS queue which uses a fleet of EC2 instances (with
Auto Scaling) to process these updates in the SQS queue and then store these
processed updates in an RDS MySQL database
Push score updates to Kinesis Data Streams which uses a fleet of EC2 instances
(with Auto Scaling) to process the updates in Kinesis Data Streams and then
store these processed updates in DynamoDB
Explanation
Correct option:
Push score updates to Kinesis Data Streams which uses a Lambda function to process
these updates and then store these processed updates in DynamoDB
To help ingest real-time data or streaming data at large scales, you can use Amazon
Kinesis Data Streams (KDS). KDS can continuously capture gigabytes of data per
second from hundreds of thousands of sources. The data collected is available in
milliseconds, enabling real-time analytics. KDS provides ordering of records, as well as
the ability to read and/or replay records in the same order to multiple Amazon Kinesis
Applications.
Lambda integrates natively with Kinesis Data Streams. The polling, checkpointing, and
error handling complexities are abstracted when you use this native integration. The
processed data can then be configured to be saved in DynamoDB.
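A minimal sketch of the Lambda consumer is shown below; Kinesis delivers records base64-encoded, and the table name and payload fields are hypothetical.

    import base64
    import json
    import boto3

    table = boto3.resource("dynamodb").Table("game-scores")  # hypothetical table

    def handler(event, context):
        # Lambda's native Kinesis integration delivers a batch of records;
        # each record's data payload is base64-encoded.
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            table.put_item(Item={
                "player_id": payload["player_id"],      # hypothetical fields
                "timestamp": payload["timestamp"],
                "score": int(payload["score"]),
            })
        return {"processed": len(event["Records"])}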
Incorrect options:
Push score updates to an SQS queue which uses a fleet of EC2 instances (with Auto
Scaling) to process these updates in the SQS queue and then store these processed
updates in an RDS MySQL database
Push score updates to Kinesis Data Streams which uses a fleet of EC2 instances (with
Auto Scaling) to process the updates in Kinesis Data Streams and then store these
processed updates in DynamoDB
Push score updates to an SNS topic, subscribe a Lambda function to this SNS topic to
process the updates, and then store these processed updates in a SQL database
running on Amazon EC2
These three options use EC2 instances as part of the solution architecture. The use-
case seeks to minimize the management overhead required to maintain the solution.
However, EC2 instances involve several maintenance activities such as managing the
guest operating system and software deployed to the guest operating system, including
updates and security patches, etc. Hence these options are incorrect.
Reference:
https://aws.amazon.com/blogs/big-data/best-practices-for-consuming-amazon-kinesis-data-streams-using-aws-lambda/
Which of the following AWS services is the MOST efficient solution for the given use-
case?
(Correct)
(Incorrect)
Explanation
Correct option:
AWS Storage Gateway - File Gateway
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises
access to virtually unlimited cloud storage. The service provides three different types of
gateways – Tape Gateway, File Gateway, and Volume Gateway – that seamlessly
connect on-premises applications to cloud storage, caching data locally for low-latency
access.
AWS Storage Gateway's file interface, or file gateway, offers you a seamless way to
connect to the cloud in order to store application data files and backup images as
durable objects on Amazon S3 cloud storage. File gateway offers SMB or NFS-based
access to data in Amazon S3 with local caching. As the company wants to integrate
data files from its analytical instruments into AWS via an NFS interface, AWS Storage Gateway - File Gateway is the correct answer.
Incorrect options:
AWS Storage Gateway - Volume Gateway - You can configure the AWS Storage
Gateway service as a Volume Gateway to present cloud-based iSCSI block storage
volumes to your on-premises applications. Volume Gateway does not support NFS
interface, so this option is not correct.
AWS Storage Gateway - Tape Gateway - AWS Storage Gateway - Tape Gateway allows
moving tape backups to the cloud. Tape Gateway does not support NFS interface, so
this option is not correct.
AWS Site-to-Site VPN - AWS Site-to-Site VPN enables you to securely connect your on-
premises network or branch office site to your Amazon Virtual Private Cloud (Amazon
VPC). You can securely extend your data center or branch office network to the cloud
with an AWS Site-to-Site VPN (Site-to-Site VPN) connection. It uses internet protocol
security (IPSec) communications to create encrypted VPN tunnels between two
locations. You cannot use AWS Site-to-Site VPN to integrate data files via the NFS
interface, so this option is not correct.
References:
https://aws.amazon.com/storagegateway/
https://aws.amazon.com/storagegateway/volume/
https://aws.amazon.com/storagegateway/file/
https://aws.amazon.com/storagegateway/vtl/
Given these constraints, which of the following solutions is the BEST fit to develop this
car-as-a-sensor service?
Ingest the sensor data in an Amazon SQS standard queue, which is polled by an
application running on an EC2 instance and the data is written into an auto-
scaled DynamoDB table for downstream processing
Ingest the sensor data in an Amazon SQS standard queue, which is polled by a
Lambda function in batches and the data is written into an auto-scaled
DynamoDB table for downstream processing
(Correct)
Ingest the sensor data in Kinesis Data Firehose, which directly writes the data
into an auto-scaled DynamoDB table for downstream processing
(Incorrect)
Explanation
Correct option:
Ingest the sensor data in an Amazon SQS standard queue, which is polled by a Lambda
function in batches and the data is written into an auto-scaled DynamoDB table for
downstream processing
AWS Lambda lets you run code without provisioning or managing servers. You pay only
for the compute time you consume. Amazon Simple Queue Service (SQS) is a fully
managed message queuing service that enables you to decouple and scale
microservices, distributed systems, and serverless applications. SQS offers two types of
message queues. Standard queues offer maximum throughput, best-effort ordering, and
at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are
processed exactly once, in the exact order that they are sent.
AWS manages all ongoing operations and underlying infrastructure needed to provide a
highly available and scalable message queuing service. With SQS, there is no upfront
cost, no need to acquire, install, and configure messaging software, and no time-
consuming build-out and maintenance of supporting infrastructure. SQS queues are
dynamically created and scale automatically so you can build and grow applications
quickly and efficiently.
As there is no need to manually provision the capacity, this is the correct option.
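A hedged sketch of the SQS-triggered Lambda consumer is shown below; SQS delivers messages in event["Records"] with the payload in the body field, and the table and attribute names are placeholders.

    import json
    from decimal import Decimal
    import boto3

    table = boto3.resource("dynamodb").Table("sensor-readings")  # hypothetical table

    def handler(event, context):
        # Lambda polls the SQS queue and invokes this function with a batch of messages.
        for message in event["Records"]:
            reading = json.loads(message["body"])
            table.put_item(Item={
                "vehicle_id": reading["vehicle_id"],                   # hypothetical fields
                "recorded_at": reading["recorded_at"],
                "sensor_value": Decimal(str(reading["sensor_value"])),  # DynamoDB needs Decimal, not float
            })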
Incorrect options:
Ingest the sensor data in Kinesis Data Firehose, which directly writes the data into an
auto-scaled DynamoDB table for downstream processing
Amazon Kinesis Data Firehose is a fully managed service for delivering real-time
streaming data to destinations such as Amazon Simple Storage Service (Amazon S3),
Amazon Redshift, Amazon OpenSearch Service, Splunk, and any custom HTTP endpoint
or HTTP endpoints owned by supported third-party service providers, including Datadog,
Dynatrace, LogicMonitor, MongoDB, New Relic, and Sumo Logic.
Firehose cannot directly write into a DynamoDB table, so this option is incorrect.
Ingest the sensor data in an Amazon SQS standard queue, which is polled by an
application running on an EC2 instance and the data is written into an auto-scaled
DynamoDB table for downstream processing
Ingest the sensor data in a Kinesis Data Streams, which is polled by an application
running on an EC2 instance and the data is written into an auto-scaled DynamoDB
table for downstream processing
Using an application on an EC2 instance is ruled out as the carmaker wants to use fully
serverless components. So both these options are incorrect.
References:
https://aws.amazon.com/sqs/
https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
https://aws.amazon.com/kinesis/data-streams/faqs/
(Correct)
Use Amazon Storage Gateway’s File Gateway to provide low-latency, on-
premises access to fully managed file shares in Amazon S3. The applications
deployed on AWS can access this data directly from Amazon S3
Explanation
Correct option:
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon FSx for Windows File Server. The applications
deployed on AWS can access this data directly from Amazon FSx in AWS
For user or team file shares, and file-based application migrations, Amazon FSx File
Gateway provides low-latency, on-premises access to fully managed file shares in
Amazon FSx for Windows File Server. For applications deployed on AWS, you may
access your file shares directly from Amazon FSx in AWS.
For your native Windows workloads and users, or your SMB clients, Amazon FSx for
Windows File Server provides all of the benefits of a native Windows SMB environment
that is fully managed and secured and scaled like any other AWS service. You get
detailed reporting, replication, backup, failover, and support for native Windows tools
like DFS and Active Directory.
via - https://aws.amazon.com/storagegateway/file/
Incorrect options:
Amazon Storage Gateway’s File Gateway does not support file shares for native
Windows workloads, so this option is incorrect.
The given use case requires native Windows support for the applications. File Gateway
can only be used to access S3 objects using a file system protocol, so this option is
incorrect.
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon EFS. The applications deployed on AWS can access
this data directly from Amazon EFS - Amazon FSx File Gateway provides access to fully
managed file shares in Amazon FSx for Windows File Server and it does not support
EFS. You should also note that EFS uses the Network File System version 4 (NFS v4)
protocol and it does not support SMB protocol. Therefore this option is incorrect for the
given use case.
References:
https://aws.amazon.com/storagegateway/file/fsx/
https://aws.amazon.com/storagegateway/faqs/
https://aws.amazon.com/blogs/storage/aws-reinvent-recap-choosing-storage-for-on-premises-file-based-workloads/
As a solutions architect, which of the following solutions would you suggest to help
address the given requirement?
Use Amazon Inspector to monitor any malicious activity on data stored in S3.
Use security assessments provided by Amazon Inspector to check for
vulnerabilities on EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3.
Use security assessments provided by Amazon GuardDuty to check for
vulnerabilities on EC2 instances
(Incorrect)
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3.
Use security assessments provided by Amazon GuardDuty to check for
vulnerabilities on EC2 instances
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3.
Use security assessments provided by Amazon Inspector to check for
vulnerabilities on EC2 instances
(Correct)
Explanation
Correct option:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use
security assessments provided by Amazon Inspector to check for vulnerabilities on
EC2 instances
Amazon GuardDuty offers threat detection that enables you to continuously monitor
and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty
analyzes continuous streams of meta-data generated from your account and network
activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also
uses integrated threat intelligence such as known malicious IP addresses, anomaly
detection, and machine learning to identify threats more accurately.
How GuardDuty works: via - https://aws.amazon.com/guardduty/
Amazon Inspector security assessments help you check for unintended network
accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2
instances. Amazon Inspector assessments are offered to you as pre-defined rules
packages mapped to common security best practices and vulnerability definitions.
Incorrect options:
Use Amazon GuardDuty to monitor any malicious activity on data stored in S3. Use
security assessments provided by Amazon GuardDuty to check for vulnerabilities on
EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use
security assessments provided by Amazon Inspector to check for vulnerabilities on
EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in S3. Use
security assessments provided by Amazon GuardDuty to check for vulnerabilities on
EC2 instances
These three options contradict the explanation provided above, so these options are
incorrect.
References:
https://aws.amazon.com/guardduty/
https://aws.amazon.com/inspector/
Which of the following solutions would you recommend for the given use-case? (Select
two)
(Correct)
(Correct)
Use AWS Shield
Explanation
Correct options:
You can use Aurora replicas and CloudFront distribution to make the application more
resilient to spikes in request rates.
Aurora Replicas have two main purposes. You can issue queries to them to scale the
read operations for your application. You typically do so by connecting to the reader
endpoint of the cluster. That way, Aurora can spread the load for read-only connections
across as many Aurora Replicas as you have in the cluster. Aurora Replicas also help to
increase availability. If the writer instance in a cluster becomes unavailable, Aurora
automatically promotes one of the reader instances to take its place as the new writer.
Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB
cluster spans within an AWS Region.
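Adding an Aurora Replica is a matter of creating another DB instance in the same cluster; the boto3 sketch below uses made-up identifiers and assumes an Aurora MySQL cluster.

    import boto3

    rds = boto3.client("rds")

    # Add a reader instance to an existing (hypothetical) Aurora MySQL cluster.
    rds.create_db_instance(
        DBInstanceIdentifier="app-aurora-reader-2",
        DBClusterIdentifier="app-aurora-cluster",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
    )

    # Applications should send read-only traffic to the cluster's reader endpoint,
    # which spreads the load across all replicas.
    cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora-cluster")
    print(cluster["DBClusters"][0]["ReaderEndpoint"])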
Amazon CloudFront is a fast content delivery network (CDN) service that securely
delivers data, videos, applications, and APIs to customers globally with low latency, high
transfer speeds, all within a developer-friendly environment. CloudFront points of
presence (POPs) (edge locations) make sure that popular content can be served quickly
to your viewers. CloudFront also has regional edge caches that bring more of your
content closer to your viewers, even when the content is not popular enough to stay at a
POP, to help improve performance for that content.
CloudFront offers an origin failover feature to help support your data resiliency needs.
CloudFront is a global service that delivers your content through a worldwide network of
data centers called edge locations or points of presence (POPs). If your content is not
already cached in an edge location, CloudFront retrieves it from an origin that you've
identified as the source for the definitive version of the content.
Incorrect options:
Use AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS)
protection service that safeguards applications running on AWS. AWS Shield provides
always-on detection and automatic inline mitigations that minimize application
downtime and latency. There are two tiers of AWS Shield - Standard and Advanced.
Shield cannot be used to improve application resiliency to handle spikes in traffic.
Use AWS Global Accelerator - AWS Global Accelerator is a service that improves the
availability and performance of your applications with local or global users. It provides
static IP addresses that act as a fixed entry point to your application endpoints in a
single or multiple AWS Regions, such as your Application Load Balancers, Network Load
Balancers or Amazon EC2 instances. Global Accelerator is a good fit for non-HTTP use
cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use
cases that specifically require static IP addresses or deterministic, fast regional failover.
Since CloudFront is better for improving application resiliency to handle spikes in traffic,
this option is ruled out.
Use AWS Direct Connect - AWS Direct Connect lets you establish a dedicated network
connection between your network and one of the AWS Direct Connect locations. Using
industry-standard 802.1q VLANs, this dedicated connection can be partitioned into
multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it
uses dedicated, private network connections between your intranet and Amazon VPC.
Direct Connect cannot be used to improve application resiliency to handle spikes in
traffic.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/disaster-recovery-resiliency.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/disaster-recovery-resiliency.html
Can you help the intern by identifying those storage volume types that CANNOT be used
as boot volumes while creating the instances? (Select two)
Instance Store
(Correct)
(Correct)
Explanation
Correct options:
Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be used
as a boot volume, so these two options are correct.
Please see this detailed overview of the volume types for EBS volumes: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
Incorrect options:
Instance Store
General Purpose SSD (gp2), Provisioned IOPS SSD (io1), and Instance Store can be used
as a boot volume.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
(Correct)
(Incorrect)
Explanation
Correct option:
AMI Overview: via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
Incorrect options:
As mentioned earlier in the explanation, when the new AMI is copied from region A into
region B, it also creates a snapshot in region B because AMIs are based on the
underlying snapshots. In addition, an instance is created from this AMI in region B. So,
we have 1 EC2 instance, 1 AMI and 1 snapshot in region B. Hence all three options are
incorrect.
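Programmatically, the cross-Region copy and the launch in region B look roughly like the boto3 sketch below (Region names, AMI ID, and instance type are placeholders); the copy implicitly creates the backing snapshot in the destination Region.

    import boto3

    # Copy the AMI from region A (us-east-1) into region B (eu-west-1).
    ec2_b = boto3.client("ec2", region_name="eu-west-1")
    copied = ec2_b.copy_image(
        Name="my-app-ami-copy",
        SourceImageId="ami-0123456789abcdef0",   # hypothetical AMI in region A
        SourceRegion="us-east-1",
    )

    # Wait until the copied AMI (and its snapshot) is available in region B.
    ec2_b.get_waiter("image_available").wait(ImageIds=[copied["ImageId"]])

    # Launching from the copied AMI creates the EC2 instance in region B.
    ec2_b.run_instances(
        ImageId=copied["ImageId"],
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )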
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
Which of the following are the MOST cost-effective options for completing the data
transfer and establishing connectivity? (Select two)
(Incorrect)
(Correct)
(Incorrect)
(Correct)
Explanation
Correct options:
Order 10 Snowball Edge Storage Optimized devices to complete the one-time data
transfer
Snowball Edge Storage Optimized is the optimal choice if you need to securely and
quickly transfer dozens of terabytes to petabytes of data to AWS. It provides up to 80
TB of usable HDD storage, 40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb
network connectivity to address large scale data transfer and pre-processing use cases.
As each Snowball Edge Storage Optimized device can handle 80TB of data, you can
order 10 such devices to take care of the data transfer for all applications.
Exam Alert:
The original Snowball devices were transitioned out of service and Snowball Edge
Storage Optimized are now the primary devices used for data transfer. You may see the
Snowball device on the exam, just remember that the original Snowball device had 80TB
of storage space.
AWS Site-to-Site VPN enables you to securely connect your on-premises network or
branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can
securely extend your data center or branch office network to the cloud with an AWS
Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish
encrypted network connectivity between your intranet and Amazon VPC over the
Internet. VPN Connections can be configured in minutes and are a good solution if you
have an immediate need, have low to modest bandwidth requirements, and can tolerate
the inherent variability in Internet-based connectivity.
Therefore this option is the right fit for the given use-case as the connectivity can be
easily established within the given timeframe.
Incorrect options:
Order 1 Snowmobile to complete the one-time data transfer - Each Snowmobile has a
total capacity of up to 100 petabytes. To migrate large datasets of 10PB or more in a
single location, you should use Snowmobile. For datasets less than 10PB or distributed
in multiple locations, you should use Snowball. So Snowmobile is not the right fit for this
use-case.
Set up AWS Direct Connect to establish connectivity between the on-premises data
center and AWS Cloud - AWS Direct Connect lets you establish a dedicated network
connection between your network and one of the AWS Direct Connect locations. Using
industry-standard 802.1q VLANs, this dedicated connection can be partitioned into
multiple virtual interfaces. AWS Direct Connect does not involve the Internet; instead, it
uses dedicated, private network connections between your intranet and Amazon VPC.
Direct Connect involves significant monetary investment and takes at least a month to
set up, therefore it's not the correct fit for this use-case.
Order 70 Snowball Edge Storage Optimized devices to complete the one-time data
transfer - As the data-transfer can be completed with just 10 Snowball Edge Storage
Optimized devices, there is no need to order 70 devices.
References:
https://aws.amazon.com/snowball/faqs/
https://aws.amazon.com/vpn/
https://aws.amazon.com/snowmobile/faqs/
https://aws.amazon.com/directconnect/
Which of the following content types skip the regional edge cache? (Select two)
(Correct)
User-generated videos
Amazon CloudFront is a fast content delivery network (CDN) service that securely
delivers data, videos, applications, and APIs to customers globally with low latency, high
transfer speeds, all within a developer-friendly environment.
CloudFront points of presence (POPs) (edge locations) make sure that popular content
can be served quickly to your viewers. CloudFront also has regional edge caches that
bring more of your content closer to your viewers, even when the content is not popular
enough to stay at a POP, to help improve performance for that content.
Incorrect options:
User-generated videos
The following types of content flow through the regional edge caches: user-generated content, such as video, photos, or artwork; e-commerce assets, such as product photos and videos; and static content, such as style sheets and JavaScript files. Hence these three options are not correct.
Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html
Question 22: Incorrect
A file-hosting service uses Amazon S3 under the hood to power its storage offerings.
Currently all the customer files are uploaded directly under a single S3 bucket. The
engineering team has started seeing scalability issues where customer file uploads
have started failing during the peak access hours with more than 5000 requests per
second.
Which of the following is the MOST resource efficient and cost-optimal way of
addressing this issue?
(Correct)
Change the application architecture to create a new S3 bucket for each day's
data and then upload the daily files directly under that day's bucket
(Incorrect)
Explanation
Correct option:
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. Your
applications can easily achieve thousands of transactions per second in request
performance when uploading and retrieving storage from Amazon S3. Amazon S3
automatically scales to high request rates. For example, your application can achieve at
least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per
prefix in a bucket.
There are no limits to the number of prefixes in a bucket. You can increase your read or
write performance by parallelizing reads. For example, if you create 10 prefixes in an
Amazon S3 bucket to parallelize reads, you could scale your read performance to
55,000 read requests per second. Please see this example for more clarity on prefixes: if you have a file f1 stored in an S3 object path like s3://your_bucket_name/folder1/sub_folder_1/f1, then /folder1/sub_folder_1/ becomes the prefix for file f1.
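In practice this just means spreading object keys across prefixes; the sketch below uploads each customer's files under a customer-specific prefix in the single bucket (the bucket and key names are hypothetical).

    import boto3

    s3 = boto3.client("s3")
    bucket = "file-hosting-service-bucket"   # hypothetical bucket name

    def upload_customer_file(customer_id, local_path, file_name):
        # Each customer gets its own prefix, so request rates scale per prefix
        # (at least 3,500 writes and 5,500 reads per second per prefix).
        key = f"customers/{customer_id}/{file_name}"
        s3.upload_file(local_path, bucket, key)
        return key

    print(upload_customer_file("cust-42", "/tmp/report.pdf", "report.pdf"))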
Some data lake applications on Amazon S3 scan millions or billions of objects for
queries that run over petabytes of data. These data lake applications achieve single-
instance transfer rates that maximize the network interface used for their Amazon EC2
instance, which can be up to 100 Gb/s on a single instance. These applications then
aggregate throughput across multiple instances to get multiple terabits per second.
Therefore creating customer-specific custom prefixes within the single bucket and then
uploading the daily files into those prefixed locations is the BEST solution for the given
constraints.
Optimizing Amazon S3 Performance: via - https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
Incorrect options:
Change the application architecture to create a new S3 bucket for each customer and
then upload each customer's files directly under the respective buckets - Creating a
new S3 bucket for each new customer is an inefficient way of handling resource
availability (S3 buckets need to be globally unique) as some customers may use the
service sparingly but the bucket name is locked for them forever. Moreover, this is really
not required as we can use S3 prefixes to improve the performance.
Change the application architecture to create a new S3 bucket for each day's data and
then upload the daily files directly under that day's bucket - Creating a new S3 bucket
for each new day's data is also an inefficient way of handling resource availability (S3
buckets need to be globally unique) as some of the bucket names may not be available
for daily data processing. Moreover, this is really not required as we can use S3 prefixes
to improve the performance.
Change the application architecture to use EFS instead of Amazon S3 for storing the
customers' uploaded files - EFS is a costlier storage option compared to S3, so it is
ruled out.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
Which of the following is the MOST cost-effective strategy for storing this intermediary
query data?
(Correct)
Store the intermediary query results in S3 Standard-Infrequent Access storage
class
(Incorrect)
Explanation
Correct option:
S3 Standard offers high durability, availability, and performance object storage for
frequently accessed data. Because it delivers low latency and high throughput, S3
Standard is appropriate for a wide variety of use cases, including cloud applications,
dynamic websites, content distribution, mobile and gaming applications, and big data
analytics. As there is no minimum storage duration charge and no retrieval fee
(remember that intermediary query results are heavily referenced by other parts of the
analytics pipeline), this is the MOST cost-effective storage class amongst the given
options.
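As an illustration only (the bucket name, object key, and one-day expiry rule below are assumptions, not part of the original explanation), a short boto3 sketch that writes an intermediary result to S3 Standard and uses a lifecycle rule to expire it after a day:
import boto3

s3 = boto3.client("s3")
BUCKET = "example-intermediary-results"  # hypothetical bucket name

# S3 Standard is the default storage class; it has no minimum storage duration
# charge and no per-GB retrieval fee, so frequent re-reads cost nothing extra.
s3.put_object(
    Bucket=BUCKET,
    Key="query-results/run-42/part-0000.parquet",
    Body=b"intermediary result bytes",
    StorageClass="STANDARD",
)

# Expire intermediary objects after one day so they are not retained longer than needed.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-intermediary-results",
            "Status": "Enabled",
            "Filter": {"Prefix": "query-results/"},
            "Expiration": {"Days": 1},
        }]
    },
)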
Incorrect options:
Store the intermediary query results in S3 Glacier Instant Retrieval storage class - S3
Glacier Instant Retrieval delivers the fastest access to archive storage, with the same
throughput and milliseconds access as the S3 Standard and S3 Standard-IA storage
classes. S3 Glacier Instant Retrieval is ideal for archive data that needs immediate
access, such as medical images, news media assets, or user-generated content
archives.
The minimum storage duration charge is 90 days, so this option is NOT cost-effective
because intermediary query results need to be kept only for 24 hours. Hence this option
is not correct.
Store the intermediary query results in S3 One Zone-Infrequent Access storage class -
S3 One Zone-IA is for data that is accessed less frequently but requires rapid access
when needed. Unlike other S3 Storage Classes which store data in a minimum of three
Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ and costs 20% less
than S3 Standard-IA. The minimum storage duration charge is 30 days, so this option is
NOT cost-effective because intermediary query results need to be kept only for 24
hours. Hence this option is not correct.
Reference:
https://aws.amazon.com/s3/storage-classes/
{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket*"
  ],
  "Effect": "Allow"
}

{
  "Action": [
    "s3:*Object"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}

{
  "Action": [
    "s3:*"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}

{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
(Correct)
Explanation
Correct option:
{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
1. Effect: Specifies whether the statement will Allow or Deny an action (Allow is the effect defined here).
2. Action: Describes a specific action or actions that will either be allowed or denied to run based on the Effect entered. API actions are unique to each service (DeleteObject is the action defined here).
3. Resource: Specifies the resources, for example an S3 bucket or objects, that the policy applies to in Amazon Resource Name (ARN) format (example-bucket/* is the resource defined here).
This policy grants the group only the delete permission on the objects in the S3 bucket, which satisfies the principle of least privilege.
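For illustration, a small boto3 (Python) sketch, with a hypothetical group name, that attaches this delete-only policy to an IAM group as an inline policy:
import json
import boto3

iam = boto3.client("iam")

delete_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:DeleteObject"],
        "Resource": ["arn:aws:s3:::example-bucket/*"],
    }],
}

# Attach the least-privilege statement as an inline policy on an existing group
iam.put_group_policy(
    GroupName="object-cleanup-group",  # hypothetical group name
    PolicyName="AllowDeleteObjectsInExampleBucket",
    PolicyDocument=json.dumps(delete_only_policy),
)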
Incorrect options:
{
  "Action": [
    "s3:*Object"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
This policy is incorrect because the wildcard action s3:*Object matches object-level read and write actions (such as GetObject and PutObject) in addition to DeleteObject, so it grants broader access than the delete-only permission required under the principle of least privilege.
{
  "Action": [
    "s3:*"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
This policy is incorrect since it allows all actions on the resource, which violates the principle of least privilege, as required by the given use case.
{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket*"
  ],
  "Effect": "Allow"
}
This is incorrect because the resource ARN arn:aws:s3:::example-bucket* matches not only the objects but also the bucket itself and any other bucket whose name begins with example-bucket. The resource should be scoped to the objects by adding /* after the bucket name.
Reference:
https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-
policies/
As a solutions architect, which of the following steps would you recommend to solve
this issue?
(Incorrect)
As the CMK was deleted a day ago, it must be in the 'pending deletion' status
and hence you can just cancel the CMK deletion and recover the key
(Correct)
The company should issue a notification on its web application informing the
users about the loss of their data
Explanation
Correct option:
As the CMK was deleted a day ago, it must be in the 'pending deletion' status and
hence you can just cancel the CMK deletion and recover the key
AWS Key Management Service (KMS) makes it easy for you to create and manage
cryptographic keys and control their use across a wide range of AWS services and in
your applications. AWS KMS is a secure and resilient service that uses hardware
security modules that have been validated under FIPS 140-2.
Deleting a customer master key (CMK) in AWS Key Management Service (AWS KMS) is
destructive and potentially dangerous. Therefore, AWS KMS enforces a waiting period.
To delete a CMK in AWS KMS you schedule key deletion. You can set the waiting period
from a minimum of 7 days up to a maximum of 30 days. The default waiting period is
30 days. During the waiting period, the CMK status and key state is Pending deletion. To
recover the CMK, you can cancel key deletion before the waiting period ends. After the
waiting period ends you cannot cancel key deletion, and AWS KMS deletes the CMK.
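The key-deletion lifecycle can be sketched with boto3 (the key ID below is a placeholder); note that after cancelling the deletion the key is left in the Disabled state and must be re-enabled before use:
import boto3

kms = boto3.client("kms")
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Scheduling deletion moves the key into the 'Pending deletion' state for 7-30 days
kms.schedule_key_deletion(KeyId=KEY_ID, PendingWindowInDays=7)

# While the waiting period has not ended, the deletion can still be cancelled
kms.cancel_key_deletion(KeyId=KEY_ID)

# Cancelling leaves the key in the 'Disabled' state; re-enable it to resume use
kms.enable_key(KeyId=KEY_ID)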
Incorrect options:
The AWS root account user cannot recover CMK and the AWS support does not have
access to CMK via any backups. Both these options just serve as distractors.
The company should issue a notification on its web application informing the users
about the loss of their data - This option is not required as the data can be recovered via
the cancel key deletion feature.
Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
Question 26: Incorrect
The engineering team at an e-commerce company wants to establish a dedicated,
encrypted, low latency, and high throughput connection between its data center and
AWS Cloud. The engineering team has set aside sufficient time to account for the
operational overhead of establishing this connection.
As a solutions architect, which of the following solutions would you recommend to the
company?
Use VPC transit gateway to establish a connection between the data center and
AWS Cloud
Use AWS Direct Connect to establish a connection between the data center and
AWS Cloud
(Incorrect)
Use AWS Direct Connect plus VPN to establish a connection between the data
center and AWS Cloud
(Correct)
Use site-to-site VPN to establish a connection between the data center and
AWS Cloud
Explanation
Correct option:
Use AWS Direct Connect plus VPN to establish a connection between the data center
and AWS Cloud
AWS Direct Connect is a cloud service solution that makes it easy to establish a
dedicated network connection from your premises to AWS. AWS Direct Connect lets
you establish a dedicated network connection between your network and one of the
AWS Direct Connect locations.
With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect
dedicated network connections with the Amazon VPC VPN. This combination provides
an IPsec-encrypted private connection that also reduces network costs, increases
bandwidth throughput, and provides a more consistent network experience than
internet-based VPN connections.
This solution combines the AWS managed benefits of the VPN solution with the low
latency, increased bandwidth, and more consistent network experience of the AWS
Direct Connect solution, along with an end-to-end, secure IPsec connection. Therefore,
AWS Direct Connect plus VPN is the correct solution for this use-case.
via - https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-
options/aws-direct-connect-vpn.html
Incorrect options:
Use site-to-site VPN to establish a connection between the data center and AWS
Cloud - AWS Site-to-Site VPN enables you to securely connect your on-premises
network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). A
VPC VPN Connection utilizes IPSec to establish encrypted network connectivity
between your intranet and Amazon VPC over the Internet. VPN Connections are a good
solution if you have an immediate need, have low to modest bandwidth requirements,
and can tolerate the inherent variability in Internet-based connectivity. However,
Site-to-Site VPN cannot provide a low-latency and high-throughput connection, so this
option is ruled out.
Use VPC transit gateway to establish a connection between the data center and AWS
Cloud - A transit gateway is a network transit hub that you can use to interconnect your
virtual private clouds (VPC) and on-premises networks. A transit gateway by itself
cannot establish a low latency and high throughput connection between a data center
and AWS Cloud. Hence this option is incorrect.
Use AWS Direct Connect to establish a connection between the data center and AWS
Cloud - AWS Direct Connect by itself cannot provide an encrypted connection between a
data center and AWS Cloud, so this option is ruled out.
References:
https://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-
direct-connect-plus-vpn-network-to-amazon.html
(Correct)
(Incorrect)
Throttling is the process of limiting the number of requests an authorized program can
submit to a given operation in a given amount of time.
Amazon API Gateway, Amazon SQS and Amazon Kinesis - To prevent your API from
being overwhelmed by too many requests, Amazon API Gateway throttles requests to
your API using the token bucket algorithm, where a token counts for a request.
Specifically, API Gateway sets a limit on a steady-state rate and a burst of request
submissions against all APIs in your account. In the token bucket algorithm, the burst is
the maximum bucket size.
Amazon SQS - Amazon Simple Queue Service (SQS) is a fully managed message
queuing service that enables you to decouple and scale microservices, distributed
systems, and serverless applications. Amazon SQS offers buffer capabilities to smooth
out temporary volume spikes without losing messages or increasing latency.
Amazon Kinesis - Amazon Kinesis is a fully managed, scalable service that can ingest,
buffer, and process streaming data in real-time.
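As a rough sketch only (the API id, stage, queue name, and limits are hypothetical), the boto3 calls below show one way to configure API Gateway throttling through a usage plan and to buffer accepted requests in an SQS queue:
import boto3

apigw = boto3.client("apigateway")
sqs = boto3.client("sqs")

# Throttle via a usage plan: rateLimit is the steady-state rate, burstLimit the bucket size
apigw.create_usage_plan(
    name="ingest-throttling-plan",
    description="Steady-state and burst limits for the ingestion API",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder REST API id/stage
    throttle={"rateLimit": 100.0, "burstLimit": 200},
)

# Buffer accepted requests in SQS so downstream consumers can drain them at their own pace
queue_url = sqs.create_queue(QueueName="ingest-buffer")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody='{"event": "example"}')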
Incorrect options:
Amazon SQS, Amazon SNS and AWS Lambda - Amazon SQS has the ability to buffer its
messages. Amazon Simple Notification Service (SNS) cannot buffer messages and is
generally used with SQS to provide the buffering facility. When requests come in faster
than your Lambda function can scale, or when your function is at maximum
concurrency, additional requests fail as Lambda throttles those requests with an HTTP
429 status code. So, this combination of services is incorrect.
Amazon Gateway Endpoints, Amazon SQS and Amazon Kinesis - A Gateway Endpoint
is a gateway that you specify as a target for a route in your route table for traffic
destined to a supported AWS service. This cannot help in throttling or buffering of
requests. Amazon SQS and Kinesis can buffer incoming data. Since Gateway Endpoint
is an incorrect service for throttling or buffering, this option is incorrect.
Elastic Load Balancer, Amazon SQS, AWS Lambda - Elastic Load Balancer cannot
throttle requests. Amazon SQS can be used to buffer messages. When requests come
in faster than your Lambda function can scale, or when your function is at maximum
concurrency, additional requests fail as Lambda throttles those requests with an HTTP
429 status code. So, this combination of services is incorrect.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-
throttling.html
https://aws.amazon.com/sqs/features/
Configure your Auto Scaling group by creating a scheduled action that kicks-
off at the designated hour on the last day of the month. Set the desired
capacity of instances to 10. This causes the scale-out to happen before peak
traffic kicks in at the designated hour
(Correct)
Configure your Auto Scaling group by creating a target tracking policy and
setting the instance count to 10 at the designated hour. This causes the scale-
out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a simple tracking policy and
setting the instance count to 10 at the designated hour. This causes the scale-
out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a scheduled action that kicks-
off at the designated hour on the last day of the month. Set the min count as
well as the max count of instances to 10. This causes the scale-out to happen
before peak traffic kicks in at the designated hour
Explanation
Correct option:
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the
designated hour on the last day of the month. Set the desired capacity of instances to
10. This causes the scale-out to happen before peak traffic kicks in at the designated
hour
Scheduled scaling allows you to set your own scaling schedule. For example, let's say
that every week the traffic to your web application starts to increase on Wednesday,
remains high on Thursday, and starts to decrease on Friday. You can plan your scaling
actions based on the predictable traffic patterns of your web application. Scaling
actions are performed automatically as a function of time and date.
A scheduled action sets the minimum, maximum, and desired sizes to what is specified
by the scheduled action at the time specified by the scheduled action. For the given use
case, the correct solution is to set the desired capacity to 10. When we want to specify a
range of instances, then we must use min and max values.
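A minimal boto3 sketch of such a scheduled action (the group name and month-end timestamp are placeholders); only the desired capacity is specified, so the group's existing minimum and maximum sizes are left unchanged:
from datetime import datetime, timezone
import boto3

autoscaling = boto3.client("autoscaling")

# One-time scheduled action for the upcoming month-end peak hour
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",                           # placeholder group name
    ScheduledActionName="month-end-peak",
    StartTime=datetime(2024, 6, 30, 18, 0, tzinfo=timezone.utc),  # placeholder timestamp
    DesiredCapacity=10,                                           # scale out to exactly 10 instances
)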
Incorrect options:
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the
designated hour on the last day of the month. Set the min count as well as the max
count of instances to 10. This causes the scale-out to happen before peak traffic kicks
in at the designated hour - As mentioned earlier in the explanation, only when we want
to specify a range of instances, then we must use min and max values. As the given
use-case requires exactly 10 instances to be available during the peak hour, so we must
set the desired capacity to 10. Hence this option is incorrect.
Configure your Auto Scaling group by creating a target tracking policy and setting the
instance count to 10 at the designated hour. This causes the scale-out to happen
before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a simple tracking policy and setting the
instance count to 10 at the designated hour. This causes the scale-out to happen
before peak traffic kicks in at the designated hour
Target tracking policy or simple tracking policy cannot be used to effect a scaling action
at a certain designated hour. Both these options have been added as distractors.
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
(Correct)
AWS WAF is a web application firewall service that lets you monitor web requests and
protect your web applications from malicious requests. Use AWS WAF to block or allow
requests based on conditions that you specify, such as the IP addresses. You can also
use AWS WAF preconfigured protections to block common attacks like SQL injection or
cross-site scripting.
You can use AWS WAF with your Application Load Balancer to allow or block requests
based on the rules in a web access control list (web ACL). Geographic (Geo) Match
Conditions in AWS WAF allows you to use AWS WAF to restrict application access
based on the geographic location of your viewers. With geo match conditions you can
choose the countries from which AWS WAF should allow access.
Geo match conditions are important for many customers. For example, legal and
licensing requirements restrict some customers from delivering their applications
outside certain countries. These customers can configure a whitelist that allows only
viewers in those countries. Other customers need to prevent the downloading of their
encrypted software by users in certain countries. These customers can configure a
blacklist so that end-users from those countries are blocked from downloading their
software.
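As an illustrative sketch only (the names, country codes, and load balancer ARN are assumptions), the wafv2 calls below create a web ACL that allows traffic from an allow-listed set of countries and blocks everything else, then associate it with an Application Load Balancer:
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_web_acl(
    Name="geo-allow-list",                 # hypothetical web ACL name
    Scope="REGIONAL",                      # REGIONAL scope is used for ALB associations
    DefaultAction={"Block": {}},           # block anything not explicitly allowed
    Rules=[{
        "Name": "allow-selected-countries",
        "Priority": 0,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["US", "CA"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowSelectedCountries",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "GeoAllowList",
    },
)

# Associate the web ACL with an existing Application Load Balancer (placeholder ARN)
wafv2.associate_web_acl(
    WebACLArn=response["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/0123456789abcdef",
)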
Incorrect options:
Use Geo Restriction feature of Amazon CloudFront in a VPC - Geo Restriction feature of
CloudFront helps in restricting traffic based on the user's geographic location. But,
CloudFront works from edge locations and doesn't belong to a VPC. Hence, this option
itself is incorrect and given only as a distractor.
Security Groups cannot restrict access based on the user's geographic location.
References:
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-
geographic-match/
https://aws.amazon.com/blogs/aws/aws-web-application-firewall-waf-for-application-
load-balancers/
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-
Application-Load-Balancer/
As a solutions architect, which is the MOST cost-effective storage class that you would
recommend to be used for this use-case?
(Correct)
Amazon S3 Standard
(Incorrect)
Since the data is accessed only twice in a financial year but needs rapid access when
required, the most cost-effective storage class for this use-case is S3 Standard-IA. S3
Standard-IA storage class is for data that is accessed less frequently but requires rapid
access when needed. S3 Standard-IA matches the high durability, high throughput, and
low latency of S3 Standard, with a low per GB storage price and per GB retrieval fee.
Standard-IA is designed for 99.9% availability compared to 99.99% availability of S3
Standard. However, the report creation process has failover and retry scenarios built
into the workflow, so in case the data is not available owing to the 99.9% availability of
S3 Standard-IA, the job will be auto re-invoked till data is successfully retrieved.
Therefore this is the correct option.
S3 Storage Classes
Overview:
via - https://aws.amazon.com/s3/storage-classes/
Incorrect options:
Amazon S3 Glacier Deep Archive - S3 Glacier Deep Archive is a secure, durable, and
low-cost storage class for data archiving. S3 Glacier Deep Archive does not support
millisecond latency, so this option is ruled out.
For more details on the durability, availability, cost and access latency - please review
this reference link: https://aws.amazon.com/s3/storage-classes
Send an email to the business owner with details of the login username and
password for the AWS root user. This will help the business owner to
troubleshoot any login issues in future
Create AWS account root user access keys and share those keys only with the
business owner
(Correct)
Enable Multi Factor Authentication (MFA) for the AWS account root user
account
(Correct)
Explanation
Correct options:
Enable Multi Factor Authentication (MFA) for the AWS account root user account
Here are some of the best practices while creating an AWS account root user:
1) Use a strong password to help protect account-level access to the AWS Management
Console. 2) Never share your AWS account root user password or access keys with
anyone. 3) If you do have an access key for your AWS account root user, delete it. If you
must keep it, rotate (change) the access key regularly. You should not encrypt the
access keys and save them on Amazon S3. 4) If you don't already have an access key
for your AWS account root user, don't create one unless you absolutely need to. 5)
Enable AWS multi-factor authentication (MFA) on your AWS account root user account.
via - https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Incorrect options:
Encrypt the access keys and save them on Amazon S3 - AWS recommends that if you
don't already have an access key for your AWS account root user, don't create one
unless you absolutely need to. Even an encrypted access key for the root user poses a
significant security risk. Therefore, this option is incorrect.
Create AWS account root user access keys and share those keys only with the
business owner - AWS recommends that if you don't already have an access key for
your AWS account root user, don't create one unless you absolutely need to. Hence, this
option is incorrect.
Send an email to the business owner with details of the login username and password
for the AWS root user. This will help the business owner to troubleshoot any login
issues in future - AWS recommends that you should never share your AWS account root
user password or access keys with anyone. Sending an email with AWS account root
user credentials creates a security risk as it can be misused by anyone reading the
email. Hence, this option is incorrect.
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#create-iam-
users
Which of the following AWS services is BEST suited to accelerate the aforementioned
chip design process?
AWS Glue
(Incorrect)
Amazon EMR
(Correct)
Explanation
Correct option:
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s
most popular high-performance file system. It is used for workloads such as machine
learning, high-performance computing (HPC), video processing, and financial modeling.
The open-source Lustre file system is designed for applications that require fast
storage – where you want your storage to keep up with your compute. FSx for Lustre
integrates with Amazon S3, making it easy to process data sets with the Lustre file
system. When linked to an S3 bucket, an FSx for Lustre file system transparently
presents S3 objects as files and allows you to write changed data back to S3.
FSx for Lustre provides the ability to both process the 'hot data' in a parallel and
distributed fashion as well as easily store the 'cold data' on Amazon S3. Therefore this
option is the BEST fit for the given problem statement.
Incorrect options:
Amazon FSx for Windows File Server - Amazon FSx for Windows File Server provides
fully managed, highly reliable file storage that is accessible over the industry-standard
Server Message Block (SMB) protocol. It is built on Windows Server, delivering a wide
range of administrative features such as user quotas, end-user file restore, and
Microsoft Active Directory (AD) integration. FSx for Windows does not allow you to
present S3 objects as files and does not allow you to write changed data back to S3.
Therefore you cannot reference the "cold data" with quick access for reads and updates
at low cost. Hence this option is not correct.
Amazon EMR - Amazon EMR is the industry-leading cloud big data platform for
processing vast amounts of data using open source tools such as Apache Spark,
Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR uses
Hadoop, an open-source framework, to distribute your data and processing across a
resizable cluster of Amazon EC2 instances. EMR does not offer the same storage and
processing speed as FSx for Lustre. So it is not the right fit for the given high-
performance workflow scenario.
AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that
makes it easy for customers to prepare and load their data for analytics. AWS Glue job
is meant to be used for batch ETL data processing. AWS Glue does not offer the same
storage and processing speed as FSx for Lustre. So it is not the right fit for the given
high-performance workflow scenario.
References:
https://aws.amazon.com/fsx/lustre/
https://aws.amazon.com/fsx/windows/faqs/
As a solutions architect, which of the following AWS services would you recommend as
a caching layer for this use-case? (Select two)
Redshift
(Incorrect)
Elasticsearch
(Correct)
ElastiCache
(Correct)
RDS
Explanation
Correct options:
DAX
Overview:
via
- https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concep
ts.html
ElastiCache - Amazon ElastiCache for Memcached is an ideal front-end for data stores
like Amazon RDS or Amazon DynamoDB, providing a high-performance middle tier for
applications with extremely high request rates and/or low latency requirements.
Therefore, this is also a correct option.
Incorrect options:
RDS - Amazon Relational Database Service (Amazon RDS) makes it easy to set up,
operate, and scale a relational database in the cloud. It provides cost-efficient and
resizable capacity while automating time-consuming administration tasks such as
hardware provisioning, database setup, patching, and backups. RDS cannot be used as
a caching layer for DynamoDB.
References:
https://aws.amazon.com/dynamodb/dax/
https://aws.amazon.com/elasticache/faqs/
(Correct)
Explanation
Correct option:
You can use Kinesis Data Analytics to transform and analyze streaming data in real-
time with Apache Flink. Kinesis Data Analytics enables you to quickly build end-to-end
stream processing applications for log analytics, clickstream analytics, Internet of
Things (IoT), ad tech, gaming, etc. The four most common use cases are streaming
extract-transform-load (ETL), continuous metric generation, responsive real-time
analytics, and interactive querying of data streams. Kinesis Data Analytics for Apache
Flink applications provides your application 50 GB of running application storage per
Kinesis Processing Unit (KPU).
Amazon API Gateway is a fully managed service that allows you to publish, maintain,
monitor, and secure APIs at any scale. Amazon API Gateway offers two options to
create RESTful APIs, HTTP APIs and REST APIs, as well as an option to create
WebSocket APIs.
Amazon API
Gateway:
via - https://aws.amazon.com/api-gateway/
For the given use case, you can use Amazon API Gateway to create a REST API that
handles incoming requests having location data from the trucks and sends it to the
Kinesis Data Analytics application on the back end.
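For context only (the stream name and payload fields are invented for the example), the record that such a REST API would forward can also be written directly to a Kinesis data stream with boto3; the stream is then consumed by the Kinesis Data Analytics (Apache Flink) application:
import json
import boto3

kinesis = boto3.client("kinesis")

# A location update as it might arrive from a truck via the REST API
location_event = {"truck_id": "TRUCK-42", "lat": 47.61, "lon": -122.33, "ts": "2024-06-01T12:00:00Z"}

kinesis.put_record(
    StreamName="truck-locations",                       # placeholder stream name
    Data=json.dumps(location_event).encode("utf-8"),
    PartitionKey=location_event["truck_id"],            # keeps each truck's events on one shard, in order
)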
Kinesis Data
Analytics:
via - https://aws.amazon.com/kinesis/data-analytics/
Incorrect options:
Leverage Amazon Athena with S3 - Amazon Athena is an interactive query service that
makes it easy to analyze data in Amazon S3 using standard SQL. Athena cannot be
used to build a REST API to consume data from the source. So this option is incorrect.
Leverage Amazon API Gateway with AWS Lambda - You cannot use Lambda to store
and retrieve the location data for analysis, so this option is incorrect.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-
aws-services-kinesis.html
https://aws.amazon.com/kinesis/data-analytics/
https://aws.amazon.com/kinesis/data-analytics/faqs/
Given this scenario, which of the following is correct regarding the charges for this
image transfer?
The junior scientist needs to pay both S3 transfer charges and S3TA transfer
charges for the image upload
The junior scientist does not need to pay any transfer charges for the image
upload
(Correct)
The junior scientist only needs to pay S3TA transfer charges for the image
upload
(Incorrect)
The junior scientist only needs to pay S3 transfer charges for the image upload
Explanation
Correct option:
The junior scientist does not need to pay any transfer charges for the image upload
There are no S3 data transfer charges when data is transferred in from the internet.
Also with S3TA, you pay only for transfers that are accelerated. Therefore the junior
scientist does not need to pay any transfer charges for the image upload because S3TA
did not result in an accelerated transfer.
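For illustration (the bucket and file names are placeholders), enabling S3 Transfer Acceleration on a bucket and routing an upload through the accelerated endpoint looks roughly like this with boto3:
import boto3
from botocore.config import Config

BUCKET = "example-research-images"  # placeholder bucket name

# One-time setup: turn on Transfer Acceleration for the bucket
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured to use the accelerated (edge) endpoint for transfers
s3_accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accelerated.upload_file("satellite-image.tif", BUCKET, "uploads/satellite-image.tif")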
Incorrect options:
The junior scientist only needs to pay S3TA transfer charges for the image upload -
Since S3TA did not result in an accelerated transfer, there are no S3TA transfer charges
to be paid.
The junior scientist only needs to pay S3 transfer charges for the image upload - There
are no S3 data transfer charges when data is transferred in from the internet. So this
option is incorrect.
The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges
for the image upload - There are no S3 data transfer charges when data is transferred in
from the internet. Since S3TA did not result in an accelerated transfer, there are no
S3TA transfer charges to be paid.
References:
https://aws.amazon.com/s3/transfer-acceleration/
https://aws.amazon.com/s3/pricing/
Which of the following correctly summarizes these capabilities for the given database?
Multi-AZ follows asynchronous replication and spans at least two Availability
Zones within a single region. Read replicas follow asynchronous replication
and can be within an Availability Zone, Cross-AZ, or Cross-Region
(Incorrect)
(Correct)
Multi-AZ follows synchronous replication and spans at least two Availability Zones
within a single region. Read replicas follow asynchronous replication and can be within
an Availability Zone, Cross-AZ, or Cross-Region
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS
database (DB) instances, making them a natural fit for production database workloads.
When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a
primary DB Instance and synchronously replicates the data to a standby instance in a
different Availability Zone (AZ). Multi-AZ spans at least two Availability Zones within a
single region.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS
database (DB) instances. They make it easy to elastically scale out beyond the capacity
constraints of a single DB instance for read-heavy database workloads. For the MySQL,
MariaDB, PostgreSQL, Oracle, and SQL Server database engines, Amazon RDS creates a
second DB instance using a snapshot of the source DB instance. It then uses the
engines' native asynchronous replication to update the read replica whenever there is a
change to the source DB instance.
Amazon RDS replicates all databases in the source DB instance. Read replicas can be
within an Availability Zone, Cross-AZ, or Cross-Region.
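A brief boto3 sketch (identifiers, instance class, and credentials are placeholders) showing both capabilities side by side, a Multi-AZ primary with a synchronous standby and an asynchronous read replica:
import boto3

rds = boto3.client("rds")

# Multi-AZ deployment: RDS synchronously replicates to a standby in another AZ
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    AllocatedStorage=100,
    MultiAZ=True,
)

# Read replica: asynchronous replication; can be in the same AZ, cross-AZ, or cross-Region
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica",
    SourceDBInstanceIdentifier="orders-db",
    AvailabilityZone="us-east-1b",
)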
Exam Alert:
via - https://aws.amazon.com/rds/features/multi-az/
Incorrect Options:
Multi-AZ follows asynchronous replication and spans one Availability Zone within a
single region. Read replicas follow synchronous replication and can be within an
Availability Zone, Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones
within a single region. Read replicas follow synchronous replication and can be within
an Availability Zone, Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones
within a single region. Read replicas follow asynchronous replication and can be within
an Availability Zone, Cross-AZ, or Cross-Region
These three options contradict the earlier details provided in the explanation. To
summarize, Multi-AZ follows synchronous replication for RDS. Hence these options are
incorrect.
References:
https://aws.amazon.com/rds/features/multi-az/
https://aws.amazon.com/rds/features/read-replicas/
Which of the following techniques will help the company meet this requirement?
Raise a service request with Amazon to completely delete the data from all
their backups
(Incorrect)
(Correct)
Explanation
Correct option:
Amazon GuardDuty offers threat detection that enables you to continuously monitor
and protect your AWS accounts, workloads, and data stored in Amazon S3. GuardDuty
analyzes continuous streams of meta-data generated from your account and network
activity found in AWS CloudTrail Events, Amazon VPC Flow Logs, and DNS Logs. It also
uses integrated threat intelligence such as known malicious IP addresses, anomaly
detection, and machine learning to identify threats more accurately.
Disable the service in the general settings - Disabling the service will delete all
remaining data, including your findings and configurations before relinquishing the
service permissions and resetting the service. So, this is the correct option for our use
case.
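A short boto3 sketch contrasting the two operations (the account is assumed to have a single GuardDuty detector):
import boto3

guardduty = boto3.client("guardduty")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Suspend: stops analysis of data sources but keeps existing findings and configuration
guardduty.update_detector(DetectorId=detector_id, Enable=False)

# Disable: deletes the detector together with all remaining findings and configuration
guardduty.delete_detector(DetectorId=detector_id)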
Incorrect options:
Suspend the service in the general settings - You can stop Amazon GuardDuty from
analyzing your data sources at any time by choosing to suspend the service in the
general settings. This will immediately stop the service from analyzing data, but does
not delete your existing findings or configurations.
De-register the service under services tab - This is a made-up option, used only as a
distractor.
Raise a service request with Amazon to completely delete the data from all their
backups - There is no need to create a service request as you can delete the existing
findings by disabling the service.
Reference:
https://aws.amazon.com/guardduty/faqs/
Versioning
(Correct)
Requester Pays
(Incorrect)
Explanation
Correct option:
Versioning - Once you version-enable a bucket, it can never return to an unversioned
state. You can suspend versioning on that bucket, but you cannot disable it, so
versioning is the feature that cannot be disabled once it has been enabled.
Versioning
Overview:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
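A minimal boto3 sketch (the bucket name is a placeholder) showing that the versioning status can only be toggled between Enabled and Suspended once versioning has been turned on:
import boto3

s3 = boto3.client("s3")
BUCKET = "example-versioned-bucket"  # placeholder bucket name

# Turn versioning on for the bucket
s3.put_bucket_versioning(Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"})

# Versioning cannot be disabled afterwards; the only other accepted status is 'Suspended',
# which stops creating new versions but keeps all existing object versions in place
s3.put_bucket_versioning(Bucket=BUCKET, VersioningConfiguration={"Status": "Suspended"})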
Incorrect options:
Server Access Logging
Requester Pays
Server Access Logging, Static Website Hosting and Requester Pays features can be
disabled even after they have been enabled.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Which of the following options represents the best solution for this use case?
Deploy the Oracle database layer on multiple EC2 instances spread across two
Availability Zones (AZ). This deployment configuration guarantees high
availability and also allows the database administrators to access and
customize the database environment and the underlying operating system
(Incorrect)
Leverage cross AZ read-replica configuration of RDS for Oracle that allows the
database administrators to access and customize the database environment
and the underlying operating system
Leverage multi-AZ configuration of RDS Custom for Oracle that allows the
database administrators to access and customize the database environment
and the underlying operating system
(Correct)
Leverage multi-AZ configuration of RDS for Oracle that allows the database
administrators to access and customize the database environment and the
underlying operating system
Explanation
Correct option:
Leverage multi-AZ configuration of RDS Custom for Oracle that allows the database
administrators to access and customize the database environment and the underlying
operating system
Amazon RDS is a managed service that makes it easy to set up, operate, and scale a
relational database in the cloud. It provides cost-efficient and resizable capacity while
managing time-consuming database administration tasks. Amazon RDS can
automatically back up your database and keep your database software up to date with
the latest version. However, RDS does not allow you to access the host OS of the
database.
For the given use-case, you need to use RDS Custom for Oracle as it allows you to
access and customize your database server host and operating system, for example by
applying special patches and changing the database software settings to support third-
party applications that require privileged access. RDS Custom for Oracle facilitates
these functionalities with minimum infrastructure maintenance effort. You need to set
up the RDS Custom for Oracle in multi-AZ configuration for high availability.
Incorrect options:
Leverage multi-AZ configuration of RDS for Oracle that allows the database
administrators to access and customize the database environment and the underlying
operating system
Leverage cross AZ read-replica configuration of RDS for Oracle that allows the
database administrators to access and customize the database environment and the
underlying operating system
RDS for Oracle does not allow you to access and customize your database server host
and operating system. Therefore, both these options are incorrect.
Deploy the Oracle database layer on multiple EC2 instances spread across two
Availability Zones (AZ). This deployment configuration guarantees high availability
and also allows the database administrators to access and customize the database
environment and the underlying operating system - The use case requires that the best
solution should involve minimum infrastructure maintenance effort. When you use EC2
instances to host the databases, you need to manage the server health, server
maintenance, server patching, and database maintenance tasks yourself. In addition,
you will also need to manage the multi-AZ configuration by deploying EC2 instances
across two Availability Zones, perhaps by using an Auto-scaling group. These steps
entail significant maintenance effort. Hence this option is incorrect.
References:
https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-
capabilities-in-database-environment/
https://aws.amazon.com/rds/faqs/
Which of the following EC2 instance topologies should this application be deployed on?
The EC2 instances should be deployed in a cluster placement group so that the
underlying workload can benefit from low network latency and high network
throughput
(Correct)
The EC2 instances should be deployed in a cluster placement group so that the
underlying workload can benefit from low network latency and high network
throughput
The key thing to understand in this question is that HPC workloads need to achieve low-
latency network performance necessary for tightly-coupled node-to-node
communication that is typical of HPC applications. Cluster placement groups pack
instances close together inside an Availability Zone. These are recommended for
applications that benefit from low network latency, high network throughput, or both.
Therefore this option is the correct answer.
Cluster Placement
Group:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
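To sketch the setup with boto3 (the AMI ID, instance type, and group name are placeholders), the placement group is created with the cluster strategy and the HPC nodes are launched into it:
import boto3

ec2 = boto3.client("ec2")

# Cluster strategy packs the instances close together inside one Availability Zone
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",       # an instance type with high network throughput
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster-pg"},
)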
Incorrect options:
The EC2 instances should be deployed in a partition placement group so that
distributed workloads can be handled effectively - A partition placement group spreads
your instances across logical partitions such that groups of instances in one partition
do not share the underlying hardware with groups of instances in different partitions.
This strategy is typically used by large distributed and replicated workloads, such as
Hadoop, Cassandra, and Kafka. A partition placement group can have a maximum of
seven partitions per Availability Zone. Because a partition placement group can have
partitions in multiple Availability Zones in the same Region, its instances are not
guaranteed low-latency network performance. Hence the partition placement group is not
the right fit for HPC applications.
Partition Placement
Group:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
The EC2 instances should be deployed in a spread placement group so that there are
no correlated failures - A spread placement group is a group of instances that are each
placed on distinct racks, with each rack having its own network and power source. The
instances are placed across distinct underlying hardware to reduce correlated failures.
You can have a maximum of seven running instances per Availability Zone per group.
Because a spread placement group can span multiple Availability Zones in the same
Region, its instances are not guaranteed low-latency network performance. Hence a
spread placement group is not the right fit for HPC applications.
Spread Placement
Group:
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
The EC2 instances should be deployed in an Auto Scaling group so that application
meets high availability requirements - An Auto Scaling group contains a collection of
Amazon EC2 instances that are treated as a logical grouping for the purposes of
automatic scaling. You do not use Auto Scaling groups per se to meet HPC
requirements.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Which of the following AWS services represents the best solution for this use-case?
(Correct)
Amazon Route 53
(Incorrect)
Amazon CloudFront
Explanation
Correct option:
AWS Global Accelerator - AWS Global Accelerator utilizes the Amazon global network,
allowing you to improve the performance of your applications by lowering first-byte
latency (the round trip time for a packet to go from a client to your endpoint and back
again) and jitter (the variation of latency), and increasing throughput (the amount of
data that can be transferred per unit of time) as compared to the public internet.
Global Accelerator improves performance for a wide range of applications over TCP or
UDP by proxying packets at the edge to applications running in one or more AWS
Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming
(UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically
require static IP addresses or deterministic, fast regional failover.
Incorrect options:
AWS Global Accelerator and Amazon CloudFront are separate services that use the
AWS global network and its edge locations around the world. CloudFront improves
performance for both cacheable content (such as images and videos) and dynamic
content (such as API acceleration and dynamic site delivery), while Global Accelerator
improves performance for a wide range of applications over TCP or UDP.
AWS Elastic Load Balancing (ELB) - Both of the services, ELB and Global Accelerator
solve the challenge of routing user requests to healthy application endpoints. AWS
Global Accelerator relies on ELB to provide the traditional load balancing features such
as support for internal and non-AWS endpoints, pre-warming, and Layer 7 routing.
However, while ELB provides load balancing within one Region, AWS Global Accelerator
provides traffic management across multiple Regions.
A regional ELB load balancer is an ideal target for AWS Global Accelerator. By using a
regional ELB load balancer, you can precisely distribute incoming application traffic
across backends, such as Amazon EC2 instances or Amazon ECS tasks, within an AWS
Region.
If you have workloads that cater to a global client base, AWS recommends that you use
AWS Global Accelerator. If you have workloads hosted in a single AWS Region and used
by clients in and around the same Region, you can use an Application Load Balancer or
Network Load Balancer to manage your resources.
Amazon Route 53 - Amazon Route 53 is a highly available and scalable cloud Domain
Name System (DNS) web service. It is designed to give developers and businesses an
extremely reliable and cost-effective way to route end users to Internet applications by
translating names like www.example.com into the numeric IP addresses like 192.0.2.1
that computers use to connect to each other. Route 53 is ruled out as the company
wants to continue using its own custom DNS service.
Reference:
https://aws.amazon.com/global-accelerator/faqs/
As a solutions architect, which best practices would you recommend (Select two)?
Use user credentials to provide access specific permissions for Amazon EC2
instances
(Correct)
Enable MFA for privileged users
(Correct)
Explanation
Correct options:
Enable MFA for privileged users - As per the AWS best practices, it is better to enable
Multi Factor Authentication (MFA) for privileged users via an MFA-enabled mobile
device or hardware MFA token.
Configure AWS CloudTrail to record all account activity - AWS recommends to turn on
CloudTrail to log all IAM actions for monitoring and audit purposes.
Incorrect options:
Create a minimum number of accounts and share these account credentials among
employees - AWS recommends that user account credentials should not be shared
between users. So, this option is incorrect.
Use user credentials to provide access specific permissions for Amazon EC2
instances - It is highly recommended to use roles to grant access permissions for EC2
instances working on different AWS services. So, this option is incorrect.
References:
https://aws.amazon.com/iam/
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://aws.amazon.com/cloudtrail/faqs/
Which of the following would you attribute as the underlying reason for the
unexpectedly high costs for AWS Shield Advanced service?
Savings Plans has not been enabled for the AWS Shield Advanced service
across all the AWS accounts
(Incorrect)
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting
in increased costs
AWS Shield Advanced is being used for custom servers, that are not part of
AWS Cloud, thereby resulting in increased costs
Consolidated billing has not been enabled. All the AWS accounts should fall
under a single consolidated billing for the monthly fee to be charged only once
(Correct)
Explanation
Correct option:
Consolidated billing has not been enabled. All the AWS accounts should fall under a
single consolidated billing for the monthly fee to be charged only once - If your
organization has multiple AWS accounts, then you can subscribe multiple AWS
Accounts to AWS Shield Advanced by individually enabling it on each account using the
AWS Management Console or API. You will pay the monthly fee once as long as the
AWS accounts are all under a single consolidated billing, and you own all the AWS
accounts and resources in those accounts.
Incorrect options:
AWS Shield Advanced is being used for custom servers, that are not part of AWS
Cloud, thereby resulting in increased costs - AWS Shield Advanced does offer
protection to resources outside of AWS. This should not cause an unexpected spike in
billing costs.
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in
increased costs - AWS Shield Standard is automatically enabled for all AWS customers
at no additional cost. AWS Shield Advanced is an optional paid service.
Savings Plans has not been enabled for the AWS Shield Advanced service across all
the AWS accounts - This option has been added as a distractor. Savings Plans is a
flexible pricing model that offers low prices on EC2, Lambda, and Fargate usage, in
exchange for a commitment to a consistent amount of usage (measured in $/hour) for
a 1 or 3 year term. Savings Plans is not applicable for the AWS Shield Advanced service.
References:
https://aws.amazon.com/shield/faqs/
https://aws.amazon.com/savingsplans/faq/
Can you spot the INVALID lifecycle transitions from the options below? (Select two)
(Incorrect)
(Correct)
(Correct)
As the question wants to know about the INVALID lifecycle transitions, the following
options are the correct answers -
Following are the unsupported life cycle transitions for S3 storage classes - Any storage
class to the S3 Standard storage class. Any storage class to the Reduced Redundancy
storage class. The S3 Intelligent-Tiering storage class to the S3 Standard-IA storage
class. The S3 One Zone-IA storage class to the S3 Standard-IA or S3 Intelligent-Tiering
storage classes.
Incorrect options:
Here are the supported life cycle transitions for S3 storage classes - The S3 Standard
storage class to any other storage class. Any storage class to the S3 Glacier or S3
Glacier Deep Archive storage classes. The S3 Standard-IA storage class to the S3
Intelligent-Tiering or S3 One Zone-IA storage classes. The S3 Intelligent-Tiering storage
class to the S3 One Zone-IA storage class. The S3 Glacier storage class to the S3
Glacier Deep Archive storage class.
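As an example of a valid configuration (the bucket name, prefix, and day counts are assumptions), a lifecycle rule may only move objects "downward", for instance S3 Standard to S3 Standard-IA and then to S3 Glacier:
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-older-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},   # Standard -> Standard-IA
                {"Days": 90, "StorageClass": "GLACIER"},       # Standard-IA -> Glacier
            ],
        }]
    },
)
# A transition back to STANDARD, or from ONEZONE_IA to STANDARD_IA, would be rejected
# because those are unsupported lifecycle transitions.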
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-
considerations.html
Which of the following AWS services can facilitate the migration of these workloads?
(Incorrect)
(Correct)
Explanation
Correct option:
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage
that is accessible over the industry-standard Server Message Block (SMB) protocol. It
is built on Windows Server, delivering a wide range of administrative features such as
user quotas, end-user file restore, and Microsoft Active Directory (AD) integration.
Amazon FSx supports the use of Microsoft’s Distributed File System (DFS) to organize
shares into a single folder structure up to hundreds of PB in size. So this option is
correct.
Incorrect options:
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s
most popular high-performance file system. It is used for workloads such as machine
learning, high-performance computing (HPC), video processing, and financial modeling.
Amazon FSx enables you to use Lustre file systems for any workload where storage
speed matters. FSx for Lustre does not support Microsoft’s Distributed File System
(DFS), so this option is incorrect.
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed
Microsoft AD, enables your directory-aware workloads and AWS resources to use
managed Active Directory in the AWS Cloud. AWS Managed Microsoft AD is built on the
actual Microsoft Active Directory and does not require you to synchronize or replicate
data from your existing Active Directory to the cloud. AWS Managed Microsoft AD does
not support Microsoft’s Distributed File System (DFS), so this option is incorrect.
Microsoft SQL Server on AWS offers you the flexibility to run Microsoft SQL Server
database on AWS Cloud. Microsoft SQL Server on AWS does not support Microsoft’s
Distributed File System (DFS), so this option is incorrect.
Reference:
https://aws.amazon.com/fsx/windows/
Question 46: Correct
A US-based healthcare startup is building an interactive diagnostic tool for COVID-19
related assessments. The users would be required to capture their personal health
records via this tool. As this is sensitive health information, the backup of the user data
must be kept encrypted in S3. The startup does not want to provide its own encryption
keys but still wants to maintain an audit trail of when an encryption key was used and by
whom.
Use client-side encryption with client provided keys and then upload the
encrypted user data to S3
(Correct)
Explanation
Correct option:
AWS Key Management Service (AWS KMS) is a service that combines secure, highly
available hardware and software to provide a key management system scaled for the
cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify
a customer-managed CMK that you have already created. SSE-KMS provides you with
an audit trail that shows when your CMK was used and by whom. Therefore SSE-KMS is
the correct solution for this use-case.
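For illustration (the bucket, key, and KMS key ARN are placeholders), an SSE-KMS upload with a customer managed key looks like this with boto3; each use of the key is logged in AWS CloudTrail, which provides the audit trail:
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-health-backups",
    Key="backups/user-records-2024-06-01.json.gz",
    Body=b"encrypted backup payload",
    ServerSideEncryption="aws:kms",   # use SSE-KMS
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)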
Incorrect options:
Use SSE-S3 to encrypt the user data on S3 - When you use Server-Side Encryption with
Amazon S3-Managed Keys (SSE-S3), each object is encrypted with a unique key.
However this option does not provide the ability to audit trail the usage of the encryption
keys.
Use SSE-C to encrypt the user data on S3 - With Server-Side Encryption with Customer-
Provided Keys (SSE-C), you manage the encryption keys and Amazon S3 manages the
encryption, as it writes to disks, and decryption when you access your objects. However
this option does not provide the ability to audit trail the usage of the encryption keys.
Use client-side encryption with client provided keys and then upload the encrypted
user data to S3 - Using client-side encryption is ruled out as the startup does not want
to provide the encryption keys.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
Which of the following options represent a valid configuration for setting up retention
periods for objects in Amazon S3 buckets? (Select two)
Different versions of a single object can have different retention modes and
periods
(Correct)
When you apply a retention period to an object version explicitly, you specify
a Retain Until Date for the object version
(Correct)
When you use bucket default settings, you specify a Retain Until Date for the
object version
(Incorrect)
The bucket default settings will override any explicit retention mode or
period you request on an object version
When you apply a retention period to an object version explicitly, you specify a Retain
Until Date for the object version - You can place a retention period on an object
version either explicitly or through a bucket default setting. When you apply a retention
period to an object version explicitly, you specify a Retain Until Date for the object
version. Amazon S3 stores the Retain Until Date setting in the object version's metadata
and protects the object version until the retention period expires.
Different versions of a single object can have different retention modes and periods -
Like all other Object Lock settings, retention periods apply to individual object versions.
Different versions of a single object can have different retention modes and periods.
For example, suppose that you have an object that is 15 days into a 30-day retention
period, and you PUT an object into Amazon S3 with the same name and a 60-day
retention period. In this case, your PUT succeeds, and Amazon S3 creates a new version
of the object with a 60-day retention period. The older version maintains its original
retention period and becomes deletable in 15 days.
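The two ways of setting a retention period can be sketched with boto3 (the bucket, key, version ID, and dates are placeholders; the bucket must have been created with Object Lock enabled):
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")
BUCKET = "example-locked-bucket"   # must have been created with Object Lock enabled

# Explicit retention on one object version: a Retain Until Date is supplied directly
s3.put_object_retention(
    Bucket=BUCKET,
    Key="records/statement.pdf",
    VersionId="EXAMPLEVERSIONID",   # placeholder version ID
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2025, 1, 1, tzinfo=timezone.utc),
    },
)

# Bucket default retention: only a duration (days or years) is given, not a Retain Until Date
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)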
Incorrect options:
You cannot place a retention period on an object version through a bucket default
setting - You can place a retention period on an object version either explicitly or
through a bucket default setting.
When you use bucket default settings, you specify a Retain Until Date for the object
version - When you use bucket default settings, you don't specify a Retain Until Date.
Instead, you specify a duration, in either days or years, for which every object version
placed in the bucket should be protected.
The bucket default settings will override any explicit retention mode or period you
request on an object version - If your request to place an object version in a bucket
contains an explicit retention mode and period, those settings override any bucket
default settings for that object version.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
(Incorrect)
(Correct)
API Gateway creates RESTful APIs that enable stateless client-server communication
and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol,
which enables stateful, full-duplex communication between client and server
Amazon API Gateway is a fully managed service that makes it easy for developers to
create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the front
door for applications to access data, business logic, or functionality from your backend
services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that
enable real-time two-way communication applications.
REST APIs created with API Gateway are HTTP-based and implement standard HTTP
methods such as GET, POST, PUT, PATCH, and DELETE.
So API Gateway supports stateless RESTful APIs as well as stateful WebSocket APIs.
Therefore this option is correct.
Incorrect options:
API Gateway creates RESTful APIs that enable stateful client-server communication
and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol,
which enables stateful, full-duplex communication between client and server
API Gateway creates RESTful APIs that enable stateless client-server communication
and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol,
which enables stateless, full-duplex communication between client and server
API Gateway creates RESTful APIs that enable stateful client-server communication
and API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol,
which enables stateless, full-duplex communication between client and server
These three options contradict the earlier details provided in the explanation. To
summarize, API Gateway supports stateless RESTful APIs and stateful WebSocket
APIs. Hence these options are incorrect.
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html
The company has hired you as an AWS Certified Solutions Architect Associate to build
the best-fit solution that does not require custom development/scripting effort. Which
of the following will you suggest?
Set up a cron job on the EC2 instances to inspect the web application's logs at a
regular frequency. When HTTP errors are detected, force an application
restart
Replace the Network Load Balancer (NLB) with an Application Load Balancer
(ALB) and configure HTTP health checks on the ALB by pointing to the URL of
the application. Leverage the Auto Scaling group to replace unhealthy
instances
(Correct)
Explanation
Correct option:
Replace the Network Load Balancer (NLB) with an Application Load Balancer (ALB)
and configure HTTP health checks on the ALB by pointing to the URL of the
application. Leverage the Auto Scaling group to replace unhealthy instances
A Network Load Balancer (NLB) functions at the fourth layer of the Open Systems
Interconnection (OSI) model. It can handle millions of requests per second. After the
load balancer receives a connection request, it selects a target from the target group for
the default rule. It attempts to open a TCP connection to the selected target on the port
specified in the listener configuration.
A load balancer serves as the single point of contact for clients. The load balancer
distributes incoming traffic across multiple targets, such as Amazon EC2 instances.
This increases the availability of your application. You add one or more listeners to your
load balancer.
A listener checks for connection requests from clients, using the protocol and port that
you configure, and forwards requests to a target group. Each target group routes
requests to one or more registered targets, such as EC2 instances, using the TCP
protocol and the port number that you specify.
via - https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-
health-checks.html
For the given use case, you need to swap out the NLB with an ALB. This would allow you
to use HTTP-based health checks to detect when the web application faces errors. You
can then leverage the Auto Scaling group to use the ALB's health checks to identify and
replace unhealthy instances.
via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-
healthcheck.html
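A minimal boto3 sketch of this setup, assuming hypothetical names for the target group, VPC, Auto Scaling group, and health check path, might look like the following; it configures an HTTP health check on the ALB target group and tells the Auto Scaling group to use ELB health checks so failing instances are replaced:

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# Target group with an HTTP health check against the application URL path.
tg = elbv2.create_target_group(
    Name="web-app-tg",                  # hypothetical
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",      # hypothetical
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",          # hypothetical application health URL
    Matcher={"HttpCode": "200"},
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Attach the target group to the Auto Scaling group and switch the group to
# ELB health checks, so instances failing the HTTP check get replaced.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-app-asg",  # hypothetical
    TargetGroupARNs=[tg_arn],
)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```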
Incorrect options:
Set up a CloudWatch alarm to monitor the UnhealthyHostCount metric for the NLB.
Leverage the Auto Scaling group to replace unhealthy instances when the alarm is in
the ALARM state - The Elastic Load Balancing (ELB) service provides you with Amazon
CloudWatch metrics (HealthyHostCount and UnhealthyHostCount) to monitor the
targets behind your load balancers. Although the unhealthy host count metric gives the
aggregate number of failed hosts, there is a common pain point when you create an
alarm for unhealthy hosts based on these metrics. This is because there is no easy way
for you to tell which target was or is unhealthy. Building a solution using the CloudWatch
alarm requires significant development/scripting effort to identify the unhealthy target,
so this option is incorrect.
Configure HTTP health checks on the Network Load Balancer (NLB) by pointing to the
URL of the application. Leverage the Auto Scaling group to replace unhealthy
instances - The NLB uses HTTP, HTTPS, and TCP as possible protocols when
performing health checks on targets. The default is the TCP protocol. If the target type
is ALB, the supported health check protocols are HTTP and HTTPS. Although it is now
possible to configure an ALB as a target of an NLB, it would end up being a costlier and
inefficient solution than just swapping out the NLB with the ALB, so this solution is not
the best fit.
Set up a cron job on the EC2 instances to inspect the web application's logs at a
regular frequency. When HTTP errors are detected, force an application restart - This
option requires significant development/scripting effort to identify the unhealthy target.
It's not as elegant a solution as directly leveraging the HTTP health check capabilities of
the ALB. So this option is incorrect.
References:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-
health-checks.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-
healthcheck.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-
cloudwatch-metrics.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-
troubleshooting.html
https://aws.amazon.com/blogs/networking-and-content-delivery/identifying-unhealthy-
targets-of-elastic-load-balancer/
As a Solutions Architect, can you suggest a way to lower the storage costs while
fulfilling the business requirements?
(Incorrect)
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
(Correct)
Explanation
Correct option:
S3 One Zone-IA offers the same high durability, high throughput, and low latency of S3
Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes
can be configured at the object level, and a single bucket can contain objects stored
across S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can
also use S3 Lifecycle policies to automatically transition objects between storage
classes without any application changes.
Supported S3 lifecycle
transitions:
via
- https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-
considerations.html
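As a hedged illustration of such a lifecycle policy (not taken from the original explanation), the boto3 sketch below transitions all objects to S3 One Zone-IA 30 days after creation; the bucket name is a hypothetical placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 One Zone-IA 30 days after creation
# (30 days is the minimum age for a transition to One Zone-IA / Standard-IA).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-analytics-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-onezone-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)
```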
Incorrect options:
Note that objects must be stored for a minimum of 30 days in S3 Standard before they can be transitioned to S3 One Zone-IA or S3 Standard-IA, so the options that transition the objects after only 7 days are added as distractors.
References:
https://aws.amazon.com/s3/storage-classes/
https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-
considerations.html
Which is the MOST effective way to address this issue so that such incidents do not
recur?
Use permissions boundary to control the maximum permissions employees can grant to the IAM principals
(Correct)
The CTO should review the permissions for each new developer's IAM user so
that such incidents don't recur
Only root user should have full database access in the organization
Remove full database access for all IAM users in the organization
Explanation
Correct option:
Use permissions boundary to control the maximum permissions employees can grant
to the IAM principals
Permission Boundary
Example:
via - https://aws.amazon.com/blogs/security/delegate-permission-management-to-
developers-using-iam-permissions-boundaries/
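For illustration, a minimal boto3 sketch of attaching a permissions boundary is shown below; the policy ARN and user names are hypothetical placeholders, and the boundary policy itself is assumed to already exist:

```python
import boto3

iam = boto3.client("iam")

# A customer-managed policy used as the boundary; it caps what any policy
# attached to the user can effectively grant, regardless of what developers attach later.
boundary_arn = "arn:aws:iam::123456789012:policy/DeveloperBoundary"  # hypothetical ARN

# New developer user created with the permissions boundary attached.
iam.create_user(
    UserName="new-developer",       # hypothetical
    PermissionsBoundary=boundary_arn,
)

# The boundary can also be applied to existing users.
iam.put_user_permissions_boundary(
    UserName="existing-developer",  # hypothetical
    PermissionsBoundary=boundary_arn,
)
```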
Incorrect options:
Remove full database access for all IAM users in the organization - It is not practical to
remove full access for all IAM users in the organization because a select set of users
need this access for database administration. So this option is not correct.
The CTO should review the permissions for each new developer's IAM user so that
such incidents don't recur - Likewise the CTO is not expected to review the permissions
for each new developer's IAM user, as this is best done via an automated procedure.
This option has been added as a distractor.
Only root user should have full database access in the organization - As a best
practice, the root user should not be used for everyday administrative tasks such as managing database access. So this option is not correct.
Reference:
https://aws.amazon.com/blogs/security/delegate-permission-management-to-
developers-using-iam-permissions-boundaries/
Question 52: Correct
An e-commerce company is looking for a solution with high availability, as it plans to
migrate its flagship application to a fleet of Amazon EC2 instances. The solution should
allow for content-based routing as part of the architecture.
As a Solutions Architect, which of the following will you suggest for the company?
Use an Application Load Balancer for distributing traffic to the EC2 instances
spread across different Availability Zones. Configure Auto Scaling group to
mask any failure of an instance
(Correct)
Use a Network Load Balancer for distributing traffic to the EC2 instances
spread across different Availability Zones. Configure a Private IP address to
mask any failure of an instance
Use an Auto Scaling group for distributing traffic to the EC2 instances spread
across different Availability Zones. Configure a Public IP address to mask any
failure of an instance
Use an Auto Scaling group for distributing traffic to the EC2 instances spread
across different Availability Zones. Configure an Elastic IP address to mask
any failure of an instance
Explanation
Correct option:
Use an Application Load Balancer for distributing traffic to the EC2 instances spread
across different Availability Zones. Configure Auto Scaling group to mask any failure
of an instance
The Application Load Balancer (ALB) is best suited for load balancing HTTP and HTTPS
traffic and provides advanced request routing targeted at the delivery of modern
application architectures, including microservices and containers. Operating at the
individual request level (Layer 7), the Application Load Balancer routes traffic to targets
within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the
request.
This is the correct option since the question has a specific requirement for content-
based routing which can be configured via the Application Load Balancer. Different AZs
provide high availability to the overall architecture and Auto Scaling group will help
mask any instance failures.
via - https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
Incorrect options:
Use a Network Load Balancer for distributing traffic to the EC2 instances spread
across different Availability Zones. Configure a Private IP address to mask any failure
of an instance - Network Load Balancer cannot facilitate content-based routing so this
option is incorrect.
Use an Auto Scaling group for distributing traffic to the EC2 instances spread across
different Availability Zones. Configure an Elastic IP address to mask any failure of an
instance
Use an Auto Scaling group for distributing traffic to the EC2 instances spread across
different Availability Zones. Configure a Public IP address to mask any failure of an
instance
Both these options are incorrect as you cannot use the Auto Scaling group to distribute
traffic to the EC2 instances.
An Elastic IP address is a static, public, IPv4 address allocated to your AWS account.
With an Elastic IP address, you can mask the failure of an instance or software by
rapidly remapping the address to another instance in your account. Elastic IPs do not
change and remain allocated to your account until you release them.
via - https://docs.aws.amazon.com/whitepapers/latest/fault-tolerant-
components/fault-tolerant-components.pdf
You can span your Auto Scaling group across multiple Availability Zones within a
Region and then attach a load balancer to distribute incoming traffic across those
zones.
via - https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-
zone.html
References:
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
https://docs.aws.amazon.com/whitepapers/latest/fault-tolerant-components/fault-
tolerant-components.pdf
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-
zone.html
Change the configuration on AWS S3 console so that the user needs to provide
additional confirmation while deleting any S3 object
(Correct)
Create an event trigger on deleting any S3 object. The event invokes an SNS
notification via email to the IT manager
(Incorrect)
(Correct)
Explanation
Correct options:
For example:
If you overwrite an object, it results in a new object version in the bucket. You can
always restore the previous version. If you delete an object, instead of removing it
permanently, Amazon S3 inserts a delete marker, which becomes the current object
version. You can always restore the previous version. Hence, this is the correct option.
Versioning
Overview:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
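The following boto3 sketch illustrates the idea (not part of the original explanation); the bucket name and MFA device serial are hypothetical placeholders, and the MFA delete call is assumed to be made with the bucket owner's root credentials:

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so overwrites and deletes create new versions / delete
# markers instead of destroying data.
s3.put_bucket_versioning(
    Bucket="my-important-bucket",  # hypothetical
    VersioningConfiguration={"Status": "Enabled"},
)

# Optionally require MFA for permanent deletes; this call must be made with
# the bucket owner's (root) credentials and an MFA device serial plus token code.
s3.put_bucket_versioning(
    Bucket="my-important-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # hypothetical
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```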
Incorrect options:
Create an event trigger on deleting any S3 object. The event invokes an SNS
notification via email to the IT manager - Sending an event trigger after object deletion
does not meet the objective of preventing object deletion by mistake because the object
has already been deleted. So, this option is incorrect.
Establish a process to get managerial approval for deleting S3 objects - This option for
getting managerial approval is just a distractor.
Change the configuration on AWS S3 console so that the user needs to provide
additional confirmation while deleting any S3 object - There is no provision to set up S3
configuration to ask for additional confirmation before deleting an object. This option is
incorrect.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
Which of the following is the fastest way to upload the daily compressed file into S3?
FTP the compressed file into an EC2 instance that runs in the same region as the S3 bucket. Then transfer the file from the EC2 instance into the S3 bucket
Upload the compressed file using multipart upload with S3 transfer acceleration
(Correct)
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over
long distances between your client and an S3 bucket. Transfer Acceleration takes
advantage of Amazon CloudFront’s globally distributed edge locations. As the data
arrives at an edge location, data is routed to Amazon S3 over an optimized network
path.
Multipart upload allows you to upload a single object as a set of parts. Each part is a
contiguous portion of the object's data. You can upload these object parts
independently and in any order. If transmission of any part fails, you can retransmit that
part without affecting other parts. After all parts of your object are uploaded, Amazon
S3 assembles these parts and creates the object. If you're uploading large objects over
a stable high-bandwidth network, use multipart uploading to maximize the use of your
available bandwidth by uploading object parts in parallel for multi-threaded
performance. If you're uploading over a spotty network, use multipart uploading to
increase resiliency to network errors by avoiding upload restarts.
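As a rough sketch of combining the two (with a hypothetical bucket name and local file path), you could enable Transfer Acceleration on the bucket and then upload through the accelerate endpoint with a multipart transfer configuration:

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Enable Transfer Acceleration on the destination bucket (one-time setup).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="daily-uploads-bucket",  # hypothetical
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that sends requests through the accelerate endpoint, plus a transfer
# configuration that switches to multipart uploads with parallel parts.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
transfer_cfg = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # use multipart above roughly 100 MB
    multipart_chunksize=16 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="/tmp/daily-export.gz",  # hypothetical local file
    Bucket="daily-uploads-bucket",
    Key="exports/daily-export.gz",
    Config=transfer_cfg,
)
```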
Incorrect options:
Upload the compressed file in a single operation - In general, when your object size
reaches 100 MB, you should consider using multipart uploads instead of uploading the
object in a single operation. Multipart upload provides improved throughput - you can
upload parts in parallel to improve throughput. Therefore, this option is not correct.
Upload the compressed file using multipart upload - Although using multipart upload
would certainly speed up the process, combining with S3 transfer acceleration would
further improve the transfer speed. Therefore just using multipart upload is not the
correct option.
FTP the compressed file into an EC2 instance that runs in the same region as the S3
bucket. Then transfer the file from the EC2 instance into the S3 bucket - This is a
roundabout process of getting the file into S3 and has been added as a distractor. Although it is
technically feasible to follow this process, it would involve a lot of scripting and certainly
would not be the fastest way to get the file into S3.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Path-based Routing
(Correct)
Host-based Routing
Path-based Routing
Host-based Routing:
You can route a client request based on the Host field of the HTTP header allowing you
to route to multiple domains from the same load balancer.
Path-based Routing:
You can route a client request based on the URL path of the HTTP header.
HTTP method-based Routing:
You can route a client request based on any standard or custom HTTP method.
Query string parameter-based Routing:
You can route a client request based on the query string or query parameters.
Source IP address CIDR-based Routing:
You can route a client request based on the source IP address CIDR from where the request originates.
You can use path conditions to define rules that route requests based on the URL in the
request (also known as path-based routing).
The path pattern is applied only to the path of the URL, not to its query
parameters.
via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-
balancer-listeners.html#path-conditions
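To illustrate a path condition, here is a minimal boto3 sketch of an ALB listener rule that forwards requests matching /api/* to a dedicated target group; the listener and target group ARNs are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Forward requests whose URL path matches /api/* to a dedicated target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234567890abcdef/abcdef1234567890",
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/api/*"]}}
    ],
    Actions=[
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-tg/abcdef1234567890"}
    ],
)
```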
Incorrect options:
Host-based Routing
As mentioned earlier in the explanation, none of these other routing types supports routing requests based on the URL path, so these options are incorrect.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
listeners.html
The spreadsheet will have to be copied into EFS file systems of other AWS
regions as EFS is a regional service and it does not allow access from other
AWS regions
The spreadsheet on the EFS file system can be accessed in other AWS regions
by using an inter-region VPC peering connection
(Correct)
The spreadsheet data will have to be moved into an RDS MySQL database
which can then be accessed from any AWS region
Explanation
Correct option:
The spreadsheet on the EFS file system can be accessed in other AWS regions by
using an inter-region VPC peering connection
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed
elastic NFS file system for use with AWS Cloud services and on-premises resources.
Amazon EFS is a regional service storing data within and across multiple Availability
Zones (AZs) for high availability and durability. Amazon EC2 instances can access your
file system across AZs, regions, and VPCs, while on-premises servers can access using
AWS Direct Connect or AWS VPN.
You can connect to Amazon EFS file systems from EC2 instances in other AWS regions
using an inter-region VPC peering connection, and from on-premises servers using an
AWS VPN connection. So this is the correct option.
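As a rough sketch of the cross-region mount (with a hypothetical file system ID and regions), note that EFS DNS names do not resolve from the peered region, so the NFS mount on the remote instance uses a mount target IP address:

```python
import boto3

# The file system lives in us-east-1; an EC2 instance in ap-southeast-1 reaches
# it over an inter-region VPC peering connection.
efs = boto3.client("efs", region_name="us-east-1")

# Look up a mount target IP address to use for the NFS mount from the peered region.
targets = efs.describe_mount_targets(FileSystemId="fs-0123456789abcdef0")  # hypothetical
mount_ip = targets["MountTargets"][0]["IpAddress"]
print(f"sudo mount -t nfs4 -o nfsvers=4.1 {mount_ip}:/ /mnt/efs")  # command to run on the remote instance
```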
Incorrect options:
The spreadsheet will have to be copied in Amazon S3 which can then be accessed
from any AWS region
The spreadsheet data will have to be moved into an RDS MySQL database which can
then be accessed from any AWS region
Copying the spreadsheet into S3 or an RDS database is not the correct solution as it involves a lot of operational overhead. For RDS, one would need to write custom code to replicate the spreadsheet functionality running off of the database. S3 does not allow in-place edits of an object, and it is not POSIX-compliant, so one would need to develop a custom application to "simulate in-place edits" to support collaboration as per the use case. So both these options are ruled out.
The spreadsheet will have to be copied into EFS file systems of other AWS regions as
EFS is a regional service and it does not allow access from other AWS regions -
Creating copies of the spreadsheet into EFS file systems of other AWS regions would
mean no collaboration would be possible between the teams. In this case, each team
would work on "its own file" instead of a single file accessed and updated by all teams.
Hence this option is incorrect.
Reference:
https://aws.amazon.com/efs/
Which of the following is correct regarding the pricing for these two services?
Both ECS with EC2 launch type and ECS with Fargate launch type are just
charged based on Elastic Container Service used per hour
Both ECS with EC2 launch type and ECS with Fargate launch type are charged
based on EC2 instances and EBS volumes used
Both ECS with EC2 launch type and ECS with Fargate launch type are charged
based on vCPU and memory resources that the containerized application
requests
ECS with EC2 launch type is charged based on EC2 instances and EBS volumes
used. ECS with Fargate launch type is charged based on vCPU and memory
resources that the containerized application requests
(Correct)
Explanation
Correct option:
ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used.
ECS with Fargate launch type is charged based on vCPU and memory resources that
the containerized application requests
Amazon Elastic Container Service (Amazon ECS) is a fully managed container
orchestration service. ECS allows you to easily run, scale, and secure Docker container
applications on AWS.
ECS
Overview:
via - https://aws.amazon.com/ecs/
With the Fargate launch type, you pay for the amount of vCPU and memory resources
that your containerized application requests. vCPU and memory resources are
calculated from the time your container images are pulled until the Amazon ECS task terminates, rounded up to the nearest second. With the EC2 launch type, there is no additional charge for ECS itself; you pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.
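For illustration, the Fargate charge is driven by the task-level cpu and memory you request in the task definition, as in the hypothetical boto3 sketch below (family name, image, and role ARN are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Fargate billing follows the task-level cpu/memory requested here.
ecs.register_task_definition(
    family="billing-demo",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",     # 0.25 vCPU requested, which is what you pay for on Fargate
    memory="512",  # 512 MiB requested, which is what you pay for on Fargate
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[
        {"name": "app", "image": "public.ecr.aws/nginx/nginx:latest", "essential": True}
    ],
)
```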
Incorrect options:
Both ECS with EC2 launch type and ECS with Fargate launch type are charged based
on vCPU and memory resources that the containerized application requests
Both ECS with EC2 launch type and ECS with Fargate launch type are charged based
on EC2 instances and EBS volumes used
As mentioned above - with the Fargate launch type, you pay for the amount of vCPU and
memory resources. With EC2 launch type, you pay for AWS resources (e.g. EC2
instances or EBS volumes). Hence both these options are incorrect.
Both ECS with EC2 launch type and ECS with Fargate launch type are just charged
based on Elastic Container Service used per hour
References:
https://aws.amazon.com/ecs/pricing/
As a solutions architect, which are the MOST time/resource efficient steps that you
would recommend so that the maintenance work can be completed at the earliest?
(Select two)
Suspend the ReplaceUnhealthy process type for the Auto Scaling group and
apply the maintenance patch to the instance. Once the instance is ready, you
can manually set the instance's health status back to healthy and activate the
ReplaceUnhealthy process type again
(Correct)
Put the instance into the Standby state and then update the instance by
applying the maintenance patch. Once the instance is ready, you can exit the
Standby state and then return the instance to service
(Correct)
Take a snapshot of the instance, create a new AMI and then launch a new
instance using this AMI. Apply the maintenance patch to this new instance and
then add it back to the Auto Scaling Group by using the manual scaling policy.
Terminate the earlier instance that had the maintenance issue
(Incorrect)
Delete the Auto Scaling group and apply the maintenance fix to the given
instance. Create a new Auto Scaling group and add all the instances again
using the manual scaling policy
Suspend the ScheduledActions process type for the Auto Scaling group and
apply the maintenance patch to the instance. Once the instance is ready, you
can manually set the instance's health status back to healthy and
activate the ScheduledActions process type again
Explanation
Correct options:
Put the instance into the Standby state and then update the instance by applying the
maintenance patch. Once the instance is ready, you can exit the Standby state and
then return the instance to service - You can put an instance that is in the InService
state into the Standby state, update some software or troubleshoot the instance, and
then return the instance to service. Instances that are on standby are still part of the
Auto Scaling group, but they do not actively handle application traffic.
Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the
maintenance patch to the instance. Once the instance is ready, you can manually set
the instance's health status back to healthy and activate the ReplaceUnhealthy
process type again - The ReplaceUnhealthy process terminates instances that are
marked as unhealthy and then creates new instances to replace them. While this process is suspended, Amazon EC2 Auto Scaling stops replacing instances that are marked as unhealthy; instances that fail EC2 or Elastic Load Balancing health checks are still marked as unhealthy. As soon as you resume the ReplaceUnhealthy process, Amazon EC2 Auto Scaling replaces instances that were marked unhealthy while the process was suspended.
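A minimal boto3 sketch of both approaches is shown below; the Auto Scaling group name and instance ID are hypothetical placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG, INSTANCE = "web-asg", "i-0123456789abcdef0"  # hypothetical names

# Option 1: move the instance to Standby, patch it, then return it to service.
autoscaling.enter_standby(
    AutoScalingGroupName=ASG,
    InstanceIds=[INSTANCE],
    ShouldDecrementDesiredCapacity=True,
)
# ... apply the maintenance patch ...
autoscaling.exit_standby(AutoScalingGroupName=ASG, InstanceIds=[INSTANCE])

# Option 2: suspend ReplaceUnhealthy, patch, mark the instance healthy, resume.
autoscaling.suspend_processes(AutoScalingGroupName=ASG, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the maintenance patch ...
autoscaling.set_instance_health(InstanceId=INSTANCE, HealthStatus="Healthy")
autoscaling.resume_processes(AutoScalingGroupName=ASG, ScalingProcesses=["ReplaceUnhealthy"])
```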
Incorrect options:
Take a snapshot of the instance, create a new AMI and then launch a new instance
using this AMI. Apply the maintenance patch to this new instance and then add it back
to the Auto Scaling Group by using the manual scaling policy. Terminate the earlier
instance that had the maintenance issue - Taking the snapshot of the existing instance
to create a new AMI and then creating a new instance in order to apply the maintenance
patch is not time/resource optimal, hence this option is ruled out.
Delete the Auto Scaling group and apply the maintenance fix to the given instance.
Create a new Auto Scaling group and add all the instances again using the manual
scaling policy - It's not recommended to delete the Auto Scaling group just to apply a
maintenance patch on a specific instance.
Suspend the ScheduledActions process type for the Auto Scaling group and apply the
maintenance patch to the instance. Once the instance is ready, you can
manually set the instance's health status back to healthy and activate the
ScheduledActions process type again - Amazon EC2 Auto Scaling does not execute
scaling actions that are scheduled to run during the suspension period. This option is
not relevant to the given use-case.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-
processes.html
Which of the following are the MOST cost-effective options to improve the file upload
speed into S3? (Select two)
Create multiple site-to-site VPN connections between the AWS Cloud and
branch offices in Europe and Asia. Use these VPN connections for faster file
uploads into S3
Use AWS Global Accelerator for faster file uploads into the destination S3
bucket
(Incorrect)
Use multipart uploads for faster file uploads into the destination S3 bucket
(Correct)
Use Amazon S3 Transfer Acceleration to enable faster file uploads into the
destination S3 bucket
(Correct)
Create multiple AWS Direct Connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the Direct Connect connections for faster file uploads into S3
Explanation
Correct options:
Use Amazon S3 Transfer Acceleration to enable faster file uploads into the destination
S3 bucket - Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers
of files over long distances between your client and an S3 bucket. Transfer Acceleration
takes advantage of Amazon CloudFront’s globally distributed edge locations. As the
data arrives at an edge location, data is routed to Amazon S3 over an optimized network
path.
Use multipart uploads for faster file uploads into the destination S3 bucket - Multipart
upload allows you to upload a single object as a set of parts. Each part is a contiguous
portion of the object's data. You can upload these object parts independently and in any
order. If transmission of any part fails, you can retransmit that part without affecting
other parts. After all parts of your object are uploaded, Amazon S3 assembles these
parts and creates the object. In general, when your object size reaches 100 MB, you
should consider using multipart uploads instead of uploading the object in a single
operation. Multipart upload provides improved throughput, therefore it facilitates faster
file uploads.
Incorrect options:
Create multiple AWS Direct Connect connections between the AWS Cloud and branch offices in Europe and Asia. Use the Direct Connect connections for faster file uploads into S3 - AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Direct Connect takes significant time (several months) to provision and is overkill for the given use case.
Create multiple site-to-site VPN connections between the AWS Cloud and branch
offices in Europe and Asia. Use these VPN connections for faster file uploads into S3 -
AWS Site-to-Site VPN enables you to securely connect your on-premises network or
branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can
securely extend your data center or branch office network to the cloud with an AWS
Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish
encrypted network connectivity between your intranet and Amazon VPC over the
Internet. VPN Connections are a good solution if you have low to modest bandwidth
requirements and can tolerate the inherent variability in Internet-based connectivity.
Site-to-site VPN will not help in accelerating the file transfer speeds into S3 for the given
use-case.
Use AWS Global Accelerator for faster file uploads into the destination S3 bucket -
AWS Global Accelerator is a service that improves the availability and performance of
your applications with local or global users. It provides static IP addresses that act as a
fixed entry point to your application endpoints in a single or multiple AWS Regions, such
as your Application Load Balancers, Network Load Balancers or Amazon EC2 instances.
AWS Global Accelerator will not help in accelerating the file transfer speeds into S3 for
the given use-case.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
As a solutions architect, what would you recommend so that the application runs near
its peak performance state?
Configure the Auto Scaling group to use target tracking policy and set the CPU
utilization as the target metric with a target value of 50%
(Correct)
Configure the Auto Scaling group to use step scaling policy and set the CPU
utilization as the target metric with a target value of 50%
Configure the Auto Scaling group to use simple scaling policy and set the CPU
utilization as the target metric with a target value of 50%
(Incorrect)
Explanation
Correct option:
Configure the Auto Scaling group to use target tracking policy and set the CPU
utilization as the target metric with a target value of 50%
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated
as a logical grouping for the purposes of automatic scaling and management. An Auto
Scaling group also enables you to use Amazon EC2 Auto Scaling features such as
health check replacements and scaling policies.
With target tracking scaling policies, you select a scaling metric and set a target value.
Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the
scaling policy and calculates the scaling adjustment based on the metric and the target
value. The scaling policy adds or removes capacity as required to keep the metric at, or
close to, the specified target value.
Configure a target tracking scaling policy to keep the average aggregate CPU utilization
of your Auto Scaling group at 50 percent. This meets the requirements specified in the
given use-case and therefore, this is the correct option.
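As an illustration only, the boto3 sketch below creates such a target tracking policy on a hypothetical Auto Scaling group, keeping average CPU utilization at roughly 50%:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization at roughly 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",  # hypothetical
    PolicyName="cpu-50-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```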
Incorrect options:
Configure the Auto Scaling group to use step scaling policy and set the CPU utilization
as the target metric with a target value of 50%
Configure the Auto Scaling group to use simple scaling policy and set the CPU
utilization as the target metric with a target value of 50%
With step scaling and simple scaling, you choose scaling metrics and threshold values
for the CloudWatch alarms that trigger the scaling process. Neither step scaling nor
simple scaling can be configured to use a target metric for CPU utilization, hence both
these options are incorrect.
Configure the Auto Scaling group to use a CloudWatch alarm triggered on a CPU utilization threshold of 50% - An Auto Scaling group cannot directly use a CloudWatch alarm as the source for a scale-in or scale-out event, hence this option is incorrect.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-
tracking.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
Use an Amazon Aurora Global Database for the games table and use
DynamoDB tables for the users and games_played tables
Use an Amazon Aurora Global Database for the games table and use Amazon
Aurora for the users and games_played tables
(Correct)
Use a DynamoDB global table for the games table and use Amazon Aurora for
the users and games_played tables
Use a DynamoDB global table for the games table and use DynamoDB tables for
the users and games_played tables
(Incorrect)
Explanation
Correct option:
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora
for the users and games_played tables
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the
cloud, that combines the performance and availability of traditional enterprise
databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that
auto-scales up to 128TB per database instance. Aurora is not an in-memory database.
For the given use-case, we, therefore, need to have two Aurora clusters, one for the
global table (games table) and the other one for the local tables (users and
games_played tables).
Incorrect options:
Use an Amazon Aurora Global Database for the games table and use DynamoDB tables
for the users and games_played tables
Use a DynamoDB global table for the games table and use Amazon Aurora for
the users and games_played tables
Use a DynamoDB global table for the games table and use DynamoDB tables for
the users and games_played tables
Reference:
https://aws.amazon.com/rds/aurora/faqs/
What AWS services would you use to build the most cost-effective solution with the
LEAST amount of infrastructure maintenance?
Ingest the data in a Spark Streaming cluster on EMR and use Spark Streaming
transformations before writing to S3
Ingest the data in Kinesis Data Firehose and use an intermediary Lambda
function to filter and transform the incoming stream before the output is
dumped on S3
(Correct)
Ingest the data in Kinesis Data Streams and use an intermediary Lambda
function to filter and transform the incoming stream before the output is
dumped on S3
(Incorrect)
Ingest the data in Kinesis Data Analytics and use SQL queries to filter and
transform the data before writing to S3
Explanation
Correct option:
Ingest the data in Kinesis Data Firehose and use an intermediary Lambda function to
filter and transform the incoming stream before the output is dumped on S3
Amazon Kinesis Data Firehose is the easiest way to load streaming data into data
stores and analytics tools. It can capture, transform, and load streaming data into
Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near
real-time analytics with existing business intelligence tools and dashboards you’re
already using today. It is a fully managed service that automatically scales to match the
throughput of your data and requires no ongoing administration. It can also batch,
compress, and encrypt the data before loading it, minimizing the amount of storage
used at the destination and increasing security.
via - https://aws.amazon.com/kinesis/data-firehose/
The correct option is to ingest the data in Kinesis Data Firehose and use a Lambda
function to filter and transform the incoming data before the output is dumped on S3.
This way, you only need to store a sliced version of the data containing just the relevant attributes required for your model. Also, it should be noted that this solution is entirely serverless and requires no infrastructure maintenance.
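A hedged boto3 sketch of such a delivery stream is shown below; the stream name, IAM role, bucket, and Lambda function ARNs are hypothetical placeholders:

```python
import boto3

firehose = boto3.client("firehose")

# Delivery stream that invokes a transformation Lambda before writing to S3.
firehose.create_delivery_stream(
    DeliveryStreamName="telemetry-to-s3",  # hypothetical
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",  # hypothetical
        "BucketARN": "arn:aws:s3:::ml-training-data",                        # hypothetical
        "ProcessingConfiguration": {
            "Enabled": True,
            "Processors": [
                {
                    "Type": "Lambda",
                    "Parameters": [
                        {
                            "ParameterName": "LambdaArn",
                            "ParameterValue": "arn:aws:lambda:us-east-1:123456789012:function:filter-transform",  # hypothetical
                        }
                    ],
                }
            ],
        },
    },
)
```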
Incorrect options:
Ingest the data in Kinesis Data Analytics and use SQL queries to filter and transform
the data before writing to S3 - Amazon Kinesis Data Analytics is the easiest way to
analyze streaming data in real-time. Kinesis Data Analytics enables you to easily and
quickly build queries and sophisticated streaming applications in three simple steps:
setup your streaming data sources, write your queries or streaming applications, and
set up your destination for processed data. Kinesis Data Analytics cannot directly ingest
data from the source as it ingests data either from Kinesis Data Streams or Kinesis Data
Firehose, so this option is ruled out.
Ingest the data in Kinesis Data Streams and use an intermediary Lambda function to
filter and transform the incoming stream before the output is dumped on S3 - Amazon
Kinesis Data Streams (KDS) is a massively scalable, highly durable data ingestion and
processing service optimized for streaming data. Amazon Kinesis Data Streams is
integrated with a number of AWS services, including Amazon Kinesis Data Firehose for
near real-time transformation.
Kinesis Data Streams cannot directly write the output to S3. Unlike Firehose, KDS does
not offer a ready-made integration via an intermediary Lambda function to reliably dump
data into S3. You will need to do a lot of custom coding to get the Lambda function to
process the incoming stream and then store the transformed output to S3 with the
constraint that the buffer is maintained reliably and no transformed data is lost. So this
option is incorrect.
Ingest the data in a Spark Streaming cluster on EMR and use Spark Streaming
transformations before writing to S3 - Amazon EMR is the industry-leading cloud big
data platform for processing vast amounts of data using open source tools such as
Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto.
Amazon EMR uses Hadoop, an open-source framework, to distribute your data and
processing across a resizable cluster of Amazon EC2 instances. Using an EMR cluster
would imply managing the underlying infrastructure so it’s ruled out because the correct
solution for the given use-case should require the least amount of infrastructure
maintenance.
Reference:
https://aws.amazon.com/kinesis/data-firehose/
As a Solutions Architect, which of the following solutions would you suggest, so that
both the applications can consume the real-time status data concurrently?
Amazon Kinesis Data Streams
(Correct)
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES)
Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS)
(Incorrect)
Explanation
Correct option:
Amazon Kinesis Data Streams - Amazon Kinesis Data Streams enables real-time
processing of streaming big data. It provides ordering of records, as well as the ability
to read and/or replay records in the same order to multiple Amazon Kinesis
Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given
partition key to the same record processor, making it easier to build multiple
applications reading from the same Amazon Kinesis data stream (for example, to
perform counting, aggregation, and filtering).
AWS recommends Amazon Kinesis Data Streams for use cases that require ordered, replayable records and multiple consuming applications reading the same stream concurrently.
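For illustration (not part of the original explanation), a producer could write status updates as in the hypothetical sketch below; each consuming application then reads the same stream independently with its own iterator or KCL application:

```python
import boto3
import json

kinesis = boto3.client("kinesis")

# Every status update for a given order lands on the same shard (and is read in
# order) because the order ID is used as the partition key.
kinesis.put_record(
    StreamName="order-status-stream",  # hypothetical
    Data=json.dumps({"order_id": "o-42", "status": "SHIPPED"}).encode(),
    PartitionKey="o-42",
)
```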
Incorrect options:
Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS)
is a highly available, durable, secure, fully managed pub/sub messaging service that
enables you to decouple microservices, distributed systems, and serverless
applications. Amazon SNS provides topics for high-throughput, push-based, many-to-
many messaging. SNS is a notification service and cannot be used for real-time
processing of data.
Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS) -
Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly scalable hosted
queue for storing messages as they travel between computers. Amazon SQS lets you
easily move data between distributed application components and helps you build
applications in which messages are processed independently (with message-level
ack/fail semantics), such as automated workflows. Since multiple applications need to
consume the same data stream concurrently, Kinesis is a better choice when compared
to the combination of SQS with SNS.
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon
SES) - As discussed above, Kinesis is a better option for this use case in comparison to
SQS. Also, SES does not fit this use-case. Hence, this option is an incorrect answer.
Reference:
https://aws.amazon.com/kinesis/data-streams/faqs/
Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to
process the messages at the peak rate
Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to
process the messages at the peak rate
(Correct)
Use Amazon SQS FIFO queue in batch mode of 4 messages per operation to process
the messages at the peak rate
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that
enables you to decouple and scale microservices, distributed systems, and serverless
applications. SQS offers two types of message queues - Standard queues vs FIFO
queues.
For FIFO queues, the order in which messages are sent and received is strictly
preserved (i.e. First-In-First-Out). On the other hand, the standard SQS queues offer best-
effort ordering. This means that occasionally, messages might be delivered in an order
different from which they were sent.
By default, FIFO queues support up to 300 messages per second (300 send, receive, or
delete operations per second). When you batch 10 messages per operation (maximum),
FIFO queues can support up to 3,000 messages per second. Therefore, you need to batch 4 messages per operation so that the FIFO queue can support up to 1,200 messages per second (300 operations x 4 messages), which comfortably covers the peak rate.
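For illustration, one SendMessageBatch call carrying 4 messages counts as a single operation, as in the hypothetical boto3 sketch below (queue URL and message bodies are placeholders):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical

# One batch call with 4 messages = one operation; 300 operations/second x 4
# messages = up to 1,200 messages per second.
messages = [f"order-update-{i}" for i in range(4)]
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {
            "Id": str(i),
            "MessageBody": body,
            "MessageGroupId": "orders",              # preserves FIFO ordering per group
            "MessageDeduplicationId": f"batch-{i}",  # or enable content-based deduplication
        }
        for i, body in enumerate(messages)
    ],
)
```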
FIFO Queues
Overview:
via
- https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/
FIFO-queues.html
Incorrect options:
Use Amazon SQS standard queue to process the messages - As messages need to be
processed in order, therefore standard queues are ruled out.
Use Amazon SQS FIFO queue to process the messages - By default, FIFO queues
support up to 300 messages per second and this is not sufficient to meet the message
processing throughput per the given use-case. Hence this option is incorrect.
Use Amazon SQS FIFO queue in batch mode of 2 messages per operation to process
the messages at the peak rate - As mentioned earlier in the explanation, you need to
use FIFO queues in batch mode and process 4 messages per operation, so that the FIFO
queue can support up to 1200 messages per second. With 2 messages per operation,
you can only support up to 600 messages per second.
References:
https://aws.amazon.com/sqs/
https://aws.amazon.com/sqs/features/
Use Amazon CloudFront with a custom origin pointing to the on-premises servers
(Correct)
Use Amazon CloudFront with a custom origin pointing to the DNS record of the
website on Route 53
Migrate the website to Amazon S3. Use cross-Region replication between AWS
Regions in the US and Asia
Explanation
Correct option:
Use Amazon CloudFront with a custom origin pointing to the on-premises servers
Amazon CloudFront is a web service that gives businesses and web application
developers an easy and cost-effective way to distribute content with low latency and
high data transfer speeds. Amazon CloudFront uses standard cache control headers
you set on your files to identify static and dynamic content. You can use different
origins for different types of content on a single site – e.g. Amazon S3 for static objects,
Amazon EC2 for dynamic content, and custom origins for third-party content.
Amazon
CloudFront:
via - https://aws.amazon.com/cloudfront/
An origin server stores the original, definitive version of your objects. If you're serving
content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server,
such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud
(Amazon EC2) instance or on a server that you manage; these servers are also known
as custom origins.
via
- https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introductio
n.html
Amazon CloudFront employs a global network of edge locations and regional edge
caches that cache copies of your content close to your viewers. Amazon CloudFront
ensures that end-user requests are served by the closest edge location. As a result,
viewer requests travel a short distance, improving performance for your viewers.
Therefore for the given use case, the users in Asia will enjoy a low latency experience
while using the website even though the on-premises servers continue to be in the US.
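As a rough sketch under stated assumptions (the on-premises host name and the cache policy ID are hypothetical placeholders), a distribution with a custom origin could be created with boto3 as follows:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Distribution whose custom origin is the on-premises web server.
cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "onprem-origin-2024-06-01",
        "Comment": "Dynamic site served from an on-premises origin",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "onprem-web",
                    "DomainName": "www.onprem.example.com",  # hypothetical on-premises host
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "onprem-web",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": "11111111-2222-3333-4444-555555555555",  # a cache policy you have created or a managed one
        },
    },
)
```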
Incorrect options:
Use Amazon CloudFront with a custom origin pointing to the DNS record of the
website on Route 53 - This option has been added as a distractor. CloudFront cannot
have a custom origin pointing to the DNS record of the website on Route 53.
Migrate the website to Amazon S3. Use cross-Region replication between AWS
Regions in the US and Asia - The use case states that the company operates a dynamic
website. You can use Amazon S3 to host a static website. On a static website, individual
web pages include static content. They might also contain client-side scripts. By
contrast, a dynamic website relies on server-side processing, including server-side
scripts, such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side
scripting, but AWS has other resources for hosting dynamic websites. So this option is
incorrect.
References:
https://aws.amazon.com/cloudfront/
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction
.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html