
Professional Cloud Architect on Google Cloud Platform v1.0 (Professional Cloud Architect) - Full Access
Question 1 ( Testlet 1 )

Company Overview -
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In
addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background -
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept -
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently
host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment -


JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database -
Oracle Database stores user profiles
- 20 TB
- Complex table structure
- Well maintained, clean data
- Strong backup strategy
PostgreSQL database stores user credentials
- Single-homed in US West
- No redundancy
- Backed up every 12 hours
- 100% uptime service level agreement (SLA)
- Authenticates all users

Compute -
30 machines in US West Coast, each machine has:
- Twin, dual-core CPUs
- 32 GB of RAM
- Twin 250 GB HDD (RAID 1)
20 machines in US East Coast, each machine has:
- Single, dual-core CPU
- 24 GB of RAM
- Twin 250 GB HDD (RAID 1)

Storage -
Access to shared 100 TB SAN in each location
Tape backup every week

Business Requirements -
Optimize for capacity during peak periods and value during off-peak periods
Guarantee service availability and support
Reduce on-premises footprint and associated financial and environmental impact
Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
Expand services into Asia

Technical Requirements -
Assess key application for cloud suitability
Modify applications for the cloud
Move applications to a new infrastructure
Leverage managed services wherever feasible
Sunset 20% of capacity in existing data centers
Decrease latency in Asia

CEO Statement -
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store
experiences. As a large, global company, we also have a responsibility to the environment through "green" initiatives and policies.

CTO Statement -
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine
learning to improve our service to customers.

CFO Statement -
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This
model will allow us to respond to increasing customer demand during peak periods and reduce costs.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development
resources.
What Google domain and project structure should you recommend?

A. Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application
B. Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications
C. Create a single G Suite account to manage users with each stage of each application in its own project
D. Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment

Answer : D

Explanation:
Note: The principle of least privilege and separation of duties are concepts that, although semantically different, are intrinsically related from the standpoint of security. The intent behind both is to prevent people
from having higher privilege levels than they actually need.
✑ Principle of Least Privilege: Users should only have the least amount of privileges required to perform their job and no more. This reduces authorization exploitation by limiting access to resources such as targets,
jobs, or monitoring templates for which they are not authorized.
✑ Separation of Duties: Beyond limiting user privilege level, you also limit user duties, or the specific jobs they can perform. No user should be given responsibility for more than one related function. This limits the
ability of a user to perform a malicious action and then cover up that action.
Reference:
https://cloud.google.com/kms/docs/separation-of-duties
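
As a concrete (and hedged) illustration of answer D, the Python sketch below builds the kind of IAM policy bindings the explanation describes: one development/test/staging project and one production project, with duties separated by Google Group. All project IDs and group addresses are hypothetical; in practice the bindings would be applied with gcloud or the Resource Manager API.

# A minimal sketch of the bindings implied by answer D. Project IDs and
# groups are placeholders, not values from the case study.
DEV_PROJECT = "jencomart-dev-test-staging"   # hypothetical project ID
PROD_PROJECT = "jencomart-production"        # hypothetical project ID

iam_policies = {
    DEV_PROJECT: {
        "bindings": [
            # Developers may create and change resources in dev/test/staging...
            {"role": "roles/editor",
             "members": ["group:[email protected]"]},
            # ...while operators only need read access here.
            {"role": "roles/viewer",
             "members": ["group:[email protected]"]},
        ]
    },
    PROD_PROJECT: {
        "bindings": [
            # Separation of duties: only operators administer production.
            {"role": "roles/editor",
             "members": ["group:[email protected]"]},
            # Least privilege: developers can inspect but not modify production.
            {"role": "roles/viewer",
             "members": ["group:[email protected]"]},
        ]
    },
}

for project, policy in iam_policies.items():
    print(project)
    for binding in policy["bindings"]:
        print(" ", binding["role"], "->", ", ".join(binding["members"]))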


Question 2 ( Testlet 1 )

Company Overview -

JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In
addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background -
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept -
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently
host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment -


JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database -
Oracle Database stores user profiles
- 20 TB
- Complex table structure
- Well maintained, clean data
- Strong backup strategy
PostgreSQL database stores user credentials
- Single-homed in US West
- No redundancy
- Backed up every 12 hours
- 100% uptime service level agreement (SLA)
- Authenticates all users

Compute -
30 machines in US West Coast, each machine has:
- Twin, dual-core CPUs
- 32 GB of RAM
- Twin 250 GB HDD (RAID 1)
20 machines in US East Coast, each machine has:
- Single, dual-core CPU
- 24 GB of RAM
- Twin 250 GB HDD (RAID 1)

Storage -
Access to shared 100 TB SAN in each location
Tape backup every week

Business Requirements -
Optimize for capacity during peak periods and value during off-peak periods
Guarantee service availability and support
Reduce on-premises footprint and associated financial and environmental impact
Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
Expand services into Asia

Technical Requirements -
Assess key application for cloud suitability
Modify applications for the cloud
Move applications to a new infrastructure
Leverage managed services wherever feasible
Sunset 20% of capacity in existing data centers
Decrease latency in Asia

CEO Statement -
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store
experiences. As a large, global company, we also have a responsibility to the environment through "green" initiatives and policies.

CTO Statement -
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine
learning to improve our service to customers.

CFO Statement -
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This
model will allow us to respond to increasing customer demand during peak periods and reduce costs.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database
requests to the application servers correctly.
What three steps should you take to diagnose the problem? (Choose three.)

A. Delete the virtual machine (VM) and disks and create a new one
B. Delete the instance, attach the disk to a new VM, and investigate
C. Take a snapshot of the disk and connect to a new machine to investigate
D. Check inbound firewall rules for the network the machine is connected to
E. Connect the machine to another network with very simple firewall rules and investigate
F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate

Answer : CDF

Explanation:
D: Handling "Unable to connect on port 22" error message
Possible causes include:
✑ There is no firewall rule allowing SSH access on the port. SSH access on port 22 is enabled on all Compute Engine instances by default. If you have disabled access, SSH from the Browser will not work. If you run
sshd on a port other than 22, you need to enable access to that port with a custom firewall rule.
✑ The firewall rule allowing SSH access is enabled, but is not configured to allow connections from GCP Console services. Source IP addresses for browser-based SSH sessions are dynamically allocated by GCP
Console and can vary from session to session.
F: Handling "Could not connect, retrying..." error
You can verify that the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but you do
not see these output prefixes in the serial console output, the daemon might be stopped. Reboot the instance to restart the daemon.
Reference:
https://cloud.google.com/compute/docs/ssh-in-browser
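
The three selected steps can also be scripted. The sketch below is a hedged example using the google-cloud-compute client library; the project, zone, instance, and disk names are placeholders. It pulls the serial console output (F), lists ingress firewall rules on the project (D), and snapshots the disk so it can be attached to a fresh VM for inspection (C), all without disturbing the instance, which is still serving database traffic.

# Diagnostic sketch for options C, D, and F; all names are hypothetical.
from google.cloud import compute_v1

PROJECT, ZONE = "jencomart-prod", "us-west1-a"
INSTANCE, DISK = "credentials-db", "credentials-db-boot"

# F: read the serial console output without needing SSH at all.
instances = compute_v1.InstancesClient()
serial = instances.get_serial_port_output(
    project=PROJECT, zone=ZONE, instance=INSTANCE)
print(serial.contents[-2000:])  # the tail often shows sshd or guest-agent errors

# D: check inbound firewall rules for the network.
firewalls = compute_v1.FirewallsClient()
for rule in firewalls.list(project=PROJECT):
    if rule.direction == "INGRESS":
        print(rule.name, list(rule.source_ranges))

# C: snapshot the disk for offline investigation on a new machine.
disks = compute_v1.DisksClient()
op = disks.create_snapshot(
    project=PROJECT, zone=ZONE, disk=DISK,
    snapshot_resource=compute_v1.Snapshot(name="credentials-db-debug"))
op.result()  # wait for the snapshot to finish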


Question 3 ( Testlet 1 )

Company Overview -
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In
addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background -
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept -
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently
host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment -


JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database -
Oracle Database stores user profiles
- 20 TB
- Complex table structure
- Well maintained, clean data
- Strong backup strategy
PostgreSQL database stores user credentials
- Single-homed in US West
- No redundancy
- Backed up every 12 hours
- 100% uptime service level agreement (SLA)
- Authenticates all users

Compute -
30 machines in US West Coast, each machine has:
- Twin, dual-core CPUs
- 32 GB of RAM
- Twin 250 GB HDD (RAID 1)
20 machines in US East Coast, each machine has:
- Single, dual-core CPU
- 24 GB of RAM
- Twin 250 GB HDD (RAID 1)

Storage -
Access to shared 100 TB SAN in each location
Tape backup every week

Business Requirements -
Optimize for capacity during peak periods and value during off-peak periods
Guarantee service availability and support
Reduce on-premises footprint and associated financial and environmental impact
Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
Expand services into Asia

Technical Requirements -
Assess key application for cloud suitability
Modify applications for the cloud
Move applications to a new infrastructure
Leverage managed services wherever feasible
Sunset 20% of capacity in existing data centers
Decrease latency in Asia

CEO Statement -
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store
experiences. As a large, global company, we also have a responsibility to the environment through "green" initiatives and policies.

CTO Statement -
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine
learning to improve our service to customers.

CFO Statement -
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This
model will allow us to respond to increasing customer demand during peak periods and reduce costs.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to
Datastore to upload the data.
What service account key-management strategy should you recommend?

A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs)
B. Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs
C. Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs
D. Deploy a custom authentication service on GCE/Google Kubernetes Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs

Answer : C

Explanation:
Migrating data to Google Cloud Platform
Let's say that you have some data processing that happens on another cloud provider and you want to transfer the processed data to Google Cloud Platform. You can use a service account from the virtual
machines on the external cloud to push the data to Google Cloud Platform. To do this, you must create and download a service account key when you create the service account and then use that key from the external
process to call the Cloud Platform APIs.
Reference:
https://cloud.google.com/iam/docs/understanding-service-accounts#migrating_data_to_google_cloud_platform
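
The split in answer C is visible in client code: the on-premises uploader authenticates with a downloaded service account key file, while code on the GCE VMs simply inherits the GCP-managed credentials of the instance's attached service account. A minimal sketch, with a hypothetical key path and project ID:

from google.cloud import datastore
from google.oauth2 import service_account

# On-premises process: authenticate with an exported service account key.
creds = service_account.Credentials.from_service_account_file(
    "/secure/keys/jencomart-migration.json")  # hypothetical key location
onprem_client = datastore.Client(project="jencomart-prod", credentials=creds)

# On a GCE VM: no key file at all. Application Default Credentials resolve
# to the VM's attached service account, whose keys Google manages and rotates.
gce_client = datastore.Client(project="jencomart-prod")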

Question 4 ( Testlet 1 )

Company Overview -
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In
addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.


Company Background -
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept -
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently
host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment -


JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database -
Oracle Database stores user profiles
- 20 TB
- Complex table structure
- Well maintained, clean data
- Strong backup strategy
PostgreSQL database stores user credentials
- Single-homed in US West
- No redundancy
- Backed up every 12 hours
- 100% uptime service level agreement (SLA)
- Authenticates all users

Compute -
30 machines in US West Coast, each machine has:
- Twin, dual-core CPUs
- 32 GB of RAM
- Twin 250 GB HDD (RAID 1)
20 machines in US East Coast, each machine has:
- Single, dual-core CPU
- 24 GB of RAM
- Twin 250 GB HDD (RAID 1)

Storage -
Access to shared 100 TB SAN in each location
Tape backup every week

Business Requirements -
Optimize for capacity during peak periods and value during off-peak periods
Guarantee service availability and support
Reduce on-premises footprint and associated financial and environmental impact
Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
Expand services into Asia

Technical Requirements -
Assess key application for cloud suitability
Modify applications for the cloud
Move applications to a new infrastructure
Leverage managed services wherever feasible
Sunset 20% of capacity in existing data centers
Decrease latency in Asia

CEO Statement -
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store
experiences. As a large, global company, we also have a responsibility to the environment through "green" initiatives and policies.

CTO Statement -
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine
learning to improve our service to customers.

CFO Statement -
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This
model will allow us to respond to increasing customer demand during peak periods and reduce costs.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals.
Which metrics should you track?

A. Error rates for requests from Asia


B. Latency difference between US and Asia
C. Total visits, error rates, and latency from Asia
D. Total visits and average latency for users from Asia
E. The number of character sets present in the database

Answer : D

Explanation:
From scenario:
Business Requirements include: Expand services into Asia
Technical Requirements include: Decrease latency in Asia
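
One hedged way to collect these numbers is to query Cloud Monitoring for the load balancer's latency distribution, and analogously the request-count metric for total visits. In the sketch below the project ID is a placeholder, and the metric name and any per-region label filtering should be verified against the current Monitoring metrics list.

import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
name = "projects/jencomart-prod"  # hypothetical project ID
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}})

# Latency distribution reported by the HTTP(S) load balancer over the last hour.
series = client.list_time_series(
    request={
        "name": name,
        "filter": 'metric.type="loadbalancing.googleapis.com/https/total_latencies"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    })
for ts in series:
    # Mean latency per monitored resource; narrow by client-location labels
    # where the metric exposes them to isolate users in Asia.
    print(dict(ts.resource.labels), ts.points[0].value.distribution_value.mean)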

Question 5 ( Testlet 1 )

Company Overview -
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In
addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background -
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept -
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently
host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment -


JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database -
Oracle Database stores user profiles
- 20 TB
- Complex table structure
- Well maintained, clean data
- Strong backup strategy
PostgreSQL database stores user credentials
- Single-homed in US West
- No redundancy
- Backed up every 12 hours
- 100% uptime service level agreement (SLA)
- Authenticates all users

Compute -
30 machines in US West Coast, each machine has:
- Twin, dual core CPUs
- 32 GB of RAM
- Twin 250 GB HDD (RAID 1)
20 machines in US East Coast, each machine has:
- Single, dual-core CPU
- 24 GB of RAM
- Twin 250 GB HDD (RAID 1)

Storage -
Access to shared 100 TB SAN in each location
Tape backup every week

Business Requirements -
Optimize for capacity during peak periods and value during off-peak periods
Guarantee service availability and support
Reduce on-premises footprint and associated financial and environmental impact
Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
Expand services into Asia

Technical Requirements -
Assess key application for cloud suitability
Modify applications for the cloud
Move applications to a new infrastructure
Leverage managed services wherever feasible
Sunset 20% of capacity in existing data centers
Decrease latency in Asia

CEO Statement -
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store
experiences. As a large, global company, we also have a responsibility to the environment through "green" initiatives and policies.

CTO Statement -
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine
learning to improve our service to customers.

CFO Statement -
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This
model will allow us to respond to increasing customer demand during peak periods and reduce costs.

The migration of JencoMart's application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to
maximize throughput.
What are three potential bottlenecks? (Choose three.)

A. A single VPN tunnel, which limits throughput


B. A tier of Google Cloud Storage that is not suited for this task
C. A copy command that is not suited to operate over long distances
D. Fewer virtual machines (VMs) in GCP than on-premises machines
E. A separate storage layer outside the VMs, which is not suited for this task
F. Complicated internet connectivity between the on-premises infrastructure and GCP

Answer : ACE
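
Two of these bottlenecks have direct remedies worth sketching. A single-stream copy command (option C) is starved by the long round-trip time to GCP, so the transfer should use many concurrent streams; the hedged example below does that with the google-cloud-storage transfer manager (the bucket and directory names are placeholders). The single VPN tunnel (option A) would be addressed separately, for example with additional tunnels or a dedicated interconnect.

import pathlib
from google.cloud.storage import Client, transfer_manager

# Hypothetical names: the bucket and the export directory are placeholders.
bucket = Client().bucket("jencomart-profile-export")
source = pathlib.Path("/exports/user-profiles")
filenames = [str(p.relative_to(source)) for p in source.rglob("*") if p.is_file()]

# Many parallel streams keep a high-latency link full where one sequential
# copy command would not.
results = transfer_manager.upload_many_from_filenames(
    bucket, filenames, source_directory=str(source), max_workers=16)
for fname, result in zip(filenames, results):
    if isinstance(result, Exception):
        print("retry needed for", fname, "->", result)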

Question 6 ( Testlet 1 )

Company Overview -
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In
addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background -
JencoMart started as a general store in 1931, and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.

Solution Concept -
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently
host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment -


JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database -
Oracle Database stores user profiles
- 20 TB
- Complex table structure
- Well maintained, clean data
- Strong backup strategy
PostgreSQL database stores user credentials
- Single-homed in US West
- No redundancy
- Backed up every 12 hours
- 100% uptime service level agreement (SLA)
- Authenticates all users

Compute -
30 machines in US West Coast, each machine has:
- Twin, dual core CPUs
- 32 GB of RAM
- Twin 250 GB HDD (RAID 1)
20 machines in US East Coast, each machine has:
- Single, dual-core CPU
- 24 GB of RAM
- Twin 250 GB HDD (RAID 1)

Storage -
Access to shared 100 TB SAN in each location
Tape backup every week

Business Requirements -
Optimize for capacity during peak periods and value during off-peak periods
Guarantee service availability and support
Reduce on-premises footprint and associated financial and environmental impact
Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
Expand services into Asia

Technical Requirements -
Assess key application for cloud suitability
Modify applications for the cloud
Move applications to a new infrastructure
Leverage managed services wherever feasible
Sunset 20% of capacity in existing data centers
Decrease latency in Asia

CEO Statement -
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store
experiences. As a large, global company, we also have a responsibility to the environment through "green" initiatives and policies.

CTO Statement -
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine
learning to improve our service to customers.

CFO Statement -
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This
model will allow us to respond to increasing customer demand during peak periods and reduce costs.

JencoMart wants to move their User Profiles database to Google Cloud Platform.
Which Google Database should they use?

A. Cloud Spanner
B. Google BigQuery
C. Google Cloud SQL
D. Google Cloud Datastore

Answer : D

Explanation:
Common workloads for Google Cloud Datastore:
✑ User profiles
✑ Product catalogs
✑ Game state
Reference:
https://cloud.google.com/storage-options/
https://cloud.google.com/datastore/docs/concepts/overview
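
A user profile maps naturally onto a Datastore entity. The sketch below is a minimal illustration with the google-cloud-datastore client; the kind, key, and property names are illustrative rather than taken from the case study.

from google.cloud import datastore

client = datastore.Client(project="jencomart-prod")  # hypothetical project ID

# Each profile becomes one entity of a "UserProfile" kind, keyed by user ID.
key = client.key("UserProfile", "user-12345")
profile = datastore.Entity(key=key)
profile.update({
    "display_name": "A. Shopper",
    "home_store": "store-0042",
    "loyalty_tier": "gold",
})
client.put(profile)

# Point reads by key are the typical access pattern for profile lookups.
print(client.get(key))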

Question 7 ( Testlet 2 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers
to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services

CEO Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users.

CTO Statement -
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing
physical servers.

CFO Statement -
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not engage the right users, we are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

A. Tests should scale well beyond the prior approaches


B. Unit tests are no longer required, only end-to-end tests
C. Tests should be applied after the release is in the production environment
D. Tests should include directly testing the Google Cloud Platform (GCP) infrastructure

Answer : A

Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity

Question 8 ( Testlet 2 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers
to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services

CEO Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users.

CTO Statement -
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing
physical servers.

CFO Statement -
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not engage the right users, we are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the
testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load


B. Use the existing infrastructure to test the GCP-based backend at scale
C. Build stress tests into each component of your application using resources internal to GCP to simulate load
D. Create a set of static environments in GCP to test different levels of load – for example, high, medium, and low

Answer : A

Explanation:
From scenario: Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
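
Simulating production load against such an environment might look like the sketch below: a small concurrent load generator whose worker count can be dialed up for a peak-load test and back to zero afterwards, so cost tracks usage. The staging URL and request volumes are hypothetical.

import concurrent.futures
import requests  # third-party HTTP client

STAGING_URL = "https://staging.example.com/api/match"  # hypothetical endpoint
WORKERS, REQUESTS_PER_WORKER = 50, 200  # scale these per test run

def hammer(worker_id: int) -> int:
    ok = 0
    with requests.Session() as session:
        for _ in range(REQUESTS_PER_WORKER):
            if session.get(STAGING_URL, timeout=5).status_code == 200:
                ok += 1
    return ok

with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    successes = sum(pool.map(hammer, range(WORKERS)))
print(successes, "of", WORKERS * REQUESTS_PER_WORKER, "requests succeeded")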

Question 9 ( Testlet 2 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers
to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services

CEO Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users.

CTO Statement -
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing
physical servers.

CFO Statement -
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not engage the right users, we are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following
requirements:
✑ Services are deployed redundantly across multiple regions in the US and Europe
✑ Only frontend services are exposed on the public internet
✑ They can provide a single frontend IP for their fleet of services
✑ Deployment artifacts are immutable
Which set of products should they use?

A. Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine


B. Google Cloud Storage, Google App Engine, Google Network Load Balancer
C. Google Kubernetes Registry, Google Container Engine, Google HTTP(S) Load Balancer
D. Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Answer : C
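
On the deployment side of answer C, immutability usually means running containers pinned to an image digest rather than a mutable tag, with a GKE Ingress in front to provide the single HTTP(S) load-balanced frontend IP. The sketch below uses the kubernetes Python client; the image digest, names, and cluster context are all hypothetical.

from kubernetes import client, config

config.load_kube_config()  # assumes kubectl already points at the GKE cluster

# Referencing the image by digest makes the deployment artifact immutable.
IMMUTABLE_IMAGE = ("gcr.io/mountkirk-prod/frontend@sha256:"
                   "0123456789abcdef0123456789abcdef"
                   "0123456789abcdef0123456789abcdef")  # hypothetical digest

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="frontend",
                    image=IMMUTABLE_IMAGE,
                    ports=[client.V1ContainerPort(container_port=8080)]),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)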

Question 10 ( Testlet 2 )
Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers
to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data

5. Process files that are regularly uploaded by users' mobile devices


6. Use only fully managed services

CEO Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users.

CTO Statement -
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing
physical servers.

CFO Statement -
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not engage the right users, we are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service,
but many of them are getting 503 errors and very slow response times. What should they investigate first?

A. Verify that the database is online


B. Verify that the project quota hasn't been exceeded
C. Verify that the new feature code did not introduce any performance bugs
D. Verify that the load-testing team is not running their tool against production

Answer : B

Explanation: 503 is a "service unavailable" error. If the database were offline, every user would be affected, not just many of them, so checking whether the project quota has been exceeded is the right first step.
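
Checking quota first is also quick to script. A hedged sketch with the google-cloud-compute client (the project ID is a placeholder): it lists Compute Engine quotas that are close to their limit, which would explain autoscaling silently failing to add instances while the overloaded existing ones return 503s.

from google.cloud import compute_v1

project = compute_v1.ProjectsClient().get(project="mountkirk-prod")  # hypothetical ID
for quota in project.quotas:
    if quota.limit and quota.usage / quota.limit > 0.9:
        print(f"{quota.metric}: {quota.usage:.0f} of {quota.limit:.0f} used")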

Question 11 ( Testlet 2 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers
to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services

CEO Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users.

CTO Statement -
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing
physical servers.

CFO Statement -
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not engage the right users, we are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.

Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they
cannot access staging or production resources. The staging environment needs access to some services from production.
What should you do to isolate development environments from staging and production?

A. Create a project for development and test and another for staging and production
B. Create a network for development and test and another for staging and production
C. Create one subnetwork for development and another for staging and production
D. Create one project for development, a second for staging and a third for production

Answer : D

Question 12 ( Testlet 2 )
Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers
to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity
2. Connect to a managed NoSQL database service
3. Run a customized Linux distro
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services

CEO Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users.

CTO Statement -
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing
physical servers.

CFO Statement -
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not engage the right users, we are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements.
Which combination of Google technologies will meet all of their requirements?

A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL


B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
D. Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow
E. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Answer : B

Explanation:
Ingest millions of streaming events per second from anywhere in the world with Cloud Pub/Sub, powered by Google's unique, high-speed private network. Process the streams with Cloud Dataflow to ensure reliable,
exactly-once, low-latency data transformation. Stream the transformed data into BigQuery, the cloud-native data warehousing service, for immediate analysis via SQL or popular visualization tools.
From scenario: They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics.
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity
2. Process incoming data on the fly directly from the game servers
3. Process data that arrives late because of slow mobile networks
4. Allow SQL queries to access at least 10 TB of historical data
5. Process files that are regularly uploaded by users' mobile devices
6. Use only fully managed services
Reference:
https://cloud.google.com/solutions/big-data/stream-analytics/
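
A skeletal Apache Beam pipeline for this architecture might look like the following; the subscription, table, and schema are placeholders. Windowing with an allowed-lateness setting is what lets Dataflow absorb events that arrive late from slow mobile networks, and WriteToBigQuery streams results into the warehouse for SQL analysis.

import json
import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.options.pipeline_options import PipelineOptions

# Run locally for testing; on Dataflow, pass --runner=DataflowRunner etc.
opts = PipelineOptions(streaming=True)

with beam.Pipeline(options=opts) as p:
    (p
     | "ReadEvents" >> beam.io.ReadFromPubSub(
           subscription="projects/mountkirk-prod/subscriptions/game-events")
     | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "Window" >> beam.WindowInto(
           window.FixedWindows(60),
           allowed_lateness=600)  # tolerate delayed mobile uploads
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           "mountkirk-prod:analytics.game_events",
           schema="player_id:STRING,event:STRING,ts:TIMESTAMP",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))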

Question 13 ( Testlet 3 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime – downtime means loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run a hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google
Cloud Platform.
Which two steps should be part of their migration plan? (Choose two.)

A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
B. Write a schema migration plan to denormalize data for better performance in BigQuery.
C. Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
D. Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E. Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Answer : AB
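
For option A, a minimal batch Beam (Python) sketch of the migrated ETL path is shown below; the bucket, columns, and the denormalized target table (option B) are hypothetical placeholders, not from the case study.

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    def to_denormalized_row(line):
        # Stats were normalized across MySQL tables; here one CSV line
        # becomes a single wide BigQuery row for query performance.
        player_id, game_id, score, ts = line.split(",")
        return {"player_id": player_id, "game_id": game_id,
                "score": int(score), "ts": ts}

    with beam.Pipeline(options=PipelineOptions()) as p:
        (p
         | "ReadStats" >> beam.io.ReadFromText("gs://mountkirk-stats/*.csv")
         | "Denormalize" >> beam.Map(to_denormalized_row)
         | "WriteToBQ" >> beam.io.WriteToBigQuery(
               "my-project:analytics.player_stats",
               schema="player_id:STRING,game_id:STRING,score:INTEGER,ts:TIMESTAMP",
               create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
               write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE))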

Question 14 ( Testlet 3 ) 
Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime - downtime is loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk
Games business and technical requirements, what should you do?

A. Create network load balancers. Use preemptible Compute Engine instances.


B. Create network load balancers. Use non-preemptible Compute Engine instances.
C. Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.
D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Answer : D
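
A sketch of the autoscaling half of option D, assuming the google-cloud-compute Python client; the project, zone, group name, and thresholds are placeholder assumptions, not from the case study.

    from google.cloud import compute_v1

    def create_autoscaler(project: str, zone: str, mig_name: str) -> None:
        autoscaler = compute_v1.Autoscaler(
            name=f"{mig_name}-autoscaler",
            # Target the managed instance group that serves game traffic.
            target=f"projects/{project}/zones/{zone}/instanceGroupManagers/{mig_name}",
            autoscaling_policy=compute_v1.AutoscalingPolicy(
                min_num_replicas=3,    # baseline capacity
                max_num_replicas=50,   # scale out during peak game activity
                cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
                    utilization_target=0.6),
            ),
        )
        op = compute_v1.AutoscalersClient().insert(
            project=project, zone=zone, autoscaler_resource=autoscaler)
        op.result()  # block until the operation completes

    create_autoscaler("my-project", "us-central1-a", "game-backend-mig")

Non-preemptible instances are chosen because uptime is a stated business requirement; preemptible VMs can be reclaimed at any time.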

Question 15 ( Testlet 3 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime - downtime is loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available.
Which two steps should they take? (Choose two.)


A. Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.
B. Begin packaging their game backend artifacts in container images and running them on Google Kubernetes Engine to improve the ability to scale up or down based on game activity.
C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.

D. Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.
E. Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

Answer : AB

Explanation:
Retaining as much analytics and game activity data as is financially feasible preserves the raw material for future machine learning, and containerizing the backend on Google Kubernetes Engine makes the platform portable to new runtime improvements while scaling with game activity. Weekly manual VM maintenance contradicts the stated goal of no longer managing servers.

Question 16 ( Testlet 3 ) 
Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime - downtime is loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform's resilience to changes in mobile network latency. What should you do?

A. Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.
B. Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.
C. Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.
D. Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

Answer : A

Explanation:
Injecting additional latency into mobile client analytics traffic with failure injection software directly simulates changes in mobile network latency, which is the condition the platform's resilience must be tested against.
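
A toy latency-injection wrapper illustrates the failure-injection idea; the handler name, delay bounds, and event shape are hypothetical, not from the case study.

    import random
    import time
    from functools import wraps

    def inject_latency(min_s: float = 0.1, max_s: float = 5.0):
        """Delay each call by a random amount to simulate slow mobile networks."""
        def decorator(handler):
            @wraps(handler)
            def wrapper(*args, **kwargs):
                time.sleep(random.uniform(min_s, max_s))  # simulated network delay
                return handler(*args, **kwargs)
            return wrapper
        return decorator

    @inject_latency(0.5, 10.0)
    def handle_analytics_event(event: dict) -> None:
        # In the real platform this would forward the event for processing.
        print(f"processed event: {event}")

    handle_analytics_event({"player_id": "p1", "latency_test": True})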

Question 17 ( Testlet 3 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime - downtime is loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and
technical requirements, what should you do?


A. Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.
B. Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.
C. Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.

D. Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

Answer : D

Explanation:
Option D covers all three database requirements: Cloud Bigtable for the time series game activity data, Cloud Spanner as the transactional service for user profiles and game state, and BigQuery for SQL queries over at least 10 TB of historical data.
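
For the BigQuery part of option D, a minimal sketch using the google-cloud-bigquery Python client; the dataset, table, and column names are hypothetical placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    # SQL access to historical data (10+ TB) without managing any servers.
    query = """
        SELECT game_id, COUNT(*) AS sessions, AVG(score) AS avg_score
        FROM `my-project.analytics.game_activity`
        WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
        GROUP BY game_id
        ORDER BY sessions DESC
    """
    for row in client.query(query).result():
        print(row.game_id, row.sessions, row.avg_score)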

Question 18 ( Testlet 3 ) 
Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime - downtime is loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk's technical requirement for storing game activity in a time series database service?

A. Cloud Bigtable
B. Cloud Spanner
C. BigQuery
D. Cloud Datastore

Answer : A
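
A minimal sketch of writing game activity to Cloud Bigtable with the google-cloud-bigtable Python client; the instance, table, column family, and row-key scheme are assumptions, though entity-id-plus-reversed-timestamp is a common time-series key pattern.

    import time
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("game-activity").table("events")

    # Reversed timestamp in the row key makes the most recent activity
    # for a player sort first on scans.
    player_id = "player-42"
    reverse_ts = 2**63 - int(time.time() * 1000)
    row = table.direct_row(f"{player_id}#{reverse_ts}".encode("utf-8"))
    row.set_cell("activity", "event", b"boss_defeated")
    row.set_cell("activity", "score", b"1250")
    row.commit()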

Question 19 ( Testlet 3 )

Company Overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration.
Historically, they have used cloud providers to lease physical servers.
Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools.
Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

Solution Concept -
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements -
Increase to a global footprint
Improve uptime - downtime is loss of players
Increase efficiency of the cloud resources we use
Reduce latency to all customers

Technical Requirements -
Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a transactional database service to manage user profiles and game state
Store game activity in a timeseries database service for future analysis
As the system scales, ensure that data is not lost due to processing backlogs
Run hardened Linux distro
Requirements for Game Analytics Platform
Dynamically scale up or down based on game activity
Process incoming data on the fly directly from the game servers
Process data that arrives late because of slow mobile networks
Allow queries to access at least 10 TB of historical data
Process files that are regularly uploaded by users' mobile devices

Executive Statement -
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to
evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide
the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the backend over a REST API.
You want to follow Google-recommended practices. How should you design the backend?

A. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.
B. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.
C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.
D. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

Answer : C

Explanation:
A REST API is served over HTTP, so Google recommends an HTTP(S) (L7) load balancer, and deploying each regional managed instance group across multiple zones protects against zonal failures.
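
A minimal sketch of one backend instance behind the L7 load balancer, using Flask; the paths and payloads are hypothetical. The /healthz path is the kind of endpoint an HTTP health check would probe.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/healthz")
    def healthz():
        # The load balancer's health check probes this path to decide
        # whether the instance should receive traffic.
        return "ok", 200

    @app.route("/v1/state/<player_id>")
    def get_state(player_id: str):
        # In the real backend this would read from the transactional database.
        return jsonify({"player_id": player_id, "state": "active"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)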

Question 20 ( Testlet 4 )

Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.
Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet
on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

You need to optimize batch file transfers into Cloud Storage for Mountkirk Games' new Google Cloud solution. The batch files contain game statistics that need to be staged in Cloud Storage and be processed by
an extract transform load (ETL) tool. What should you do?

A. Use gsutil to batch move files in sequence.


B. Use gsutil to batch copy the files in parallel.
C. Use gsutil to extract the files as the first part of ETL.
D. Use gsutil to load the files as the last part of ETL.

Answer : B

Reference:
https://cloud.google.com/storage/docs/gsutil/commands/cp
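
In gsutil itself, the -m flag is what parallelizes the copy (for example, gsutil -m cp). The same batch can be sketched with the google-cloud-storage Python client's transfer_manager; the bucket, directory, and file names below are hypothetical placeholders.

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client(project="my-project")
    bucket = client.bucket("mountkirk-staging")

    filenames = ["stats-0001.csv", "stats-0002.csv", "stats-0003.csv"]
    results = transfer_manager.upload_many_from_filenames(
        bucket,
        filenames,
        source_directory="/var/game/stats",
        max_workers=8,  # copy the batch in parallel rather than in sequence
    )
    for name, result in zip(filenames, results):
        # Each result is None on success or the exception that was raised.
        print(name, "ok" if result is None else f"failed: {result}")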

Question 21 ( Testlet 4 )

Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.
Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet
on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database.
Access should be as restricted as possible. What should you do?

A. Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.
B. Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.
C. Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.
D. Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

Answer : C
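
A sketch of option C from the new game's side, assuming a downloaded key for the legacy project's service account; the project IDs, key path, and collection name are hypothetical placeholders.

    from google.cloud import firestore
    from google.oauth2 import service_account

    creds = service_account.Credentials.from_service_account_file(
        "legacy-game-sa.json")  # SA created in the legacy game's project

    # Point the client at the legacy project while running in the new game.
    db = firestore.Client(project="legacy-game-project", credentials=creds)

    doc = db.collection("players").document("player-42").get()
    print(doc.to_dict() if doc.exists else "no such player")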

Question 22 ( Testlet 4 ) 
Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.
Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet
on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

Mountkirk Games wants to limit the physical location of resources to their operating Google Cloud regions. What should you do?

A. Configure an organizational policy which constrains where resources can be deployed.


B. Configure IAM conditions to limit what resources can be configured.
C. Configure the quotas for resources in the regions not being used to 0.
D. Configure a custom alert in Cloud Monitoring so you can disable resources as they are created in other regions.

Answer : A

Explanation:
The resource locations constraint (constraints/gcp.resourceLocations) in an organization policy is the supported control for restricting where resources can be physically deployed; setting quotas to 0 is a workaround, not a guarantee.
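
A hedged sketch of option A using the google-cloud-org-policy client (orgpolicy_v2); the organization ID and value groups are placeholders, and the exact message shapes should be checked against the installed library version.

    from google.cloud import orgpolicy_v2

    client = orgpolicy_v2.OrgPolicyClient()
    policy = orgpolicy_v2.Policy(
        name="organizations/123456789/policies/gcp.resourceLocations",
        spec=orgpolicy_v2.PolicySpec(
            rules=[
                orgpolicy_v2.PolicySpec.PolicyRule(
                    values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                        # Only allow resources in the operating regions.
                        allowed_values=["in:us-locations", "in:europe-locations"]
                    )
                )
            ]
        ),
    )
    client.create_policy(parent="organizations/123456789", policy=policy)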

Question 23 ( Testlet 4 )

Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.
Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet

on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

You need to implement a network ingress for a new game that meets the defined business and technical requirements. Mountkirk Games wants each regional game instance to be located in multiple Google Cloud
regions. What should you do?

A. Configure a global load balancer connected to a managed instance group running Compute Engine instances.
B. Configure kubemci with a global load balancer and Google Kubernetes Engine.
C. Configure a global load balancer with Google Kubernetes Engine.
D. Configure Ingress for Anthos with a global load balancer and Google Kubernetes Engine.

Answer : D

Explanation:
The backend runs on Google Kubernetes Engine, so the ingress must front GKE clusters rather than Compute Engine instance groups. Ingress for Anthos (multi-cluster Ingress) programs a global HTTP(S) load balancer across GKE clusters in multiple regions.

Question 24 ( Testlet 4 )

Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.
Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet
on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

Your development teams release new versions of games running on Google Kubernetes Engine (GKE) daily. You want to create service level indicators (SLIs) to evaluate the quality of the new versions from the
user's perspective. What should you do?

A. Create CPU Utilization and Request Latency as service level indicators.


B. Create GKE CPU Utilization and Memory Utilization as service level indicators.
C. Create Request Latency and Error Rate as service level indicators.
D. Create Server Uptime and Error Rate as service level indicators.

Answer : C

Explanation:
Service level indicators should measure quality as users experience it. Request latency and error rate are user-facing signals; CPU and memory utilization are internal resource metrics.
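
As a sketch, each SLI is a ratio of good events to valid events as measured at the serving layer; the 500 ms threshold below is an assumed example value, not from the case study:

    \mathrm{SLI}_{\mathrm{latency}} = \frac{\#\{\text{requests served in} < 500\ \text{ms}\}}{\#\{\text{valid requests}\}}
    \qquad
    \mathrm{SLI}_{\mathrm{errors}} = 1 - \frac{\#\{\text{5xx responses}\}}{\#\{\text{valid requests}\}}

An SLO would then set a target over a window, for example 99% of requests under 500 ms over 30 days.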

Question 25 ( Testlet 4 )

Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.

Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet
on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow
Google-recommended practices. What should you do?

A. Configure Workload Identity and service accounts to be used by the application platform.
B. Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.
C. Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be
used by the application platform.
D. Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by
the application platform.

Answer : A

Question 26 ( Testlet 4 ) 
Company overview -
Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They have recently started expanding to other platforms after successfully migrating their on-premises environments to
Google Cloud.
Their most recent endeavor is to create a retro-style first-person shooter (FPS) game that allows hundreds of simultaneous players to join a geo-specific digital arena from multiple platforms and locations. A real-time
digital banner will display a global leaderboard of all the top players across every active arena.

Solution concept -
Mountkirk Games is building a new multiplayer game that they expect to be very popular. They plan to deploy the game's backend on Google Kubernetes Engine so they can scale rapidly and use Google's
global load balancer to route players to the closest regional game arenas. In order to keep the global leader board in sync, they plan to use a multi-region Spanner cluster.

Existing technical environment -


The existing environment was recently migrated to Google Cloud, and five games came across using lift-and-shift virtual machine migrations, with a few minor exceptions. Each new game exists in an isolated Google
Cloud project nested below a folder that maintains most of the permissions and network policies. Legacy games with low traffic have been consolidated into a single project. There are also separate environments for
development and testing.

Business requirements -
Support multiple gaming platforms.
Support multiple regions.
Support rapid iteration of game features.
Minimize latency.
Optimize for dynamic scaling.
Use managed services and pooled resources.
Minimize costs.

Technical requirements -
Dynamically scale based on game activity.
Publish scoring data on a near real-time global leaderboard.
Store game activity logs in structured files for future analysis.
Use GPU processing to render graphics server-side for multi-platform support.
Support eventual migration of legacy games to this new platform.

Executive statement -
Our last game was the first time we used Google Cloud, and it was a tremendous success. We were able to analyze player behavior and game telemetry in ways that we never could before. This success allowed us to bet
on a full migration to the cloud and to start building all-new games using cloud-native design principles.
Our new game is our most ambitious to date and will open up doors for us to support more gaming platforms beyond mobile. Latency is our top priority, although cost management is the next most important
challenge. As with our first cloud-based game, we have grown to expect the cloud to enable advanced analytics capabilities so we can rapidly iterate on our deployments of bug fixes and new functionality.

Your development team has created a mobile game app. You want to test the new mobile app on Android and iOS devices with a variety of configurations. You need to ensure that testing is efficient and cost-effective.
What should you do?

A. Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.
B. Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.
C. Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on the containers, and test the mobile app.
D. Upload your mobile app with different configurations to Firebase Hosting and test each configuration.

Answer : A

Explanation:
Firebase Test Lab runs the app against a matrix of real and virtual Android and iOS devices in the cloud; Google Cloud does not offer Android or iOS virtual machines or containers.

Question 27 ( Testlet 5 ) 
Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies - especially with seed and fertilizer suppliers in the fast-growing agricultural business - to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic
failure.
You want to allow analysts to centrally query the vehicle data.
Which architecture should you recommend?
A.
B.
C.
D.
[Answer choices A-D are architecture diagrams that are not reproduced here.]
Answer : A

Explanation:
The Cloud Pub/Sub push endpoint can be a load balancer, and a container cluster can be used to process the delivered messages. Cloud Pub/Sub is Google's ingestion service for stream analytics at this scale.

Reference:
https://cloud.google.com/pubsub/
https://cloud.google.com/solutions/iot/
https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data
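
A minimal sketch of the ingestion step for one connected vehicle, assuming the google-cloud-pubsub Python client; the topic and telemetry fields are hypothetical placeholders.

    import json
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "vehicle-telemetry")

    reading = {"vehicle_id": "TE-0042", "fuel_pct": 61.5, "engine_temp_c": 93.2}
    future = publisher.publish(
        topic_path,
        data=json.dumps(reading).encode("utf-8"),
        vehicle_id=reading["vehicle_id"],  # message attribute, usable for filtering
    )
    print("published message", future.result())  # blocks until the publish is acked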

Question 28 ( Testlet 5 ) 
Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies - especially with seed and fertilizer suppliers in the fast-growing agricultural business - to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their
development effort on business value versus creating a custom framework.
Which method should they use?

A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners
B. Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public
C. Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public
D. Use Google Container Engine with a Django Python container. Focus on an API for the public
E. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners

Answer : A

Explanation:
Develop, deploy, protect and monitor your APIs with Google Cloud Endpoints. Using an Open API Specification or one of our API frameworks, Cloud Endpoints gives you the tools you need for every phase of API
development.
From scenario:

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.
Reference:
https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth
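
For illustration, a minimal sketch of option A using the legacy Cloud Endpoints Frameworks for App Engine (Python); the API name, message fields, and sample values are hypothetical:

import endpoints
from protorpc import message_types, messages, remote

class VehicleStatus(messages.Message):
    # Hypothetical response message for a dealer/partner-facing API.
    vehicle_id = messages.StringField(1)
    hours_until_service = messages.IntegerField(2)

@endpoints.api(name='dealer', version='v1')
class DealerApi(remote.Service):
    @endpoints.method(message_types.VoidMessage, VehicleStatus,
                      path='vehiclestatus', http_method='GET',
                      name='vehiclestatus.get')
    def get_vehicle_status(self, request):
        # A real handler would look the vehicle up in the data warehouse.
        return VehicleStatus(vehicle_id='TE-0001', hours_until_service=120)

# App Engine serves this; Endpoints adds auth, monitoring, and API management,
# so the team writes business logic instead of a custom framework.
api = endpoints.api_server([DealerApi])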

Question 29 ( Testlet 5 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this
vehicle event data. You want to support delegated authorization against this data.
What should you do?

A. Build or leverage an OAuth-compatible access control system


B. Build SAML 2.0 SSO compatibility into your authentication system
C. Restrict data access based on the source IP address of the partner systems
D. Create secondary credentials for each dealer that can be given to the trusted third party

Answer : A

Explanation:
Delegate application authorization with OAuth2
Cloud Platform APIs support OAuth 2.0, and scopes provide granular authorization over the methods that are supported. Cloud Platform supports both service-account and user-account OAuth, also called three-legged OAuth.
Reference:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_authorization_with_oauth2
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
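
A hedged sketch of the API side of delegated authorization, using the google-auth Python library to validate a token presented by a third-party dealer tool; the audience value and claim handling are assumptions:

from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

EXPECTED_AUDIENCE = 'dealer-tools-client-id'  # hypothetical OAuth client ID

def authorize(bearer_token):
    # Verifies the signature, expiry, and audience of a Google-issued token.
    claims = id_token.verify_oauth2_token(
        bearer_token, google_requests.Request(), audience=EXPECTED_AUDIENCE)
    # 'sub' identifies the end user who delegated access to the tool.
    return claims['sub']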

Question 30 ( Testlet 5 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, or about 40 TB per hour.
How should you design the data ingestion?

A. Vehicles write data directly to GCS


B. Vehicles write data directly to Google Cloud Pub/Sub
C. Vehicles stream data directly to Google BigQuery
D. Vehicles continue to write data using the existing system (FTP)

Answer : C
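
The chosen answer implies vehicles (or a thin ingestion tier in front of them) streaming rows into BigQuery. A minimal sketch with the google-cloud-bigquery client; the project, dataset, table, and row fields are hypothetical:

from google.cloud import bigquery

client = bigquery.Client()
table_id = 'terramearth-demo.telemetry.vehicle_events'  # hypothetical table

rows = [
    {'vehicle_id': 'TE-0001', 'ts': '2022-04-11T21:37:00Z', 'oil_pressure': 42.5},
]
# insert_rows_json streams rows into BigQuery, where they are queryable
# within seconds; batching many records per call keeps costs down.
errors = client.insert_rows_json(table_id, rows)
if errors:
    raise RuntimeError(f'Streaming insert failed: {errors}')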

Question 31 ( Testlet 5 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

You analyzed TerramEarth's business requirement to reduce downtime, and found that they can achieve a majority of the time savings by reducing customers'
wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time.
Which modifications to the company's processes should you recommend?

A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics
B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics
D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor

Answer : C

Explanation:
The Avro binary format is the preferred format for loading compressed data. Avro data is faster to load because the data can be read in parallel, even when the data blocks are compressed.
Cloud Storage supports streaming transfers with the gsutil tool or boto library, based on HTTP chunked transfer encoding. Streaming data lets you stream data to and from your Cloud Storage account as soon as it
becomes available without requiring that the data be first saved to a separate file. Streaming transfers are useful if you have a process that generates data and you do not want to buffer it locally before uploading it, or
if you want to send the result from a computational pipeline directly into Cloud Storage.
Reference:
https://cloud.google.com/storage/docs/streaming
https://cloud.google.com/bigquery/docs/loading-data
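
Complementing the gsutil/boto streaming transfers described above, a sketch of the same idea with the Python client, assuming a recent google-cloud-storage release; the bucket and object names are hypothetical:

from google.cloud import storage

client = storage.Client()
blob = client.bucket('te-telemetry-ingest').blob('raw/TE-0001.avro')

def stream_records(record_iter):
    # Writes records to Cloud Storage as they are produced, without first
    # buffering a complete file locally.
    with blob.open('wb') as gcs_file:
        for record in record_iter:
            gcs_file.write(record)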

Question 32 ( Testlet 5 ) 
Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A. Opex/capex allocation, LAN changes, capacity planning


B. Capacity planning, TCO calculations, opex/capex allocation
C. Capacity planning, utilization measurement, data center expansion
D. Data Center expansion, TCO calculations, utilization measurement

Answer : B

Question 33 ( Testlet 5 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is
error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the
solution and minimize data transfer time on the cellular connections.
What should you do?

A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket

Answer : D
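
A sketch of the vehicle-side uploader option D describes; the bucket names and region-selection logic are assumptions. The client library performs resumable uploads over HTTPS, so a dropped cellular connection resumes instead of restarting from byte zero:

from google.cloud import storage

REGIONAL_BUCKETS = {  # hypothetical: one Regional bucket per geography
    'us': 'te-telemetry-us',
    'eu': 'te-telemetry-eu',
    'asia': 'te-telemetry-asia',
}

def upload_datafile(region, local_path, object_name):
    client = storage.Client()
    blob = client.bucket(REGIONAL_BUCKETS[region]).blob(object_name)
    # Large files are sent as chunked, resumable uploads via the JSON API.
    blob.upload_from_filename(local_path)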

Question 34 ( Testlet 5 ) 
Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage
(GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down
after 100K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?

A. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job
B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job
D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region bucket and use a Cloud Dataproc cluster to finish the job

Answer : D
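
Once each region's preprocessed output has been copied into one regional bucket, the final job runs on a Dataproc cluster in that same region. A sketch with the google-cloud-dataproc client; the project, cluster, and job script are placeholders:

from google.cloud import dataproc_v1

REGION = 'us-central1'  # hypothetical: co-located with the final bucket
job_client = dataproc_v1.JobControllerClient(
    client_options={'api_endpoint': f'{REGION}-dataproc.googleapis.com:443'})

job = {
    'placement': {'cluster_name': 'telemetry-analysis'},  # hypothetical cluster
    'pyspark_job': {'main_python_file_uri': 'gs://te-jobs/breakdown_report.py'},
}
operation = job_client.submit_job_as_operation(
    request={'project_id': 'terramearth-demo', 'region': REGION, 'job': job})
print(operation.result().driver_output_resource_uri)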

Question 35 ( Testlet 5 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

TerramEarth has equipped all connected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning
models. They want to store this data in the cloud while reducing costs.
What should they do?

A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket
B. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Google BigQuery
C. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Cloud Bigtable
D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket

Answer : D

Explanation:
Coldline Storage is the best choice for data that you plan to access at most once a year, due to its slightly lower availability, 90-day minimum storage duration, costs for data access, and higher per-operation costs. For example:
Cold Data Storage - Infrequently accessed data, such as data stored for legal or regulatory reasons, can be stored at low cost as Coldline Storage, and be available when you need it.
Disaster recovery - In the event of a disaster recovery event, recovery time is key. Cloud Storage provides low latency access to data stored as Coldline Storage.
Reference:
https://cloud.google.com/storage/docs/storage-classes
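
A sketch of provisioning such an archive bucket with the Python client; the bucket name and location are placeholders:

from google.cloud import storage

client = storage.Client()
bucket = storage.Bucket(client, name='te-telemetry-archive')  # hypothetical
bucket.storage_class = 'COLDLINE'
bucket.location = 'US'
# Hourly gzipped snapshots uploaded here default to the Coldline class,
# the cheapest option for data read back at most about once a year.
client.create_bucket(bucket)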

Question 36 ( Testlet 5 ) 
Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation.
Which two architectures should you consider? (Choose two.)

A. Treat every microservice call between modules on the vehicle as untrusted.
B. Require IPv6 for connectivity to ensure a secure address space.
C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
D. Use a functional programming language to isolate code execution cycles.
E. Use multiple connectivity subsystems for redundancy.
F. Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.

Answer : AC

Question 37 ( Testlet 5 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries: about 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Company background -
TerramEarth was formed in 1946, when several small, family owned companies combined to retool after World War II. The company cares about their employees and customers and considers them to be extended
members of their family.
TerramEarth is proud of their ability to innovate on their core products and find new markets as their customers' needs change. For the past 20 years, trends in the industry have been largely toward increasing
productivity by using larger vehicles with a human operator.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a
total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux-based systems that reside in a data center. These systems gzip CSV files from the field and upload via FTP, transform and aggregate them, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

CEO Statement -
We have been successful in capitalizing on the trend toward larger vehicles to increase the productivity of our customers. Technological change is occurring
rapidly, and TerramEarth has taken advantage of connected devices technology to provide our customers with better services, such as our intelligent farming
equipment. With this technology, we have been able to increase farmers' yields by 25%, by using past trends to adjust how our vehicles operate. These
advances have led to the rapid growth of our agricultural product line, which we expect will generate 50% of our revenues by 2020.

CTO Statement -
Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors.
However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of
transformations in our industry. Unfortunately, our CEO doesn't take technology obsolescence seriously and he considers the many new companies in our
industry to be niche players. My goals are to build our skills while addressing immediate market needs through incremental innovations.

Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental
conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field.
How can you accomplish this goal?

A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically
C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically

Answer : B

Question 38 ( Testlet 6 ) 
Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with
22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field and upload via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their
vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements -
Expand beyond a single datacenter to decrease latency to the American midwest and east coast
Create a backup strategy
Increase security of data transfer from equipment to the datacenter
Improve data in the data warehouse
Use customer and equipment data to anticipate customer needs

Application 1: Data ingest -


A custom Python application reads uploaded data files from a single server and writes to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage

Application 2: Reporting -
An off the shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a
time.
Compute:
Off the shelf application. License tied to number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM
- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement -
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly
being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through
incremental innovations.

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its
European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and
BigQuery. What should you do?

A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36
months.
B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an
Age condition of 36 months.
D. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition
of 36 months.

Answer : C
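
A sketch of both halves of option C in Python; the dataset, table, schema, and bucket names are hypothetical, and 36 months is approximated as 1095 days:

from google.cloud import bigquery, storage

# BigQuery: a time-partitioned table whose partitions expire automatically.
bq = bigquery.Client()
table = bigquery.Table(
    'terramearth-demo.telemetry.eu_vehicle_events',
    schema=[bigquery.SchemaField('vehicle_id', 'STRING'),
            bigquery.SchemaField('payload', 'STRING')])
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    expiration_ms=1095 * 24 * 60 * 60 * 1000)  # ~36 months, in milliseconds
bq.create_table(table)

# Cloud Storage: a lifecycle rule that deletes objects older than ~36 months.
gcs = storage.Client()
bucket = gcs.get_bucket('te-telemetry-eu')
bucket.add_lifecycle_delete_rule(age=1095)
bucket.patch()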

Question 39 ( Testlet 6 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with
22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field and upload via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their
vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements -
Expand beyond a single datacenter to decrease latency to the American midwest and east coast
Create a backup strategy
Increase security of data transfer from equipment to the datacenter
Improve data in the data warehouse
Use customer and equipment data to anticipate customer needs

Application 1: Data ingest -


A custom Python application reads uploaded data files from a single server and writes to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage

Application 2: Reporting -
An off the shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a
time.
Compute:
Off the shelf application. License tied to number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM
- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement -
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly
being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through
incremental innovations.

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure Cloud Storage lifecycle rules to store 1 year of data and minimize file storage cost.
Which two actions should you take?

A. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".
B. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Coldline", and Action: "Set to Nearline", and create a second GCS life-cycle rule with Age: "91", Storage Class: "Coldline", and Action: "Set to Nearline".
C. Create a Cloud Storage lifecycle rule with Age: "90", Storage Class: "Standard", and Action: "Set to Nearline", and create a second GCS life-cycle rule with Age: "91", Storage Class: "Nearline", and Action: "Set to Coldline".
D. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Nearline", and Action: "Delete".

Answer : A
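
For illustration, a minimal sketch of the two rules from answer A using the google-cloud-storage Python client. The bucket name is a hypothetical placeholder, and the rule values mirror the answer rather than any official TerramEarth configuration:

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("terramearth-vehicle-data")  # hypothetical bucket name

# Rule 1: after 30 days, transition Standard objects to Coldline.
bucket.add_lifecycle_set_storage_class_rule(
    "COLDLINE", age=30, matches_storage_class=["STANDARD"]
)
# Rule 2: after 365 days, delete the (now Coldline) objects.
bucket.add_lifecycle_delete_rule(age=365, matches_storage_class=["COLDLINE"])

bucket.patch()  # persist the lifecycle configuration on the bucket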

Question 40 ( Testlet 6 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with
22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field and upload them via FTP, and
place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their
vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements -
Expand beyond a single datacenter to decrease latency to the American midwest and east coast
Create a backup strategy
Increase security of data transfer from equipment to the datacenter
Improve data in the data warehouse
Use customer and equipment data to anticipate customer needs

Application 1: Data ingest -


A custom Python application reads uploaded data files from a single server and writes them to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage

Application 2: Reporting -
An off-the-shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.
Compute:
Off-the-shelf application. License tied to the number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM
- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement -
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly
being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through
incremental innovations.

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company,
TerramEarth.
Considering the TerramEarth business and technical requirements, what should you do?

A. Replace the existing data warehouse with BigQuery. Use table partitioning.
B. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.
C. Replace the existing data warehouse with BigQuery. Use federated data sources.
D. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine preemptible instance with 32 CPUs.

Answer : A
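
As a sketch of answer A, the snippet below creates a day-partitioned BigQuery table with the google-cloud-bigquery client. The project, dataset, and field names are hypothetical assumptions, not part of the case study:

from google.cloud import bigquery

client = bigquery.Client()
schema = [
    bigquery.SchemaField("vehicle_id", "STRING"),
    bigquery.SchemaField("collected_at", "TIMESTAMP"),
    bigquery.SchemaField("sensor_payload", "STRING"),
]
table = bigquery.Table("my-project.telemetry.vehicle_data", schema=schema)
# Partitioning by the collection timestamp means daily reports scan only the
# partitions they touch, which keeps query cost and latency predictable.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="collected_at"
)
client.create_table(table)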

Question 41 ( Testlet 6 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with
22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field and upload them via FTP, and
place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their
vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements -
Expand beyond a single datacenter to decrease latency to the American midwest and east coast
Create a backup strategy
Increase security of data transfer from equipment to the datacenter
Improve data in the data warehouse
Use customer and equipment data to anticipate customer needs

Application 1: Data ingest -


A custom Python application reads uploaded data files from a single server and writes them to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage

Application 2: Reporting -
An off-the-shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.
Compute:
Off-the-shelf application. License tied to the number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM
- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement -
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly
being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through
incremental innovations.

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an
automated daily basis while managing cost.
What should you do?

A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
B. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.
C. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.
D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

Answer : D

Question 42 ( Testlet 6 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with
22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -

TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field and upload them via FTP, and
place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their
vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements -
Expand beyond a single datacenter to decrease latency to the American midwest and east coast
Create a backup strategy
Increase security of data transfer from equipment to the datacenter
Improve data in the data warehouse
Use customer and equipment data to anticipate customer needs

Application 1: Data ingest -


A custom Python application reads uploaded data files from a single server and writes them to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage

Application 2: Reporting -
An off-the-shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.
Compute:
Off-the-shelf application. License tied to the number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM
- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement -
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly
being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through
incremental innovations.

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
B. Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
C. Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.
D. Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.

Answer : A
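
A minimal sketch of the streaming stage in answer A: an Apache Beam pipeline (run on Dataflow) that reads telemetry from Pub/Sub and appends it to BigQuery. The topic, table, bucket, and JSON payload format are assumptions for illustration:

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,  # Pub/Sub sources require a streaming pipeline
    project="my-project", region="us-central1",
    runner="DataflowRunner", temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadVehicleData" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/vehicle-telemetry")
        | "ParseJson" >> beam.Map(json.loads)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:telemetry.vehicle_data",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )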

Question 43 ( Testlet 6 )

Company Overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service
centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution Concept -
There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is
downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.
Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second, with
22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

Existing Technical Environment -


TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast-based data center. These systems gzip CSV files from the field and upload them via FTP, and
place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.
With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their
vehicles for up to 4 weeks while they wait for replacement parts.

Business Requirements -
Decrease unplanned vehicle downtime to less than 1 week
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers

Technical Requirements -
Expand beyond a single datacenter to decrease latency to the American midwest and east coast
Create a backup strategy
Increase security of data transfer from equipment to the datacenter
Improve data in the data warehouse
Use customer and equipment data to anticipate customer needs

Application 1: Data ingest -


A custom Python application reads uploaded data files from a single server and writes them to the data warehouse.
Compute:
Windows Server 2008 R2
- 16 CPUs
- 128 GB of RAM
- 10 TB local HDD storage

Application 2: Reporting -
An off-the-shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.
Compute:
Off-the-shelf application. License tied to the number of physical CPUs
- Windows Server 2008 R2
- 16 CPUs
- 32 GB of RAM

- 500 GB HDD
Data warehouse:
A single PostgreSQL server
- RedHat Linux
- 64 CPUs
- 128 GB of RAM
- 4x 6TB HDD in RAID 0

Executive Statement -
Our competitive advantage has always been in our manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly
being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through
incremental innovations.

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow
Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?

A. Google Kubernetes Engine with an SSL Ingress
B. Cloud IoT Core with public/private key pairs
C. Compute Engine with project-wide SSH keys
D. Compute Engine with specific SSH keys

Answer : B
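
On the device side, Cloud IoT Core authenticates each vehicle with a short-lived JWT signed by the device's private key (the matching public key is registered with the device in IoT Core). A minimal sketch with the PyJWT library; the project ID and key path are hypothetical:

import datetime
import jwt  # pip install pyjwt cryptography

def create_iot_jwt(project_id: str, private_key_path: str) -> str:
    now = datetime.datetime.utcnow()
    claims = {
        "iat": now,
        "exp": now + datetime.timedelta(minutes=60),  # keep tokens short-lived
        "aud": project_id,  # IoT Core expects the project ID as the audience
    }
    with open(private_key_path) as f:
        private_key = f.read()
    return jwt.encode(claims, private_key, algorithm="RS256")

# The returned token is used as the MQTT password when the vehicle connects
# to the IoT Core bridge at mqtt.googleapis.com.
token = create_iot_jwt("my-project", "rsa_private.pem")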

Question 44 ( Testlet 7 )

Company overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries.
Their mission is to build products that make their customers more productive.

Solution concept -
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from
the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500
megabytes of data per day.

Existing technical environment -


TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main
manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud.
The web frontend for dealers and customers is running in
Google Cloud and allows access to stock management and analytics.

Business requirements -
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
• Decrease cloud operational costs and adapt to seasonality.
• Increase speed and reliability of development workflow.
• Allow remote developers to be productive without compromising code or data security.
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners.

Technical requirements -
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
• Allow developers to run experiments without compromising security and governance requirements.
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
• Improve and standardize tools necessary for application and network monitoring and troubleshooting.

Executive statement -
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes.
After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic
plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the
cloud.

For this question, refer to the TerramEarth case study. You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another
Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices. What should you do?

A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are
different.
B. Make func_query 'Require authentication.' Create a unique service account and associate it to func_display. Grant the service account invoker role for func_query. Create an id token in func_display and include
the token to the request when invoking func_query.
C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both
functions use the same service account.

Answer : B
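
Inside func_display, answer B boils down to minting an ID token for the function's service account with func_query's URL as the audience. A minimal sketch; the function URL is a hypothetical placeholder:

import requests
import google.auth.transport.requests
import google.oauth2.id_token

FUNC_QUERY_URL = "https://us-central1-my-project.cloudfunctions.net/func_query"

def call_func_query(payload: dict) -> requests.Response:
    auth_request = google.auth.transport.requests.Request()
    # On Cloud Functions this fetches an ID token for the attached service
    # account from the metadata server, scoped to the target audience.
    token = google.oauth2.id_token.fetch_id_token(auth_request, FUNC_QUERY_URL)
    return requests.post(
        FUNC_QUERY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )

With 'Require authentication' enabled, func_query then rejects any caller whose token is missing, invalid, or not issued to a service account holding the Cloud Functions Invoker role.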

Question 45 ( Testlet 7 )

Company overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries.
Their mission is to build products that make their customers more productive.

Solution concept -
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from
the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500
megabytes of data per day.

Existing technical environment -


TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main
manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud.
The web frontend for dealers and customers is running in
Google Cloud and allows access to stock management and analytics.

Business requirements -
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
• Decrease cloud operational costs and adapt to seasonality.
• Increase speed and reliability of development workflow.
• Allow remote developers to be productive without compromising code or data security.
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners.

Technical requirements -
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
• Allow developers to run experiments without compromising security and governance requirements.
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
• Improve and standardize tools necessary for application and network monitoring and troubleshooting.

Executive statement -
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes.
After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic
plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the
cloud.

For this question, refer to the TerramEarth case study. You have broken down a legacy monolithic application into a few containerized RESTful microservices.
You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

A. Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.
B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing
instance.
C. Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.
D. Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.

Answer : B

Question 46 ( Testlet 7 )

Company overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries.
Their mission is to build products that make their customers more productive.

Solution concept -
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from
the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500
megabytes of data per day.

Existing technical environment -


TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main
manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud.
The web frontend for dealers and customers is running in
Google Cloud and allows access to stock management and analytics.

Business requirements -
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
• Decrease cloud operational costs and adapt to seasonality.
• Increase speed and reliability of development workflow.
• Allow remote developers to be productive without compromising code or data security.
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners.

Technical requirements -
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
• Allow developers to run experiments without compromising security and governance requirements.
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
• Improve and standardize tools necessary for application and network monitoring and troubleshooting.

Executive statement -
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes.
After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic
plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the
cloud.

For this question, refer to the TerramEarth case study. You are migrating a Linux-based application from your private data center to Google Cloud. The
TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your
migration. What should you do? (Choose two.)

A. Open a support case regarding the CVE and chat with the support engineer.
B. Read the CVEs from the Google Cloud Status Dashboard to understand the impact.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.
D. Post a question regarding the CVE in Stack Overflow to get an explanation.
E. Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.

Answer : AC

Question 47 ( Testlet 7 )

Company overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries.
Their mission is to build products that make their customers more productive.

Solution concept -
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from
the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500
megabytes of data per day.

Existing technical environment -


TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main
manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud.
The web frontend for dealers and customers is running in
Google Cloud and allows access to stock management and analytics.

Business requirements -
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
• Decrease cloud operational costs and adapt to seasonality.
• Increase speed and reliability of development workflow.
• Allow remote developers to be productive without compromising code or data security.
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners.

Technical requirements -
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
• Allow developers to run experiments without compromising security and governance requirements.
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
• Improve and standardize tools necessary for application and network monitoring and troubleshooting.

Executive statement -
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes.
After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic
plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the
cloud.

For this question, refer to the TerramEarth case study. TerramEarth has a legacy web application that you cannot migrate to cloud. However, you still want to build a cloud-native way to monitor the application. If the
application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for
minimum cost. What should you do?

A. Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops
team.
B. Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page,
and notify the Ops team.
C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify
the Ops team.
D. Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

Answer : C
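
A minimal sketch of the Cloud Function at the end of answer C's chain: the Cloud Monitoring alerting policy for the uptime check publishes incident messages to Pub/Sub, which triggers this function. The two helpers are hypothetical stubs standing in for the real failover and notification logic:

import base64
import json

def switch_to_maintenance_page():
    # Hypothetical: repoint the URL (e.g., load balancer backend or DNS
    # record) at the static "Site is unavailable" page.
    pass

def notify_ops(incident):
    # Hypothetical: forward the incident to the Ops team's alert channel.
    print("Uptime check failed:", incident.get("policy_name"))

def handle_uptime_alert(event, context):
    """Entry point for a Pub/Sub-triggered background Cloud Function."""
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    incident = message.get("incident", {})
    if incident.get("state") == "open":  # the uptime check is failing
        switch_to_maintenance_page()
        notify_ops(incident)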

Question 48 ( Testlet 7 )

Company overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries.
Their mission is to build products that make their customers more productive.

Solution concept -
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from
the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500
megabytes of data per day.

Existing technical environment -


TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main
manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud.
The web frontend for dealers and customers is running in
Google Cloud and allows access to stock management and analytics.

Business requirements -
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
• Decrease cloud operational costs and adapt to seasonality.
• Increase speed and reliability of development workflow.
• Allow remote developers to be productive without compromising code or data security.
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners.

Technical requirements -
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
• Allow developers to run experiments without compromising security and governance requirements.
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
• Improve and standardize tools necessary for application and network monitoring and troubleshooting.

Executive statement -
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes.
After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic
plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the
cloud.

For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended
practices to build the application continuously and store the build artifacts. What should you do?

A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry.
B. Configure a trigger in Cloud Build for new source changes. The trigger invokes build jobs and build container images for the microservices. Tag the images with a version number, and push them to Cloud Storage.
C. Create a Scheduler job to check the repo every minute. For any new change, invoke Cloud Build to build container images for the microservices. Tag the images using the current timestamp, and push them to the
Container Registry.
D. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.' Push the image to the Container Registry.

Answer : A

Question 49 ( Testlet 7 )

Company overview -
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries.
Their mission is to build products that make their customers more productive.


Solution concept -
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from
the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500
megabytes of data per day.

Existing technical environment -


TerramEarth's vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main
manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud.
The web frontend for dealers and customers is running in
Google Cloud and allows access to stock management and analytics.

Business requirements -
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible.
• Decrease cloud operational costs and adapt to seasonality.
• Increase speed and reliability of development workflow.
• Allow remote developers to be productive without compromising code or data security.
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners.

Technical requirements -
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations.
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments.
• Allow developers to run experiments without compromising security and governance requirements.
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints.
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access.
• Improve and standardize tools necessary for application and network monitoring and troubleshooting.

Executive statement -
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes.
After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic
plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the
cloud.

For this question, refer to the TerramEarth case study. TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning
team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.
B. Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud Storage.
C. Make sure there are no other users consuming the 1Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.
D. Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage.

Answer : A
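
The arithmetic behind answer A: even with the 1-Gbps interconnect fully dedicated to the transfer, 1 PB takes roughly three months to move, which misses the one-month deadline before any real-world overhead is counted. Hence Transfer Appliance:

data_bits = 1e15 * 8   # 1 PB expressed in bits
link_bps = 1e9         # 1-Gbps link, assumed 100% utilized
seconds = data_bits / link_bps
print(f"{seconds / 86_400:.0f} days")  # ~93 days at full line rate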

Question 50 ( Testlet 8 )

Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a premium app model.

Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now
insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current
infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location.
Databases:
- MySQL - user data, inventory, static data
- Redis - metadata, social graph, caching
Application servers:
- Tomcat - Java micro-services
- Nginx - static content
- Apache Beam - Batch processing
Storage appliances:
- iSCSI for VM hosts
- Fiber channel SAN - MySQL databases
- NAS - image storage, logs, backups
Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
MQ servers:
- Messaging
- Social notifications
- Events
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.

Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.

Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible
https://www.itexams.com/exam/Professional Cloud Architect?qpp_select=100 38/67
4/11/22, 9:37 PM Professional Cloud Architect Exam - Free Questions and Answers - ITExams.com
Use managed services whenever possible.
Encrypt data on the wire and at rest.

Support multiple VPN connections between the production data center and cloud environment.

CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could
use a public cloud platform to offset their up-front investment, freeing them to focus on developing better features.

CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks
for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of
our capacity is sitting idle.

CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully
transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy 30% to 50%
lower than our current model.

The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP).
The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects.
What can they do?

A. Grant the operations engineer access to use Google Cloud Shell.
B. Configure a VPN connection to GCP to allow SSH access to the cloud VMs.
C. Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.
D. Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.

Answer : A

Question 51 ( Testlet 8 )

Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a premium app model.

Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now
insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current
infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location.
Databases:
- MySQL - user data, inventory, static data
- Redis - metadata, social graph, caching
Application servers:
- Tomcat - Java micro-services
- Nginx - static content
- Apache Beam - Batch processing
Storage appliances:
- iSCSI for VM hosts
- Fiber channel SAN - MySQL databases
- NAS - image storage, logs, backups
Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
MQ servers:
- Messaging
- Social notifications
- Events
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.

Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.

Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.

CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could
use a public cloud platform to offset their up-front investment, freeing them to focus on developing better features.

CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks
for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of
our capacity is sitting idle.

CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully
transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy 30% to 50%
lower than our current model.

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files.
The database files are compressed tar files stored in their current data center.
How should they proceed?

A. Create a cron script using gsutil to copy the files to a Coldline Storage bucket.
B. Create a cron script using gsutil to copy the files to a Regional Storage bucket.
C. Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.
D. Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.

Answer : A

Explanation:
Follow these rules of thumb when deciding whether to use gsutil or Storage Transfer Service:
* When transferring data from an on-premises location, use gsutil.
* When transferring data from another cloud storage provider, use Storage Transfer Service.
* Otherwise, evaluate both tools with respect to your specific scenario.
Use this guidance as a starting point.
The specific details of your transfer scenario will also help you determine which tool is more appropriate.
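
The answer's cron entry would wrap a command such as gsutil cp /var/backups/db/*.tar gs://<coldline-bucket>/. As an equivalent illustration with the Python client library (directory and bucket names are hypothetical placeholders):

import pathlib
from google.cloud import storage

BACKUP_DIR = pathlib.Path("/var/backups/db")   # hypothetical local path
BUCKET_NAME = "dress4win-db-archive"           # hypothetical Coldline bucket

def archive_backups():
    bucket = storage.Client().bucket(BUCKET_NAME)
    for tarball in BACKUP_DIR.glob("*.tar"):
        # The bucket's default storage class is Coldline, so uploads land
        # there without any per-object setting.
        bucket.blob(tarball.name).upload_from_filename(str(tarball))

if __name__ == "__main__":
    archive_backups()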

Question 52 ( Testlet 8 )

Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a premium app model.

Company Background -
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now
insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster,
Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current
infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location.
Databases:
- MySQL - user data, inventory, static data
- Redis - metadata, social graph, caching
Application servers:
- Tomcat - Java micro-services
- Nginx - static content
- Apache Beam - Batch processing
Storage appliances:
- iSCSI for VM hosts
- Fiber channel SAN - MySQL databases
- NAS - image storage, logs, backups
Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
MQ servers:
- Messaging
- Social notifications
- Events
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.

Technical Requirements -
Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.
Support multiple VPN connections between the production data center and cloud environment.

CEO Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could
use a public cloud platform to offset their up-front investment and free them to focus on developing better features.

CTO Statement -
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks
for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of
our capacity is sitting idle.

CFO Statement -
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully
transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30% and 50%
lower than our current model.

Dress4Win has asked you to recommend machine types they should deploy their application servers to.
How should you proceed?

A. Perform a mapping of the on-premises physical hardware cores and RAM to the nearest machine types in the cloud.
B. Recommend that Dress4Win deploy application servers to machine types that offer the highest RAM to CPU ratio available.
C. Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the desired performance is reached.
D. Identify the number of virtual cores and RAM associated with the application server virtual machines, align them to a custom machine type in the cloud, monitor performance, and scale the machine types up until the desired performance is reached.

Answer : C
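A brief sketch of the recommended approach (instance, zone, and machine-type names are hypothetical): start the application servers on a small machine type, watch utilization in monitoring, and resize only when the data justifies it:

# Deploy on a small machine type first
gcloud compute instances create app-server-1 --zone us-central1-a --machine-type n1-standard-1

# If monitoring shows sustained CPU or memory pressure, stop and scale the machine type up
gcloud compute instances stop app-server-1 --zone us-central1-a
gcloud compute instances set-machine-type app-server-1 --zone us-central1-a --machine-type n1-standard-4
gcloud compute instances start app-server-1 --zone us-central1-a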

Question 53 ( Testlet 8 )

As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in
their traffic load.
They want to ensure that:
* The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day
* Their administrators are notified automatically when their application reports errors.
* They can filter their aggregated logs down in order to debug one piece of the application across many hosts
Which Google Stackdriver features should they use?

A. Logging, Alerts, Insights, Debug
B. Monitoring, Trace, Debug, Logging
C. Monitoring, Logging, Alerts, Error Reporting
D. Monitoring, Logging, Debug, Error Reporting

Answer : D
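To illustrate the log-filtering requirement (the project ID, payload field, and component label are hypothetical), aggregated logs can be narrowed to one piece of the application across all hosts with a single filter:

# Show only the checkout component's errors, regardless of which instance emitted them
gcloud logging read \
  'resource.type="gce_instance" AND jsonPayload.component="checkout" AND severity>=ERROR' \
  --project dress4win-prod --limit 20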

Question 54 ( Testlet 8 )

Dress4Win would like to become familiar with deploying applications to the cloud by successfully deploying some applications quickly, as is. They have asked
for your recommendation.
What should you advise?

A. Identify self-contained applications with external dependencies as a first move to the cloud.
B. Identify enterprise applications with internal dependencies and recommend these as a first move to the cloud.
C. Suggest moving their in-house databases to the cloud and continue serving requests to on-premises applications.
D. Recommend moving their message queuing servers to the cloud and continue handling requests to on-premises applications.

Answer : C

Question 55 ( Testlet 8 ) 

Dress4Win has asked you for advice on how to migrate their on-premises MySQL deployment to the cloud.
They want to minimize downtime and performance impact to their on-premises solution during the migration.
Which approach should you recommend?

A. Create a dump of the on-premises MySQL master server, and then shut it down, upload it to the cloud environment, and load it into a new MySQL cluster.
B. Set up a MySQL replica/slave server in the cloud environment, and configure it for asynchronous replication from the MySQL master server on-premises until cutover.
C. Create a new MySQL cluster in the cloud, configure applications to begin writing to both the on-premises and cloud MySQL masters, and destroy the original cluster at cutover.
D. Create a dump of the MySQL replica server into the cloud environment, load it into Google Cloud Datastore, and configure applications to read/write to Cloud Datastore at cutover.

Answer : B
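A minimal sketch of option B (the host name, credentials, and binlog coordinates are hypothetical): the cloud replica is pointed at the on-premises master and catches up asynchronously, so the master keeps serving traffic until cutover:

# On the cloud replica: configure asynchronous replication from the on-premises master
mysql -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='onprem-mysql.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl-password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
  START SLAVE;"

# Confirm the replica is keeping up before scheduling the cutover
mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master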

Question 56 ( Testlet 8 )

Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the
services as healthy.
What should they do?

A. Install the Stackdriver agent on all of the legacy web servers.
B. In the Cloud Platform Console, download the list of the uptime servers' IP addresses and create an inbound firewall rule.
C. Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)
D. Configure their legacy web servers to allow requests that contain a User-Agent HTTP header whose value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)

Answer : B
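A sketch of option B (the rule name is hypothetical, and 203.0.113.0/24 is a placeholder; substitute the uptime-check source ranges downloaded from the console or API):

# Allow Stackdriver uptime-check probes to reach the legacy web servers
gcloud compute firewall-rules create allow-uptime-checks \
  --network default --direction INGRESS --action ALLOW \
  --rules tcp:80,tcp:443 --source-ranges 203.0.113.0/24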

Question 57 ( Testlet 8 )

As part of their new application experience, Dress4Win allows customers to upload images of themselves.
The customer has exclusive control over who may view these images.
Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in.
Which configuration should Dress4Win use?

A. Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.
B. Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.
C. Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring
privacy of images.
D. Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to
their image files.

Answer : A
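To illustrate how option A can keep images exclusive to each customer (bucket, object, and key-file names are hypothetical), the objects stay private and the application hands out short-lived signed URLs:

# Upload a customer's image to a private bucket
gsutil cp selfie.jpg gs://dress4win-user-images/customer-1234/selfie.jpg

# Issue a 10-minute signed URL so only the authenticated customer's session can view it
gsutil signurl -d 10m service-account-key.json gs://dress4win-user-images/customer-1234/selfie.jpg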

Question 58 ( Testlet 8 )

Dress4Win has end-to-end tests covering 100% of their endpoints.
They want to ensure that the move to the cloud does not introduce any new bugs.
Which additional testing methods should the developers employ to prevent an outage?

A. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
B. They should add additional unit tests and production scale load tests on their cloud staging environment.
C. They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.
D. They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Answer : B
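As a small illustration of option B's load-testing step (the URL, request count, and concurrency are hypothetical), a production-scale run against the cloud staging environment could be as simple as:

# Drive production-scale traffic at a staging endpoint with Apache Bench
ab -n 1000000 -c 500 https://staging.dress4win.example.com/api/wardrobe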

Question 59 ( Testlet 8 )

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years.
Cost optimization is your top priority.
Which cloud services should you choose?

A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
B. Google Cloud Storage Nearline to store the data, and gsutil to access the data.
C. Google Bigtable with US or EU as location to store the data, and gcloud to access the data.
D. BigQuery to store the data, and a web server cluster in a managed instance group to access the data. Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.

Answer : A

Reference:
https://cloud.google.com/storage/docs/storage-classes
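A possible sketch for option A (the bucket name is hypothetical): keep the records in a Coldline bucket and, if the auditors require tamper-resistance, lock in a 10-year retention period:

# Coldline bucket for rarely-read sales and tax records
gsutil mb -c coldline gs://dress4win-audit-records

# Optionally enforce a 10-year retention period on every object
gsutil retention set 10y gs://dress4win-audit-records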

Question 60 ( Testlet 8 )

The current Dress4Win system architecture has high latency to some customers because it is located in one data center.
As part of a future evaluation and optimization for performance in the cloud, Dress4Win wants to distribute its system architecture to multiple locations on Google Cloud Platform.
Which approach should they use?

A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.
B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.
C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of a separate managed instance groups.

Answer : A
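A minimal sketch of option A (template, group, backend-service, and region names are hypothetical, and the global backend service is assumed to already exist): each serving region gets its own regional managed instance group, and all groups sit behind one global load balancer:

# One regional managed instance group per serving region
gcloud compute instance-groups managed create web-us \
  --region us-central1 --template web-template --size 3
gcloud compute instance-groups managed create web-asia \
  --region asia-east1 --template web-template --size 3

# Attach both groups as backends of a single global backend service
gcloud compute backend-services add-backend web-backend --global \
  --instance-group web-us --instance-group-region us-central1
gcloud compute backend-services add-backend web-backend --global \
  --instance-group web-asia --instance-group-region asia-east1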

Question 61 ( Testlet 9 )

Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a web app and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage
to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the
company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single
location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.
Databases:
MySQL. 1 server for user data, inventory, static data:
- MySQL 5.8
- 8 core CPUs
- 128 GB of RAM
- 2x 5 TB HDD (RAID 1)
Redis 3 server cluster for metadata, social graph, caching. Each server is:
- Redis 3.2
- 4 core CPUs
- 32GB of RAM
Compute:
40 Web Application servers providing micro-services based APIs and static content:
- Tomcat - Java
- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:
- 8 core CPUs
- 32GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32GB of RAM
Storage appliances:
iSCSI for VM hosts
Fiber channel SAN - MySQL databases
- 1 PB total storage; 400 TB available
NAS - image storage, logs, backups
- 100 TB total storage; 35 TB available

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements -
Easily create non-production environments in the cloud.
Implement an automation framework for provisioning resources in cloud.
Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.
Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.
Support multiple private connections between the production data center and cloud environment.

Executive Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment
and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our
total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has
set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize
the ROI?

A. Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.
B. Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.
C. Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.
D. Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Answer : D
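A sketch of the managed-service targets named in option D (instance names, regions, tiers, and sizes are hypothetical):

# Managed MySQL replacement for the single database server
gcloud sql instances create dress4win-mysql --database-version MYSQL_5_7 \
  --tier db-n1-standard-4 --region us-central1

# Pub/Sub topic replacing the RabbitMQ brokers
gcloud pubsub topics create social-notifications

# Dataproc cluster replacing the self-managed Hadoop/Spark servers
gcloud dataproc clusters create analytics --region us-central1 \
  --num-workers 20 --worker-machine-type n1-highmem-8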

Question 62 ( Testlet 9 ) 

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

A. Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.
B. Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.
C. Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.
D. Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.

Answer : A
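A minimal Cloud Deployment Manager sketch in the spirit of option A (deployment, instance, zone, and image names are hypothetical; production configs typically use fully qualified resource URLs):

# Write a one-VM Deployment Manager config for the web tier
cat > web-tier.yaml <<'EOF'
resources:
- name: web-server-1
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-2
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts
    networkInterfaces:
    - network: global/networks/default
EOF

# Create (and later reproducibly re-create) the deployment
gcloud deployment-manager deployments create web-tier --config web-tier.yaml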

Question 63 ( Testlet 9 )

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

A. Web applications deployed using App Engine standard environment
B. RabbitMQ deployed using an unmanaged instance group
C. Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode
D. Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Answer : A

Question 64 ( Testlet 9 )

Company Overview -
https://www.itexams.com/exam/Professional Cloud Architect?qpp_select=100 51/67
4/11/22, 9:37 PM Professional Cloud Architect Exam - Free Questions and Answers - ITExams.com
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a web app and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage
to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the
company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single
location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.
Databases:
MySQL. 1 server for user data, inventory, static data:
- MySQL 5.8
- 8 core CPUs
- 128 GB of RAM
- 2x 5 TB HDD (RAID 1)
Redis 3 server cluster for metadata, social graph, caching. Each server is:
- Redis 3.2
- 4 core CPUs
- 32GB of RAM
Compute:
40 Web Application servers providing micro-services based APIs and static content.
‫ג‬€"
- Tomcat

Java -
- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:
- 8 core CPUs
- 32GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32GB of RAM
Storage appliances:
iSCSI for VM hosts
Fiber channel SAN – MySQL databases
- 1 PB total storage; 400 TB available
NAS – image storage, logs, backups
- 100 TB total storage; 35 TB available

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements -
Easily create non-production environments in the cloud.
Implement an automation framework for provisioning resources in cloud.
Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.
Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.
Support multiple private connections between the production data center and cloud environment.

Executive Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment
and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our
total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insights into all administrative actions that modify the configuration or metadata of
resources on Google Cloud.
What should you do?

A. Use Stackdriver Trace to create a Trace list analysis.


B. Use Stackdriver Monitoring to create a dashboard on the project's activity.
C. Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.
D. Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.

Answer : D
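
As background for option D, Admin Activity audit logs record exactly these administrative actions that modify resource configuration or metadata, and they can be queried from the command line as well as from the Activity page. A minimal sketch, assuming a hypothetical project ID:

# List recent Admin Activity audit log entries for a project.
# The log name is the standard Admin Activity audit log; the project ID is illustrative.
gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"' \
  --project=my-project \
  --limit=10 \
  --format=json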

Question 65 ( Testlet 9 )

Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a web app and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage
to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the
company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single
location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.
Databases:
MySQL. 1 server for user data, inventory, static data:
- MySQL 5.8
- 8 core CPUs
- 128 GB of RAM
- 2x 5 TB HDD (RAID 1)
Redis 3 server cluster for metadata, social graph, caching. Each server is:
- Redis 3.2
- 4 core CPUs

- 32GB of RAM
Compute:
40 Web Application servers providing micro-services based APIs and static content.
‫ג‬€"
- Tomcat

Java -
- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:
- 8 core CPUs
- 32GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32GB of RAM
Storage appliances:
iSCSI for VM hosts
Fiber channel SAN – MySQL databases
- 1 PB total storage; 400 TB available
NAS – image storage, logs, backups
- 100 TB total storage; 35 TB available

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements -
Easily create non-production environments in the cloud.
Implement an automation framework for provisioning resources in cloud.
Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.
Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.
Support multiple private connections between the production data center and cloud environment.

Executive Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment
and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our
total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned
the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.
Considering Dress4Win's business and technical requirements, what should you do?

A. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.
B. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.
C. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google's default encryption at rest when storing files in Cloud Storage.
D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Answer : C
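
To make option C concrete, a minimal sketch of binding a predefined Cloud Storage role to one of the existing Google Groups; the group and bucket names are hypothetical. No additional encryption configuration is required, because Cloud Storage encrypts data at rest by default.

# Grant the predefined objectAdmin role to a Google Group on one bucket.
# Group and bucket names are illustrative.
gsutil iam ch \
  group:storage-admins@dress4win.com:roles/storage.objectAdmin \
  gs://dress4win-user-assets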

Question 66 ( Testlet 9 )

Company Overview -
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a web app and mobile application. The company also cultivates an active social network that connects
their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage
to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the
company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept -
For the first phase of their migration to the cloud, Dress4Win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single
location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment -


The Dress4Win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.
Databases:
MySQL. 1 server for user data, inventory, static data:
- MySQL 5.8
- 8 core CPUs
- 128 GB of RAM
- 2x 5 TB HDD (RAID 1)
Redis 3 server cluster for metadata, social graph, caching. Each server is:
- Redis 3.2
- 4 core CPUs
- 32GB of RAM
Compute:
40 Web Application servers providing micro-services based APIs and static content.
‫ג‬€"
- Tomcat

Java -
- Nginx
- 4 core CPUs
- 32 GB of RAM
20 Apache Hadoop/Spark servers:
- Data analysis
- Real-time trending calculations
- 8 core CPUs
- 128 GB of RAM
- 4x 5 TB HDD (RAID 1)
3 RabbitMQ servers for messaging, social notifications, and events:

- 8 core CPUs
- 32GB of RAM
Miscellaneous servers:
- Jenkins, monitoring, bastion hosts, security scanners
- 8 core CPUs
- 32GB of RAM
Storage appliances:
iSCSI for VM hosts
Fiber channel SAN – MySQL databases
- 1 PB total storage; 400 TB available
NAS – image storage, logs, backups
- 100 TB total storage; 35 TB available

Business Requirements -
Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements -
Easily create non-production environments in the cloud.
Implement an automation framework for provisioning resources in cloud.
Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.
Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.
Support multiple private connections between the production data center and cloud environment.

Executive Statement -
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment
and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our
total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.
What change in the on-premises architecture should you make?

A. Replace RabbitMQ with Google Pub/Sub.


B. Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.
C. Resize compute resources to match predefined Compute Engine machine types.
D. Containerize the micro-services and host them in Google Kubernetes Engine.

Answer : C
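
As a companion to option C, the predefined Compute Engine machine types can be listed to find the closest match for each on-premises server profile (for example, the 8-core database and Hadoop nodes). A minimal sketch; the zone is hypothetical.

# List predefined machine types with 8 vCPUs in a zone (zone is illustrative).
gcloud compute machine-types list \
  --zones=us-central1-a \
  --filter="guestCpus=8" \
  --format="table(name, guestCpus, memoryMb)"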

Question 67 ( Testlet 10 ) 
Company overview -
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot
in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept -
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging
regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment -


HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded
and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing
public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider.
Video encoding and transcoding is performed on VMs created for each job.
Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements -
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
Support ability to expose the predictive models to partners.
Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
Increase telemetry and create additional insights.
Measure fan engagement with new predictions.
Enhance global availability and quality of the broadcasts.
Increase the number of concurrent viewers.
Minimize operational complexity.
Ensure compliance with regulations.
Create a merchandising revenue stream.

Technical requirements -
Maintain or increase prediction throughput and accuracy.
Reduce viewer latency.
Increase transcoding performance.
Create real-time analytics of viewer consumption patterns and engagement.
Create a data mart to enable processing of large volumes of race data.

Executive statement -
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g.,
overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise
consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:
• It must provide low latency at minimal cost.
• It must be able to identify duplicate credit cards and must not store plaintext card numbers.
• It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?

A. Store the card data in Secret Manager after running a query to identify duplicates.
B. Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.
C. Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.
D. Use column-level encryption to store the data in Cloud SQL.

Answer : D
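
One requirement in this question maps directly to a command: annual key rotation. A minimal Cloud KMS sketch, with hypothetical key ring, key name, location, and rotation start date:

# Create a symmetric key that rotates automatically once a year.
# All names, the location, and the next rotation time are illustrative.
gcloud kms keyrings create payments --location=us-central1
gcloud kms keys create card-wrapping-key \
  --keyring=payments \
  --location=us-central1 \
  --purpose=encryption \
  --rotation-period=365d \
  --next-rotation-time=2023-01-01T00:00:00Z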

Question 68 ( Testlet 10 )

Company overview -
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot
in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept -
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging
regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment -


HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded
and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing
public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider.
Video encoding and transcoding is performed on VMs created for each job.
Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements -
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
Support ability to expose the predictive models to partners.
Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
Increase telemetry and create additional insights.
Measure fan engagement with new predictions.
Enhance global availability and quality of the broadcasts.
Increase the number of concurrent viewers.
Minimize operational complexity.
Ensure compliance with regulations.
Create a merchandising revenue stream.

Technical requirements -
Maintain or increase prediction throughput and accuracy.
Reduce viewer latency.
Increase transcoding performance.
Create real-time analytics of viewer consumption patterns and engagement.
Create a data mart to enable processing of large volumes of race data.

Executive statement -
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g.,
overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user
experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network).
You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?
A–D. [The four answer choices were presented as gcloud command screenshots, which are not preserved in this text export.]

Answer : A

Reference:
https://cloud.google.com/load-balancing/docs/https
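
Because the answer choices above were screenshots, the following is only an illustrative sketch of the kind of command being tested: updating an existing firewall rule so that only specific source ranges can reach the load-balanced frontend. The rule name is hypothetical and the CIDR ranges are documentation placeholders, not the actual Fastly ranges.

# Restrict an existing allow rule to specific source IP ranges.
# Rule name and CIDR ranges are placeholders; substitute the published Fastly ranges.
gcloud compute firewall-rules update allow-fastly-ingress \
  --source-ranges="203.0.113.0/24,198.51.100.0/24" \
  --allow=tcp:80,tcp:443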

Question 69 ( Testlet 10 )

Company overview -
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot
in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept -
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging
regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment -


HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded
and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing
public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider.
Video encoding and transcoding is performed on VMs created for each job.
Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements -
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
Support ability to expose the predictive models to partners.
Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
Increase telemetry and create additional insights.
Measure fan engagement with new predictions.
Enhance global availability and quality of the broadcasts.
Increase the number of concurrent viewers.
Minimize operational complexity.
Ensure compliance with regulations.
Create a merchandising revenue stream.

Technical requirements -
Maintain or increase prediction throughput and accuracy.
Reduce viewer latency.
Increase transcoding performance.
Create real-time analytics of viewer consumption patterns and engagement.
Create a data mart to enable processing of large volumes of race data.

Executive statement -
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g.,
overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a
repository. The security team at HRL has developed an in-house penetration test Cloud Function called
Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you
do?

A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Answer : A
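
For the Cloud Storage half of option A, a minimal sketch of deploying a Cloud Function that fires whenever a new release artifact is finalized in a bucket. The function name, bucket, runtime, region, and entry point are all hypothetical.

# Deploy a function triggered by new objects in the release bucket (names illustrative).
gcloud functions deploy airwolf \
  --runtime=python39 \
  --entry-point=run_pentest \
  --trigger-resource=hrl-release-artifacts \
  --trigger-event=google.storage.object.finalize \
  --region=us-central1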

Question 70 ( Testlet 10 ) 
Company overview -
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot
in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept -
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging
regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment -


HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded
and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing
public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider.
Video encoding and transcoding is performed on VMs created for each job.
Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements -
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
Support ability to expose the predictive models to partners.
Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
Increase telemetry and create additional insights.
Measure fan engagement with new predictions.
Enhance global availability and quality of the broadcasts.
Increase the number of concurrent viewers.
Minimize operational complexity.
Ensure compliance with regulations.
Create a merchandising revenue stream.

Technical requirements -
Maintain or increase prediction throughput and accuracy.
Reduce viewer latency.
Increase transcoding performance.
Create real-time analytics of viewer consumption patterns and engagement.
Create a data mart to enable processing of large volumes of race data.

Executive statement -
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g.,
overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can
understand and interpret the predictions. What should you do?

A. Use Explainable AI.


B. Use Vision AI.
C. Use Google Cloud's operations suite.
D. Use Jupyter Notebooks.

Answer : A

Reference:
https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/preparing-metadata
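
As a companion to option A, AI Platform model versions can be deployed with an explanation method enabled so that predictions come back with feature attributions. A minimal sketch using the beta gcloud surface; the model, version, bucket, and attribution settings are hypothetical, and the exact flags may vary by gcloud release.

# Deploy a model version with feature attributions enabled (all names illustrative).
gcloud beta ai-platform versions create v1 \
  --model=race_predictor \
  --origin=gs://hrl-models/race_predictor/ \
  --runtime-version=2.1 \
  --framework=tensorflow \
  --explanation-method=sampled-shapley \
  --num-paths=25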

Question 71 ( Testlet 10 )

Company overview -
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot
in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept -
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging
regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment -


HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded
and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing
public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider.
Video encoding and transcoding is performed on VMs created for each job.
Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements -
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
Support ability to expose the predictive models to partners.
Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
Increase telemetry and create additional insights.
Measure fan engagement with new predictions.
Enhance global availability and quality of the broadcasts.
Increase the number of concurrent viewers.
Minimize operational complexity.
Ensure compliance with regulations.
Create a merchandising revenue stream.

Technical requirements -
Maintain or increase prediction throughput and accuracy.
Reduce viewer latency.
Increase transcoding performance.
Create real-time analytics of viewer consumption patterns and engagement.
Create a data mart to enable processing of large volumes of race data.

Executive statement -
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g.,
overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train
models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals
expressed by CEO S. Hawke, what should you do?

A. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.
B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.
D. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

Answer : C

Reference:
https://cloud.google.com/bigquery/public-data
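
To make option C concrete: BigQuery supports integer-range partitioning, which fits a numeric season column. A minimal bq sketch; the dataset, table, schema, and season bounds are hypothetical.

# Create a table partitioned on an integer "season" column.
# Dataset, table, schema, and range bounds are illustrative.
bq mk --table \
  --range_partitioning=season,2010,2040,1 \
  hrl_analytics.race_telemetry \
  season:INTEGER,race_id:STRING,telemetry:STRING,recorded_at:TIMESTAMP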

Question 72 ( Testlet 10 )

Company overview -
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot
in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept -
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging
regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment -


HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded
and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute is provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing
public cloud provider. Their existing technical environment is as follows:
Existing content is stored in an object storage service on their existing public cloud provider.
Video encoding and transcoding is performed on VMs created for each job.
Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements -
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
Support ability to expose the predictive models to partners.
Increase predictive capabilities during and before races:
- Race results
- Mechanical failures
- Crowd sentiment
Increase telemetry and create additional insights.
Measure fan engagement with new predictions.
Enhance global availability and quality of the broadcasts.
Increase the number of concurrent viewers.
Minimize operational complexity.
Ensure compliance with regulations.
Create a merchandising revenue stream.

Technical requirements -
Maintain or increase prediction throughput and accuracy.
Reduce viewer latency.
Increase transcoding performance.
Create real-time analytics of viewer consumption patterns and engagement.
Create a data mart to enable processing of large volumes of race data.

Executive statement -
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g.,
overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of
Compute Engine instances allocated to video encoding and transcoding. You suspect that these virtual machines are zombie machines that were not deleted after their workloads completed. You need to
quickly get a list of which VM instances are idle. What should you do?

A. Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.
B. Use the gcloud compute instances list to list the virtual machine instances that have the idle: true label set.
C. Use the gcloud recommender command to list the idle virtual machine instances.
D. From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.

Answer : A

Reference:
https://cloud.google.com/compute/docs/instances/viewing-and-applying-idle-vm-recommendations
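
The reference above describes the Recommender-based approach to finding idle VMs. A minimal sketch of listing idle-instance recommendations from the CLI; the project ID and zone are hypothetical.

# List idle VM recommendations for one zone (project and zone are illustrative).
gcloud recommender recommendations list \
  --project=hrl-video-prod \
  --location=us-central1-a \
  --recommender=google.compute.instance.IdleResourceRecommender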

Question 73 ( Testlet 11 )

Company overview -
EHR Healthcare is a leading provider of electronic health record software to the medical industry. EHR Healthcare provides their software as a service to multinational medical offices, hospitals, and insurance
providers.

Solution concept -
Due to rapid changes in the healthcare and insurance industry, EHR Healthcare's business has been growing exponentially year over year. They need to be able to scale their environment, adapt their disaster recovery
plan, and roll out new continuous deployment capabilities to update their software at a fast pace. Google
Cloud has been chosen to replace their current colocation facilities.

Existing technical environment -


EHR's software is currently hosted in multiple colocation facilities. The lease on one of the data centers is about to expire.
Customer-facing applications are web-based, and many have recently been containerized to run on a group of Kubernetes clusters. Data is stored in a mixture of relational and NoSQL databases (MySQL, MS SQL
Server, Redis, and MongoDB).
EHR is hosting several legacy file- and API-based integrations with insurance providers on-premises. These systems are scheduled to be replaced over the next several years. There is no plan to upgrade or move these
systems at the current time.
Users are managed via Microsoft Active Directory. Monitoring is currently being done via various open source tools. Alerts are sent via email and are often ignored.

Business requirements -
• On-board new insurance providers as quickly as possible.
• Provide a minimum 99.9% availability for all customer-facing systems.
• Provide centralized visibility and proactive action on system performance and usage.
• Increase ability to provide insights into healthcare trends.
• Reduce latency to all customers.
• Maintain regulatory compliance.
• Decrease infrastructure administration costs.
• Make predictions and generate reports on industry trends based on provider data.

Technical requirements -
• Maintain legacy interfaces to insurance providers with connectivity to both on-premises systems and cloud providers.
• Provide a consistent way to manage customer-facing applications that are container-based.
• Provide a secure and high-performance connection between on-premises systems and Google Cloud.
• Provide consistent logging, log retention, monitoring, and alerting capabilities.
• Maintain and manage multiple container-based environments.
• Dynamically scale and provision new environments.
• Create interfaces to ingest and process data from new providers.

Executive statement -
Our on-premises strategy has worked for years but has required a major investment of time and money in training our team on distinctly different systems, managing similar but separate environments, and
responding to outages. Many of these outages have been a result of misconfigured systems, inadequate capacity to manage spikes in traffic, and inconsistent monitoring practices. We want to use Google Cloud to
leverage a scalable, resilient platform that can span multiple environments seamlessly and provide a consistent and stable user experience that positions us for future growth.

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

A. Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.
C. Use Firebase Authentication for EHR's user facing applications.
D. Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.
E. Use GKE private clusters for all Kubernetes workloads.

Answer : BD

Question 74 ( Testlet 11 )


Company overview -
EHR Healthcare is a leading provider of electronic health record software to the medical industry. EHR Healthcare provides their software as a service to multinational medical offices, hospitals, and insurance
providers.

Solution concept -
Due to rapid changes in the healthcare and insurance industry, EHR Healthcare's business has been growing exponentially year over year. They need to be able to scale their environment, adapt their disaster recovery
plan, and roll out new continuous deployment capabilities to update their software at a fast pace. Google
Cloud has been chosen to replace their current colocation facilities.

Existing technical environment -


EHR's software is currently hosted in multiple colocation facilities. The lease on one of the data centers is about to expire.
Customer-facing applications are web-based, and many have recently been containerized to run on a group of Kubernetes clusters. Data is stored in a mixture of relational and NoSQL databases (MySQL, MS SQL
Server, Redis, and MongoDB).
EHR is hosting several legacy file- and API-based integrations with insurance providers on-premises. These systems are scheduled to be replaced over the next several years. There is no plan to upgrade or move these
systems at the current time.
Users are managed via Microsoft Active Directory. Monitoring is currently being done via various open source tools. Alerts are sent via email and are often ignored.

Business requirements -
• On-board new insurance providers as quickly as possible.
• Provide a minimum 99.9% availability for all customer-facing systems.
• Provide centralized visibility and proactive action on system performance and usage.
• Increase ability to provide insights into healthcare trends.
• Reduce latency to all customers.
• Maintain regulatory compliance.
• Decrease infrastructure administration costs.
• Make predictions and generate reports on industry trends based on provider data.

Technical requirements -
• Maintain legacy interfaces to insurance providers with connectivity to both on-premises systems and cloud providers.
• Provide a consistent way to manage customer-facing applications that are container-based.
• Provide a secure and high-performance connection between on-premises systems and Google Cloud.
• Provide consistent logging, log retention, monitoring, and alerting capabilities.
• Maintain and manage multiple container-based environments.
• Dynamically scale and provision new environments.
• Create interfaces to ingest and process data from new providers.

Executive statement -
Our on-premises strategy has worked for years but has required a major investment of time and money in training our team on distinctly different systems, managing similar but separate environments, and
responding to outages. Many of these outages have been a result of misconfigured systems, inadequate capacity to manage spikes in traffic, and inconsistent monitoring practices. We want to use Google Cloud to
leverage a scalable, resilient platform that can span multiple environments seamlessly and provide a consistent and stable user experience that positions us for future growth.

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are
deployed using Google Cloud services. What should you do? (Choose two.)

A. Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.
B. Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.
C. Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.
D. Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

Answer : AB
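
For the GKE half of option A, a minimal sketch of turning on Binary Authorization for an existing cluster; the cluster name and zone are hypothetical, and the flag name may differ across gcloud releases. Container signing itself would happen earlier, in the CI/CD pipeline.

# Enable Binary Authorization enforcement on an existing cluster (names illustrative).
gcloud container clusters update ehr-prod-cluster \
  --zone=us-central1-a \
  --enable-binauthz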

Question 75 ( Testlet 11 )

Company overview -
EHR Healthcare is a leading provider of electronic health record software to the medical industry. EHR Healthcare provides their software as a service to multinational medical offices, hospitals, and insurance
providers.

Solution concept -
Due to rapid changes in the healthcare and insurance industry, EHR Healthcare's business has been growing exponentially year over year. They need to be able to scale their environment, adapt their disaster recovery
plan, and roll out new continuous deployment capabilities to update their software at a fast pace. Google
Cloud has been chosen to replace their current colocation facilities.

Existing technical environment -


EHR's software is currently hosted in multiple colocation facilities. The lease on one of the data centers is about to expire.
Customer-facing applications are web-based, and many have recently been containerized to run on a group of Kubernetes clusters. Data is stored in a mixture of relational and NoSQL databases (MySQL, MS SQL
Server, Redis, and MongoDB).
EHR is hosting several legacy file- and API-based integrations with insurance providers on-premises. These systems are scheduled to be replaced over the next several years. There is no plan to upgrade or move these
systems at the current time.
Users are managed via Microsoft Active Directory. Monitoring is currently being done via various open source tools. Alerts are sent via email and are often ignored.

Business requirements -
• On-board new insurance providers as quickly as possible.
• Provide a minimum 99.9% availability for all customer-facing systems.
• Provide centralized visibility and proactive action on system performance and usage.
• Increase ability to provide insights into healthcare trends.
• Reduce latency to all customers.
• Maintain regulatory compliance.
• Decrease infrastructure administration costs.
• Make predictions and generate reports on industry trends based on provider data.

Technical requirements -
• Maintain legacy interfaces to insurance providers with connectivity to both on-premises systems and cloud providers.
• Provide a consistent way to manage customer-facing applications that are container-based.
• Provide a secure and high-performance connection between on-premises systems and Google Cloud.
• Provide consistent logging, log retention, monitoring, and alerting capabilities.
• Maintain and manage multiple container-based environments.
• Dynamically scale and provision new environments.
• Create interfaces to ingest and process data from new providers.

Executive statement -
Our on-premises strategy has worked for years but has required a major investment of time and money in training our team on distinctly different systems, managing similar but separate environments, and
responding to outages. Many of these outages have been a result of misconfigured systems, inadequate capacity to manage spikes in traffic, and inconsistent monitoring practices. We want to use Google Cloud to
leverage a scalable, resilient platform that can span multiple environments seamlessly and provide a consistent and stable user experience that positions us for future growth.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What
should you do?

A. Add a new Dedicated Interconnect connection.


B. Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.
C. Add three new Cloud VPN connections.
D. Add a new Carrier Peering connection.

Answer : D

Question 76 ( Testlet 11 )

Company overview -
EHR Healthcare is a leading provider of electronic health record software to the medical industry. EHR Healthcare provides their software as a service to multinational medical offices, hospitals, and insurance
providers.

Solution concept -
Due to rapid changes in the healthcare and insurance industry, EHR Healthcare's business has been growing exponentially year over year. They need to be able to scale their environment, adapt their disaster recovery
plan, and roll out new continuous deployment capabilities to update their software at a fast pace. Google
Cloud has been chosen to replace their current colocation facilities.

Existing technical environment -


EHR's software is currently hosted in multiple colocation facilities. The lease on one of the data centers is about to expire.
Customer-facing applications are web-based, and many have recently been containerized to run on a group of Kubernetes clusters. Data is stored in a mixture of relational and NoSQL databases (MySQL, MS SQL
Server, Redis, and MongoDB).
EHR is hosting several legacy file- and API-based integrations with insurance providers on-premises. These systems are scheduled to be replaced over the next several years. There is no plan to upgrade or move these
systems at the current time.
Users are managed via Microsoft Active Directory. Monitoring is currently being done via various open source tools. Alerts are sent via email and are often ignored.

Business requirements -
• On-board new insurance providers as quickly as possible.
• Provide a minimum 99.9% availability for all customer-facing systems.
• Provide centralized visibility and proactive action on system performance and usage.
• Increase ability to provide insights into healthcare trends.
• Reduce latency to all customers.
• Maintain regulatory compliance.
• Decrease infrastructure administration costs.
• Make predictions and generate reports on industry trends based on provider data.

Technical requirements -
• Maintain legacy interfaces to insurance providers with connectivity to both on-premises systems and cloud providers.
• Provide a consistent way to manage customer-facing applications that are container-based.
• Provide a secure and high-performance connection between on-premises systems and Google Cloud.
• Provide consistent logging, log retention, monitoring, and alerting capabilities.
• Maintain and manage multiple container-based environments.
• Dynamically scale and provision new environments.
• Create interfaces to ingest and process data from new providers.

Executive statement -
Our on-premises strategy has worked for years but has required a major investment of time and money in training our team on distinctly different systems, managing similar but separate environments, and
responding to outages. Many of these outages have been a result of misconfigured systems, inadequate capacity to manage spikes in traffic, and inconsistent monitoring practices. We want to use Google Cloud to
leverage a scalable, resilient platform that can span multiple environments seamlessly and provide a consistent and stable user experience that positions us for future growth.

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's
recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?

A. Configure two Partner Interconnect connections in one metro (City), and make sure the Interconnect connections are placed in different metro zones.
B. Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.
C. Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.
D. Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

Answer : D
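
To illustrate option D, a minimal sketch of ordering one of the four Dedicated Interconnect connections; every value below (name, location, link count, contact email) is hypothetical, and the command would be repeated for the second connection in each metro.

# Order a single 10 Gbps Dedicated Interconnect (all values illustrative).
gcloud compute interconnects create ehr-ic-metro1-zone1 \
  --customer-name="EHR Healthcare" \
  --interconnect-type=DEDICATED \
  --link-type=LINK_TYPE_ETHERNET_10G_LR \
  --requested-link-count=1 \
  --location=lga-zone1-7 \
  --noc-contact-email=netops@example.com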

Question 77 ( Testlet 11 )

Company overview -
EHR Healthcare is a leading provider of electronic health record software to the medical industry. EHR Healthcare provides their software as a service to multinational medical offices, hospitals, and insurance
providers.

Solution concept -
Due to rapid changes in the healthcare and insurance industry, EHR Healthcare's business has been growing exponentially year over year. They need to be able to scale their environment, adapt their disaster recovery
plan, and roll out new continuous deployment capabilities to update their software at a fast pace. Google
Cloud has been chosen to replace their current colocation facilities.

Existing technical environment -


EHR's software is currently hosted in multiple colocation facilities. The lease on one of the data centers is about to expire.
Customer-facing applications are web-based, and many have recently been containerized to run on a group of Kubernetes clusters. Data is stored in a mixture of relational and NoSQL databases (MySQL, MS SQL
Server, Redis, and MongoDB).
EHR is hosting several legacy file- and API-based integrations with insurance providers on-premises. These systems are scheduled to be replaced over the next several years. There is no plan to upgrade or move these
systems at the current time.
Users are managed via Microsoft Active Directory. Monitoring is currently being done via various open source tools. Alerts are sent via email and are often ignored.

Business requirements -
• On-board new insurance providers as quickly as possible.
• Provide a minimum 99.9% availability for all customer-facing systems.
• Provide centralized visibility and proactive action on system performance and usage.
• Increase ability to provide insights into healthcare trends.
• Reduce latency to all customers.
• Maintain regulatory compliance.
• Decrease infrastructure administration costs.
• Make predictions and generate reports on industry trends based on provider data.

Technical requirements -
• Maintain legacy interfaces to insurance providers with connectivity to both on-premises systems and cloud providers.
• Provide a consistent way to manage customer-facing applications that are container-based.
• Provide a secure and high-performance connection between on-premises systems and Google Cloud.
• Provide consistent logging, log retention, monitoring, and alerting capabilities.
• Maintain and manage multiple container-based environments.
• Dynamically scale and provision new environments.
• Create interfaces to ingest and process data from new providers.

Executive statement -
Our on-premises strategy has worked for years but has required a major investment of time and money in training our team on distinctly different systems, managing similar but separate environments, and
responding to outages. Many of these outages have been a result of misconfigured systems, inadequate capacity to manage spikes in traffic, and inconsistent monitoring practices. We want to use Google Cloud to
leverage a scalable, resilient platform that can span multiple environments seamlessly and provide a consistent and stable user experience that positions us for future growth.

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased
on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing
errors. You want to improve publishing latency.
What should you do?

A. Increase the Pub/Sub Total Timeout retry value.
B. Move from a Pub/Sub subscriber pull model to a push model.
C. Turn off Pub/Sub message batching.
D. Create a backup Pub/Sub message queue.

Answer : C

Explanation:
Publisher-side batching trades latency for throughput: the client library buffers messages until a count, size, or time threshold is reached before sending them. Under the increased load, this buffering adds publishing latency even though no publish errors are logged. Turning off batching sends each message immediately. Increasing the retry timeout (A) changes error handling, not latency, and the pull/push choice (B) affects subscribers rather than publishers.
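
For illustration, a minimal sketch of disabling batching in the Pub/Sub client library for Python; the project and topic names below are hypothetical:

from google.cloud import pubsub_v1

# Setting the batch thresholds to their minimums makes the client send each
# message as soon as it is published instead of buffering it.
batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=1,     # send as soon as one message is queued
    max_bytes=1,        # any payload exceeds this, so no size-based buffering
    max_latency=0.01,   # wait at most 10 ms before sending
)

publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic_path = publisher.topic_path("ehr-portal", "portal-events")  # hypothetical names

future = publisher.publish(topic_path, b"event-payload")
print(future.result())  # blocks until Pub/Sub acknowledges the publish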

Question 78 ( Testlet 11 )

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that
no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

A. Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.
B. Revoke the compute.networkAdmin role from all users in the project with front end instances.
C. Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.
D. Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.

Answer : A

Explanation:
The Organization Policy list constraint constraints/compute.vmExternalIpAccess can allowlist the specific VM instances that may be assigned external IP addresses; every other instance is denied one. IAM changes (B, C, D) only control who can administer networking and cannot distinguish frontend from backend instances, so they do not prevent a privileged user from misconfiguring a backend server.
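
As a sketch, the constraint can be set with a list policy that names only the frontend instances; the project, zone, and instance names below are hypothetical:

# policy.yaml
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
    - projects/ehr-portal/zones/us-central1-a/instances/frontend-1
    - projects/ehr-portal/zones/us-central1-a/instances/frontend-2

# Apply the policy to the project:
gcloud resource-manager org-policies set-policy policy.yaml --project=ehr-portal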

Question 79 ( Testlet 11 )

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes
Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?

A. Use a private cluster with a private endpoint with master authorized networks configured.
B. Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.
C. Use a private cluster with a public endpoint with master authorized networks configured.
D. Use a public cluster with master authorized networks enabled and firewall rules.

Answer : A

Explanation:
A private cluster with a private endpoint removes public IP addresses from both the nodes and the control plane, which is the smallest possible attack surface, and master authorized networks restricts control-plane access to known internal CIDR ranges. Because EHR Healthcare has a dedicated hybrid connection between its on-premises network and Google Cloud, administrators can still reach the private endpoint over that link.
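
A sketch of creating such a cluster with gcloud; the cluster name and CIDR ranges are hypothetical, and the authorized range would be the corporate network reachable over the hybrid connection:

gcloud container clusters create ehr-private-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr=172.16.0.0/28 \
    --enable-master-authorized-networks \
    --master-authorized-networks=10.0.0.0/8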

Question 80 ( Question Set 1 )



Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new
customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?

A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer

Answer : D

Question 81 ( Question Set 1 )


Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24 hours a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?

A. Load data into Google BigQuery
B. Insert data into Google Cloud SQL
C. Put flat files into Google Cloud Storage
D. Stream data into Google Cloud Datastore

Answer : A

Explanation:
BigQuery is Google's serverless, highly scalable, low cost enterprise data warehouse designed to make all your data analysts productive. Because there is no infrastructure to manage, you can focus on analyzing data to
find meaningful insights using familiar SQL and you don't need a database administrator.
BigQuery enables you to analyze all your data by creating a logical data warehouse over managed, columnar storage as well as data from object storage, and spreadsheets.
Reference:
https://cloud.google.com/bigquery/

Question 82 ( Question Set 1 )



The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? (Choose three.)

A. Port the application code to run on Google App Engine
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable

Answer : ADE

References:
https://cloud.google.com/appengine/docs/standard/java/tools/uploadinganapp
https://cloud.google.com/appengine/docs/standard/java/building-app/cloud-sql


Question 83 ( Question Set 1 )



A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed.
What is the most likely cause of this problem?

A. The session variable is local to just a single instance
B. The session variable is being overwritten in Cloud Datastore
C. The URL of the API needs to be modified to prevent caching
D. The HTTP Expires header needs to be set to -1 to stop caching

Answer : A

Question 84 ( Question Set 1 )


An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log
data. You want to help them find a solution that meets their needs.
What should you do?

A. Direct them to download and install the Google StackDriver logging agent
B. Send them a list of online resources about logging best practices
C. Help them define their requirements and assess viable logging tools
D. Help them upgrade their current tool to take advantage of any new features

Answer : C

Question 85 ( Question Set 1 )



You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company's web hosting platform. Improvements to the QA/Test processes have already accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)

A. Introduce a blue-green deployment model
B. Replace the QA environment with canary releases
C. Fragment the monolithic platform into microservices
D. Reduce the platform's dependency on relational database systems
E. Replace the platform's relational database systems with a NoSQL database

Answer : AC

Question 86 ( Question Set 1 )



To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines
(VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in
Google Cloud while providing cost visibility to the finance department.
Which two steps should you take? (Choose two.)

A. Use the --no-auto-delete flag on all persistent disks and stop the VM
B. Use the --auto-delete flag on all persistent disks and terminate the VM
C. Apply VM CPU utilization label and include it in the BigQuery billing export
D. Use Google BigQuery billing export and labels to associate cost to groups
E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM
F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM

Answer : AD
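
A sketch of the two steps with gcloud; the instance, disk, and label values are hypothetical. The labels then appear in the BigQuery billing export, so finance can group cost by team:

# Keep the disk (and its state) when the instance is stopped or deleted
gcloud compute instances set-disk-auto-delete dev-vm-1 --disk=dev-vm-1 --no-auto-delete
gcloud compute instances stop dev-vm-1

# Label the instance so billing export rows can be grouped by team
gcloud compute instances update dev-vm-1 --update-labels=team=mobile,cost-center=cc1234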

Question 87 ( Question Set 1 )


Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion
sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about
account owners and office locations.
Which database type should you use?

A. Flat file
B. NoSQL
C. Relational
D. Blobstore

Answer : B

Explanation:
Relational databases were not designed to cope with the scale and agility challenges that face modern applications, nor were they built to take advantage of the commodity storage and processing power available
today.
NoSQL fits well for:
✑ Developers are working with applications that create massive volumes of new, rapidly changing data types: structured, semi-structured, unstructured, and polymorphic data.
Incorrect Answers:
D: The Blobstore API allows your application to serve data objects, called blobs, that are much larger than the size allowed for objects in the Datastore service.
Blobs are useful for serving large files, such as video or image files, and for allowing users to upload large data files.
Reference:
https://www.mongodb.com/nosql-explained

Question 88 ( Question Set 1 )



You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM)
instances are being terminated and re-launched every minute. The instances do not have a public IP address.
You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly.
What should you do?

A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

Answer : C

Explanation:
The best practice when configuration a health check is to check health and serve traffic on the same port. However, it is possible to perform health checks on one port, but serve traffic on another. If you do use two
different ports, ensure that firewall rules and services running on instances are configured appropriately. If you run health checks and serve traffic on the same port, but decide to switch ports at some point, be sure to
update both the backend service and the health check.
Backend services that do not have a valid global forwarding rule referencing it will not be health checked and will have no health status.
Reference:
https://cloud.google.com/compute/docs/load-balancing/http/backend-service
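
A sketch of such a rule; the network name and target tag are hypothetical, while 130.211.0.0/22 and 35.191.0.0/16 are the source ranges Google documents for HTTP(S) load balancer health checks:

gcloud compute firewall-rules create allow-lb-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend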

Question 89 ( Question Set 1 )


You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to
BigQuery.
What should you do to fix the script?

A. Install the latest BigQuery API client library for Python
B. Run your script on a new virtual machine with the BigQuery access scope enabled
C. Create a new service account with BigQuery access and execute your script with that user
D. Install the bq component for gcloud with the command gcloud components install bq.

Answer : B
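
Equivalent to launching a new VM with the scope enabled, the access scope can also be changed on the existing instance while it is stopped; a sketch with hypothetical instance and zone names:

gcloud compute instances stop bq-script-vm --zone=us-central1-a
gcloud compute instances set-service-account bq-script-vm \
    --zone=us-central1-a \
    --scopes=https://www.googleapis.com/auth/bigquery
gcloud compute instances start bq-script-vm --zone=us-central1-a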

Question 90 ( Question Set 1 )



Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team
requirements for storing passwords.
What authentication strategy should they use?

A. Use G Suite Password Sync to replicate passwords into Google
B. Federate authentication via SAML 2.0 to the existing Identity Provider
C. Provision users in Google using the Google Cloud Directory Sync tool
D. Ask users to set their Google password to match their corporate password


Answer : B

Explanation:
With SAML 2.0 federation, Google acts as the service provider while the existing corporate Identity Provider continues to authenticate users and store their passwords. Users keep signing in with their current credentials, which minimizes disruption, and no password hashes are replicated into Google, which satisfies the security team's strict requirements for storing passwords. Google Cloud Directory Sync (C) is useful for provisioning user accounts from an LDAP directory, but by itself it does not federate authentication, and G Suite Password Sync (A) copies password hashes into Google, which those requirements rule out.
Reference:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#authentication-and-identity

Question 91 ( Question Set 1 )



Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These
options include a mix of batch and stream processing, as they are running some hourly jobs and live- processing some data as it comes in.
Which technology should they use for this?

A. Google Cloud Dataproc
B. Google Cloud Dataflow
C. Google Container Engine with Bigtable
D. Google Compute Engine with Google BigQuery

Answer : B

Explanation:
Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and expressiveness -- no more complex workarounds or
compromises needed.
Reference:
https://cloud.google.com/dataflow/
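
For illustration, a minimal Apache Beam pipeline in Python that Dataflow can run; the project, region, and bucket names are hypothetical, and the same pipeline code serves batch or streaming depending on the source:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="analytics-project",             # hypothetical project
    region="us-central1",
    temp_location="gs://analytics-tmp/tmp",  # hypothetical bucket
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://analytics-raw/events-*.csv")
        | "CountLines" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText("gs://analytics-out/counts")
    )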

Question 92 ( Question Set 1 )



Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users.
This behavior was not reported before the update.
What strategy should you take?

A. Work with your ISP to diagnose the problem
B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem

Answer : C

Explanation:
Stackdriver Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform and Amazon Web Services
(AWS). Our API also allows ingestion of any custom log data from any source. Stackdriver Logging is a fully managed service that performs at scale and can ingest application and system log data from thousands of
VMs. Even better, you can analyze all that log data in real time.
Reference:
https://cloud.google.com/logging/

Question 93 ( Question Set 1 )



A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?

A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
E. In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service

Answer : A

Explanation:
On Linux instances, connect to your instance and manually resize your partitions and file systems to use the additional disk space that you added.
Extend the file system on the disk or the partition to use the added space. If you grew a partition on your disk, specify the partition. If your disk does not have a partition table, specify only the disk ID:
sudo resize2fs /dev/[DISK_ID][PARTITION_NUMBER]
where [DISK_ID] is the device name and [PARTITION_NUMBER] is the partition number for the device where you are resizing the file system.
Reference:
https://cloud.google.com/compute/docs/disks/add-persistent-disk
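
A sketch of the full sequence with hypothetical disk, zone, and device names; the disk can be grown while the VM is running, so the only work inside the guest is extending the file system:

gcloud compute disks resize db-data-disk --size=500GB --zone=us-central1-a
# Then, inside the VM (assuming the data disk is attached as /dev/sdb with no partition table):
sudo resize2fs /dev/sdb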

Question 94 ( Question Set 1 )



Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating
to which payment methods are used.
How should you design your architecture?

A. Create a tokenizer service and store only tokenized data
B. Create separate projects that only process credit card data
C. Create separate subnetworks and isolate the components that process credit card data
D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor

Answer : A

Reference:
https://www.sans.org/reading-room/whitepapers/compliance/ways-reduce-pci-dss-audit-scope-tokenizing-cardholder-data-33194
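
An illustrative Python sketch of the idea only; a real tokenizer service would persist the mapping in a hardened, access-controlled vault rather than an in-memory dict:

import secrets

_vault = {}  # token -> card number; stands in for a secured token vault

def tokenize(pan: str) -> str:
    """Swap a card number (PAN) for an opaque token with no exploitable value."""
    token = "tok_" + secrets.token_hex(16)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Only the PCI-scoped payment service may call this."""
    return _vault[token]

# Analytics systems store only the token plus non-PCI fields such as payment
# method and amount, keeping them outside the PCI audit scope.
order_record = {"payment_token": tokenize("4111111111111111"), "method": "visa"}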

Question 95 ( Question Set 1 )



You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?

A. Google Cloud SQL
B. Google Cloud Bigtable
C. Google Cloud Storage
D. Google Cloud Datastore

Answer : B

Explanation:
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
✑ Low-latency read/write access
✑ High-throughput analytics
✑ Native time series support
Common workloads:
✑ IoT, finance, adtech
✑ Personalization, recommendations
✑ Monitoring
✑ Geospatial datasets
✑ Graphs
Incorrect Answers:
C: Google Cloud Storage is a scalable, fully-managed, highly reliable, and cost-efficient object / blob store.
Is good for:
✑ Images, pictures, and videos
✑ Objects and blobs
✑ Unstructured data
D: Google Cloud Datastore is a scalable, fully-managed NoSQL document database for your web and mobile applications.
Is good for:
✑ Semi-structured application data
✑ Hierarchical data
✑ Durable key-value data
Common workloads:
✑ User profiles
✑ Product catalogs
✑ Game state
Reference:
https://cloud.google.com/storage-options/

Question 96 ( Question Set 1 )



You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.
What should you do?

A. Write a lifecycle management rule in XML and push it to the bucket with gsutil
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron

Answer : B
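
A sketch of such a rule: the JSON below deletes objects older than 90 days and is applied with gsutil (the bucket name follows the question):

# lifecycle.json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}

gsutil lifecycle set lifecycle.json gs://backups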

Question 97 ( Question Set 1 )



Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with
the least amount of operations work and code change.
Which product should you use?

A. Google Cloud Dataflow
B. Google Cloud Dataproc
C. Google Compute Engine
D. Google Kubernetes Engine

Answer : B

Explanation:
Google Cloud Dataproc is a fast, easy-to-use, low-cost and fully managed service that lets you run the Apache Spark and Apache Hadoop ecosystem on Google
Cloud Platform. Cloud Dataproc provisions big or small clusters rapidly, supports many popular job types, and is integrated with other Google Cloud Platform services, such as Google Cloud Storage and Stackdriver
Logging, thus helping you reduce TCO.
Reference:
https://cloud.google.com/dataproc/docs/resources/faq
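
A sketch with hypothetical cluster, bucket, and class names; existing Spark jobs are submitted unchanged:

gcloud dataproc clusters create spark-cluster --region=us-central1 --num-workers=4
gcloud dataproc jobs submit spark \
    --cluster=spark-cluster \
    --region=us-central1 \
    --class=org.example.AnalyticsJob \
    --jars=gs://analytics-jobs/analytics-job.jar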

Question 98 ( Question Set 1 )



The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their
performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?

A. Increase the virtual machine's memory to 64 GB
B. Create a new virtual machine running PostgreSQL
C. Dynamically resize the SSD persistent disk to 500 GB
D. Migrate their performance metrics warehouse to BigQuery
E. Modify all of their batch jobs to use bulk inserts into the database

Answer : C

Explanation:
Persistent disk performance (IOPS and throughput) scales with the size of the disk, so dynamically resizing the 80 GB SSD persistent disk to 500 GB raises its performance limits without downtime.

Question 99 ( Question Set 1 )


You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?

A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage

Answer : C

Explanation:
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
✑ Low-latency read/write access
✑ High-throughput analytics
✑ Native time series support
Common workloads:
✑ IoT, finance, adtech
✑ Personalization, recommendations
✑ Monitoring
✑ Geospatial datasets
✑ Graphs
Reference:
https://cloud.google.com/storage-options/
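
For illustration, a minimal write with the Cloud Bigtable client library for Python; the project, instance, table, and column-family names are hypothetical. Keying rows by sensor ID plus timestamp keeps one sensor's readings contiguous for time-range scans:

import time
from google.cloud import bigtable

client = bigtable.Client(project="weather-project")          # hypothetical project
table = client.instance("sensor-instance").table("readings")

# Row key: sensor ID + millisecond timestamp
row_key = f"sensor-0042#{int(time.time() * 1000)}".encode()

row = table.direct_row(row_key)
row.set_cell("measurements", b"reading", b"21.7")  # assumes this column family exists
row.commit()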

Question 100 ( Question Set 1 )



Your company's user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load.
What should you do?

A. Capture existing users' input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones
D. Capture existing users' input, and replay captured user load until resource utilization crosses 80%. Also, derive the estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of the expected load

Answer : B

