

Google
Exam Questions Professional-Cloud-Architect
Google Certified Professional - Cloud Architect (GCP)


NEW QUESTION 1
- (Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

A. Tests should scale well beyond the prior approaches.


B. Unit tests are no longer required, only end-to-end tests.
C. Tests should be applied after the release is in the production environment.
D. Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Answer: A

Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity

NEW QUESTION 2
- (Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the
backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

A. Create a scalable environment in GCP for simulating production load.


B. Use the existing infrastructure to test the GCP-based backend at scale.
C. Build stress tests into each component of your application using resources internal to GCP to simulate load.
D. Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Answer: A

Explanation:
From scenario: Requirements for Game Backend Platform
• Dynamically scale up or down based on game activity
• Connect to a managed NoSQL database service
• Run customized Linux distro

NEW QUESTION 3
- (Topic 2)
For this question, refer to the TerramEarth case study
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle
event data. You want to support delegated authorization against this data. What should you do?

A. Build or leverage an OAuth-compatible access control system.


B. Build SAML 2.0 SSO compatibility into your authentication system.
C. Restrict data access based on the source IP address of the partner systems.
D. Create secondary credentials for each dealer that can be given to the trusted third party.

Answer: A

Explanation:
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_authorization_with_oauth2
Delegate application authorization with OAuth2
Cloud Platform APIs support OAuth 2.0, and scopes provide granular authorization over the methods that are supported. Cloud Platform supports both service-account and user-account OAuth, also called three-legged OAuth.

NEW QUESTION 4
- (Topic 3)
For this question, refer to the JencoMart case study.
The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for
administration between production and development resources. What Google domain and project structure should you recommend?

A. Create two G Suite accounts to manage users: one for development/test/staging and one for production.
B. Each account should contain one project for every application.
C. Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production
applications.
D. Create a single G Suite account to manage users with each stage of each application in its own project.
E. Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Answer: D

Explanation:
Note: The principle of least privilege and separation of duties are concepts that, although semantically different, are intrinsically related from the standpoint of security. The intent behind both is to prevent people from having higher privilege levels than they actually need.
• Principle of Least Privilege: Users should only have the least amount of privileges required to perform their job and no more. This reduces authorization exploitation by limiting access to resources such as targets, jobs, or monitoring templates for which they are not authorized.


• Separation of Duties: Beyond limiting user privilege level, you also limit user duties, or the specific jobs they can perform. No user should be given responsibility for more than one related function. This limits the ability of a user to perform a malicious action and then cover up that action.
References: https://cloud.google.com/kms/docs/separation-of-duties

NEW QUESTION 5
- (Topic 4)
For this question, refer to the Dress4Win case study.
Dress4Win has asked you to recommend machine types they should deploy their application servers to. How should you proceed?

A. Perform a mapping of the on-premises physical hardware cores and RAM to the nearest machine types in the cloud.
B. Recommend that Dress4Win deploy application servers to machine types that offer the highest RAM to CPU ratio available.
C. Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the
desired performance is reached.
D. Identify the number of virtual cores and RAM associated with the application server virtual machines align them to a custom machine type in the cloud, monitor
performance, and scale the machine types up until the desired performance is reached.

Answer: C

NEW QUESTION 6
- (Topic 5)
You are responsible for the Google Cloud environment in your company. Multiple departments need access to their own projects, and the members within each department will have the same project responsibilities. You want to structure your Google Cloud environment for minimal maintenance and maximum overview of IAM permissions as each department's projects start and end. You want to follow Google-recommended practices. What should you do?

A. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.
B. Grant all department members the required IAM permissions for their respective projects.
C. Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.
D. Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level.
E. Structure all projects for each department under the respective folders.

Answer: A

Explanation:
This option follows the Google-recommended practices for structuring a Google Cloud environment for minimal maintenance and maximum overview of IAM
permissions. By creating a Google Group per department and adding all department members to their respective groups, you can simplify user management and
avoid granting IAM permissions to individual users. By creating a folder per department and granting the respective group the required IAM permissions at the
folder level, you can enforce consistent policies across all projects within each department and avoid granting IAM permissions at the project level. By adding the
projects under the respective folders, you can organize your resources hierarchically and leverage inheritance of IAM policies from folders to projects. The other
options are not optimal for this scenario, because they either grant IAM permissions to individual users (B, D) or grant them at the project level instead of the folder level (B, C).
References:
• https://cloud.google.com/architecture/framework/system-design
• https://cloud.google.com/architecture/identity/best-practices-for-planning
• https://cloud.google.com/resource-manager/docs/creating-managing-folders
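As a rough sketch of the folder-per-department pattern (the organization ID, folder ID, group address, and role below are placeholders, not values from the question):
# Create a folder for the department under the organization
gcloud resource-manager folders create --display-name="Marketing" --organization=123456789012
# Grant the department's Google Group its permissions once, at the folder level
gcloud resource-manager folders add-iam-policy-binding 456789012345 \
    --member="group:marketing-team@example.com" --role="roles/editor"
# New projects created under the folder inherit the folder-level IAM policy
gcloud projects create marketing-campaign-prod --folder=456789012345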

NEW QUESTION 7
- (Topic 5)
Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development
environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office
hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?

A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours.
B. Schedule the shell script on one of the production instances to automate the task.
C. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office
hours.
D. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
E. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.

Answer: C

Explanation:
Reference: https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
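A minimal sketch of the Cloud Scheduler half of the answer; the topic name, schedule, and payload are illustrative, and a Cloud Function subscribed to the topic would call the Compute Engine API to stop or start the instances:
# Stop development/acceptance VMs at 19:00 on weekdays
gcloud pubsub topics create instance-schedule
gcloud scheduler jobs create pubsub stop-dev-acc --schedule="0 19 * * 1-5" \
    --topic=instance-schedule --message-body='{"action":"stop","env":"dev-acc"}'
# A second job ("0 7 * * 1-5") publishes {"action":"start","env":"dev-acc"} before office hours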

NEW QUESTION 8
- (Topic 5)
Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the upload content is identical
to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?
A.
1) Use gsutil -m to upload all the files to Cloud Storage.
2) Use gsutil cp to download the uploaded files
3) Use Linux diff to compare the content of the files
B.
1) Use gsutil -m to upload all the files to Cloud Storage.
2) Develop a custom Java application that computes CRC32C hashes
3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files
4) Compare the hashes
C.


1) Use Linux shasum to compute a digest of files you want to upload


2) Use gsutil -m to upload all the files to Cloud Storage
3) Use gsutil cp to download the uploaded files
4) Use Linux shasum to compute a digest of the downloaded files
5) Compare the hashes
D.
1) Use gsutil -m to upload all the files to Cloud Storage.
2) Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files
3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files
4) Compare the hashes


Answer: D

Explanation:
https://cloud.google.com/storage/docs/gsutil/commands/hash
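For illustration, with a hypothetical local file and bucket:
gsutil hash -c contracts.csv                 # CRC32C of the on-premises file
gsutil -m cp contracts.csv gs://example-uploads/
gsutil ls -L gs://example-uploads/contracts.csv | grep -i crc32c   # hash of the uploaded object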

NEW QUESTION 9
- (Topic 5)
Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises
service that requires high throughput via internal IPs, while minimizing latency. What should you do?

A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.

Answer: D

Explanation:
Reference https://cloud.google.com/architecture/setting-up-private-access-to-cloud-apis-through-vpn-tunnels

NEW QUESTION 10
- (Topic 5)
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow
specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?

A. Have users upload the images to Cloud Storage.
B. Protect the bucket with a password that expires after 24 hours.
C. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
D. Create an App Engine web application where users can upload images.
E. Configure App Engine to disable the application after 24 hours.
F. Authenticate users via Cloud Identity.
G. Create an App Engine web application where users can upload images for the next 24 hours.
H. Authenticate users via Cloud Identity.

Answer: C

Explanation:
https://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-by-using-signed-url
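A sketch of generating such a URL, assuming a hypothetical service account key file and bucket (gsutil signurl caps -d at 7 days, so 24h is fine):
# Anyone holding this URL can PUT one object for the next 24 hours, no Google Account needed
gsutil signurl -m PUT -d 24h -c image/jpeg sa-key.json gs://painting-uploads/entry.jpg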

NEW QUESTION 10
- (Topic 5)
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?

A. Perform the following:
1) Create a managed instance group with f1-micro type machines.
2) Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app.
3) Restart the instances to automatically deploy new production releases.
B. Perform the following:
1) Create a managed instance group with n1-standard-1 type machines.
2) Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app.
3) Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
C. Perform the following:
1) Create a Kubernetes Engine cluster with n1-standard-1 type machines.
2) Build a Docker image from the production branch with all of the dependencies, and tag it with the version number.
3) Create a Kubernetes Deployment with the imagePullPolicy set to “IfNotPresent” in the staging namespace, and then promote it to the production namespace after testing.
D. Perform the following:
1) Create a Kubernetes Engine (GKE) cluster with n1-standard-4 type machines.
2) Build a Docker image from the master branch with all of the dependencies, and tag it with “latest”.
3) Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to “Always”.
4) Restart the pods to automatically deploy new production releases.

Answer: D

Explanation:
https://cloud.google.com/compute/docs/instance-templates

NEW QUESTION 14
- (Topic 5)
You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?

A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.


B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic.
C. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
E. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic.
F. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.

Answer: B

Explanation:
https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine
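The Pub/Sub side of the answer might look like the following sketch (names are placeholders); the App Engine cron job publishes to the topic, and the utility service on Compute Engine consumes from the subscription:
gcloud pubsub topics create cron-tasks
gcloud pubsub subscriptions create cron-workers --topic=cron-tasks --ack-deadline=60
# What the worker does via the client library, shown here with the CLI for illustration
gcloud pubsub subscriptions pull cron-workers --auto-ack --limit=5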

NEW QUESTION 15
- (Topic 5)
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:
* 1. Be based on open-source technology for cloud portability
* 2. Dynamically scale compute capacity based on demand
* 3. Support continuous software delivery
* 4. Run multiple segregated copies of the same application stack
* 5. Deploy application bundles using dynamic templates
* 6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?

A. Google Container Engine, Jenkins, and Helm


B. Google Container Engine and Cloud Load Balancing
C. Google Compute Engine and Cloud Deployment Manager
D. Google Compute Engine, Jenkins, and Cloud Load Balancing

Answer: A

Explanation:
Helm manages Kubernetes deployment bundles as dynamic templates (charts), Jenkins supports continuous delivery, and Kubernetes Ingress routes traffic to specific services based on the URL path:
https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
For example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/*
        backend:
          serviceName: web2
          servicePort: 8080

NEW QUESTION 17
- (Topic 5)
Your company has a Google Cloud project that uses BigQuery for data warehousing. They have a VPN tunnel between the on-premises environment and Google Cloud that is configured with Cloud VPN. The security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing. What should they do?

A. Configure Private Google Access for on-premises only.


B. Perform the following tasks:
1) Create a service account.
2) Give the BigQuery JobUser role and Storage Reader role to the service account.
3) Remove all other IAM access from the project.
C. Configure VPC Service Controls and configure Private Google Access.
D. Configure Private Google Access.

Answer: C

Explanation:
https://cloud.google.com/vpc-service-controls/docs/overview
VPC Service Controls improves your ability to mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery.
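A minimal service-perimeter sketch (the project number, policy ID, and perimeter name are placeholders):
gcloud access-context-manager perimeters create bq_perimeter \
    --title="BigQuery perimeter" --resources=projects/123456789012 \
    --restricted-services=bigquery.googleapis.com,storage.googleapis.com \
    --policy=987654321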

NEW QUESTION 21
- (Topic 5)
Your company has a Google Cloud project that uses BigQuery for data warehousing on a pay-per-use basis. You want to monitor queries in real time to discover
the most costly queries and which users spend the most. What should you do?
A.
* 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage.
* 2. Develop a Dataflow pipeline to compute the cost of queries split by users.
B.
* 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery.
* 2. Perform a BigQuery query on the generated table to extract the information you need.
C.
* 1. Activate billing export into BigQuery.
* 2. Perform a BigQuery query on the billing table to extract the information you need.
D.
* 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query.
* 2. Open the Billing page of the project.
* 3. Select Reports.
* 4. Select BigQuery as the product and filter by the user you want to check.


Answer: C

Explanation:
https://cloud.google.com/blog/products/data-analytics/taking-a-practical-approach-to-bigquery-cost-monitoring
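The referenced post takes a log-based approach; as a related sketch (not the billing-export query itself), per-user on-demand cost can be approximated from BigQuery's INFORMATION_SCHEMA, assuming the US region and the $5/TiB list price:
bq query --nouse_legacy_sql 'SELECT user_email,
  ROUND(SUM(total_bytes_billed) / POW(2, 40) * 5, 2) AS est_cost_usd
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND job_type = "QUERY"
GROUP BY user_email ORDER BY est_cost_usd DESC'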

NEW QUESTION 26
- (Topic 5)
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool.
This requires a relational database that can operate on hundreds of terabytes of data. What is the Google recommended tool for such applications?

A. Cloud Spanner, because it is globally distributed


B. Cloud SQL, because it is a fully managed relational database
C. Cloud Firestore, because it offers real-time synchronization across devices
D. BigQuery, because it is designed for large-scale processing of tabular data

Answer: D

Explanation:
Reference: https://cloud.google.com/files/BigQueryTechnicalWP.pdf

NEW QUESTION 29
- (Topic 5)
Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices. They want to minimize the cost of this data migration. Which approach should they use?

A. Use the gsutil mv command to move the data


B. Use the Storage Transfer Service to move the data
C. Download the data to a Transfer Appliance and ship it to Google
D. Download the data to the on-premises data center and upload it to the Cloud Storage bucket

Answer: B

Explanation:
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#transfer-options
https://cloud.google.com/storage-transfer-service
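With a recent gcloud release, a Storage Transfer Service job from an S3-compatible source can be sketched as follows (bucket names and the credentials file are placeholders):
# Server-side transfer: data moves provider-to-Google without transiting your network
gcloud transfer jobs create s3://legacy-bucket gs://target-bucket \
    --source-creds-file=aws-creds.json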

NEW QUESTION 31
- (Topic 5)
Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a
Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from
creating resources in projects in the Finance folder. What should you do?

A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the
Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the
Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.

Answer: C

Explanation:
https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
"Roles are always inherited, and there is no way to explicitly remove a permission for a lower-level resource that is granted at a higher level in the resource
hierarchy. Given the above example, even if you were to remove the Project Editor role from Bob on the "Test GCP Project", he would still inherit that role from the
"Dept Y" folder, so he would still have the permissions for that role on "Test GCP Project"."
Reference: https://cloud.google.com/resource-manager/docs/creating-managing-folders
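A sketch of the two steps, with placeholder organization and folder IDs:
# Remove the inherited grant at the Organization level...
gcloud organizations remove-iam-policy-binding 123456789012 \
    --member="group:dev-team@example.com" --role="roles/owner"
# ...and re-grant it only on the Shopping folder
gcloud resource-manager folders add-iam-policy-binding 456789012345 \
    --member="group:dev-team@example.com" --role="roles/owner"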

NEW QUESTION 32
- (Topic 5)
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per
second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not
receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?

A. Cloud Run and BigQuery


B. Cloud Run and Cloud Bigtable
C. A Compute Engine autoscaling managed instance group and BigQuery
D. A Compute Engine autoscaling managed instance group and Cloud Bigtable

Answer: B

Explanation:
https://cloud.google.com/run/docs/about-instance-autoscaling
https://cloud.google.com/blog/topics/developers-practitioners/bigtable-vs-bigquery-whats-difference
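A deployment sketch showing why Cloud Run fits the idle periods; the image and service names are placeholders:
# Scales to zero when idle, out to many instances under load
gcloud run deploy ingest-api --image=gcr.io/example-proj/ingest:latest \
    --region=us-central1 --min-instances=0 --max-instances=1000 --allow-unauthenticated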

NEW QUESTION 33


- (Topic 5)
You need to set up Microsoft SQL Server on GCP. Management requires that there’s no downtime in case of a data center outage in any of the zones within a
GCP region. What should you do?

A. Configure a Cloud SQL instance with high availability enabled.


B. Configure a Cloud Spanner instance with a regional instance configuration.
C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering.
D. Place nodes in different subnets.
E. Set up SQL Server Always On Availability Groups using Windows Failover Clustering.
F. Place nodes in different zones.

Answer: D

Explanation:
https://cloud.google.com/sql/docs/sqlserver/configure-ha

NEW QUESTION 35
- (Topic 5)
Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs
need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You
want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?
A.
* 1. Install the Cloud Ops agent on all instances.
* 2. Create a sink to export logs into a partitioned BigQuery table.
* 3. Set a time_partitioning_expiration of 30 days.
B.
* 1. Install the Cloud Ops agent on all instances.
* 2. Create a sink to export logs into a regional Cloud Storage bucket.
* 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
* 4. Configure a retention policy at the bucket level to create a lock.
C.
* 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table.
* 2. Set a time_partitioning_expiration of 30 days.
D.
* 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket.
* 2. Create a sink to export logs into a regional Cloud Storage bucket.
* 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.


Answer: B

Explanation:
The practice for managing logs generated on Compute Engine is to install the Cloud Logging agent and send the logs to Cloud Logging. The logs are then routed through a Cloud Logging sink and exported to Cloud Storage. Cloud Storage is the right destination because the requirement calls for a lifecycle based on the storage period: the logs are actively queried for 30 days after they are saved, but after that they only need to be retained for audit purposes. For active queries, BigQuery can query the data directly from Cloud Storage, and data older than 30 days can be moved to Coldline for a cost-optimal solution.
Therefore, the correct answer is:
* 1. Install the Cloud Logging agent on all instances.
* 2. Create a sink that exports the logs to a regional Cloud Storage bucket.
* 3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.
* 4. Set up a bucket-level retention policy and lock it.
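A sketch of steps 2 and 3 with placeholder names:
gcloud logging sinks create vm-logs-sink storage.googleapis.com/example-log-archive \
    --log-filter='resource.type="gce_instance"'
# Move objects to Coldline after 30 days
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
           "condition": {"age": 30}}]}
EOF
gsutil lifecycle set lifecycle.json gs://example-log-archive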

NEW QUESTION 39
- (Topic 5)
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you
use?

A. Grant the security team access to the logs in each Project.


B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery.
C. Configure Stackdriver Monitoring for all Projects with the default retention policies.
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.

Answer: D

Explanation:
Overview of storage classes, price, and use cases: https://cloud.google.com/storage/docs/storage-classes
Why export logs: https://cloud.google.com/logging/docs/export/
Stackdriver quotas and limits for Monitoring: https://cloud.google.com/monitoring/quotas
BigQuery pricing: https://cloud.google.com/bigquery/pricing

NEW QUESTION 42
- (Topic 5)
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be
encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?

A. Supply the encryption key in a .boto configuration file.


B. Use gsutil to upload the files.


C. Supply the encryption key using gcloud config.
D. Use gsutil to upload the files to that bucket.
E. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
F. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key.
G. Use gsutil to upload the files to that bucket.

Answer: A

Explanation:
https://cloud.google.com/storage/docs/encryption/customer-supplied-keys#gsutil
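A sketch, assuming a hypothetical bucket; gsutil reads the customer-supplied key from the [GSUtil] section of ~/.boto:
# Generate a base64-encoded 256-bit AES key
python3 -c "import base64, os; print(base64.b64encode(os.urandom(32)).decode())"
# In ~/.boto, under [GSUtil]:  encryption_key = <the base64 key>
gsutil cp ./payroll.csv gs://example-secure-bucket/   # uploaded object is CSEK-encrypted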

NEW QUESTION 43
- (Topic 5)
You want to enable your running Google Container Engine cluster to scale as demand for your application
changes.
What should you do?

A. Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
C. Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Container Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10, and redeploy your application.

Answer: C

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
Cluster autoscaling:
--enable-autoscaling
Enables autoscaling for a node pool. Enables autoscaling in the node pool specified by --node-pool, or the default node pool if --node-pool is not provided.
--max-nodes=MAX_NODES
Maximum number of nodes in the node pool. Maximum number of nodes to which the node pool specified by --node-pool (or the default node pool if unspecified) can scale.
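The alpha command in option C has since graduated; on a current gcloud the equivalent (sketched with a placeholder node pool) is:
gcloud container clusters update mycluster --enable-autoscaling \
    --min-nodes=1 --max-nodes=10 --node-pool=default-pool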

NEW QUESTION 47
- (Topic 5)
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration.
Parts of your architecture must also be PCI DSS compliant.
Which of the following is most accurate?

A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B. Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.
C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
D. All Google Cloud services are usable because Google Cloud Platform is certified PCI- compliant.

Answer: D

Explanation:
https://cloud.google.com/security/compliance/pci-dss

NEW QUESTION 49
- (Topic 5)
You are configuring the cloud network architecture for a newly created project in Google Cloud that will host applications in Compute Engine. Compute Engine virtual machine instances will be created in two different subnets (sub-a and sub-b) within a single region:
• Instances in sub-a will have public IP addresses.
• Instances in sub-b will have only private IP addresses.
To download updated packages, instances must connect to a public repository outside the boundaries of Google Cloud. You need to allow sub-b to access the external repository. What should you do?

A. Enable Private Google Access on sub-b


B. Configure Cloud NAT and select sub-b in the NAT mapping section
C. Configure a bastion host instance in sub a to connect to instances in sub-b
D. Enable Identity Aware Proxy for TCP forwarding for instances in sub-b

Answer: B

Explanation:
Cloud NAT (network address translation) lets Google Cloud virtual machine (VM) instances without external IP addresses and private Google Kubernetes Engine (GKE) clusters send outbound packets to the internet and receive any corresponding established inbound response packets. By configuring Cloud NAT and selecting sub-b in the NAT mapping section, you can allow instances in sub-b to access the external repository without exposing them to the internet.
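A configuration sketch with placeholder names for the VPC, router, and region:
gcloud compute routers create nat-router --network=example-vpc --region=us-central1
# Only sub-b is mapped, so only its private instances egress through the NAT
gcloud compute routers nats create sub-b-nat --router=nat-router --region=us-central1 \
    --nat-custom-subnet-ip-ranges=sub-b --auto-allocate-nat-external-ips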

NEW QUESTION 54
- (Topic 5)
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company’s mobile
and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv
files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-MB internet connection.


What actions will meet your company’s needs?

A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage.
C. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage.
E. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
F. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage.
G. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.

Answer: B

Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering
Reading the existing 100-MB connection as 100 Mbps, the 900 TB archive alone would take roughly (900 × 8 × 10^12 bits) / (10^8 bits per second) ≈ 7.2 × 10^7 seconds, or more than two years, to upload, so the archive should ship on a Transfer Appliance while the 10 TB daily load runs over a Dedicated Interconnect or Direct Peering connection.

NEW QUESTION 59
- (Topic 5)
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be
uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?

A. Create a retention policy on the bucket for the duration of 5 years.


B. Create a lock on the retention policy.
C. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer.
D. Use the service account to upload new files.
E. Use a customer-managed key for the encryption of the bucket.
F. Rotate the key after 5 years.
G. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer.
H. Use the service account to upload new files.

Answer: A

Explanation:
Reference: https://cloud.google.com/storage/docs/using-bucket-lock
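Sketched with gsutil against a hypothetical bucket:
gsutil retention set 5y gs://example-loan-approvals
gsutil retention lock gs://example-loan-approvals   # irreversible: objects cannot be deleted or overwritten until they age past 5 years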

NEW QUESTION 63
- (Topic 5)
As part of implementing their disaster recovery plan, your company is trying to replicate their production
MySQL database from their private data center to their GCP project using a Google Cloud VPN connection.
They are experiencing latency issues and a small amount of packet loss that is disrupting the replication. What should they do?

A. Configure their replication to use UDP.


B. Configure a Google Cloud Dedicated Interconnect.
C. Restore their database daily using Google Cloud SQL.
D. Add additional VPN connections and load balance them.
E. Send the replicated transaction to Google Cloud Pub/Sub.

Answer: B

NEW QUESTION 68
- (Topic 5)
An application development team has come to you for advice. They are planning to write and deploy an HTTP(S) API using Go 1.12. The API will have a very
unpredictable workload and must remain reliable during peaks in traffic. They want to minimize operational overhead for this application. What approach should
you recommend?

A. Use a Managed Instance Group when deploying to Compute Engine


B. Develop an application with containers, and deploy to Google Kubernetes Engine (GKE)
C. Develop the application for App Engine standard environment
D. Develop the application for App Engine Flexible environment using a custom runtime

Answer: C

Explanation:
https://cloud.google.com/appengine/docs/the-appengine-environments

NEW QUESTION 73
- (Topic 5)
You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the
virtual machines are preempted. What should you do?

A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory.


B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service.
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new
virtual machine instance.
D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as
the value for a new metadata entry with the key shutdown-script-url

Answer: C
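A sketch of the answer; the script path and instance name are placeholders, and preemptible VMs get roughly 30 seconds of shutdown-script time after the preemption notice:
gcloud compute instances create worker-1 --zone=us-central1-a --preemptible \
    --metadata-from-file=shutdown-script=shutdown.sh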


NEW QUESTION 76
- (Topic 5)
You are managing several projects on Google Cloud and need to interact on a daily basis with BigQuery, Bigtable, and Kubernetes Engine using the gcloud CLI tool. You are travelling a lot and work on different workstations during the week. You want to avoid having to manage the gcloud CLI manually. What should you do?

A. Use a package manager to install gcloud on your workstations instead of installing it manually
B. Create a Compute Engine instance and install gcloud on the instance. Connect to this instance via SSH to always use the same gcloud installation when interacting with Google Cloud.
C. Install gcloud on all of your workstations. Run the command gcloud components auto-update on each workstation.
D. Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud

Answer: D

Explanation:
This option allows you to use the gcloud CLI tool without having to install or manage it manually on different workstations. Google Cloud Shell is a browser-based
command-line tool that provides you with a temporary Compute Engine virtual machine instance preloaded with the Cloud SDK, including the gcloud CLI tool. You
can access Google Cloud Shell from any web browser and use it to interact with BigQuery, Bigtable and Kubernetes Engine using the gcloud CLI tool. The other
options are not optimal for this scenario, because they either require installing and updating the gcloud CLI tool on multiple workstations (A, C), or creating and
maintaining a Compute Engine instance for the sole purpose of using the gcloud CLI tool (B). References:
• https://cloud.google.com/shell/docs/overview
• https://cloud.google.com/sdk/gcloud/

NEW QUESTION 79
- (Topic 5)
You want to automate the creation of a managed instance group and a startup script to install the OS package dependencies. You want to minimize the startup
time for VMs in the instance group.
What should you do?

A. Use Terraform to create the managed instance group and a startup script to install the OS packagedependencies.
B. Create a custom VM image with all OS package dependencies.
C. Use Deployment Manager to create the managed instance group with the VM image.
D. Use Puppet to create the managed instance group and install the OS package dependencies.
E. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.

Answer: B

Explanation:
"Custom images are more deterministic and start more quickly than instances with startup scripts. However, startup scripts are more flexible and let you update the
apps and settings in your instances more easily." https://cloud.google.com/compute/docs/instance-templates/create-instance-templates#using_custom_or_public_images_in_your_instance_templates
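A sketch of the image-baking flow, with placeholder resource names:
# Bake the configured boot disk into a reusable image
gcloud compute images create app-golden-image \
    --source-disk=build-vm --source-disk-zone=us-central1-a
gcloud compute instance-templates create app-template \
    --image=app-golden-image --machine-type=e2-standard-2
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=3 --zone=us-central1-a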

NEW QUESTION 81
- (Topic 5)
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high
performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20TB of log archives
retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows
customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?

A. Local SSD for customer session state data.
B. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
C. Memcache backed by Cloud Datastore for the customer session state data.
D. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
E. Memcache backed by Cloud SQL for customer session state data.
F. Assorted local SSD-backed instances for VM boot/data volumes.
G. Cloud Storage for log archives and thumbnails.
H. Memcache backed by Persistent Disk SSD storage for customer session state data.
I. Assorted local SSD-backed instances for VM boot/data volumes.
J. Cloud Storage for log archives and thumbnails.

Answer: D

Explanation:
https://cloud.google.com/compute/docs/disks

NEW QUESTION 86
- (Topic 5)
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to
operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What
should you do?

A. Create a Compute Engine instance template using the most recent Debian image.
B. Create an instance from this template, and install and configure the application as part of the startup script.
C. Repeat this process whenever a new Google-managed Debian image becomes available.
D. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
E. Create an instance with the latest available Debian image.
F. Connect to the instance via SSH, and install and configure the application on the instance.
G. Repeat this process whenever a new Google-managed Debian image becomes available.
H. Create a Docker container with Debian as the base image.
I. Install and configure the application as part of the Docker image creation process.


J. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.

Answer: D

Explanation:
Reference: https://cloud.google.com/compute/docs/os-patch-management
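As a sketch, a one-off patch job across all instances (the OS Config agent and API are assumed to be enabled):
gcloud compute os-config patch-jobs execute --instance-filter-all \
    --display-name="debian-dist-updates"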

NEW QUESTION 90
- (Topic 5)
You need to evaluate your team readiness for a new GCP project. You must perform the evaluation and create a skills gap plan that incorporates the business goal of cost optimization. Your team has deployed two GCP projects successfully to date. What should you do?

A. Allocate budget for team training.
B. Set a deadline for the new GCP project.
C. Allocate budget for team training.
D. Create a roadmap for your team to achieve Google Cloud certification based on job role.
E. Allocate budget to hire skilled external consultants.
F. Set a deadline for the new GCP project.
G. Allocate budget to hire skilled external consultants.
H. Create a roadmap for your team to achieve Google Cloud certification based on job role.

Answer: B

Explanation:
https://services.google.com/fh/files/misc/cloud_center_of_excellence.pdf

NEW QUESTION 94
- (Topic 5)
You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate encryption keys
outside of Google Cloud. You need to implement a solution. What should you do?

A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key.
B. Set up a Dataflow pipeline to decrypt the data and to store it in a BigQuery dataset.
C. Generate a new key in Cloud Key Management Service (Cloud KMS). Create a dataset in BigQuery using the customer-managed key option and select the created key.
D. Import a key in Cloud KMS.
E. Store all data in Cloud Storage using the customer-managed key option and select the created key.
F. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
G. Import a key in Cloud KMS.
H. Create a dataset in BigQuery using the customer-supplied key option and select the created key.

Answer: D

Explanation:
https://cloud.google.com/bigquery/docs/customer-managed-encryption
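Assuming the externally generated key material has already been brought into Cloud KMS with an import job, the dataset creation can be sketched as (all names are placeholders):
bq mk --dataset \
    --default_kms_key=projects/example-proj/locations/US/keyRings/external-ring/cryptoKeys/imported-key \
    warehouse_dataset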

NEW QUESTION 95
- (Topic 5)
Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to
ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your
application stays below a certain threshold. What should you do?

A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployment.
C. Send curl requests to your application, and validate if the autoscaling works.
D. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
E. Use Cloud Debugger in the development environment to understand the latency between the different microservices.

Answer: B

NEW QUESTION 96
- (Topic 5)
You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to
evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?

A. Deploy a new revision to Cloud Run with the new version.
B. Configure traffic percentage between revisions.
C. Deploy a new service to Cloud Run with the new version.
D. Add a Cloud Load Balancing instance in front of both services.
E. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch.
F. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.
G. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run.
H. Configure Traffic Director to send a small percentage of traffic to the new version of the application.

Answer: A

Explanation:


https://cloud.google.com/run/docs/rollouts-rollbacks-traffic-migration
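A traffic-splitting sketch with placeholder service and revision names (flags for the Cloud Run for Anthos platform differ slightly):
gcloud run services update-traffic my-app --region=us-central1 \
    --to-revisions=my-app-00002-new=10,my-app-00001-old=90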

NEW QUESTION 98
- (Topic 5)
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling
and relaunches.
Which feature of Kubernetes should you use to accomplish this?

A. StatefulSets
B. Role-based access control
C. Container environment variables
D. Persistent Volumes

Answer: A

Explanation:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
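For illustration, assuming a StatefulSet named web:
kubectl scale statefulset web --replicas=3
kubectl get pods   # web-0, web-1, web-2: ordinal hostnames that survive rescheduling and scaling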

NEW QUESTION 100


- (Topic 5)
Your company has an application running on App Engine that allows users to upload music files and share them with other people. You want to allow users to
upload files directly into Cloud Storage from their browser session. The payload should not be passed through the backend. What should you do?
A.
* 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App
Engine application is an allowed origin.
* 2. Use the Cloud Storage Signed URL feature to generate a POST URL.
B.
* 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App
Engine application is an allowed origin.
* 2. Assign the Cloud Storage WRITER role to users who upload files.
C.
* 1. Use the Cloud Storage Signed URL feature to generate a POST URL.
* 2. Use App Engine default credentials to sign requests against Cloud Storage.
D.
* 1. Assign the Cloud Storage WRITER role to users who upload files.
* 2. Use App Engine default credentials to sign requests against Cloud Storage.


Answer: B
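The CORS half of the answer can be sketched as follows, with a placeholder App Engine origin and bucket:
cat > cors.json <<'EOF'
[{"origin": ["https://example-app.appspot.com"],
  "method": ["PUT", "POST"],
  "responseHeader": ["Content-Type"],
  "maxAgeSeconds": 3600}]
EOF
gsutil cors set cors.json gs://example-upload-bucket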

NEW QUESTION 103


- (Topic 5)
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google-
recommended way for your application to authenticate to the required Google Cloud services?

A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes togrant the appropriate Cloud Pub/Sub IAM roles.
C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.

Answer: A
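A sketch of the grant, assuming a hypothetical project and a service account attached to the VMs:
gcloud projects add-iam-policy-binding example-proj \
    --member="serviceAccount:app-vms@example-proj.iam.gserviceaccount.com" \
    --role="roles/pubsub.publisher"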

NEW QUESTION 108


- (Topic 5)
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4
TB, and large updates are frequent. Replication requires private address space communication. Which networking approach should you use?

A. Google Cloud Dedicated Interconnect


B. Google Cloud VPN connected to the data center network
C. A NAT and TLS translation gateway installed on-premises
D. A Google Compute Engine instance with a VPN server installed connected to the data center network

Answer: A

Explanation:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google’s
network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional
bandwidth over the public Internet or using VPN tunnels.
Benefits:
• Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.
• Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.
• You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
• The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google's network.


References: https://cloud.google.com/interconnect/docs/details/dedicated

NEW QUESTION 110


- (Topic 5)
Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The
application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the
underlying infrastructure. Which service should you use to deploy the application?

A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine

Answer: A

Explanation:
Reference: https://cloud.google.com/terms/services https://cloud.google.com/appengine/docs/standard/go/how-requests-are-handled

NEW QUESTION 112


- (Topic 5)
Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all
of your Virtual Private Clouds (VPCs). What should you do?

A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.

Answer: D

Explanation:
Reference: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#disableexternalip
You might want to restrict external IP addresses so that only specific VM instances can use them. This can help to prevent data exfiltration or maintain network isolation. Using an Organization Policy with the constraints/compute.vmExternalIpAccess constraint, you can control the use of external IP addresses for VM instances within an organization or a project, allowing only the instances named in the allowedValues list.
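For illustration, a hedged sketch of what the list-constraint payload looks like; the project, zone, and instance names are hypothetical placeholders. The generated file could then be applied with `gcloud resource-manager org-policies set-policy`:

```python
# Sketch: build the list-constraint payload for
# constraints/compute.vmExternalIpAccess and write it to a file.
# Project/zone/instance names below are hypothetical placeholders.
import json

policy = {
    "constraint": "constraints/compute.vmExternalIpAccess",
    "listPolicy": {
        "allowedValues": [
            # Approved instances, in the documented resource-name form.
            "projects/my-project/zones/us-central1-a/instances/approved-vm-1",
            "projects/my-project/zones/us-central1-b/instances/approved-vm-2",
        ]
    },
}

with open("external-ip-policy.json", "w") as f:
    json.dump(policy, f, indent=2)
# The file can then be applied at project or organization level, e.g. with
# `gcloud resource-manager org-policies set-policy external-ip-policy.json`.
```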

NEW QUESTION 114


- (Topic 5)
You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.
How should you deploy the VPN?

A. Use VPC Network Peering between the VPC and the on-premises network.
B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.

Answer: D

Explanation:
Cloud VPN gateways are regional resources; there is no global Cloud VPN gateway. Deploying a gateway with at least one tunnel in each region keeps VPN traffic close to the application in both regions.
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns

NEW QUESTION 115


- (Topic 5)
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load. What should you do?

A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command.
C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command.
D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.

Answer: A

Explanation:
The HorizontalPodAutoscaler scales pods based on CPU usage, and the GKE Cluster Autoscaler adds or removes nodes as pod demand changes. Google advises against enabling Compute Engine autoscaling directly on a cluster's managed instance groups; use the Cluster Autoscaler instead.
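For illustration, a minimal sketch of the HorizontalPodAutoscaler half of the answer using the official Kubernetes Python client; the deployment name, namespace, and replica bounds are hypothetical:

```python
# Sketch: create a HorizontalPodAutoscaler (autoscaling/v1) that targets
# average CPU usage, using the official Kubernetes Python client.
# Deployment name and namespace are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="game-backend-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="game-backend"
        ),
        min_replicas=2,
        max_replicas=20,
        target_cpu_utilization_percentage=75,  # scale pods around 75% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
# With the GKE Cluster Autoscaler enabled on the node pool, pods that cannot
# be scheduled trigger node additions, and idle nodes are removed as load drops.
```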

NEW QUESTION 116


- (Topic 5)
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases, ensure that you don't run out of storage, keep CPU usage below 75% of cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?

A. 1) Enable automatic storage increase for the instance. 2) Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
B. 1) Enable automatic storage increase for the instance. 2) Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
C. 1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2) Deploy memcached to reduce CPU load. 3) Change the instance type to a 32-core machine type to reduce replication lag.
D. 1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2) Deploy memcached to reduce CPU load. 3) Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.

Answer: A
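For illustration, a hedged sketch of step 2 from option A, creating the CPU alert with the Cloud Monitoring (formerly Stackdriver) client library; the project ID, display names, and duration are chosen for the example:

```python
# Sketch: create a Cloud Monitoring alert that fires when Cloud SQL CPU
# utilization stays above 75% for five minutes.
# The project ID is a hypothetical placeholder.
from google.cloud import monitoring_v3

project_id = "my-project"
client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Cloud SQL CPU above 75%",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.AND,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="CPU utilization > 75%",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=(
                    'metric.type="cloudsql.googleapis.com/database/cpu/utilization" '
                    'AND resource.type="cloudsql_database"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=0.75,
                duration={"seconds": 300},
            ),
        )
    ],
)

client.create_alert_policy(name=f"projects/{project_id}", alert_policy=policy)
# A similar policy on "database/replication/replica_lag" with a 60 s threshold
# covers the replication-lag requirement.
```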

NEW QUESTION 120


- (Topic 5)
Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not
overlap and must remain separated. The network configuration is shown below.

Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?

A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
B. Add two additional NICs to Instance #1 with the following configuration: • NIC1: VPC #2, subnetwork: subnet #2 • NIC2: VPC #3, subnetwork: subnet #3. Update firewall rules to enable traffic between instances.
C. Create two VPN tunnels via Cloud VPN: • one between VPC #1 and VPC #2 • one between VPC #2 and VPC #3. Update firewall rules to enable traffic between the instances.
D. Peer all three VPCs: • Peer VPC #1 with VPC #2 • Peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances.

Answer: B

Explanation:
As per GCP documentation: "By default, every instance in a VPC network has a single network interface. Use these instructions to create additional network
interfaces. Each interface is attached to a different VPC network, giving that instance access to different VPC networks in Google Cloud. You cannot attach
multiple network interfaces to the same VPC network." Refer to: https://cloud.google.com/vpc/docs/create-use-multiple-interfaces
https://cloud.google.com/vpc/docs/create-use-multiple-interfaces#i_am_not_able_to_connect_to_secondary_interfaces_internal_ip
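For illustration, a hedged sketch of creating Instance #1 with one NIC per VPC using the google-cloud-compute library; NICs must be defined when the instance is created, and the project, zone, machine type, image, and network names below are hypothetical:

```python
# Sketch: create Instance #1 with three NICs, one per VPC, using the
# google-cloud-compute client. Each NIC must attach to a different VPC.
# All names below are hypothetical placeholders.
from google.cloud import compute_v1

project, zone, region = "my-project", "us-central1-a", "us-central1"

def nic(vpc: str, subnet: str) -> compute_v1.NetworkInterface:
    return compute_v1.NetworkInterface(
        network=f"projects/{project}/global/networks/{vpc}",
        subnetwork=f"projects/{project}/regions/{region}/subnetworks/{subnet}",
    )

instance = compute_v1.Instance(
    name="instance-1",
    machine_type=f"zones/{zone}/machineTypes/e2-standard-4",
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12"
            ),
        )
    ],
    # One interface per VPC; interfaces cannot share a VPC network.
    network_interfaces=[
        nic("vpc-1", "subnet-1"),
        nic("vpc-2", "subnet-2"),
        nic("vpc-3", "subnet-3"),
    ],
)

compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
# Firewall rules on VPC #2 and VPC #3 must still allow ingress from
# Instance #1's internal addresses.
```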

NEW QUESTION 125


- (Topic 5)
Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the
API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes
backward-incompatible changes. You want to follow Google-recommended practices. What should you do?

A. Create a distribution list of all customers to inform them of an upcoming backward- incompatible change at least one month before replacing the old API with the
new API.
B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an
update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix “DEPRECATED” to the current API version number on every backward-incompatible change. Use the current version number for the new API.

Answer: C

Explanation:
https://cloud.google.com/apis/design/versioning
All Google API interfaces must provide a major version number, which is encoded at the end of the protobuf package, and included as the first part of the URI path
for REST APIs. If an API introduces a breaking change, such as removing or renaming a field, it must increment its API version number to ensure that existing user
code does not suddenly break.
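For illustration, a minimal sketch of path-based major versioning in a Flask service; the endpoint shape and field names are hypothetical, not the recommendation engine's real API:

```python
# Sketch: encode the major version in the URI path so a breaking change
# ships as /v2 while /v1 keeps serving existing clients unchanged.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/users/<user_id>/recommendations")
def recommendations_v1(user_id):
    # Original contract: a bare list of product IDs.
    return jsonify(["sku-123", "sku-456"])

@app.route("/v2/users/<user_id>/recommendations")
def recommendations_v2(user_id):
    # Breaking change (new response envelope) -> new major version.
    return jsonify({
        "user_id": user_id,
        "items": [
            {"sku": "sku-123", "score": 0.97},
            {"sku": "sku-456", "score": 0.91},
        ],
    })

if __name__ == "__main__":
    app.run(port=8080)
```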


NEW QUESTION 126


- (Topic 5)
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table.
Any logs older than 45 days should be removed. You want to optimize storage and follow Google recommended practices. What should you do?

A. Configure the expiration time for your tables at 45 days


B. Make the tables time-partitioned, and configure the partition expiration at 45 days
C. Rely on BigQuery’s default behavior to prune application logs older than 45 days
D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days

Answer: B

Explanation:
https://cloud.google.com/bigquery/docs/managing-partitioned-tables
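For illustration, a hedged sketch of option B with the google-cloud-bigquery client; the project, dataset, table, and schema are hypothetical:

```python
# Sketch: create a per-application, time-partitioned log table whose
# partitions expire after 45 days. Dataset/table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.app_logs.checkout_service",
    schema=[
        bigquery.SchemaField("event_time", "TIMESTAMP"),
        bigquery.SchemaField("severity", "STRING"),
        bigquery.SchemaField("message", "STRING"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_time",                      # partition on the log timestamp
    expiration_ms=45 * 24 * 60 * 60 * 1000,  # partitions deleted after 45 days
)

client.create_table(table)
```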

NEW QUESTION 128


- (Topic 5)
Your company has a support ticketing solution that uses App Engine Standard. The project that contains the App Engine application already has a Virtual Private Cloud (VPC) network fully connected to the company's on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine application to communicate with a database that is running in the company's on-premises environment. What should you do?

A. Configure private services access


B. Configure private Google access for on-premises hosts only
C. Configure serverless VPC access
D. Configure private Google access

Answer: C

Explanation:
App Engine standard can reach resources on a VPC network only through a Serverless VPC Access connector; once connected to the VPC, traffic can traverse the existing Cloud VPN tunnel to the on-premises database. Private services access and private Google access address connectivity to Google-managed services, not on-premises hosts.
https://cloud.google.com/appengine/docs/standard/python3/connecting-vpc https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases#on_premises

NEW QUESTION 129


- (Topic 5)
Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company’s data
center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private Cloud (VPC) overlap with your
data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?

A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping
IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.

Answer: A

Explanation:
You can use either Cloud VPN or Cloud Interconnect to securely connect your on-premises network to your VPC network (https://cloud.google.com/vpc/docs/vpc-peering#transit-network). Routing cannot distinguish overlapping RFC 1918 ranges, so the overlapping IP space must be re-addressed before connectivity is established; for the same reason, VPC Network Peering refuses to establish when subnet IP ranges overlap (https://cloud.google.com/vpc/docs/vpc-peering#considerations). Cloud NAT does not apply here: it provides egress translation to the Internet, not translation between two private networks.

NEW QUESTION 132


- (Topic 5)
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this
analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-
processing some data as it comes in. Which technology should they use for this?

A. Google Cloud Dataproc


B. Google Cloud Dataflow
C. Google Container Engine with Bigtable
D. Google Compute Engine with Google BigQuery

Answer: B

Explanation:
Cloud Dataflow handles both batch and stream processing. It is a fully managed service for transforming and enriching data in stream (real-time) and batch (historical) modes with equal reliability and expressiveness -- no complex workarounds or compromises needed.
References: https://cloud.google.com/dataflow/
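For illustration, a hedged sketch of an Apache Beam pipeline that could run on Dataflow in streaming mode; the topic name, windowing parameters, and message format are hypothetical:

```python
# Sketch: a Beam pipeline that counts events per minute from a Pub/Sub
# stream; run with the DataflowRunner for managed execution. The same
# transforms also work in batch mode against a bounded source.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

options = PipelineOptions(streaming=True)  # batch: read a bounded input instead

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Window" >> beam.WindowInto(FixedWindows(60))  # 1-minute windows
        | "KeyByType" >> beam.Map(lambda msg: (msg.decode("utf-8").split(",")[0], 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Log" >> beam.Map(print)  # a real pipeline would write to BigQuery, etc.
    )
```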

NEW QUESTION 134


- (Topic 5)
You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region. What steps must you take?

A. Use the Linux dd and netcat command to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
C. Create an image file from the root disk with Linux dd command, create a new disk from the image file, and use it to create a new virtual machine instance in the
US-East region
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East
region using the image file for the root disk.

Answer: D

Explanation:
https://stackoverflow.com/questions/36441423/migrate-google-compute-engine-instance-to-a-different-region
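For illustration, a hedged sketch of the snapshot-to-image step with the google-cloud-compute library. Note that the current API can create an image directly from a snapshot, which serves the same purpose as staging an image file; the project and resource names are hypothetical:

```python
# Sketch: turn a snapshot of the production root disk into an image in the
# target project, which can then back new instances in us-east1.
# Project and resource names are hypothetical placeholders.
from google.cloud import compute_v1

src_project, dst_project = "prod-project", "replica-project"

image = compute_v1.Image(
    name="prod-root-image-v1",
    # Images are global, so the us-east1 instance can use it directly.
    source_snapshot=f"projects/{src_project}/global/snapshots/prod-root-snap",
)
op = compute_v1.ImagesClient().insert(project=dst_project, image_resource=image)
op.result()  # wait for the image to be ready
# A new VM in us-east1 then uses this image as its boot-disk source; replacing
# the copy later is just a matter of snapshotting again and rebuilding the image.
```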

NEW QUESTION 135


- (Topic 6)
For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

A. Web applications deployed using App Engine standard environment


B. RabbitMQ deployed using an unmanaged instance group
C. Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode
D. Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Answer: C

NEW QUESTION 136


- (Topic 6)
For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in
Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should
use Google best practices and implement the simplest design to meet the requirements.
Considering Dress4Win’s business and technical requirements, what should you do?

A. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.
B. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.
C. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google’s default encryption at rest when storing files in Cloud Storage.
D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Answer: C

Explanation:
Predefined IAM roles are Google's recommended starting point, and Cloud Storage encrypts all data at rest by default with no additional configuration, so option C meets the security requirements with the simplest design. https://cloud.google.com/storage/docs/encryption/default-keys

NEW QUESTION 139


- (Topic 7)
For this question, refer to the TerramEarth case study.
You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another
Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google's recommended best practices.
What should you do?

A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.
B. Make func_query 'Require authentication.' Create a unique service account and associate it to func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display and include the token in the request when invoking func_query.
C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall rule for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.

Answer: B

Explanation:
https://cloud.google.com/functions/docs/securing/authenticating#authenticating_function_to_function_calls
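For illustration, a hedged sketch of the ID-token call from func_display using the google-auth library; the function URL is a hypothetical placeholder, and func_display's runtime service account must hold the Cloud Functions Invoker role on func_query:

```python
# Sketch: inside func_display, mint an ID token for func_query's URL and
# attach it as a Bearer token. The URL below is a hypothetical placeholder.
import requests
import google.auth.transport.requests
import google.oauth2.id_token

FUNC_QUERY_URL = "https://us-central1-my-project.cloudfunctions.net/func_query"

def call_func_query(payload: dict) -> requests.Response:
    auth_req = google.auth.transport.requests.Request()
    # The token is minted for func_display's runtime service account; Cloud
    # Functions verifies it and checks the Invoker role on func_query.
    token = google.oauth2.id_token.fetch_id_token(auth_req, FUNC_QUERY_URL)
    return requests.post(
        FUNC_QUERY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
```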

NEW QUESTION 140


- (Topic 7)
TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning
team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.


B. Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud Storage
C. Make sure there are no other users consuming the 1 Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.
D. Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage

Answer: A
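Explanation:
A rough estimate shows why an online transfer cannot meet the deadline: 1 PB is about 8 × 10^15 bits, and a fully saturated 1-Gbps link moves 10^9 bits per second, so the transfer would take roughly 8 × 10^6 seconds, about 93 days, before accounting for protocol overhead and other traffic on the link. Offline Transfer Appliances are the practical way to have the data in Cloud Storage within a month.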

NEW QUESTION 145


- (Topic 8)
Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google
Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

A. Configure Workload Identity and service accounts to be used by the application platform.
B. Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.
C. Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.
D. Configure HashiCorp Vault on Compute Engine, and use customer-managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

Answer: A

NEW QUESTION 150


- (Topic 8)
For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform architecture. The game communicates with the
backend over a REST API.
You want to follow Google-recommended practices. How should you design the backend?

A. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L4 load balancer.
B. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L4 load balancer.
C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance group. Use an L7 load balancer.
D. Create an instance template for the backend. For every region, deploy it on a single-zone managed instance group. Use an L7 load balancer.

Answer: C

Explanation:
https://cloud.google.com/solutions/gaming/cloud-game-infrastructure#dedicated_game_server

NEW QUESTION 155


- (Topic 8)
You need to optimize batch file transfers into Cloud Storage for Mountkirk Games’ new Google Cloud solution.
The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract
transform load (ETL) tool. What should you do?

A. Use gsutil to batch move files in sequence.


B. Use gsutil to batch copy the files in parallel.
C. Use gsutil to extract the files as the first part of ETL.
D. Use gsutil to load the files as the last part of ETL.

Answer: B

Explanation:
Reference: https://cloud.google.com/storage/docs/gsutil/commands/cp
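The -m flag makes gsutil perform the copies in parallel rather than one at a time, which is what makes the batch staging efficient. For illustration, a hedged sketch of the equivalent parallel upload using the google-cloud-storage client's transfer manager; the bucket and directory names are hypothetical:

```python
# Sketch: the Python analogue of `gsutil -m cp` for staging batch files,
# using google-cloud-storage's transfer manager for parallel uploads.
# Bucket and directory names are hypothetical placeholders.
import pathlib
from google.cloud import storage
from google.cloud.storage import transfer_manager

bucket = storage.Client().bucket("mountkirk-stats-staging")
source_dir = pathlib.Path("/batch/game-stats")
filenames = [p.name for p in source_dir.glob("*.csv")]

results = transfer_manager.upload_many_from_filenames(
    bucket,
    filenames,
    source_directory=str(source_dir),
    max_workers=8,  # uploads run in parallel, like gsutil -m
)
for name, result in zip(filenames, results):
    # Each result is either None (success) or the exception raised.
    if isinstance(result, Exception):
        print(f"failed to upload {name}: {result}")
```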

NEW QUESTION 159


- (Topic 8)
Your development team has created a mobile game app. You want to test the new mobile app on Android and iOS devices with a variety of configurations. You
need to ensure that testing is efficient and cost-effective. What should you do?

A. Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.
B. Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.
C. Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on thecontainers, and test the mobile app.
D. Upload your mobile app with different configurations to Firebase Hosting and test each configuration.

Answer: A

Explanation:
Firebase Test Lab runs your app against a range of physical and virtual Android devices and physical iOS devices hosted by Google, so you can cover many configurations without provisioning any infrastructure. Android and iOS cannot be run as Compute Engine VMs or GKE containers.

NEW QUESTION 161


- (Topic 8)
You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

A. Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects.
B. Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs.
C. Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.
D. Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Answer: A

NEW QUESTION 166


- (Topic 10)
For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy
compliance audit. What should you do? (Choose two.)

A. Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.
C. Use Firebase Authentication for EHR's user facing applications.
D. Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.
E. Use GKE private clusters for all Kubernetes workloads.

Answer: AB

Explanation:
https://cloud.google.com/security/compliance/hipaa

NEW QUESTION 169


- (Topic 10)
You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises
systems and Google
Cloud until the migration is completed.
You want to make sure all your on-premises systems remain reachable during this period. How should you organize your networking in Google Cloud?

A. Use the same IP range on Google Cloud as you use on-premises.
B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range, and use a secondary range that does not overlap with the range you use on-premises.
C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises.
D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range, and use a secondary range with the same IP range as you use on-premises.

Answer: C

NEW QUESTION 170


......
