Google - Selftestengine.professional Cloud Architect - Study.guide.2024 Aug 20.by - Michell.123q.vce
Google
Exam Questions Professional-Cloud-Architect
Google Certified Professional - Cloud Architect (GCP)
NEW QUESTION 1
- (Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?
Answer: A
Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity
NEW QUESTION 2
- (Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the
backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
Answer: A
Explanation:
From scenario: Requirements for Game Backend Platform
• Dynamically scale up or down based on game activity
• Connect to a managed NoSQL database service
• Run customized Linux distro
NEW QUESTION 3
- (Topic 2)
For this question, refer to the TerramEarth case study
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle
event data. You want to support delegated authorization against this data. What should you do?
Answer: A
Explanation:
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_authorization_with_oauth2
Delegate application authorization with OAuth2
Cloud Platform APIs support OAuth 2.0, and scopes provide granular authorization over the methods that are supported. Cloud Platform supports both service-account and user-account OAuth, also called three-legged OAuth.
References: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_authorization_with_oauth2
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
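For illustration, a minimal three-legged OAuth flow could look like the following; the client ID, redirect URI, and scope are placeholder values, not taken from the case study:
# Direct the third party's user to Google's consent screen (hypothetical values):
https://accounts.google.com/o/oauth2/v2/auth?client_id=DEALER_TOOL_CLIENT_ID&redirect_uri=https://dealer-tool.example.com/callback&response_type=code&scope=https://www.googleapis.com/auth/userinfo.email&access_type=offline
# After the user consents, exchange the returned authorization code for tokens:
curl -d "code=AUTH_CODE&client_id=DEALER_TOOL_CLIENT_ID&client_secret=CLIENT_SECRET&redirect_uri=https://dealer-tool.example.com/callback&grant_type=authorization_code" https://oauth2.googleapis.com/token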
NEW QUESTION 4
- (Topic 3)
For this question, refer to the JencoMart case study.
The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for
administration between production and development resources. What Google domain and project structure should you recommend?
A. Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.
B. Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.
C. Create a single G Suite account to manage users with each stage of each application in its own project.
D. Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.
Answer: C
Explanation:
Note: The principle of least privilege and separation of duties are concepts that, although semantically different, are intrinsically related from the standpoint of
security. The intent behind both is to prevent people from having higher privilege levels than they actually need
• Principle of Least Privilege: Users should only have the least amount of privileges required to perform their job and no more. This reduces authorization exploitation by limiting access to resources such as targets, jobs, or monitoring templates for which they are not authorized.
• Separation of Duties: Beyond limiting user privilege level, you also limit user duties, or the specific jobs they can perform. No user should be given responsibility for more than one related function. This limits the ability of a user to perform a malicious action and then cover up that action.
References: https://cloud.google.com/kms/docs/separation-of-duties
NEW QUESTION 5
- (Topic 4)
For this question, refer to the Dress4Win case study.
Dress4Win has asked you to recommend machine types they should deploy their application servers to. How should you proceed?
A. Perform a mapping of the on-premises physical hardware cores and RAM to the nearest machine types in the cloud.
B. Recommend that Dress4Win deploy application servers to machine types that offer the highest RAM to CPU ratio available.
C. Recommend that Dress4Win deploy into production with the smallest instances available, monitor them over time, and scale the machine type up until the
desired performance is reached.
D. Identify the number of virtual cores and RAM associated with the application server virtual machines align them to a custom machine type in the cloud, monitor
performance, and scale the machine types up until the desired performance is reached.
Answer: C
NEW QUESTION 6
- (Topic 5)
You are responsible for the Google Cloud environment in your company. Multiple departments need access to their own projects, and the members within each department will have the same project responsibilities. You want to structure your Google Cloud environment for minimal maintenance and maximum overview of IAM permissions as each department's projects start and end. You want to follow Google-recommended practices. What should you do?
A. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.
B. Grant all department members the required IAM permissions for their respective projects.
C. Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.
D. Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level. Structure all projects for each department under the respective folders.
Answer: A
Explanation:
This option follows the Google-recommended practices for structuring a Google Cloud environment for minimal maintenance and maximum overview of IAM
permissions. By creating a Google Group per department and adding all department members to their respective groups, you can simplify user management and
avoid granting IAM permissions to individual users. By creating a folder per department and granting the respective group the required IAM permissions at the
folder level, you can enforce consistent policies across all projects within each department and avoid granting IAM permissions at the project level. By adding the
projects under the respective folders, you can organize your resources hierarchically and leverage inheritance of IAM policies from folders to projects. The other
options are not optimal for this scenario, because they either require granting IAM permissions to individual users (B, D), or grant the permissions per project rather than at the folder level (C). References:
? https://cloud.google.com/architecture/framework/system-design
? https://cloud.google.com/architecture/identity/best-practices-for-planning
? https://cloud.google.com/resource-manager/docs/creating-managing-folders
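As a rough sketch of option A with gcloud (the folder name, folder ID, group address, and role are hypothetical):
# Create a folder for the department under the organization
gcloud resource-manager folders create --display-name="finance" --organization=123456789
# Grant the department's group its permissions once, at the folder level
gcloud resource-manager folders add-iam-policy-binding 111111111111 --member="group:finance-team@example.com" --role="roles/editor"
# New projects created under the folder inherit the folder's IAM policy
gcloud projects create finance-proj-1 --folder=111111111111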
NEW QUESTION 7
- (Topic 5)
Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development
environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office
hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?
A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours.
C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.
Answer: B
Explanation:
Reference: https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing- your-cloud-costs
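A minimal sketch of option B, assuming a Cloud Function subscribed to a Pub/Sub topic handles the actual stop/start; job names, topic, and schedules are illustrative:
# Trigger the function at 19:00 on weekdays to stop dev/acceptance VMs
gcloud scheduler jobs create pubsub stop-dev-acc --schedule="0 19 * * 1-5" --topic=instance-control --message-body='{"action":"stop"}'
# And start them again at 07:00
gcloud scheduler jobs create pubsub start-dev-acc --schedule="0 7 * * 1-5" --topic=instance-control --message-body='{"action":"start"}'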
NEW QUESTION 8
- (Topic 5)
Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the uploaded content is identical to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?
A.
1) Use gsutil -m to upload all the files to Cloud Storage.
2) Use gsutil cp to download the uploaded files
3) Use Linux diff to compare the content of the files
B.
1) Use gsutil -m to upload all the files to Cloud Storage.
2) Develop a custom Java application that computes CRC32C hashes
3) Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files
4) Compare the hashes
C.
Answer: D
Explanation:
https://cloud.google.com/storage/docs/gsutil/commands/hash
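For example, CRC32C hashes can be compared without downloading anything back (bucket and file names are placeholders):
# Compute the CRC32C of the local copy
gsutil hash -c ./important-file.bin
# Read the CRC32C that Cloud Storage recorded for the uploaded object
gsutil ls -L gs://my-bucket/important-file.bin | grep -i crc32c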
NEW QUESTION 9
- (Topic 5)
Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises
service that requires high throughput via internal IPs, while minimizing latency. What should you do?
A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.
Answer: D
Explanation:
Reference https://cloud.google.com/architecture/setting-up-private-access-to-cloud-apis-through-vpn-tunnels
NEW QUESTION 10
- (Topic 5)
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow
specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?
Answer: A
Explanation:
https://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-by-using-signed-url
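A sketch of generating a time-limited signed URL for upload, per the referenced post; the service-account key file, bucket, and object name are placeholders:
# Allow anyone holding the URL to PUT one object for 24 hours
gsutil signurl -d 24h -m PUT -c image/jpeg sa-key.json gs://paintings-uploads/user-upload.jpg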
NEW QUESTION 10
- (Topic 5)
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?
A. Perform the following: 1) Create a managed instance group with f1-micro type machines. 2) Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3) Restart the instances to automatically deploy new production releases.
B. Perform the following: 1) Create a managed instance group with n1-standard-1 type machines. 2) Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3) Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
C. Perform the following: 1) Create a Kubernetes Engine cluster with n1-standard-1 type machines. 2) Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3) Create a Kubernetes Deployment with the imagePullPolicy set to "IfNotPresent" in the staging namespace, and then promote it to the production namespace after testing.
D. Perform the following: 1) Create a Kubernetes Engine (GKE) cluster with n1-standard-4 type machines. 2) Build a Docker image from the master branch with all of the dependencies, and tag it with "latest". 3) Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to "Always". Restart the pods to automatically deploy new production releases.
Answer: D
Explanation:
https://cloud.google.com/compute/docs/instance-templates
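Option D's release flow, roughly; the project, image, and deployment names are hypothetical:
# Push a new build under the same "latest" tag
docker build -t gcr.io/my-project/webapp:latest .
docker push gcr.io/my-project/webapp:latest
# With imagePullPolicy: Always, restarting the pods pulls the new image
kubectl rollout restart deployment/webapp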
NEW QUESTION 14
- (Topic 5)
You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?
A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
Answer: B
Explanation:
https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine
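A sketch of option B's plumbing; the topic and subscription names are illustrative, and cron.yaml is assumed to point at an App Engine handler that publishes to the topic:
# Topic that the App Engine cron handler publishes to
gcloud pubsub topics create task-events
# Pull subscription consumed by the worker service on Compute Engine
gcloud pubsub subscriptions create task-workers --topic=task-events
# Deploy the App Engine cron definition
gcloud app deploy cron.yaml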
NEW QUESTION 15
- (Topic 5)
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:
* 1. Be based on open-source technology for cloud portability
* 2. Dynamically scale compute capacity based on demand
* 3. Support continuous software delivery
* 4. Run multiple segregated copies of the same application stack
* 5. Deploy application bundles using dynamic templates
* 6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?
Answer: A
Explanation:
Helm for managing Kubernetes.
Kubernetes can route traffic to different backends based on the URL path: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
For example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/*
        backend:
          serviceName: web2
          servicePort: 8080
NEW QUESTION 17
- (Topic 5)
Your company has a Google Cloud project that uses BigQuery for data warehousing. They have a VPN tunnel between the on-premises environment and Google Cloud that is configured with Cloud VPN. The security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing. What should they do?
Answer: C
Explanation:
https://cloud.google.com/vpc-service-controls/docs/overview
VPC Service Controls improves your ability to mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery.
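A minimal service perimeter sketch; the policy ID, project number, and perimeter name are placeholders:
# Put the project's BigQuery and Cloud Storage APIs inside a perimeter
gcloud access-context-manager perimeters create data_perimeter --title="data perimeter" --resources=projects/123456789 --restricted-services=bigquery.googleapis.com,storage.googleapis.com --policy=987654321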
NEW QUESTION 21
- (Topic 5)
Your company has a Google Cloud project that uses BigQuery for data warehousing on a pay-per-use basis. You want to monitor queries in real time to discover
the most costly queries and which users spend the most. What should you do?
A.
* 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage.
* 2. Develop a Dataflow pipeline to compute the cost of queries split by users.
B.
* 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery.
* 2. Perform a BigQuery query on the generated table to extract the information you need.
C.
* 1. Activate billing export into BigQuery.
* 2. Perform a BigQuery query on the billing table to extract the information you need.
D.
* 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query.
* 2. Open the Billing page of the project.
* 3. Select Reports.
* 4. Select BigQuery as the product and filter by the user you want to check.
Answer: C
Explanation:
https://cloud.google.com/blog/products/data-analytics/taking-a-practical-approach-to-bigquery-cost-monitoring
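For instance, near-real-time per-user spend can also be read from INFORMATION_SCHEMA; the region qualifier and the assumption of on-demand pricing at roughly $6.25 per TiB scanned are illustrative:
bq query --use_legacy_sql=false 'SELECT user_email, SUM(total_bytes_billed)/POW(2,40)*6.25 AS est_usd FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) GROUP BY user_email ORDER BY est_usd DESC'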
NEW QUESTION 26
- (Topic 5)
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool.
This requires a relational database that can operate on hundreds of terabytes of data. What is the Google recommended tool for such applications?
Answer: A
Explanation:
Reference: https://cloud.google.com/files/BigQueryTechnicalWP.pdf
NEW QUESTION 29
- (Topic 5)
Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices. They want to minimize the cost of this data migration. Which approach should they use?
Answer: B
Explanation:
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#transfer-options
https://cloud.google.com/storage-transfer-service
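A sketch using Storage Transfer Service from an S3-compatible source; bucket names and the credentials file are placeholders:
# One-shot managed transfer; Google's fleet does the copying, no intermediary VM needed
gcloud transfer jobs create s3://third-party-bucket gs://destination-bucket --source-creds-file=aws-creds.json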
NEW QUESTION 31
- (Topic 5)
Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a
Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from
creating resources in projects in the Finance folder. What should you do?
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the
Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the
Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
Answer: C
Explanation:
https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
"Roles are always inherited, and there is no way to explicitly remove a permission for a lower-level resource that is granted at a higher level in the resource
hierarchy. Given the above example, even if you were to remove the Project Editor role from Bob on the "Test GCP Project", he would still inherit that role from the
"Dept Y" folder, so he would still have the permissions for that role on "Test GCP Project"."
Reference: https://cloud.google.com/resource-manager/docs/creating-managing-folders
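Roughly, option C translates to the following; the organization ID, folder ID, and group address are hypothetical:
# Remove the inherited grant at the organization level
gcloud organizations remove-iam-policy-binding 123456789 --member="group:dev-team@example.com" --role="roles/owner"
# Re-grant it only on the Shopping folder
gcloud resource-manager folders add-iam-policy-binding 222222222222 --member="group:dev-team@example.com" --role="roles/owner"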
NEW QUESTION 32
- (Topic 5)
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per
second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not
receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
Answer: B
Explanation:
https://cloud.google.com/run/docs/about-instance-autoscaling
https://cloud.google.com/blog/topics/developers-practitioners/bigtable-vs-bigquery-whats-difference
NEW QUESTION 33
- (Topic 5)
You need to set up Microsoft SQL Server on GCP. Management requires that there’s no downtime in case of a data center outage in any of the zones within a
GCP region. What should you do?
Answer: D
Explanation:
https://cloud.google.com/sql/docs/sqlserver/configure-ha
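A sketch of a regional (high-availability) Cloud SQL for SQL Server instance; the name, version, region, and sizing are illustrative:
# REGIONAL availability keeps a standby in another zone with automatic failover
gcloud sql instances create sqlserver-prod --database-version=SQLSERVER_2019_STANDARD --availability-type=REGIONAL --region=us-central1 --cpu=4 --memory=26GB --root-password=CHANGE_ME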
NEW QUESTION 35
- (Topic 5)
Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs
need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You
want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?
A.
* 1. Install the Cloud Ops agent on all instances.
* 2. Create a sink to export logs into a partitioned BigQuery table.
* 3. Set a time_partitioning_expiration of 30 days.
B.
* 1. Install the Cloud Ops agent on all instances.
* 2. Create a sink to export logs into a regional Cloud Storage bucket.
* 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
* 4. Configure a retention policy at the bucket level to create a lock.
C.
* 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery
table.
* 2. Set a time_partitioning_expiration of 30 days.
D.
* 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket.
* 2. Create a sink to export logs into a regional Cloud Storage bucket.
* 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
Answer: B
Explanation:
The recommended practice for logs generated on Compute Engine is to install the Cloud Logging (Ops) agent and send the logs to Cloud Logging. From there, a Cloud Logging sink exports them to Cloud Storage, because the requirement calls for a lifecycle based on the storage period: the logs must be available for active query for 30 days, and after that they only need to be retained for audit purposes. During the first month the files can still be queried from BigQuery as a Cloud Storage external data source, and moving them to Coldline after one month makes the solution cost-optimal. Therefore, the correct steps are:
* 1. Install the Cloud Ops agent on all instances.
* 2. Create a sink that exports the logs into a regional Cloud Storage bucket.
* 3. Create an Object Lifecycle rule to move the files into a Coldline Cloud Storage bucket after one month.
* 4. Configure a bucket-level retention policy and lock it.
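Sketched with gcloud and gsutil; the bucket name, log filter, and lifecycle file are illustrative, and the 2-year retention matches the compliance requirement:
# Export Compute Engine logs to a regional bucket
gcloud logging sinks create vm-logs-sink storage.googleapis.com/vm-logs-bucket --log-filter='resource.type="gce_instance"'
# lifecycle.json: SetStorageClass -> COLDLINE for objects older than 30 days
gsutil lifecycle set lifecycle.json gs://vm-logs-bucket
# Retain for two years, then lock the policy so it cannot be reduced
gsutil retention set 2y gs://vm-logs-bucket
gsutil retention lock gs://vm-logs-bucket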
NEW QUESTION 39
- (Topic 5)
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you
use?
Answer: D
Explanation:
Overview of storage classes, price, and use cases: https://cloud.google.com/storage/docs/storage-classes
Why export logs? https://cloud.google.com/logging/docs/export/
Stackdriver quotas and limits for Monitoring: https://cloud.google.com/monitoring/quotas
BigQuery pricing: https://cloud.google.com/bigquery/pricing
NEW QUESTION 42
- (Topic 5)
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be
encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?
Answer: A
Explanation:
https://cloud.google.com/storage/docs/encryption/customer-supplied-keys#gsutil
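Following the referenced page, a sketch: the base64-encoded AES-256 key is supplied via the boto configuration (the key value is a placeholder):
# In ~/.boto, under [GSUtil]:
#   encryption_key = pRB8BxO...base64-encoded-256-bit-key...=
gsutil cp ./confidential.tar gs://secure-bucket/
# Cloud Storage stores only a hash of the key; the same key must be supplied to read the object back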
NEW QUESTION 43
- (Topic 5)
You want to enable your running Google Container Engine cluster to scale as demand for your application
changes.
What should you do?
A. Add additional nodes to your Container Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
C. Update the existing Container Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Container Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application.
Answer: C
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
Cluster autoscaling
--enable-autoscaling
Enables autoscaling for a node pool.
Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided.
Where:
--max-nodes=MAX_NODES
Maximum number of nodes in the node pool.
Maximum number of nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale.
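The alpha command shown in option C has since graduated; the equivalent current syntax would be something like:
gcloud container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 --node-pool=default-pool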
NEW QUESTION 47
- (Topic 5)
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI DSS compliant.
Which of the following is most accurate?
A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B. Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.
C. Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.
D. All Google Cloud services are usable because Google Cloud Platform is certified PCI- compliant.
Answer: C
Explanation:
https://cloud.google.com/security/compliance/pci-dss
Google Cloud's PCI DSS certification covers the underlying infrastructure; customers remain responsible for building a compliant environment on top of it, which Kubernetes Engine and GCP tooling make possible.
NEW QUESTION 49
- (Topic 5)
You are configuring the cloud network architecture for a newly created project in Google Cloud that will host applications in Compute Engine. Compute Engine virtual machine instances will be created in two different subnets (sub-a and sub-b) within a single region:
• Instances in sub-a will have public IP addresses.
• Instances in sub-b will have only private IP addresses.
To download updated packages, instances must connect to a public repository outside the boundaries of Google Cloud. You need to allow sub-b to access the external repository. What should you do?
Answer: B
Explanation:
• Cloud NAT (network address translation) lets Google Cloud virtual machine (VM) instances without external IP addresses and private Google Kubernetes Engine (GKE) clusters send outbound packets to the internet and receive any corresponding established inbound response packets. By configuring Cloud NAT and selecting sub-b in the NAT mapping section, you can allow instances in sub-b to access the external repository without exposing them to the internet.
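A Cloud NAT sketch for sub-b; the router name, region, and network are placeholders:
# Cloud NAT is configured on a Cloud Router
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
# NAT only the private subnet; its instances keep private-only addresses
gcloud compute routers nats create nat-for-sub-b --router=nat-router --region=us-central1 --nat-custom-subnet-ip-ranges=sub-b --auto-allocate-nat-external-ips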
NEW QUESTION 54
- (Topic 5)
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company’s mobile
and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv
files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-MB internet connection.
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
Answer: B
Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering
NEW QUESTION 59
- (Topic 5)
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be
uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
Answer: A
Explanation:
Reference: https://cloud.google.com/storage/docs/using-bucket-lock
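For example (the bucket name is a placeholder; locking is irreversible, which is the point for compliance):
gsutil retention set 5y gs://loan-approvals
gsutil retention lock gs://loan-approvals
# Objects can still be added, but cannot be deleted or overwritten for 5 years from upload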
NEW QUESTION 63
- (Topic 5)
As part of implementing their disaster recovery plan, your company is trying to replicate their production
MySQL database from their private data center to their GCP project using a Google Cloud VPN connection.
They are experiencing latency issues and a small amount of packet loss that is disrupting the replication. What should they do?
Answer: B
NEW QUESTION 68
- (Topic 5)
An application development team has come to you for advice. They are planning to write and deploy an HTTP(S) API using Go 1.12. The API will have a very unpredictable workload and must remain reliable during peaks in traffic. They want to minimize operational overhead for this application. What approach should you recommend?
Answer: C
Explanation:
https://cloud.google.com/appengine/docs/the-appengine-environments
NEW QUESTION 73
- (Topic 5)
You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the
virtual machines are preempted. What should you do?
Answer: C
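The usual pattern is a shutdown script supplied as instance metadata, which Compute Engine runs during the roughly 30-second preemption notice; the instance and script names are illustrative:
gcloud compute instances create worker-1 --preemptible --metadata-from-file shutdown-script=./graceful-stop.sh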
NEW QUESTION 76
- (Topic 5)
You are managing several projects on Google Cloud and need to interact on a daily basis with BigQuery, Bigtable, and Kubernetes Engine using the gcloud CLI tool. You are travelling a lot and work on different workstations during the week. You want to avoid having to manage the gcloud CLI manually. What should you do?
A. Use a package manager to install gcloud on your workstations instead of installing it manually.
B. Create a Compute Engine instance and install gcloud on the instance. Connect to this instance via SSH to always use the same gcloud installation when interacting with Google Cloud.
C. Install gcloud on all of your workstations. Run the command gcloud components auto-update on each workstation.
D. Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud.
Answer: D
Explanation:
This option allows you to use the gcloud CLI tool without having to install or manage it manually on different workstations. Google Cloud Shell is a browser-based
command-line tool that provides you with a temporary Compute Engine virtual machine instance preloaded with the Cloud SDK, including the gcloud CLI tool. You
can access Google Cloud Shell from any web browser and use it to interact with BigQuery, Bigtable and Kubernetes Engine using the gcloud CLI tool. The other
options are not optimal for this scenario, because they either require installing and updating the gcloud CLI tool on multiple workstations (A, C), or creating and
maintaining a Compute Engine instance for the sole purpose of using the gcloud CLI tool (B). References:
? https://cloud.google.com/shell/docs/overview
? https://cloud.google.com/sdk/gcloud/
NEW QUESTION 79
- (Topic 5)
You want to automate the creation of a managed instance group and a startup script to install the OS package dependencies. You want to minimize the startup
time for VMs in the instance group.
What should you do?
A. Use Terraform to create the managed instance group and a startup script to install the OS packagedependencies.
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
C. Use Puppet to create the managed instance group and install the OS package dependencies.
D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
Answer: B
Explanation:
"Custom images are more deterministic and start more quickly than instances with startup scripts. However, startup scripts are more flexible and let you update the
apps and settings in your instances more easily." https://cloud.google.com/compute/docs/instance- templates/create-instance-
templates#using_custom_or_public_images_in_your_instance_templates
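Baking the image in option B could look like the following; the disk, zone, and image names are hypothetical:
# Capture the configured VM's boot disk as a reusable image
gcloud compute images create webapp-image-v1 --source-disk=image-builder-vm --source-disk-zone=us-central1-a
# Reference webapp-image-v1 in the instance template used by the Deployment Manager config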
NEW QUESTION 81
- (Topic 5)
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high
performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20TB of log archives
retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows
customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
Answer: D
Explanation:
https://cloud.google.com/compute/docs/disks
NEW QUESTION 86
- (Topic 5)
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to
operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What
should you do?
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
Answer: B
Explanation:
Reference: https://cloud.google.com/compute/docs/os-patch-management
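With OS patch management enabled (the OS Config agent installed on the instances), updates can be rolled out on demand or on a schedule; a sketch, with the job name illustrative:
# One-off patch job across all instances in the project
gcloud compute os-config patch-jobs execute --instance-filter-all --display-name="debian-updates"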
NEW QUESTION 90
- (Topic 5)
You need to evaluate your team's readiness for a new GCP project. You must perform the evaluation and create a skills gap plan that incorporates the business goal of cost optimization. Your team has deployed two GCP projects successfully to date. What should you do?
Answer: B
Explanation:
https://services.google.com/fh/files/misc/cloud_center_of_excellence.pdf
NEW QUESTION 94
- (Topic 5)
You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate encryption keys
outside of Google Cloud. You need to implement a solution. What should you do?
A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a BigQuery dataset.
B. Generate a new key in Cloud Key Management Service (Cloud KMS). Create a dataset in BigQuery using the customer-managed key option and select the created key.
C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key.
Answer: D
Explanation:
https://cloud.google.com/bigquery/docs/customer-managed-encryption
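A sketch of option D; the key ring, key, project, and dataset names are placeholders, and the key material itself is generated and wrapped outside Google Cloud before being imported:
# Create a key shell, then import the externally generated material via an import job
gcloud kms keys create dw-key --keyring=dw-ring --location=us --purpose=encryption --skip-initial-version-creation
# ...create an import job, import the wrapped key material, then:
bq mk --dataset --default_kms_key=projects/my-proj/locations/us/keyRings/dw-ring/cryptoKeys/dw-key my-proj:warehouse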
NEW QUESTION 95
- (Topic 5)
Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to
ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your
application stays below a certain threshold. What should you do?
A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the autoscaling works.
C. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.
D. Use Cloud Debugger in the development environment to understand the latency between the different microservices.
Answer: A
NEW QUESTION 96
- (Topic 5)
You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to
evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?
Answer: A
Explanation:
https://cloud.google.com/run/docs/rollouts-rollbacks-traffic-migration
NEW QUESTION 98
- (Topic 5)
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling
and relaunches.
Which feature of Kubernetes should you use to accomplish this?
A. StatefulSets
B. Role-based access control
C. Container environment variables
D. Persistent Volumes
Answer: A
Explanation:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
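The stable-hostname behaviour is easy to observe; the label and names below are hypothetical:
# Pods of a StatefulSet get ordinal, stable names instead of random suffixes
kubectl get pods -l app=web
# -> web-0, web-1, web-2 ... the same names reappear after rescheduling or scaling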
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.
C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
Answer: A
Answer: A
Explanation:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google’s
network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional
bandwidth over the public Internet or using VPN tunnels.
Benefits:
• Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.
• Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.
• You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
• The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google's network.
A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine
Answer: A
Explanation:
Reference: https://cloud.google.com/terms/services https://cloud.google.com/appengine/docs/standard/go/how-requests-are-handled
Answer: D
Explanation:
Reference: https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#disableexternalip
You might want to restrict external IP addresses so that only specific VM instances can use them. This option can help to prevent data exfiltration or maintain network isolation. Using an Organization Policy, you can restrict external IP addresses to specific VM instances with constraints that control the use of external IP addresses for your VM instances within an organization or a project.
A. Use VPC Network Peering between the VPC and the on-premises network.
B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
Answer: D
Explanation:
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns
Answer: B
A. 1) Enable automatic storage increase for the instance. 2) Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
B. 1) Enable automatic storage increase for the instance. 2) Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3) Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
C. 1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2) Deploy memcached to reduce CPU load. 3) Change the instance type to a 32-core machine type to reduce replication lag.
D. 1) Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2) Deploy memcached to reduce CPU load. 3) Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.
Answer: A
Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?
Answer: B
Explanation:
As per GCP documentation: "By default, every instance in a VPC network has a single network interface. Use these instructions to create additional network
interfaces. Each interface is attached to a different VPC network, giving that instance access to different VPC networks in Google Cloud. You cannot attach
multiple network interfaces to the same VPC network." Refer to: https://cloud.google.com/vpc/docs/create-use-multiple-interfaces
https://cloud.google.com/vpc/docs/create-use-multiple-interfaces#i_am_not_able_to_connect_to_secondary_interfaces_internal_ip
A. Create a distribution list of all customers to inform them of an upcoming backward- incompatible change at least one month before replacing the old API with the
new API.
B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an
update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix "DEPRECATED" to the current API version number on every backward-incompatible change. Use the current version number for the new API.
Answer: C
Explanation:
https://cloud.google.com/apis/design/versioning
All Google API interfaces must provide a major version number, which is encoded at the end of the protobuf package, and included as the first part of the URI path
for REST APIs. If an API introduces a breaking change, such as removing or renaming a field, it must increment its API version number to ensure that existing user
code does not suddenly break.
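In practice that means a breaking change ships under a new major version while the old path keeps working; the paths below are illustrative:
# Existing clients keep calling:
https://api.example.com/v1/vehicles/12345
# Clients that opt into the breaking change move to:
https://api.example.com/v2/vehicles/12345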
Answer: B
Explanation:
https://cloud.google.com/bigquery/docs/managing-partitioned-tables
Answer: A
Explanation:
https://cloud.google.com/appengine/docs/standard/python3/connecting-vpc
https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases#on_premises
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping
IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.
Answer: A
Explanation:
To connect two networks together we need (1) either VPN or Interconnect and (2) peering. You can use either Cloud VPN or Cloud Interconnect to securely connect your on-premises network to your VPC network (https://cloud.google.com/vpc/docs/vpc-peering#transit-network). When networks are peered, you cannot have conflicting IP addresses: at the time of peering, Google Cloud checks whether any subnet IP ranges overlap subnet IP ranges in the other network, and if there is any overlap, peering is not established (https://cloud.google.com/vpc/docs/vpc-peering#considerations). NAT translates private to public IP addresses and vice versa; because we are connecting two private networks together, it is not applicable here, so the overlapping IP space must be re-addressed.
Answer: B
Explanation:
Dataflow is for processing both batch and stream data.
Cloud Dataflow is a fully-managed service for transforming and enriching data in stream (real time) and batch (historical) modes with equal reliability and
expressiveness -- no more complex workarounds or compromises needed.
References: https://cloud.google.com/dataflow/
virtual machine. You will deploy the copy as a new instance in a different project in the US-East region. What steps must you take?
A. Use the Linux dd and netcat command to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
C. Create an image file from the root disk with Linux dd command, create a new disk from the image file, and use it to create a new virtual machine instance in the
US-East region
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East
region using the image file for the root disk.
Answer: D
Explanation:
https://stackoverflow.com/questions/36441423/migrate-google-compute-engine-instance-to-a-different-region
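Sketched end to end; the disk, snapshot, image, project, and zone names are placeholders:
gcloud compute disks snapshot prod-vm --snapshot-names=prod-boot-snap --zone=us-west1-a --project=src-project
gcloud compute images create prod-boot-image --source-snapshot=prod-boot-snap --project=src-project
# The destination project can reference the image cross-project
gcloud compute instances create prod-copy --image=prod-boot-image --image-project=src-project --zone=us-east1-b --project=dest-project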
A. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.
B. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.
C. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google's default encryption at rest when storing files in Cloud Storage.
D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.
Answer: D
Explanation:
https://cloud.google.com/iam/docs/understanding-service-accounts
Answer: B
Explanation:
https://cloud.google.com/functions/docs/securing/authenticating#authenticating_function_to_function_calls
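The referenced pattern boils down to sending a Google-signed ID token with the request; from a developer workstation, for example (region, project, and function name are placeholders):
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://us-central1-my-proj.cloudfunctions.net/receiving-function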
A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.
B. Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud Storage
C. Make sure there are no other users consuming the 1 Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.
D. Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage
Answer: A
A. Configure Workload Identity and service accounts to be used by the application platform.
B. Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.
C. Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.
D. Configure HashiCorp Vault on Compute Engine, and use customer managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.
Answer: A
Answer: C
Explanation:
https://cloud.google.com/solutions/gaming/cloud-game-infrastructure#dedicated_game_server
Answer: B
Explanation:
Reference: https://cloud.google.com/storage/docs/gsutil/commands/cp
A. Upload your mobile app to the Firebase Test Lab, and test the mobile app on Android and iOS devices.
B. Create Android and iOS VMs on Google Cloud, install the mobile app on the VMs, and test the mobile app.
C. Create Android and iOS containers on Google Kubernetes Engine (GKE), install the mobile app on the containers, and test the mobile app.
D. Upload your mobile app with different configurations to Firebase Hosting and test each configuration.
Answer: A
programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?
A. Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in
both projects
B. Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin
role to both SAs
C. Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's
project.
D. Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects
Answer: A
A. Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.
C. Use Firebase Authentication for EHR's user facing applications.
D. Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.
E. Use GKE private clusters for all Kubernetes workloads.
Answer: AB
Explanation:
https://cloud.google.com/security/compliance/hipaa
* Professional-Cloud-Architect Most Realistic Questions that Guarantee you a Pass on Your First Try
* Professional-Cloud-Architect Practice Test Questions in Multiple Choice Formats and Updates for 1 Year