Google
Exam Questions Professional-Cloud-Architect
Google Certified Professional - Cloud Architect (GCP)
NEW QUESTION 1
- (Exam Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?
Answer: A
Explanation:
From Scenario:
A few of their games were more popular than expected, and they had problems scaling their application servers, MySQL databases, and analytics tools.
Requirements for Game Analytics Platform include: Dynamically scale up or down based on game activity
NEW QUESTION 2
- (Exam Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record
number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?
Answer: B
Explanation:
503 is the service-unavailable error. If the database were offline, every user would be getting 503 errors, so investigate resource limits first. https://cloud.google.com/docs/quota#capping_usage
NEW QUESTION 3
- (Exam Topic 1)
For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the
backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
Answer: A
Explanation:
From scenario: Requirements for Game Backend Platform
Dynamically scale up or down based on game activity
Connect to a managed NoSQL database service
Run customized Linux distro
NEW QUESTION 4
- (Exam Topic 2)
For this question, refer to the TerramEarth case study.
The TerramEarth development team wants to create an API to meet the company's business requirements. You want the development team to focus their
development effort on business value versus creating a custom framework. Which method should they use?
Answer: A
Explanation:
https://cloud.google.com/endpoints/docs/openapi/about-cloud-endpoints?hl=en_US&_ga=2.21787131.-1712523
https://cloud.google.com/endpoints/docs/openapi/architecture-overview https://cloud.google.com/storage/docs/gsutil/commands/test
Develop, deploy, protect, and monitor your APIs with Google Cloud Endpoints. Using an OpenAPI Specification or one of our API frameworks, Cloud Endpoints
gives you the tools you need for every phase of API development.
From scenario: Business Requirements
Decrease unplanned vehicle downtime to less than 1 week, without increasing the cost of carrying surplus inventory
Support the dealer network with more data on how their customers use their equipment to better position new products and services
Have the ability to partner with different companies – especially with seed and fertilizer suppliers in the fast-growing agricultural business – to create compelling
joint offerings for their customers.
Reference: https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth
NEW QUESTION 5
- (Exam Topic 2)
For this question, refer to the TerramEarth case study.
TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure.
You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?
(The answer options A-D are architecture diagrams that are not reproduced here.)
A. Option A
B. Option B
C. Option C
D. Option D
Answer: A
Explanation:
https://cloud.google.com/solutions/iot/ https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data
https://cloud.google.com/solutions/iot/ The push endpoint can be a load balancer. A container cluster can be used.
Cloud Pub/Sub for Stream Analytics
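As a concrete illustration of the ingestion path, here is a minimal Python sketch that publishes a telemetry event to Cloud Pub/Sub. The project and topic names are hypothetical; in the referenced architecture the messages would then be pulled or pushed into Dataflow and BigQuery for analysis.

    from google.cloud import pubsub_v1

    # Hypothetical project and topic; vehicle telemetry (or an IoT gateway)
    # publishes to this topic for downstream stream processing.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "vehicle-telemetry")

    payload = b'{"vehicle_id": "V-1234", "engine_temp_c": 96.5}'
    future = publisher.publish(topic_path, payload)
    print(future.result())  # blocks until the server-assigned message ID returns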
NEW QUESTION 6
- (Exam Topic 2)
For this question, refer to the TerramEarth case study
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle
event data. You want to support delegated authorization against this data. What should you do?
Answer: A
Explanation:
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_autho
Delegate application authorization with OAuth2
Cloud Platform APIs support OAuth 2.0, and scopes provide granular authorization over the methods that are supported. Cloud Platform supports both service-
account and user-account OAuth, also called three-legged OAuth.
References:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_autho
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
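For illustration, a minimal three-legged OAuth sketch in Python using the google-auth libraries. The client-secrets file, scope, and API URL below are hypothetical placeholders for whatever the vehicle-event API would actually define; the point is that the dealership user consents and the third-party tool receives delegated credentials.

    from google_auth_oauthlib.flow import InstalledAppFlow
    from google.auth.transport.requests import AuthorizedSession

    # Hypothetical client secrets and scope for the third-party tool.
    flow = InstalledAppFlow.from_client_secrets_file(
        "client_secret.json",
        scopes=["https://www.googleapis.com/auth/userinfo.email"],
    )
    credentials = flow.run_local_server(port=0)  # opens the user consent screen

    # Call the (hypothetical) vehicle event API with the user's delegated token.
    session = AuthorizedSession(credentials)
    response = session.get("https://vehicle-api.example.com/v1/events")
    print(response.status_code)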
NEW QUESTION 7
- (Exam Topic 2)
For this question, refer to the TerramEarth case study.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is
error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution
and minimize data transfer time on the cellular connections. What should you do?
Answer: D
Explanation:
https://cloud.google.com/storage/docs/locations
NEW QUESTION 8
- (Exam Topic 2)
For this question, refer to the TerramEarth case study.
Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental
conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this
goal?
A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments
automatically.
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make
operational adjustments automatically.
Answer: B
NEW QUESTION 9
- (Exam Topic 2)
For this question, refer to the TerramEarth case study.
Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
Answer: B
Explanation:
Capacity planning, TCO calculations, opex/capex allocation. From the case study, it can be concluded that management (CXO) is concerned with rapid provisioning of resources
(infrastructure) to support growth as well as with cost management, such as cost optimization in infrastructure, trading up-front capital expenditures (CapEx) for ongoing
operating expenditures (OpEx), and total cost of ownership (TCO).
NEW QUESTION 10
- (Exam Topic 3)
For this question, refer to the JencoMart case study.
JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?
A. Cloud Spanner
B. Google BigQuery
C. Google Cloud SQL
D. Google Cloud Datastore
Answer: D
Explanation:
https://cloud.google.com/datastore/docs/concepts/overview Common workloads for Google Cloud Datastore:
User profiles
Product catalogs
Game state
References: https://cloud.google.com/storage-options/ https://cloud.google.com/datastore/docs/concepts/overview
NEW QUESTION 10
- (Exam Topic 3)
For this question, refer to the JencoMart case study.
The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for
administration between production and development resources. What Google domain and project structure should you recommend?
A. Create two G Suite accounts to manage users: one for development/test/staging and one for production.Each account should contain one project for every
application.
B. Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production
applications.
C. Create a single G Suite account to manage users with each stage of each application in its own project.
D. Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.
Answer: D
Explanation:
Note: The principle of least privilege and separation of duties are concepts that, although semantically different, are intrinsically related from the standpoint of
security. The intent behind both is to prevent people from having higher privilege levels than they actually need.
Principle of Least Privilege: Users should only have the least amount of privileges required to perform their job and no more. This reduces authorization
exploitation by limiting access to resources such as targets, jobs, or monitoring templates for which they are not authorized.
Separation of Duties: Beyond limiting user privilege level, you also limit user duties, or the specific jobs they can perform. No user should be given
responsibility for more than one related function. This limits the ability of a user to perform a malicious action and then cover up that action.
References: https://cloud.google.com/kms/docs/separation-of-duties
NEW QUESTION 14
- (Exam Topic 4)
The current Dress4win system architecture has high latency to some customers because it is located in one data center.
As part of a future evaluation and optimization for performance in the cloud, Dress4Win wants to distribute its system architecture to multiple locations on Google
Cloud Platform. Which approach should they use?
A. Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances
in each region separately based on traffic.
B. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.
C. Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.
D. Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of a separate managed instance
groups.
Answer: A
NEW QUESTION 18
- (Exam Topic 4)
For this question, refer to the Dress4Win case study.
The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP). The operations team
needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects. What can they do?
Answer: A
NEW QUESTION 19
- (Exam Topic 4)
For this question, refer to the Dress4Win case study.
You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top
priority. Which cloud services should you choose?
A. Google Cloud Storage Coldline to store the data, and gsutil to access the data.
B. Google Cloud Storage Nearline to store the data, and gsutil to access the data.
C. Google Bigtable with US or EU as location to store the data, and gcloud to access the data.
D. BigQuery to store the data, and a web server cluster in a managed instance group to access the data.
E. Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.
Answer: A
Explanation:
References: https://cloud.google.com/storage/docs/storage-classes
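A minimal sketch of creating such a bucket programmatically with the google-cloud-storage client; the bucket name is hypothetical, and the same result is achievable with gsutil (e.g. gsutil mb -c coldline).

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("dress4win-sales-tax-records")  # hypothetical name
    bucket.storage_class = "COLDLINE"  # lowest-cost class for rarely read data

    # Coldline keeps storage cost minimal while the data stays immediately
    # readable with gsutil when auditors occasionally need it.
    client.create_bucket(bucket, location="US")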
NEW QUESTION 22
- (Exam Topic 4)
Dress4win has end to end tests covering 100% of their endpoints.
They want to ensure that the move to the cloud does not introduce any new bugs.
Which additional testing methods should the developers employ to prevent an outage?
A. They should run the end to end tests in the cloud staging environment to determine if the code is working as intended.
B. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
C. They should add additional unit tests and production scale load tests on their cloud staging environment.
D. They should add canary tests so developers can measure how much of an impact the new release causes to latency
Answer: C
NEW QUESTION 27
- (Exam Topic 4)
For this question, refer to the Dress4Win case study.
Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which
additional testing methods should the developers employ to prevent an outage?
A. They should enable Google Stackdriver Debugger on the application code to show errors in the code.
B. They should add additional unit tests and production scale load tests on their cloud staging environment.
C. They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.
D. They should add canary tests so developers can measure how much of an impact the new release causes to latency.
Answer: B
NEW QUESTION 31
- (Exam Topic 4)
For this question, refer to the Dress4Win case study.
As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view
these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they
log in. Which configuration should Dress4Win use?
Answer: A
NEW QUESTION 34
- (Exam Topic 4)
For this question, refer to the Dress4Win case study.
Dress4Win would like to become familiar with deploying applications to the cloud by successfully deploying some applications quickly, as is. They have asked for
your recommendation. What should you advise?
A. Identify self-contained applications with external dependencies as a first move to the cloud.
B. Identify enterprise applications with internal dependencies and recommend these as a first move to the cloud.
C. Suggest moving their in-house databases to the cloud and continue serving requests to on-premise applications.
D. Recommend moving their message queuing servers to the cloud and continue handling requests to on-premise applications.
Answer: A
Explanation:
https://cloud.google.com/blog/products/gcp/the-five-phases-of-migrating-to-google-cloud-platform
NEW QUESTION 36
- (Exam Topic 5)
As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their
GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication.
What should they do?
Answer: B
NEW QUESTION 39
- (Exam Topic 5)
You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called
echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?
Answer: A
Explanation:
https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps#updating_an_application
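One way to trigger the rolling update described in the linked page is to patch the Deployment's Pod template, which replaces Pods gradually while the Service keeps routing to healthy ones. A sketch with the Kubernetes Python client; the container and image names are hypothetical, and kubectl set image achieves the same.

    from kubernetes import client, config

    # Assumes cluster credentials are already configured, e.g. via
    # `gcloud container clusters get-credentials`.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Changing the Pod template's image triggers a rolling update.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "echo", "image": "gcr.io/my-project/echo:v2"}
                    ]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(
        name="echo-deployment", namespace="default", body=patch
    )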
NEW QUESTION 40
- (Exam Topic 5)
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of
the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to
serve both APIs. What should they do?
A. Configure a new load balancer for the new version of the API.
B. Reconfigure old clients to use a new endpoint for the new API.
C. Have the old API forward traffic to the new API based on the path.
D. Use separate backend pools for each API path behind the load balancer.
Answer: D
Explanation:
https://cloud.google.com/endpoints/docs/openapi/lifecycle-management
NEW QUESTION 43
- (Exam Topic 5)
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow
specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?
Answer: A
Explanation:
https://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-by-usi
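The linked approach hands each tester a signed URL that is valid for a limited time, so no Google Account is required. A minimal sketch with google-cloud-storage, using hypothetical bucket and object names; generating the URL requires service-account credentials with a private key.

    import datetime
    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("painting-test-uploads").blob("uploads/image-001.jpg")  # hypothetical

    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(hours=24),  # matches the 24-hour test window
        method="PUT",
        content_type="image/jpeg",
    )
    print(url)  # anyone holding this URL can upload the object until it expires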
NEW QUESTION 47
- (Exam Topic 5)
Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform
and monitor the KPIs with low latency. How should they capture the KPIs?
A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
Answer: A
Explanation:
https://cloud.google.com/monitoring/api/v3/metrics-details#metric-kinds
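Option A keeps the high-volume KPI time series in Bigtable, which absorbs heavy write rates and supports low-latency reads. A minimal write sketch; the instance, table, column family, and row-key layout are hypothetical, and the "stats" column family is assumed to already exist.

    from google.cloud import bigtable

    client = bigtable.Client(project="my-game-project")
    table = client.instance("kpi-instance").table("server_kpis")  # hypothetical

    # Row key combines server ID and timestamp so rows for one server
    # cluster together for efficient time-range scans.
    row = table.direct_row(b"game-server-42#20240101T120000")
    row.set_cell("stats", "concurrent_players", b"1542")
    row.commit()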
NEW QUESTION 48
- (Exam Topic 5)
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do?
Answer: B
Explanation:
https://cloud.google.com/sql/docs/mysql/high-availability
NEW QUESTION 51
- (Exam Topic 5)
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S)
load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address.
You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly.
What should you do?
A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
Answer: C
Explanation:
https://cloud.google.com/vpc/docs/using-firewalls
The best practice when configuring a health check is to check health and serve traffic on the same port. However, it is possible to perform health checks on one
port, but serve traffic on another. If you do use two different ports, ensure that firewall rules and services running on instances are configured appropriately. If you
run health checks and serve traffic on the same port, but decide to switch ports at some point, be sure to update both the backend service and the health check.
Backend services that do not have a valid global forwarding rule referencing it will not be health checked and will have no health status.
References: https://cloud.google.com/compute/docs/load-balancing/http/backend-service
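As a sketch, the missing rule can be created with the google-cloud-compute client. The ranges 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges; the project, port, and target tag here are hypothetical.

    from google.cloud import compute_v1

    firewall = compute_v1.Firewall()
    firewall.name = "allow-health-checks"  # hypothetical rule name
    firewall.network = "global/networks/default"
    firewall.direction = "INGRESS"
    # Google health-check probe source ranges (documented by Google Cloud).
    firewall.source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]

    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"
    allowed.ports = ["80"]  # the port the backend serves / is health-checked on
    firewall.allowed = [allowed]
    firewall.target_tags = ["backend"]  # hypothetical instance tag

    client = compute_v1.FirewallsClient()
    operation = client.insert(project="my-project", firewall_resource=firewall)
    operation.result()  # wait for the insert operation to finish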
NEW QUESTION 52
- (Exam Topic 5)
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application
deployments are taking too long.
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? Choose 2 answers.
D. Use larger machine types for your Google Container Engine node pools.
E. Copy the source after the package dependencies (Python and pip) are installed.
Answer: CE
Explanation:
The speed of deployment can be improved by limiting the size of the uploaded app, limiting the complexity of the build necessary in the Dockerfile, if present, and
by ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container
requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large
selection of packages from the repository.
References: https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU https://www.alpinelinux.org/about/
NEW QUESTION 53
- (Exam Topic 5)
Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time.
You want to use Google-recommended practices to detect anomalies in your company data. What should you do?
Answer: B
Explanation:
https://cloud.google.com/dataprep/
NEW QUESTION 54
- (Exam Topic 5)
Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the buckets. You want to
prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
A. * 1. Create a VPC Service Controls perimeter that includes the projects with the buckets.* 2. Create an access level with the CIDR of the office network.
B. * 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range.* 2. Use the Classless Inter-domain Routing (CIDR) of the
office network.
C. * 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets.* 2. Schedule the
Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.
D. * 1. Create a Cloud VPN to the office network.* 2. Configure Private Google Access for on-premises hosts.
Answer: A
Explanation:
For all Google Cloud services secured with VPC Service Controls, you can ensure that resources within a perimeter are accessed only from clients within
authorized VPC networks, using Private Google Access from either Google Cloud or on-premises. https://cloud.google.com/vpc-service-controls/docs/overview
You create a service perimeter across your VPC and any Cloud Storage bucket or other project resource to restrict access. Anything outside the perimeter
cannot access the resources within it.
NEW QUESTION 55
- (Exam Topic 5)
Your company is designing its data lake on Google Cloud and wants to develop different ingestion pipelines to collect unstructured data from different sources.
After the data is stored in Google Cloud, it will be processed in several data pipelines to build a recommendation engine for end users on the website. The
structure of the data retrieved from the source systems can change at any time. The data must be stored exactly as it was retrieved for reprocessing purposes in
case the data structure is incompatible with the current processing pipelines. You need to design an architecture to support the use case after you retrieve the
data. What should you do?
A. Send the data through the processing pipeline, and then store the processed data in a BigQuery table for reprocessing.
B. Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table.
C. Send the data through the processing pipeline, and then store the processed data in a Cloud Storage bucket for reprocessing.
D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.
Answer: D
NEW QUESTION 59
- (Exam Topic 5)
Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application that requires access to third-party services on the internet.
Your company does not allow any Compute Engine instance to have a public IP address on Google Cloud. You need to create a deployment strategy that adheres
to these guidelines. What should you do?
A. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the Internet.
B. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet.
C. Configure the GKE cluster as a route-based cluster.
Answer: B
Explanation:
A Cloud NAT gateway can perform NAT for nodes and Pods in a private cluster, which is a type of VPC-native cluster. The Cloud NAT gateway must be configured to apply to at least the following subnet IP address ranges for the subnet that your cluster uses:
Subnet primary IP address range (used by nodes)
Subnet secondary IP address range used for Pods in the cluster
Subnet secondary IP address range used for Services in the cluster
The simplest way to provide NAT for an entire private cluster is to configure a Cloud NAT gateway to apply to all of the cluster's subnet's IP address ranges.
https://cloud.google.com/nat/docs/overview
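A hedged sketch of creating the Cloud Router and NAT gateway with the google-cloud-compute client; the names, network, and region are hypothetical, and gcloud compute routers nats create achieves the same result.

    from google.cloud import compute_v1

    nat = compute_v1.RouterNat(
        name="gke-egress-nat",  # hypothetical NAT config name
        nat_ip_allocate_option="AUTO_ONLY",
        # Cover all subnet ranges, including Pod/Service secondary ranges.
        source_subnetwork_ip_ranges_to_nat="ALL_SUBNETWORKS_ALL_IP_RANGES",
    )
    router = compute_v1.Router(
        name="gke-nat-router",  # hypothetical router name
        network="projects/my-project/global/networks/my-vpc",
        nats=[nat],
    )
    op = compute_v1.RoutersClient().insert(
        project="my-project", region="us-central1", router_resource=router
    )
    op.result()  # wait for the router with NAT to be created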
NEW QUESTION 64
- (Exam Topic 5)
You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data
for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
Answer: B
Explanation:
Reference https://cloud.google.com/pubsub/docs/ordering
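The referenced ordering feature requires enabling message ordering on the publisher and attaching an ordering key to related messages; a minimal sketch with hypothetical project, topic, and key names (the subscription must also have ordering enabled).

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient(
        publisher_options=pubsub_v1.types.PublisherOptions(
            enable_message_ordering=True
        )
    )
    topic_path = publisher.topic_path("my-project", "stream-events")  # hypothetical

    # Messages sharing an ordering key are delivered to subscribers in the
    # order Pub/Sub received them.
    for i in range(3):
        publisher.publish(topic_path, f"event-{i}".encode(), ordering_key="backend-42")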
NEW QUESTION 69
- (Exam Topic 5)
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high
performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20TB of log archives
retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows
customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
Answer: D
Explanation:
https://cloud.google.com/compute/docs/disks
NEW QUESTION 70
- (Exam Topic 5)
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
Answer: A
Explanation:
https://cloud.google.com/transfer-appliance/docs/2.0/faq
NEW QUESTION 73
- (Exam Topic 5)
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12
months. You want to streamline and expedite the analysis and audit process. What should you do?
A. Create custom Google Stackdriver alerts and send them to the auditor.
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor.
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLS and views to limit an auditor's view.
D. Enable Google Cloud Storage (GCS) log export to audit logs Into a GCS bucket and delegate access to the bucket.
Answer: D
Explanation:
Export the logs to a Google Cloud Storage bucket in the Archive storage class, since the logs will not be accessed for a year; the price is $0.004 per GB per month. The price for
long-term storage in BigQuery is $0.01 per GB per month (250% more). For the analysis itself, whenever the auditors visit (once per year), you can use BigQuery
with the GCS bucket as an external data source. BigQuery supports querying Cloud Storage data from these storage classes:
Standard, Nearline, Coldline, and Archive.
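The "GCS bucket as external data source" pattern mentioned above looks roughly like this with the BigQuery client; the bucket path and table alias are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()

    # Define a temporary external table over the exported audit-log files.
    ext = bigquery.ExternalConfig("NEWLINE_DELIMITED_JSON")
    ext.source_uris = [
        "gs://iam-audit-archive/cloudaudit.googleapis.com/activity/*"  # hypothetical
    ]
    ext.autodetect = True  # infer the schema from the JSON files

    job_config = bigquery.QueryJobConfig(table_definitions={"audit_logs": ext})
    for row in client.query("SELECT * FROM audit_logs LIMIT 10", job_config=job_config):
        print(row)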
NEW QUESTION 75
- (Exam Topic 5)
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company’s mobile
and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv
files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-MB internet connection.
What actions will meet your company’s needs?
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
Answer: B
Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering
NEW QUESTION 80
- (Exam Topic 5)
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you
do?
Answer: B
Explanation:
https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster
NEW QUESTION 81
- (Exam Topic 5)
The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The
database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual
machine with 80 GB of SSD persistent disk. What should they change to get better performance from this system?
Answer: C
NEW QUESTION 82
- (Exam Topic 5)
You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate encryption keys
outside of Google Cloud. You need to implement a solution. What should you do?
A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a BigQuery dataset.
B. Generate a new key in Cloud Key Management Service (Cloud KMS). Create a dataset in BigQuery using the customer-managed key option and select the created key.
C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
Answer: D
Explanation:
https://cloud.google.com/bigquery/docs/customer-managed-encryption
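A minimal sketch of pointing a new BigQuery dataset at a Cloud KMS key; the project, dataset, and key-ring names are hypothetical placeholders, and the key material itself would have been imported into Cloud KMS from the company's external key-generation process. The BigQuery service account also needs the encrypter/decrypter role on the key.

    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = bigquery.Dataset("my-project.sensitive_dw")  # hypothetical dataset ID
    dataset.default_encryption_configuration = bigquery.EncryptionConfiguration(
        # Hypothetical resource name of the imported key in Cloud KMS.
        kms_key_name="projects/my-project/locations/us/keyRings/dw-ring/cryptoKeys/imported-key"
    )
    client.create_dataset(dataset)  # tables default to this CMEK key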
NEW QUESTION 83
- (Exam Topic 5)
You need to migrate Hadoop jobs for your company’s Data Science team without modifying the underlying infrastructure. You want to minimize costs and
infrastructure management effort. What should you do?
Answer: B
Explanation:
Reference: https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-jobs
NEW QUESTION 86
- (Exam Topic 5)
You are designing an application for use only during business hours. For the minimum viable product release, you’d like to use a managed product that
automatically “scales to zero” so you don’t incur costs when there is no activity.
Which primary compute resource should you choose?
A. Cloud Functions
B. Compute Engine
C. Kubernetes Engine
D. App Engine flexible environment
Answer: A
Explanation:
https://cloud.google.com/serverless-options
NEW QUESTION 90
- (Exam Topic 5)
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per
second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not
receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
Answer: B
Explanation:
https://cloud.google.com/run/docs/about-instance-autoscaling https://cloud.google.com/blog/topics/developers-practitioners/bigtable-vs-bigquery-whats-difference
NEW QUESTION 91
- (Exam Topic 5)
Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more
cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling. Which two compute products should you choose? Choose 2 answers
Answer: BC
Explanation:
B: With Container Engine, Google will automatically deploy your cluster for you, update, patch, secure the nodes.
Kubernetes Engine's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run.
C: Solutions like Datastore, BigQuery, AppEngine, etc are truly NoOps.
App Engine by default scales the number of instances running up and down to match the load, thus providing consistent performance for your app at all times
while minimizing idle instances and thus reducing cost.
Note: At a high level, NoOps means that there is no infrastructure to build out and manage during usage of the platform. Typically, the compromise you make with
NoOps is that you lose control of the underlying infrastructure.
References:
https://www.quora.com/How-well-does-Google-Container-Engine-support-Google-Cloud-Platform%E2%80%9
NEW QUESTION 94
- (Exam Topic 5)
You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but
autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you
do?
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instance-groups/autohealing-instances-in-migs
Health checks used for autohealing should be conservative so they don't preemptively delete and recreate your instances. When an autohealer health check is too
aggressive, the autohealer might mistake busy instances for failed instances and unnecessarily restart them, reducing availability
NEW QUESTION 99
- (Exam Topic 5)
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You
provision the Google Cloud Resource Manager and set up yourself as the org admin. What Google Cloud Identity and Access Management (Cloud IAM) roles
should you give to the security team?
Answer: B
Explanation:
https://cloud.google.com/iam/docs/using-iam-securely
Answer: B
Explanation:
https://cloud.google.com/logging/docs/agent/default-logs
Answer: B
Explanation:
https://services.google.com/fh/files/misc/cloud_center_of_excellence.pdf
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the
Shopping folder.
B. Assign the development team group only the Project Viewer role on the Finance folder.
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the
Organization.
D. Assign the development team group only the Project Owner role on the Shopping folder.
Answer: C
Explanation:
https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
"Roles are always inherited, and there is no way to explicitly remove a permission for a lower-level resource that is granted at a higher level in the resource
hierarchy. Given the above example, even if you were to remove the Project Editor role from Bob on the "Test GCP Project", he would still inherit that role from the
"Dept Y" folder, so he would still have the permissions for that role on "Test GCP Project"."
Reference: https://cloud.google.com/resource-manager/docs/creating-managing-folders
Answer: D
Explanation:
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#billing_and_management
Answer: D
Explanation:
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#gcloud:-cloud-functions https://cloud.google.com/blog/products/networking/better-load-balancing-for-app-engine-cloud-run-and-functio
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
Answer: C
Explanation:
https://cloud.google.com/container-registry/docs/overview
Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and
stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the
deployment tool has access to deploy new versions.
Answer: AC
Answer: C
Explanation:
https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application
Answer: D
Explanation:
https://cloud.google.com/solutions/dr-scenarios-planning-guide
Answer: B
Explanation:
The error is most likely caused by an access-scope issue. When you create a new instance, it runs as the Compute Engine default service account, but most
service access, including BigQuery, is not enabled by default. The instance therefore has the default service account without the required scope. You can stop
the instance, edit it to change the access scopes, and restart it to enable BigQuery access. Alternatively, running the script on a new virtual machine created
with the BigQuery access scope enabled also works.
https://cloud.google.com/compute/docs/access/service-accounts
Answer: B
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the
Google-recommended way for your application to authenticate to the required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.
C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
Answer: A
Answer: A
Explanation:
Reference: https://cloud.google.com/files/BigQueryTechnicalWP.pdf
A. Enable the Cloud Trace API on your project and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics.
B. Configure Anthos Config Management on your cluster and create a yaml file that defines the SLO and alerting policy you want to deploy in your cluster.
C. Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an Alerting Policy in case this metric exceeds the threshold.
D. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO).
Answer: D
Explanation:
https://cloud.google.com/service-mesh/docs/overview https://cloud.google.com/service-mesh/docs/observability/slo-overview
A. Capture existing users' input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones.
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones.
C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones.
D. Capture existing users' input, and replay captured user load until resource utilization crosses 80%. Also, derive the estimated number of users based on existing users' usage of the app, and deploy enough resources to handle 200% of expected load.
Answer: A
Answer: A
Explanation:
Reference https://cloud.google.com/storage/docs/json_api/v1/status-codes
Your company is developing a web-based application. You need to make sure that production deployments are linked to source code commits and are fully
auditable. What should you do?
A. Make sure a developer is tagging the code commit with the date and time of commit
B. Make sure a developer is adding a comment to the commit that links to the deployment.
C. Make the container tag match the source code commit hash.
D. Make sure the developer is tagging the commits with :latest
Answer: C
Explanation:
From: https://cloud.google.com/architecture/best-practices-for-building-containers Under: Tagging using the Git commit hash (bottom of page almost)
"In this case, a common way of handling version numbers is to use the Git commit SHA-1 hash (or a short version of it) as the version number. By design, the Git
commit hash is immutable and references a specific version of your software.
You can use this commit hash as a version number for your software, but also as a tag for the Docker image built from this specific version of your software. Doing
so makes Docker images traceable: because in this case the image tag is immutable, you instantly know which specific version of your software is running inside a
given container."
A. * 1. Copy popular songs into Cloud SQL as a blob.* 2. Update application code to retrieve data from Cloud SQL when Cloud Storage is overloaded.
B. * 1. Create a managed instance group with Compute Engine instances.* 2. Create a global load balancer and configure it with two backends:* Managed instance group* Cloud Storage bucket* 3. Enable Cloud CDN on the bucket backend.
C. * 1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances.* 2. Serve music files directly from the backend Compute Engine instance.
D. * 1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances.* 2. Download popular songs to Cloud Filestore.* 3. Serve music files directly from the backend Compute Engine instance.
Answer: B
Answer: B
Explanation:
https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#transfer-optio https://cloud.google.com/storage-transfer-service
Answer: A
Explanation:
https://cloud.google.com/appengine/docs/standard/php/memcache/using
D. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
E. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
F. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
Answer: A
Explanation:
Reference: https://cloud.google.com/bigquery/docs/running-queries
A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.
Answer: C
Explanation:
Reference https://cloud.google.com/architecture/setting-up-private-access-to-cloud-apis-through-vpn-tunnels
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances.
B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active Directory traffic for all instances.
C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block all traffic for all instances.
D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 1000 to block all traffic for all instances.
Answer: A
Explanation:
https://cloud.google.com/vpc/docs/firewalls
A. StatefulSets
B. Role-based access control
C. Container environment variables
D. Persistent Volumes
Answer: A
Explanation:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
Answer: A
Explanation:
Reference: https://cloud.google.com/storage/docs/using-bucket-lock
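Bucket Lock enforces a retention policy that, once locked, can no longer be reduced or removed, which suits long-term compliance retention. A sketch with google-cloud-storage and a hypothetical bucket name:

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("sales-tax-records")  # hypothetical bucket

    bucket.retention_period = 10 * 365 * 24 * 60 * 60  # 10 years, in seconds
    bucket.patch()

    # Locking is irreversible: objects cannot be deleted or overwritten
    # until they satisfy the retention period.
    bucket.lock_retention_policy()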
Answer: C
Explanation:
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. You can use folders to group
projects under an organization in a hierarchy. For example, your organization might contain multiple departments, each with its own set of GCP resources. Folders
allow you to group these resources on a per-department basis. Folders are used to group resources that share common IAM policies. While a folder can contain
multiple folders or resources, a given folder or resource can have exactly one parent.
References: https://cloud.google.com/resource-manager/docs/creating-managing-folders
A. * 1. Update your GKE cluster to use Cloud Operations for GKE.* 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
B. * 1. Create a new GKE cluster with Cloud Operations for GKE enabled.* 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the
new cluster.* 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
C. * 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus.* 2. Set an alert to trigger whenever the application returns an error.
D. * 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus.* 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.* 3. Set an alert to trigger whenever the application returns an error.
Answer: A
Explanation:
Reference: https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
Answer: C
Explanation:
https://cloud.google.com/appengine/docs/the-appengine-environments
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days.
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron.
Answer: B
Explanation:
https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
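For reference, the JSON rule pushed with gsutil lifecycle set amounts to a Delete action with an age condition; the same policy can also be applied from Python with the google-cloud-storage client (bucket name hypothetical).

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("backups")  # hypothetical bucket name

    # Equivalent to the lifecycle JSON rule:
    # {"action": {"type": "Delete"}, "condition": {"age": 90}}
    bucket.add_lifecycle_delete_rule(age=90)
    bucket.patch()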
A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine
Answer: A
Explanation:
Reference: https://cloud.google.com/terms/services https://cloud.google.com/appengine/docs/standard/go/how-requests-are-handled
A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
D. Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
Answer: C
A. • Append metadata to file body.• Compress individual files.• Name files with serverName-Timestamp.• Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket.
B. • Batch every 10,000 events with a single manifest file for metadata.• Compress event files and manifest file into a single archive file.• Name files using serverName-EventSequence.• Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket.
C. • Compress individual files.• Name files with serverName-EventSequence.• Save files to one bucket.• Set custom metadata headers for each object after saving.
D. • Append metadata to file body.• Compress individual files.• Name files with a random prefix pattern.• Save files to one bucket.
Answer: D
Explanation:
In order to maintain a high request rate, avoid using sequential names. Using completely random object names will give you the best load distribution.
Randomness after a common prefix is effective under the prefix https://cloud.google.com/storage/docs/request-rate
Answer: AEF
Explanation:
References: https://cloud.google.com/appengine/docs/standard/java/tools/uploadinganapp https://cloud.google.com/appengine/docs/standard/java/building-app/cloud-sql
A. Perform the following:1) Create a managed instance group with f1-micro type machines.2) Use a startup script to clone the repository, check out the production
branch, install the dependencies,and start the Python app.3) Restart the instances to automatically deploy new production releases.
B. Perform the following:1) Create a managed instance group with n1-standard-1 type machines.2) Build a Compute Engine image from the production branch that
contains all of the dependencies and automatically starts the Python app.3) Rebuild the Compute Engine image, and update the instance template to deploy new
production releases.
C. Perform the following:1) Create a Kubernetes Engine cluster with n1-standard-1 type machines.2) Build a Docker image from the production branch with all of
the dependencies, and tag it with the version number.3) Create a Kubernetes Deployment with the imagePullPolicy set to “IfNotPresent” in the staging
namespace, and then promote it to the production namespace after testing.
D. Perform the following:1) Create a Kubernetes Engine (GKE) cluster with n1-standard-4 type machines.2) Build a Docker image from the master branch with all of
the dependencies, and tag it with “latest”.3) Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to “Always”. Restart the
Answer: D
Explanation:
https://cloud.google.com/compute/docs/instance-templates
Answer: D
Explanation:
Reference: https://cloud.google.com/blog/products/sap-google-cloud/best-practices-for-sap-app-server- autoscaling-on-google-cloud
A. * 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.* 2. Create a second project with a standalone VPC and
assign the Compute Admin role to the development team.* 3. Use Cloud VPN to join the two VPCs.
B. * 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role
to the development team.
C. * 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team.* 2. Create a second project without a VPC, configure it as a
Shared VPC service project, and assign the Compute Admin role to the development team.
D. * 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.* 2. Create a second project with a standalone VPC and
assign the Compute Admin role to the development team.* 3. Use VPC Peering to join the two VPCs.
Answer: C
Explanation:
In this scenario, a large organization has a central team that manages security and networking controls for the entire organization. Developers do not have permission to change any network or security settings defined by the security and networking team, but they are allowed to create resources such as virtual machines in shared subnets. To facilitate this, the organization uses a Shared VPC (Virtual Private Cloud). A Shared VPC is a VPC network of RFC 1918 IP space that associated projects (service projects) can use. Developers in the associated projects can create VM instances in the Shared VPC network's subnets, while the organization's network and security admins create the subnets, VPNs, and firewall rules used by all projects in the VPC network.
https://cloud.google.com/iam/docs/job-functions/networking#single_team_manages_security_network_for_organization
Reference: https://cloud.google.com/vpc/docs/shared-vpc
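A minimal sketch of that setup, assuming hypothetical project IDs host-proj and dev-proj and a made-up group address:
# Enable the host project for Shared VPC (run by a Shared VPC Admin).
gcloud compute shared-vpc enable host-proj
# Attach the development team's project as a service project.
gcloud compute shared-vpc associated-projects add dev-proj \
    --host-project host-proj
# Grant the development team Compute Admin in their service project.
gcloud projects add-iam-policy-binding dev-proj \
    --member="group:dev-team@example.com" --role="roles/compute.admin"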
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.
D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.
Answer: A
Explanation:
Reference: https://cloud.google.com/compute/docs/instances/connecting-advanced
Answer: C
Explanation:
https://cloud.google.com/appengine/docs/standard/python3/connecting-vpc https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases#on_premises
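For context, App Engine reaches resources inside a VPC through a Serverless VPC Access connector; a hedged sketch, with the connector name, region, and IP range as illustrative assumptions:
# Create a Serverless VPC Access connector that App Engine can use to
# reach private IPs in the VPC (names and range are examples).
gcloud compute networks vpc-access connectors create app-connector \
    --region us-central1 --network default --range 10.8.0.0/28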
A. Engage with a security company to run web scrapes that look for your users’ authentication data on malicious websites and notify you if any is found.
B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.
Answer: C
Answer: B
Answer: B
Explanation:
https://cloud.google.com/interconnect/docs/how-to/direct-peering
A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific Kubernetes Engine container that is serving the unresponsive part of the application.
C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs.
D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
Answer: B
A. * 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.* 2. Stop the on-premises application.* 3. Create a mysqldump of the on-premises MySQL server.
B. * 4. Upload the dump to a Cloud Storage bucket.* 5. Import the dump into Cloud SQL.* 6. Modify the source code of the application to write queries to both databases and read from its local database.* 7. Start the Compute Engine application.
Answer: C
Explanation:
External replica promotion migration In the migration strategy of external replica promotion, you create an external database replica and synchronize the existing
data to that replica. This can happen with minimal downtime to the existing database. When you have a replica database, the two databases have different roles
that are referred to in this document as primary and replica. After the data is synchronized, you promote the replica to be the primary in order to move the
management layer with minimal impact to database uptime. In Cloud SQL, an easy way to accomplish the external replica promotion is to use the automated
migration workflow. This process automates many of the steps that are needed for this type of migration.
https://cloud.google.com/architecture/migrating-mysql-to-cloudsql-concept
- The best option for migrating your MySQL database is to use an external replica promotion. In this strategy, you create a replica database and set your existing database as the primary. You wait until the two databases are in sync, and you then promote your MySQL replica database to be the primary. This process minimizes database downtime related to the database migration. https://cloud.google.com/architecture/migrating-mysql-to-cloudsql-concept#external_replica_promotion_migration
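Once the external replica has caught up, the promotion itself is one command; a sketch assuming a hypothetical replica instance named mysql-replica:
# Check the replica is healthy before promoting.
gcloud sql instances describe mysql-replica --format="value(state)"
# Promote the Cloud SQL replica to a standalone primary instance.
gcloud sql instances promote-replica mysql-replica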
Answer: B
Answer: C
A. Direct them to download and install the Google StackDriver logging agent.
B. Send them a list of online resources about logging best practices.
C. Help them define their requirements and assess viable logging tools.
D. Help them upgrade their current tool to take advantage of any new features.
Answer: C
Explanation:
Help them define their requirements and assess viable logging tools. They know their requirements and the problems with their existing tools. While Stackdriver Logging and Error Reporting may well meet all their requirements, other tools might also meet their needs. They need you to provide the expertise to assess new tools, specifically logging tools that can “capture errors and help them analyze their historical log data”.
References: https://cloud.google.com/logging/docs/agent/installation
- (Exam Topic 5)
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web tier to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?
Answer: D
Explanation:
https://aws.amazon.com/blogs/aws/building-three-tier-architectures-with-security-groups/
Google Cloud Platform(GCP) enforces firewall rules through rules and tags. GCP rules and tags can be defined once and used across all regions.
References: https://cloud.google.com/docs/compare/openstack/ https://aws.amazon.com/it/blogs/aws/building-three-tier-architectures-with-security-groups/
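As a hedged sketch of the tag-based rules this design relies on (network name, tags, and ports are made up), only web-to-API and API-to-database traffic is allowed:
# Allow only the web tier to reach the API tier.
gcloud compute firewall-rules create allow-web-to-api \
    --network=three-tier-vpc --allow=tcp:8080 \
    --source-tags=web --target-tags=api
# Allow only the API tier to reach the database tier.
gcloud compute firewall-rules create allow-api-to-db \
    --network=three-tier-vpc --allow=tcp:3306 \
    --source-tags=api --target-tags=db
# With no web-to-db rule and default-deny ingress, the web tier cannot
# reach the database tier directly.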
Answer: C
Answer: C
Explanation:
https://cloud.google.com/storage/docs/access-logs
References: https://cloud.google.com/logging/docs/reference/tools/gcloud-logging
Answer: B
Explanation:
Google Cloud Dataproc is a fast, easy-to-use, low-cost and fully managed service that lets you run the Apache Spark and Apache Hadoop ecosystem on Google
Cloud Platform. Cloud Dataproc provisions big or small clusters rapidly, supports many popular job types, and is integrated with other Google Cloud Platform
services, such as Google Cloud Storage and Stackdriver Logging, thus helping you reduce TCO.
References: https://cloud.google.com/dataproc/docs/resources/faq
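A minimal sketch of the ephemeral-cluster pattern (cluster name, region, and size are illustrative); the SparkPi job is only a stand-in for a real workload:
# Provision a small cluster, run a Spark job, then tear it down.
gcloud dataproc clusters create analysis-cluster \
    --region=us-central1 --num-workers=2
gcloud dataproc jobs submit spark --cluster=analysis-cluster \
    --region=us-central1 --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000
gcloud dataproc clusters delete analysis-cluster --region=us-central1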
A. Ensure that the load tests validate the performance of Cloud Bigtable.
B. Create a separate Google Cloud project to use for the load-testing environment.
C. Schedule the load-testing tool to regularly run against the production environment.
D. Ensure all third-party systems your services use are capable of handling high load.
E. Instrument the production services to record every transaction for replay by the load-testing tool.
F. Instrument the load-testing tool and the target services with detailed logging and metrics collection.
Answer: ABF
Answer: B
A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the
new API.
B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an
update to the API.
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
D. Use a versioning strategy for the APIs that adds the suffix “DEPRECATED” to the current API version number on every backward-incompatible change.
E. Use the current version number for the new API.
Answer: C
Explanation:
https://cloud.google.com/apis/design/versioning
All Google API interfaces must provide a major version number, which is encoded at the end of the protobuf package, and included as the first part of the URI path
for REST APIs. If an API introduces a breaking change, such as removing or renaming a field, it must increment its API version number to ensure that existing user
code does not suddenly break.
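To make the rule concrete, a hypothetical example (host and path are made up): the breaking change ships only under the incremented major version, while v1 keeps serving existing clients:
# Existing v1 clients keep working unchanged.
curl https://api.example.com/v1/vehicles/42
# The backward-incompatible change (e.g. a renamed field) is exposed
# only under the new major version.
curl https://api.example.com/v2/vehicles/42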
A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage
Answer: C
Explanation:
This is time-series data, so Bigtable is the best fit. https://cloud.google.com/bigtable/docs/schema-design-time-series
Google Cloud Bigtable is a scalable, fully managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for:
Low-latency read/write access
High-throughput analytics
Native time series support
Common workloads:
IoT, finance, adtech
Personalization, recommendations
Monitoring
Geospatial datasets
Graphs
References: https://cloud.google.com/storage-options/
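A sketch of a time-series row-key design using the cbt CLI; the table, column family, and key layout are illustrative assumptions:
# Create a table and column family for sensor readings.
cbt createtable sensor-readings
cbt createfamily sensor-readings metrics
# Row key = sensor ID + timestamp, so one sensor's readings are stored
# contiguously and can be scanned by time range.
cbt set sensor-readings sensor42#20240101-1200 metrics:temp=21.5
cbt read sensor-readings prefix=sensor42#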
A. Create a Compute Engine instance template using the most recent Debian image.
B. Create an instance from this template, and install and configure the application as part of the startup script.
C. Repeat this process whenever a new Google-managed Debian image becomes available.
D. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
E. Create an instance with the latest available Debian image.
F. Connect to the instance via SSH, and install and configure the application on the instance.
G. Repeat this process whenever a new Google-managed Debian image becomes available.
Answer: B
Explanation:
Reference: https://cloud.google.com/compute/docs/os-patch-management
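For reference, a patch rollout with OS patch management can be a single patch job; a sketch that assumes the OS Config agent is already installed on the instances:
# Run a one-off patch job across all eligible instances in the project.
gcloud compute os-config patch-jobs execute \
    --instance-filter-all --display-name="debian-security-updates"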
Answer: D
Explanation:
In certain scenarios, an opportunistic update is useful because you don't want to cause instability to the system if it can be avoided. For example, if you have a non-
critical update that can be applied as necessary without any urgency and you have a MIG that is actively being autoscaled, perform an opportunistic update so that
Compute Engine does not actively tear down your existing instances to apply the update. When resizing down, the autoscaler preferentially terminates instances
with the old template as well as instances that are not yet in a RUNNING state.
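A sketch of requesting such an opportunistic rollout, assuming a hypothetical MIG and instance template:
# Apply a new template opportunistically: existing VMs are replaced only
# when the autoscaler or other events recreate them anyway.
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-new-template \
    --type=opportunistic --zone=us-central1-a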
Answer: B
Explanation:
A microservice runs in Pods, and Pods are scheduled onto nodes (the cluster's virtual machines). Once deployed, the microservice's replicas are typically spread across many nodes, so destroying one node does not reliably mimic the microservice crashing: replicas keep serving from the other nodes. Injecting faults at the service mesh layer targets the microservice itself.
link: https://istio.io/latest/docs/tasks/traffic-management/fault-injection/
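To exercise the microservice itself rather than a node, faults can be injected at the mesh layer; a hedged Istio sketch in which the "checkout" service name and values are made up:
# Inject an HTTP 500 for all requests routed to the "checkout" service.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: checkout-fault
spec:
  hosts:
  - checkout
  http:
  - fault:
      abort:
        percentage:
          value: 100.0
        httpStatus: 500
    route:
    - destination:
        host: checkout
EOF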
Answer: A
Explanation:
https://cloud.google.com/logging/docs/audit/
A. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.Encrypt data with a customer-supplied encryption key
when storing files in Cloud Storage.
B. Assign custom IAM roles to the Google Groups you created in order to enforce security requirements.Enable default storage encryption before storing files in
Cloud Storage.
C. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements.Utilize Google’s default encryption at rest when storing
files in Cloud Storage.
D. Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements.
E. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.
Answer: D
Explanation:
https://cloud.google.com/iam/docs/understanding-service-accounts
Answer: C
Answer: B
Explanation:
https://cloud.google.com/functions/docs/securing/authenticating#authenticating_function_to_function_calls
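For context, the documented pattern has the calling function fetch a Google-signed ID token from the metadata server and present it to the receiver; a sketch with a placeholder function URL:
# Fetch an ID token whose audience is the receiving function's URL
# (placeholder URL; run from the calling function's environment).
RECEIVER="https://us-central1-example-project.cloudfunctions.net/receiver"
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${RECEIVER}")
# Present the token as a Bearer credential when calling the receiver.
curl -H "Authorization: Bearer ${TOKEN}" "${RECEIVER}"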
Answer: C
Explanation:
https://cloud.google.com/blog/products/management-tools/how-to-use-pubsub-as-a-cloud-monitoring-notificatio
A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process.
B. Clean the data in a Cloud Dataflow pipeline.
C. Create a Cloud Function that reads data from BigQuery and cleans it.
D. Trigger it.
E. Trigger the Cloud Function from a Compute Engine instance.
F. Create a SQL statement on the data in BigQuery, and save it as a view.
G. Run the view daily, and save the result to a new table.
H. Use Cloud Dataprep and configure the BigQuery tables as the source.
I. Schedule a daily job to clean the data.
Answer: A
Answer: B
Explanation:
https://cloud.google.com/run/docs/multiple-regions
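A sketch of the multi-region pattern referenced above (project, image, and regions are illustrative); in practice a global HTTPS load balancer with serverless NEGs would front the regional services:
# Deploy the same image to several regions.
for REGION in us-central1 europe-west1 asia-east1; do
  gcloud run deploy my-service \
      --image=gcr.io/example-project/my-app:v1 \
      --region="$REGION" --platform=managed
done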
A. Open a support case regarding the CVE and chat with the support engineer.
B. Read the CVEs from the Google Cloud Status Dashboard to understand the impact.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact
D. Post a question regarding the CVE in Stack Overflow to get an explanation
E. Post a question regarding the CVE in a Google Cloud discussion group to get an explanation
Answer: AC
Explanation:
https://cloud.google.com/support/bulletins
A. Create a BigQuery table for the European data, and set the table retention period to 36 months.
B. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
C. Create a BigQuery table for the European data, and set the table retention period to 36 months.
D. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
E. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months.
F. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
G. Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months.
H. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
Answer: C
Explanation:
https://cloud.google.com/bigquery/docs/managing-partitioned-tables#partition-expiration https://cloud.google.com/storage/docs/lifecycle
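A sketch covering both halves of the answer, with hypothetical dataset and bucket names; 36 months is approximated as 94,608,000 seconds for BigQuery and 1,095 days for the lifecycle rule:
# BigQuery: day-partitioned table whose partitions expire after ~36 months.
bq mk --table --time_partitioning_type=DAY \
    --time_partitioning_expiration=94608000 eu_dataset.user_events
# Cloud Storage: delete objects once they are older than ~36 months.
cat > lifecycle.json <<EOF
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 1095}}]}
EOF
gsutil lifecycle set lifecycle.json gs://example-eu-archive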