What the Professional-Cloud-DevOps-Engineer Preparation Exam Download Is

It is faster and easier to pass the Google Professional-Cloud-DevOps-Engineer exam by using verified Google Cloud Certified - Professional Cloud DevOps Engineer Exam questions and answers. Get immediate access to the updated Professional-Cloud-DevOps-Engineer exam, find the same core-area Professional-Cloud-DevOps-Engineer questions with professionally verified answers, and PASS your exam with a high score.

Online Google Professional-Cloud-DevOps-Engineer free dumps demo Below:

NEW QUESTION 1
Your application runs on Google Cloud Platform (GCP). You need to implement Jenkins for deploying application releases to GCP. You want to streamline the release process, lower operational toil, and keep user data secure. What should you do?

  • A. Implement Jenkins on local workstations.
  • B. Implement Jenkins on Kubernetes on-premises
  • C. Implement Jenkins on Google Cloud Functions.
  • D. Implement Jenkins on Compute Engine virtual machines.

Answer: D

Explanation:
https://plugins.jenkins.io/google-compute-engine/

NEW QUESTION 2
You are performing a semiannual capacity planning exercise for your flagship service. You expect a service user growth rate of 10% month-over-month over the next six months. Your service is fully containerized and runs on Google Cloud Platform (GCP). using a Google Kubernetes Engine (GKE) Standard regional cluster on three zones with cluster autoscaler enabled. You currently consume about 30% of your total deployed CPU capacity, and you require resilience against the failure of a zone. You want to ensure that your users experience minimal negative impact as a result of this growth or as a result of zone failure, while avoiding unnecessary costs. How should you prepare to handle the predicted growth?

  • A. Verify the maximum node pool size, enable a horizontal pod autoscaler, and then perform a load test to verify your expected resource needs.
  • B. Because you are deployed on GKE and are using a cluster autoscaler, your GKE cluster will scale automatically, regardless of growth rate.
  • C. Because you are at only 30% utilization, you have significant headroom and you won't need to add any additional capacity for this rate of growth.
  • D. Proactively add 60% more node capacity to account for six months of 10% growth rate, and then perform a load test to make sure you have enough capacity.

Answer: A

Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler
The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload's CPU or memory consumption.
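The capacity arithmetic behind this question is worth making explicit. The sketch below uses only the numbers given in the question (10% month-over-month growth, six months, 30% current utilization); everything else is illustrative:

```python
def projected_utilization(current: float, monthly_growth: float, months: int) -> float:
    """Apply compound month-over-month growth to the current utilization."""
    return current * (1 + monthly_growth) ** months

growth_factor = (1 + 0.10) ** 6                      # traffic multiplier after six months
future_util = projected_utilization(0.30, 0.10, 6)   # projected CPU utilization

print(round(growth_factor, 2))  # 1.77 -> roughly 77% more traffic
print(round(future_util, 2))    # 0.53 -> below capacity on paper, but only a
                                # load test (option A) verifies real headroom
```

The raw headroom math alone (option C's reasoning) ignores zone failure and pod-level scaling behavior, which is why verifying node pool limits, enabling an HPA, and load testing is still the right preparation.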

NEW QUESTION 3
You have an application running in Google Kubernetes Engine. The application invokes multiple services per request but responds too slowly. You need to identify which downstream service or services are causing the delay. What should you do?

  • A. Analyze VPC flow logs along the path of the request.
  • B. Investigate the Liveness and Readiness probes for each service.
  • C. Create a Dataflow pipeline to analyze service metrics in real time.
  • D. Use a distributed tracing framework such as OpenTelemetry or Stackdriver Trace.

Answer: D

Explanation:
A distributed tracing framework records the latency of each downstream call within a request, which directly identifies the service or services responsible for the delay.

NEW QUESTION 4
You are responsible for creating and modifying the Terraform templates that define your Infrastructure. Because two new engineers will also be working on the same code, you need to define a process and adopt a tool that will prevent you from overwriting each other's code. You also want to ensure that you capture all updates in the latest version. What should you do?

  • A. • Store your code in a Git-based version control system.• Establish a process that allows developers to merge their own changes at the end of each day.• Package and upload code to a versioned Cloud Storage bucket as the latest master version.
  • B. • Store your code in a Git-based version control system.• Establish a process that includes code reviews by peers and unit testing to ensure integrity and functionality before integration of code.• Establish a process where the fully integrated code in the repository becomes the latest master version.
  • C. • Store your code as text files in Google Drive in a defined folder structure that organizes the files.• At the end of each day, confirm that all changes have been captured in the files within the folder structure.• Rename the folder structure with a predefined naming convention that increments the version.
  • D. • Store your code as text files in Google Drive in a defined folder structure that organizes the files.• At the end of each day, confirm that all changes have been captured in the files within the folder structure and create a new .zip archive with a predefined naming convention.• Upload the .zip archive to a versioned Cloud Storage bucket and accept it as the latest version.

Answer: B

NEW QUESTION 5
You are running a real-time gaming application on Compute Engine that has a production and testing environment. Each environment has their own Virtual Private Cloud (VPC) network. The application frontend and backend servers are located on different subnets in the environment's VPC. You suspect there is a malicious process communicating intermittently in your production frontend servers. You want to ensure that network traffic is captured for analysis. What should you do?

  • A. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 0.5.
  • B. Enable VPC Flow Logs on the production VPC network frontend and backend subnets only with a sample volume scale of 1.0.
  • C. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 0.5. Apply changes in testing before production.
  • D. Enable VPC Flow Logs on the testing and production VPC network frontend and backend subnets with a volume scale of 1.0. Apply changes in testing before production.

Answer: D

NEW QUESTION 6
Some of your production services are running in Google Kubernetes Engine (GKE) in the eu-west-1 region. Your build system runs in the us-west-1 region. You want to push the container images from your build system to a scalable registry to maximize the bandwidth for transferring the images to the cluster. What should you do?

  • A. Push the images to Google Container Registry (GCR) using the gcr.io hostname.
  • B. Push the images to Google Container Registry (GCR) using the us.gcr.io hostname.
  • C. Push the images to Google Container Registry (GCR) using the eu.gcr.io hostname.
  • D. Push the images to a private image registry running on a Compute Engine instance in the eu-west-1 region.

Answer: C

Explanation:
The GCR hostname determines where images are stored:

  • gcr.io — data centers in the United States
  • asia.gcr.io — data centers in Asia
  • eu.gcr.io — data centers within member states of the European Union
  • us.gcr.io — data centers in the United States

NEW QUESTION 7
You support a popular mobile game application deployed on Google Kubernetes Engine (GKE) across several Google Cloud regions. Each region has multiple Kubernetes clusters. You receive a report that none of the users in a specific region can connect to the application. You want to resolve the incident while following Site Reliability Engineering practices. What should you do first?

  • A. Reroute the user traffic from the affected region to other regions that don’t report issues.
  • B. Use Stackdriver Monitoring to check for a spike in CPU or memory usage for the affected region.
  • C. Add an extra node pool that consists of high memory and high CPU machine type instances to the cluster.
  • D. Use Stackdriver Logging to filter on the clusters in the affected region, and inspect error messages in the logs.

Answer: A

Explanation:
Google always aims to first stop the impact of an incident, and then find the root cause (unless the root cause just happens to be identified early on).

NEW QUESTION 8
You manage several production systems that run on Compute Engine in the same Google Cloud Platform (GCP) project. Each system has its own set of dedicated Compute Engine instances. You want to know how much it costs to run each of the systems. What should you do?

  • A. In the Google Cloud Platform Console, use the Cost Breakdown section to visualize the costs per system.
  • B. Assign all instances a label specific to the system they run. Configure BigQuery billing export and query costs per label.
  • C. Enrich all instances with metadata specific to the system they run. Configure Stackdriver Logging to export to BigQuery, and query costs based on the metadata.
  • D. Name each virtual machine (VM) after the system it runs. Set up a usage report export to a Cloud Storage bucket. Configure the bucket as a source in BigQuery to query costs based on VM name.

Answer: B

Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery
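To illustrate the label-based grouping in option B, here is a minimal Python sketch that aggregates cost per label value the way a query over the billing export would. The rows, the `system` label key, and the field names are invented for illustration; the real export schema is in the linked documentation:

```python
# Hypothetical sample of rows from a BigQuery billing export, reduced to the
# two fields relevant here (the real export has many more columns).
rows = [
    {"cost": 12.50, "labels": [{"key": "system", "value": "frontend"}]},
    {"cost": 7.25,  "labels": [{"key": "system", "value": "backend"}]},
    {"cost": 3.10,  "labels": [{"key": "system", "value": "frontend"}]},
]

def cost_per_system(rows):
    """Sum cost grouped by the value of the 'system' label."""
    totals = {}
    for row in rows:
        for label in row["labels"]:
            if label["key"] == "system":
                totals[label["value"]] = totals.get(label["value"], 0.0) + row["cost"]
    return totals

print({k: round(v, 2) for k, v in cost_per_system(rows).items()})
# {'frontend': 15.6, 'backend': 7.25}
```

This is why labeling the instances first matters: without the label, the export has no per-system dimension to group on.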

NEW QUESTION 9
Your organization wants to implement Site Reliability Engineering (SRE) culture and principles. Recently, a service that you support had a limited outage. A manager on another team asks you to provide a formal explanation of what happened so they can action remediations. What should you do?

  • A. Develop a postmortem that includes the root causes, resolution, lessons learned, and a prioritized list of action items. Share it with the manager only.
  • B. Develop a postmortem that includes the root causes, resolution, lessons learned, and a prioritized list of action items. Share it on the engineering organization's document portal.
  • C. Develop a postmortem that includes the root causes, resolution, lessons learned, the list of people responsible, and a list of action items for each person. Share it with the manager only.
  • D. Develop a postmortem that includes the root causes, resolution, lessons learned, the list of people responsible, and a list of action items for each person. Share it on the engineering organization's document portal.

Answer: B

NEW QUESTION 10
You support an application running on App Engine. The application is used globally and accessed from various device types. You want to know the number of connections. You are using Stackdriver Monitoring for App Engine. What metric should you use?

  • A. flex/connections/current
  • B. tcp_ssl_proxy/new_connections
  • C. tcp_ssl_proxy/open_connections
  • D. flex/instance/connections/current

Answer: A

Explanation:
https://cloud.google.com/monitoring/api/metrics_gcp#gcp-appengine

NEW QUESTION 11
You deploy a new release of an internal application during a weekend maintenance window when there is minimal user traffic. After the window ends, you learn that one of the new features isn't working as expected in the production environment. After an extended outage, you roll back the new release and deploy a fix. You want to modify your release process to reduce the mean time to recovery so you can avoid extended outages in the future. What should you do?
Choose 2 answers

  • A. Before merging new code, require 2 different peers to review the code changes.
  • B. Adopt the blue/green deployment strategy when releasing new code via a CD server.
  • C. Integrate a code linting tool to validate coding standards before any code is accepted into the repository.
  • D. Require developers to run automated integration tests on their local development environments before release.
  • E. Configure a CI server. Add a suite of unit tests to your code and have your CI server run them on commit and verify any changes.

Answer: BE

NEW QUESTION 12
You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application required by your company into production. This application is written by a third party and cannot be modified or reconfigured. The application writes its log information to /var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What should you do?

  • A. Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.
  • B. Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging.
  • C. Install Kubernetes on Google Compute Engine (GCE) and redeploy your application. Then customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods and write to Stackdriver Logging.
  • D. Write a script to tail the log file within the pod and write entries to standard output. Run the script as a sidecar container with the application's pod. Configure a shared volume between the containers to allow the script to have read access to /var/log in the application container.

Answer: B

Explanation:
https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd
Besides the list of default logs that the Logging agent streams by default, you can customize the Logging agent to send additional logs to Logging or to adjust agent settings by adding input configurations. The configuration definitions in these sections apply to the fluent-plugin-google-cloud output plugin only and specify how logs are transformed and ingested into Cloud Logging. https://cloud.google.com/logging/docs/agent/logging/configuration#configure

NEW QUESTION 13
You support an application running on GCP and want to configure SMS notifications to your team for the most critical alerts in Stackdriver Monitoring. You have already identified the alerting policies you want to configure this for. What should you do?

  • A. Download and configure a third-party integration between Stackdriver Monitoring and an SMS gateway. Ensure that your team members add their SMS/phone numbers to the external tool.
  • B. Select the Webhook notifications option for each alerting policy, and configure it to use a third-party integration tool. Ensure that your team members add their SMS/phone numbers to the external tool.
  • C. Ensure that your team members set their SMS/phone numbers in their Stackdriver Profile. Select the SMS notification option for each alerting policy and then select the appropriate SMS/phone numbers from the list.
  • D. Configure a Slack notification for each alerting policy. Set up a Slack-to-SMS integration to send SMS messages when Slack messages are received. Ensure that your team members add their SMS/phone numbers to the external integration.

Answer: C

Explanation:
https://cloud.google.com/monitoring/support/notification-options#creating_channels To configure SMS notifications, do the following:
  • In the SMS section, click Add new and follow the instructions.
  • Click Save.
  • When you set up your alerting policy, select the SMS notification type and choose a verified phone number from the list.

NEW QUESTION 14
Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to the production environment. A recent security audit alerted your team that the code pushed to production could contain vulnerabilities and that the existing tooling around virtual machine (VM) vulnerabilities no longer applies to the containerized environment. You need to ensure the security and patch level of all code running through the pipeline. What should you do?

  • A. Set up Container Analysis to scan and report Common Vulnerabilities and Exposures.
  • B. Configure the containers in the build pipeline to always update themselves before release.
  • C. Reconfigure the existing operating system vulnerability software to exist inside the container.
  • D. Implement static code analysis tooling against the Docker files used to create the containers.

Answer: A

Explanation:
Container Analysis performs vulnerability scanning on the container images pushed through the pipeline and reports any Common Vulnerabilities and Exposures (CVEs) it finds, restoring the vulnerability visibility that the VM-oriented tooling lost in the containerized environment.

NEW QUESTION 15
You are responsible for the reliability of a high-volume enterprise application. A large number of users report that an important subset of the application's functionality – a data-intensive reporting feature – is consistently failing with an HTTP 500 error. When you investigate your application's dashboards, you notice a strong correlation between the failures and a metric that represents the size of an internal queue used for generating reports. You trace the failures to a reporting backend that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend's persistent disk (PD). Now you need to create an availability Service Level Indicator (SLI) for the report generation feature. How would you define it?

  • A. As the I/O wait times aggregated across all report generation backends
  • B. As the proportion of report generation requests that result in a successful response
  • C. As the application’s report generation queue size compared to a known-good threshold
  • D. As the reporting backend PD throughput capacity compared to a known-good threshold

Answer: B

Explanation:
According to SRE Workbook, one of potential SLI is as below:
* Type of service: Request-driven
* Type of SLI: Availability
* Description: The proportion of requests that resulted in a successful response. https://sre.google/workbook/implementing-slos/
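The SLI from option B is plain arithmetic; a minimal sketch, with invented request counts:

```python
def availability_sli(successful: int, total: int) -> float:
    """Availability SLI: proportion of report requests answered successfully."""
    if total == 0:
        return 1.0  # no requests -> vacuously available (a common convention)
    return successful / total

# Invented counts: 9,940 successful report generations out of 10,000 requests.
print(availability_sli(9940, 10000))  # 0.994
```

Queue size, I/O wait, and PD throughput (options A, C, and D) are internal causes rather than user-visible outcomes, which is why the proportion of successful responses is the right availability SLI.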

NEW QUESTION 16
You are developing a strategy for monitoring your Google Cloud Platform (GCP) projects in production using Stackdriver Workspaces. One of the requirements is to be able to quickly identify and react to production environment issues without false alerts from development and staging projects. You want to ensure that you adhere to the principle of least privilege when providing relevant team members with access to Stackdriver Workspaces. What should you do?

  • A. Grant relevant team members read access to all GCP production projects. Create Stackdriver workspaces inside each project.
  • B. Grant relevant team members the Project Viewer IAM role on all GCP production projects. Create Stackdriver workspaces inside each project.
  • C. Choose an existing GCP production project to host the monitoring workspace. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.
  • D. Create a new GCP monitoring project, and create a Stackdriver Workspace inside it. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.

Answer: D

Explanation:
"A Project can host many Projects and appear in many Projects, but it can only be used as the scoping project once. We recommend that you create a new Project for the purpose of having multiple Projects in the same scope."

NEW QUESTION 17
Your application images are built using Cloud Build and pushed to Google Container Registry (GCR). You want to be able to specify a particular version of your application for deployment based on the release version tagged in source control. What should you do when you push the image?

  • A. Reference the image digest in the source control tag.
  • B. Supply the source control tag as a parameter within the image name.
  • C. Use Cloud Build to include the release version tag in the application image.
  • D. Use GCR digest versioning to match the image to the tag in source control.

Answer: B

Explanation:
https://cloud.google.com/container-registry/docs/pushing-and-pulling
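A minimal sketch of option B as a Cloud Build config: `$TAG_NAME` is Cloud Build's built-in substitution for the source-control tag that triggered the build, and the `my-app` image path is invented for illustration:

```yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    # Bake the release tag from source control into the image name.
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app:$TAG_NAME'
```

Deploying a particular release is then just a matter of pulling the image whose tag matches the release version in source control.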

NEW QUESTION 18
You manage an application that is writing logs to Stackdriver Logging. You need to give some team members the ability to export logs. What should you do?

  • A. Grant the team members the IAM role of logging.configWriter on Cloud IAM.
  • B. Configure Access Context Manager to allow only these members to export logs.
  • C. Create and grant a custom IAM role with the permissions logging.sinks.list and logging.sink.get.
  • D. Create an Organizational Policy in Cloud IAM to allow only these members to create log exports.

Answer: A

Explanation:
https://cloud.google.com/logging/docs/access-control

NEW QUESTION 19
You are running an application on Compute Engine and collecting logs through Stackdriver. You discover that some personally identifiable information (PII) is leaking into certain log entry fields. All PII entries begin with the text userinfo. You want to capture these log entries in a secure location for later review and prevent them from leaking to Stackdriver Logging. What should you do?

  • A. Create a basic log filter matching userinfo, and then configure a log export in the Stackdriver console with Cloud Storage as a sink.
  • B. Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo, and then copy the entries to a Cloud Storage bucket.
  • C. Create an advanced log filter matching userinfo, configure a log export in the Stackdriver console with Cloud Storage as a sink, and then configure a log exclusion with userinfo as a filter.
  • D. Use a Fluentd filter plugin with the Stackdriver Agent to remove log entries containing userinfo, create an advanced log filter matching userinfo, and then configure a log export in the Stackdriver console with Cloud Storage as a sink.

Answer: B

Explanation:
https://medium.com/google-cloud/fluentd-filter-plugin-for-google-cloud-data-loss-prevention-api-42bbb1308e7
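A sketch of the removal half of option B, using Fluentd's built-in grep filter; the `message` key and match pattern are illustrative (copying the removed entries to a Cloud Storage bucket would be configured separately):

```
<filter **>
  @type grep
  <exclude>
    # Drop any entry whose message begins with "userinfo" so it never
    # reaches Stackdriver Logging.
    key message
    pattern /^userinfo/
  </exclude>
</filter>
```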

NEW QUESTION 20
......

Recommend!! Get the Full Professional-Cloud-DevOps-Engineer dumps in VCE and PDF from Thedumpscentre.com. Welcome to download: https://www.thedumpscentre.com/Professional-Cloud-DevOps-Engineer-dumps/ (New 162 Q&As Version)