All About Simulation DOP-C01 Free Practice Exam

We provide real DOP-C01 exam questions and answers in two formats: downloadable PDF and practice tests. Pass the Amazon Web Services DOP-C01 exam quickly and easily. The DOP-C01 PDF is available for reading and printing, so you can print it and practice as many times as you like. With the help of our Amazon Web Services DOP-C01 PDF and VCE material, you can easily pass the DOP-C01 exam.

Online Amazon-Web-Services DOP-C01 free dumps demo Below:

NEW QUESTION 1
Your security officer has told you that you need to tighten up the logging of all events that occur on your AWS account. He wants to be able to access all events that occur on the account, across all regions, quickly and in the simplest way possible. He also wants to make sure he is the only person that has access to these events, in the most secure way possible. Which of the following would be the best solution to ensure his requirements are met? Choose the correct answer from the options below.

  • A. Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only accessible by your security officer, with a bucket policy that restricts access to his user only, and also add MFA to the policy for a further level of security.
  • B. Use CloudTrail to log all events to an Amazon Glacier vault. Make sure the vault access policy only grants access to the security officer's IP address.
  • C. Use CloudTrail to send all API calls to CloudWatch and send an email to the security officer every time an API call is made. Make sure the emails are encrypted.
  • D. Use CloudTrail to log all events to a separate S3 bucket in each region, as CloudTrail cannot write to a bucket in a different region. Use MFA and bucket policies on all the different buckets.

Answer: A

Explanation:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log,
continuously monitor, and retain events related to API calls across your AWS infrastructure. CloudTrail provides a history of AWS API calls for your account, including API calls made through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This history simplifies security analysis, resource change tracking, and troubleshooting.
You can configure CloudTrail to send all logs to a central S3 bucket. For more information on CloudTrail, please visit the below URL:
• https://aws.amazon.com/cloudtrail/
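To make the central log bucket accessible only to the security officer and require MFA, as option A describes, a bucket policy along these lines could be attached. This is a minimal sketch: the bucket name and user ARN are hypothetical placeholders, and a real policy would be tightened further (e.g. for CloudTrail's own write access).

```python
import json

# Hypothetical names -- substitute your own bucket and user ARN.
BUCKET = "central-cloudtrail-logs"
OFFICER_ARN = "arn:aws:iam::111122223333:user/security-officer"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny access to every principal except the security officer.
            "Sid": "DenyAllButSecurityOfficer",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:PrincipalArn": OFFICER_ARN}
            },
        },
        {
            # Additionally deny object access unless MFA was used to sign in.
            "Sid": "DenyWithoutMFA",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        },
    ],
}

policy_json = json.dumps(bucket_policy, indent=2)
```

The resulting JSON string is what you would pass to S3's put-bucket-policy operation.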

NEW QUESTION 2
Your system automatically provisions Elastic IPs (EIPs) to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once, with two EIPs per VPC. On your new AWS account, your attempt to create a Development environment failed, after successfully creating Staging and Production environments in the same region. What happened?

  • A. You didn't choose the Development version of the AMI you are using.
  • B. You didn't set the Development flag to true when deploying EC2 instances.
  • C. You hit the soft limit of 5 EIPs per region and requested a 6th.
  • D. You hit the soft limit of 2 VPCs per region and requested a 3rd.

Answer: C

Explanation:
The most likely cause is the fact that you have hit the maximum of 5 Elastic IPs per region.
By default, all AWS accounts are limited to 5 Elastic IP addresses per region, because public (IPv4) Internet addresses are a scarce public resource. We strongly encourage you to use an Elastic IP address primarily for the ability to remap the address to another instance in the case of instance failure, and to use DNS hostnames for all other inter-node communication.
Option A is invalid because an AMI does not have a Development version tag. Option B is invalid because there is no such flag for an EC2 instance.
Option D is invalid because there is a limit of 5 VPCs per region, not 2. For more information on Elastic IPs, please visit the below URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

NEW QUESTION 3
Your company has a set of resources hosted in AWS. They want to be notified when the costs of the AWS resources running in the account reach a certain threshold. How can this be accomplished in an ideal way?

  • A. Create a script which monitors all the running resources and calculates the costs accordingly.
  • B. Download the cost reports and analyze the reports to see if the costs are going beyond the threshold
  • C. Create a billing alarm which can alert you when the costs are going beyond a certain threshold
  • D. Create a consolidated billing report and see if the costs are going beyond the threshold.

Answer: C

Explanation:
The AWS Documentation mentions the following:
You can monitor your AWS costs by using CloudWatch. With CloudWatch, you can create billing alerts that notify you when your usage of your services exceeds thresholds that you define. You specify these threshold amounts when you create the billing alerts.
When your usage exceeds these amounts, AWS sends you an email notification. You can also sign up to receive notifications when AWS prices change. For more information on billing alarms, please visit the below URL:
• http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html
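Billing metrics are published to the AWS/Billing namespace in the us-east-1 region (after billing alerts are enabled in the account preferences). As a sketch, the parameters for CloudWatch's PutMetricAlarm call might look like this; the threshold and SNS topic ARN are hypothetical placeholders.

```python
# Parameters for a CloudWatch billing alarm on the EstimatedCharges metric.
# The threshold amount and SNS topic ARN below are illustrative assumptions.
billing_alarm = {
    "AlarmName": "monthly-charges-over-1000-usd",
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,          # billing data is published roughly every 6 hours
    "EvaluationPeriods": 1,
    "Threshold": 1000.0,      # notify once estimated charges exceed $1000
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:billing-alerts"],
}
```

When the alarm fires, the SNS topic delivers the notification (for example, by email) to the subscribed team.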

NEW QUESTION 4
Your company is planning to set up a WordPress application. The WordPress application will connect to a MySQL database. Part of the requirement is to ensure that the database environment is fault tolerant and highly available. Which of the following 2 options can individually help fulfil this requirement?

  • A. Create a MySQL RDS environment with the Multi-AZ feature enabled.
  • B. Create a MySQL RDS environment and create a Read Replica.
  • C. Create multiple EC2 instances in the same AZ. Host MySQL and enable replication via scripts between the instances.
  • D. Create multiple EC2 instances in separate AZs. Host MySQL and enable replication via scripts between the instances.

Answer: AD

Explanation:
One way to ensure high availability and fault tolerance is to ensure instances are located across multiple Availability Zones. Hence if you are hosting MySQL yourself, ensure you have instances spread across multiple AZs.
The AWS Documentation mentions the following about the Multi-AZ feature:
Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover support. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon's failover technology.
For more information on AWS Multi-AZ deployments, please visit the below URL: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
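Enabling Multi-AZ is a single flag on the DB instance. As a sketch, the parameters for RDS's create_db_instance call might look like this; the identifier, instance class, and storage size are hypothetical.

```python
# Sketch of create_db_instance parameters with Multi-AZ enabled.
# All names and sizes here are illustrative placeholders.
multi_az_instance = {
    "DBInstanceIdentifier": "wordpress-db",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "MultiAZ": True,  # provisions a synchronous standby replica in another AZ
}
```

With MultiAZ set to True, RDS maintains a synchronous standby and fails over to it automatically, which is what makes option A fault tolerant without any replication scripting.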

NEW QUESTION 5
In AWS CodeDeploy, which of the following deployment types are available? Choose 2 answers from the options given below.

  • A. In-place deployments
  • B. Rolling deployments
  • C. Immutable deployments
  • D. Blue/Green deployments

Answer: AD

Explanation:
The AWS documentation mentions the following
Deployment type: The method used to make the latest application revision available on instances in a deployment group.
In-place deployment: The application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can choose to use a load balancer so each instance is deregistered during its deployment and then restored to service after the deployment is complete.
Blue/green deployment: The instances in a deployment group (the original environment) are replaced by a different set of instances (the replacement environment) using these steps:
o Instances are provisioned for the replacement environment.
o The latest application revision is installed on the replacement instances.
o An optional wait time occurs for activities such as application testing and system verification.
o Instances in the replacement environment are registered with an Elastic Load Balancing load balancer, causing traffic to be rerouted to them.
o Instances in the original environment are deregistered and can be terminated or kept running for other uses.
For more information on the components of AWS CodeDeploy, please refer to the below link:
• http://docs.aws.amazon.com/codedeploy/latest/userguide/primary-components.html
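The deployment type is chosen on the deployment group. A sketch of the relevant create_deployment_group parameters is below; the application, group, and role names are hypothetical placeholders.

```python
# The two deployment styles CodeDeploy supports on a deployment group.
in_place = {
    "deploymentType": "IN_PLACE",
    "deploymentOption": "WITH_TRAFFIC_CONTROL",
}
blue_green = {
    "deploymentType": "BLUE_GREEN",
    "deploymentOption": "WITH_TRAFFIC_CONTROL",
}

# Sketch of create_deployment_group parameters (names are illustrative).
deployment_group = {
    "applicationName": "my-web-app",
    "deploymentGroupName": "production",
    "serviceRoleArn": "arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    "deploymentStyle": blue_green,  # or in_place
}
```

Swapping `blue_green` for `in_place` switches the group between the two types the question asks about.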

NEW QUESTION 6
You have deployed an Elastic Beanstalk application in a new environment and want to save the current state of your environment in a document. You want to be able to restore your environment to the current state later or possibly create a new environment. You also want to make sure you have a restore point. How can you achieve this?

  • A. Use CloudFormation templates
  • B. Configuration Management Templates
  • C. Saved Configurations
  • D. Saved Templates

Answer: C

Explanation:
You can save your environment's configuration as an object in Amazon S3 that can be applied to other environments during environment creation, or applied to a running environment. Saved configurations are YAML formatted templates that define an environment's platform configuration, tier, configuration option settings,
and tags.
For more information on Saved Configurations please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-savedconfig.html
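A saved configuration is a YAML document stored in S3. A minimal sketch of its shape is shown below as a string; the solution stack and option settings are illustrative assumptions, not a complete saved configuration.

```python
# Illustrative shape of an Elastic Beanstalk saved configuration (YAML).
# The platform version and option values are hypothetical.
saved_config = """\
EnvironmentConfigurationMetadata:
  Description: Saved state of the production environment
SolutionStack: 64bit Amazon Linux 2 v3.5.0 running Python 3.8
EnvironmentTier:
  Name: WebServer
  Type: Standard
OptionSettings:
  aws:autoscaling:asg:
    MinSize: '2'
    MaxSize: '4'
"""
```

Because the document captures the platform, tier, option settings, and tags, applying it to a new environment reproduces the saved state, which is why it works as a restore point.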

NEW QUESTION 7
You have an I/O and network-intensive application running on multiple Amazon EC2 instances that cannot handle a large ongoing increase in traffic. The Amazon EC2 instances are using two Amazon EBS PIOPS volumes each, and each instance is identical.
Which of the following approaches should be taken in order to reduce load on the instances with the least disruption to the application?

  • A. Create an AMI from each instance, and set up Auto Scaling groups with a larger instance type that has enhanced networking enabled and is Amazon EBS-optimized.
  • B. Stop each instance and change each instance to a larger Amazon EC2 instance type that has enhanced networking enabled and is Amazon EBS-optimized.
  • C. Ensure that RAID striping is also set up on each instance.
  • D. Add an instance-store volume for each running Amazon EC2 instance and implement RAID striping to improve I/O performance.
  • E. Add an Amazon EBS volume for each running Amazon EC2 instance and implement RAID striping to improve I/O performance.
  • F. Create an AMI from an instance, and set up an Auto Scaling group with an instance type that has enhanced networking enabled and is Amazon EBS-optimized.

Answer: E

Explanation:
The AWS Documentation mentions the following on AMIs:
An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
For more information on AMIs, please visit the link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html

NEW QUESTION 8
You have an ELB on AWS which has a set of web servers behind it. There is a requirement that the SSL key used to encrypt data is always kept secure. Secondly, the logs of the ELB should only be decrypted by a subset of users. Which of these architectures meets all of the requirements?

  • A. Use Elastic Load Balancing to distribute traffic to a set of web servers. To protect the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.
  • B. Use Elastic Load Balancing to distribute traffic to a set of web servers. Use TCP load balancing on the load balancer and configure your web servers to retrieve the private key from a private Amazon S3 bucket on boot. Write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.
  • C. Use Elastic Load Balancing to distribute traffic to a set of web servers, configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to a private Amazon S3 bucket using Amazon S3 server-side encryption.
  • D. Use Elastic Load Balancing to distribute traffic to a set of web servers. Configure the load balancer to perform TCP load balancing, use an AWS CloudHSM to perform the SSL transactions, and write your web server logs to an ephemeral volume that has been encrypted using a randomly generated AES key.

Answer: C

Explanation:
The AWS CloudHSM service helps you meet corporate, contractual and regulatory compliance requirements for data security by using dedicated Hardware Security Module (HSM) appliances within the AWS cloud. With CloudHSM, you control the encryption keys and cryptographic operations performed by the HSM.
Option D is wrong because, despite also using CloudHSM, it writes the logs to an ephemeral volume, which is temporary storage.
For more information on CloudHSM, please refer to the link:
• https://aws.amazon.com/cloudhsm/

NEW QUESTION 9
There is a requirement for an application hosted on a VPC to access an on-premise LDAP server. The VPC and the on-premise location are connected via an IPSec VPN. Which of the below are the right options for the application to authenticate each user? Choose 2 answers from the options below.

  • A. Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials.
  • B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access any AWS resources.
  • C. Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate AWS service.
  • D. The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate AWS service.

Answer: BC

Explanation:
When you need an on-premise environment to work with a cloud environment, you would normally have 2 artefacts for authentication purposes:
• An identity store - This is the on-premise store, such as Active Directory, which stores all the information for the users and the groups they belong to.
• An identity broker - This is used as an intermediate agent between the on-premise location and the cloud environment. In Windows, Active Directory Federation Services provides this facility.
Hence in the above case, you need an identity broker which can work with the identity store and the Security Token Service in AWS.
For more information on federated access, please visit the below link: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html

NEW QUESTION 10
You are responsible for your company's large multi-tiered Windows-based web application running on Amazon EC2 instances situated behind a load balancer. While reviewing metrics, you've started noticing an upwards trend for slow customer page load time. Your manager has asked you to come up with a solution to ensure that customer load time is not affected by too many requests per second. Which technique would you use to solve this issue?

  • A. Re-deploy your infrastructure using an AWS CloudFormation template. Configure Elastic Load Balancing health checks to initiate a new AWS CloudFormation stack when health checks return failed.
  • B. Re-deploy your infrastructure using an AWS CloudFormation template. Spin up a second AWS CloudFormation stack. Configure Elastic Load Balancing SpillOver functionality to spill over any slow connections to the second AWS CloudFormation stack.
  • C. Re-deploy your infrastructure using AWS CloudFormation, Elastic Beanstalk, and Auto Scaling. Set up your Auto Scaling group policies to scale based on the number of requests per second as well as the current customer load time.
  • D. Re-deploy your application using an Auto Scaling template. Configure the Auto Scaling template to spin up a new Elastic Beanstalk application when the customer load time surpasses your threshold.

Answer: C

Explanation:
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes below this size. You can specify the maximum number of instances in each Auto Scaling group, and Auto Scaling ensures that your group never goes above this size. If you specify the desired capacity, either when you create the group or at any time thereafter, Auto Scaling ensures that your group has this many instances. If you specify scaling policies, then Auto Scaling can launch or terminate instances as demand on your application increases or decreases.
Options A and B are invalid because Auto Scaling is required to ensure the application can handle high traffic loads.
Option D is invalid because there is no Auto Scaling template.
For more information on Auto Scaling, please refer to the below document link: http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.html

NEW QUESTION 11
You are designing a CloudFormation template to install a set of web servers on EC2 instances. The following user data needs to be passed to the EC2 instances:
#!/bin/bash
sudo apt-get update
sudo apt-get install -y nginx
Where in the CloudFormation template would you ideally pass this user data?

  • A. In the Properties section of the EC2 instance in the Resources section
  • B. In the Properties section of the EC2 instance in the Outputs section
  • C. In the Metadata section of the EC2 instance in the Resources section
  • D. In the Metadata section of the EC2 instance in the Outputs section

Answer: A

Explanation:
User data is passed in the UserData property, within the Properties section of the EC2 instance resource, and must be base64-encoded.
For more information on user data in CloudFormation templates, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html
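A minimal sketch of where the user data lands in a template, built here as a Python dict for clarity; the AMI ID and instance type are hypothetical placeholders. In a YAML/JSON template you would normally wrap the script with the Fn::Base64 intrinsic instead of encoding it yourself.

```python
import base64

user_data_script = "#!/bin/bash\nsudo apt-get update\nsudo apt-get install -y nginx\n"

# UserData sits under Properties of the EC2 instance resource, base64-encoded.
template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # hypothetical AMI ID
                "InstanceType": "t3.micro",
                "UserData": base64.b64encode(user_data_script.encode()).decode(),
            },
        }
    }
}
```

This illustrates why answer A is correct: the script lives in Properties of the resource, not in Metadata or Outputs.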

NEW QUESTION 12
Your company has a set of resources hosted in AWS. Your IT Supervisor is concerned with the costs being incurred with the current set of AWS resources and wants to monitor the cost usage. Which of the following mechanisms can be used to monitor the costs of the AWS resources and also look at the possibility of cost optimization? Choose 3 answers from the options given below.

  • A. Use the Cost Explorer to see the costs of AWS resources
  • B. Create budgets in the billing section so that budgets are set beforehand
  • C. Send all logs to CloudWatch Logs and inspect the logs for billing details
  • D. Consider using the Trusted Advisor

Answer: ABD

Explanation:
The AWS Documentation mentions the following
1) For a quick, high-level analysis use Cost Explorer, which is a free tool that you can use to view graphs of your AWS spend data. It includes a variety of filters and preconfigured views, as well as forecasting capabilities. Cost Explorer displays data from the last 13 months, the current month, and the forecasted costs for the next three months, and it updates this data daily.
2) Consider using budgets if you have a defined spending plan for a project or service and you want to track how close your usage and costs are to exceeding your budgeted amount. Budgets use data from Cost Explorer to provide you with a quick way to see your usage-to-date and current estimated charges from AWS. You can also set up notifications that warn you if you exceed or are about to exceed your budgeted amount.
3) Visit the AWS Trusted Advisor console regularly. Trusted Advisor works like a customized cloud expert, analyzing your AWS environment and providing best practice recommendations to help you save money, improve system performance and reliability, and close security gaps.
For more information on cost optimization, please visit the below URL:
• https://aws.amazon.com/answers/account-management/cost-optimization-monitor/

NEW QUESTION 13
You need to scale an RDS deployment. You are operating at 10% writes and 90% reads, based on your logging. How best can you scale this in a simple way?

  • A. Create a second master RDS instance and peer the RDS groups.
  • B. Cache all the database responses on the read side with CloudFront.
  • C. Create read replicas for RDS since the load is mostly reads.
  • D. Create a Multi-AZ RDS install and route read traffic to the standby.

Answer: C

Explanation:
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This replication feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
Option A is invalid because you would need to maintain the synchronization yourself with a secondary instance.
Option B is invalid because you are introducing another layer unnecessarily when you already have read replicas. Option D is invalid because the Multi-AZ standby does not serve read traffic; it is only used for failover.
For more information on read replicas, please refer to the below link: https://aws.amazon.com/rds/details/read-replicas/
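Creating a read replica is a single API call against the source instance. A sketch of the create_db_instance_read_replica parameters is below; the identifiers, instance class, and AZ are hypothetical placeholders.

```python
# Sketch of create_db_instance_read_replica parameters.
# Identifiers and sizes are illustrative assumptions.
read_replica = {
    "DBInstanceIdentifier": "mydb-replica-1",
    "SourceDBInstanceIdentifier": "mydb",
    "DBInstanceClass": "db.m5.large",
    "AvailabilityZone": "us-east-1b",  # replicas can live in a different AZ
}
```

The application then points its read queries at the replica's endpoint while writes continue to go to the source instance, matching the 90/10 read/write split in the question.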

NEW QUESTION 14
Your company develops a variety of web applications using many platforms and programming languages with different application dependencies. Each application must be developed and deployed quickly and be highly available to satisfy your business requirements. Which of the following methods should you use to deploy these applications rapidly?

  • A. Develop the applications in Docker containers, and then deploy them to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
  • B. Use the AWS CloudFormation Docker import service to build and deploy the applications with high availability in multiple Availability Zones.
  • C. Develop each application's code in DynamoDB, and then use hooks to deploy it to Elastic Beanstalk environments with Auto Scaling and Elastic Load Balancing.
  • D. Store each application's code in a Git repository, develop custom package repository managers for each application's dependencies, and deploy to AWS OpsWorks in multiple Availability Zones.

Answer: A

Explanation:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
By using Docker with Elastic Beanstalk, you have an infrastructure that automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
For more information on Dockers and Elastic beanstalk please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
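For a single-container Docker deployment, Elastic Beanstalk reads a Dockerrun.aws.json file from the application bundle. A minimal sketch of the version 1 (single-container) format is below; the image name and port are hypothetical placeholders.

```python
import json

# Minimal single-container Dockerrun.aws.json for Elastic Beanstalk.
# The image name and container port are illustrative assumptions.
dockerrun = {
    "AWSEBDockerrunVersion": "1",
    "Image": {"Name": "my-registry/webapp:1.0", "Update": "true"},
    "Ports": [{"ContainerPort": 8080}],
}
dockerrun_json = json.dumps(dockerrun, indent=2)
```

Packaging this file with the application source lets each team ship its own platform and dependencies in the container while Beanstalk still provides the load balancing, scaling, and health monitoring described above.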

NEW QUESTION 15
One of your instances is reporting an unhealthy system status check. However, this is not something you should have to monitor and repair on your own. How might you automate the repair of the system status check failure in an AWS environment? Choose the correct answer from the options given below

  • A. Create CloudWatch alarms for the StatusCheckFailed_System metric and select the EC2 action to recover the instance.
  • B. Write a script that queries the EC2 API for each instance's status check.
  • C. Write a script that periodically shuts down and starts instances based on certain stats.
  • D. Implement a third-party monitoring tool.

Answer: A

Explanation:
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
For more information on using alarm actions, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
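A sketch of the PutMetricAlarm parameters for automatic recovery follows; the instance ID, region, and evaluation settings are hypothetical placeholders. The recover action is expressed as an `arn:aws:automate:<region>:ec2:recover` alarm action.

```python
# CloudWatch alarm that recovers an instance on system status check failure.
# Instance ID, region, and thresholds are illustrative assumptions.
recover_alarm = {
    "AlarmName": "recover-web-01",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0abc123def4567890"}],
    "Statistic": "Minimum",
    "Period": 60,
    "EvaluationPeriods": 2,            # fail two consecutive minutes
    "Threshold": 0.99,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # The recover action migrates the instance to healthy host hardware.
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}
```

Once the alarm is in place, no manual intervention or scripting is needed, which is why option A beats the script-based options.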

NEW QUESTION 16
You are in charge of designing a number of Cloudformation templates for your organization. You need to ensure that no one can accidentally update the production based resources on the stack during a stack update. How can this be achieved in the most efficient way?

  • A. Create tags for the resources and then create IAM policies to protect the resources.
  • B. Use a stack-based policy to protect the production-based resources.
  • C. Use S3 bucket policies to protect the resources.
  • D. Use MFA to protect the resources.

Answer: B

Explanation:
The AWS Documentation mentions
When you create a stack, all update actions are allowed on all resources. By default, anyone with stack update permissions can update all of the resources in the stack. During an update, some resources might require an interruption or be completely replaced, resulting in new physical IDs or completely new storage. You can prevent stack resources from being unintentionally updated or deleted during a stack update by using a stack policy. A stack policy is a JSON document that defines the update actions that can be performed on designated resources.
For more information on protecting stack resources, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/protect-stack-resources.html
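A minimal sketch of such a stack policy: allow all updates by default, but deny any update to the production resource. The logical resource ID "ProductionDatabase" is a hypothetical placeholder.

```python
import json

# Stack policy that blocks updates to one production resource.
# The logical ID "ProductionDatabase" is an illustrative assumption.
stack_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
    ]
}
stack_policy_json = json.dumps(stack_policy, indent=2)
```

Because Deny overrides Allow, any stack update that would touch the protected resource fails unless the policy is temporarily overridden, which is exactly the accidental-update protection the question asks for.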

NEW QUESTION 17
The project you are working on currently uses a single AWS CloudFormation template to deploy its AWS infrastructure, which supports a multi-tier web application. You have been tasked with organizing the AWS CloudFormation resources so that they can be maintained in the future, and so that different departments such as Networking and Security can review the architecture before it goes to Production. How should you do this in a way that accommodates each department, using their existing workflows?

  • A. Organize the AWS CloudFormation template so that related resources are next to each other in the template, such as VPC subnets and routing rules for Networking, and security groups and IAM information for Security.
  • B. Separate the AWS CloudFormation template into a nested structure that has individual templates for the resources that are to be governed by different departments, and use the outputs from the networking and security stacks for the application template that you control.
  • C. Organize the AWS CloudFormation template so that related resources are next to each other in the template for each department's use, leverage your existing continuous integration tool to constantly deploy changes from all parties to the Production environment, and then run tests for validation.
  • D. Use a custom application and the AWS SDK to replicate the resources defined in the current AWS CloudFormation template, and use the existing code review system to allow other departments to approve changes before altering the application for future deployments.

Answer: B

Explanation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on best practices for CloudFormation, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
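A sketch of a parent template that nests department-owned templates, built as a Python dict for brevity; the S3 template URLs and the output name are hypothetical placeholders.

```python
# Parent template referencing department-owned templates as nested stacks.
# TemplateURL values and output names are illustrative assumptions.
parent_template = {
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/network.yaml"
            },
        },
        "AppStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/my-templates/app.yaml",
                "Parameters": {
                    # Wire the networking stack's output into the app stack.
                    "SubnetId": {
                        "Fn::GetAtt": ["NetworkStack", "Outputs.PublicSubnetId"]
                    }
                },
            },
        },
    }
}
```

Each department maintains and reviews its own child template, while the parent template composes them into one unified stack, which is the workflow option B describes.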

NEW QUESTION 18
Which of the following are components of the AWS Data Pipeline service? Choose 2 answers from the options given below.

  • A. Pipeline definition
  • B. Task Runner
  • C. Task History
  • D. Workflow Runner

Answer: AB

Explanation:
The AWS Documentation mentions the following on AWS Data Pipeline:
The following components of AWS Data Pipeline work together to manage your data: A pipeline definition specifies the business logic of your data management.
A pipeline schedules and runs tasks. You upload your pipeline definition to the pipeline, and then activate the pipeline. You can edit the pipeline definition for a running pipeline and activate the pipeline again for it to take effect. You can deactivate the pipeline, modify a data source, and then activate the pipeline again. When you are finished with your pipeline, you can delete it.
Task Runner polls for tasks and then performs those tasks. For example, Task Runner could copy log files to Amazon S3 and launch Amazon EMR clusters. Task Runner is installed and runs automatically on resources created by your pipeline definitions. You can write a custom task runner application, or you can use the Task Runner application that is provided by AWS Data Pipeline.
For more information on AWS Pipeline, please visit the link: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html

NEW QUESTION 19
What are the benefits when you implement a Blue/Green deployment for your infrastructure or application level changes? Choose 3 answers from the options given below.

  • A. Near zero-downtime release for new changes
  • B. Better rollback capabilities
  • C. Ability to deploy with higher risk
  • D. Good turnaround time for application deployments

Answer: ABD

Explanation:
The AWS Documentation mentions the following
Blue/green deployments provide near zero-downtime release and rollback capabilities. The fundamental idea behind blue/green deployment is to shift traffic between two identical environments that are running different versions of your application. The blue environment represents the current application version serving production traffic. In parallel, the green environment is staged running a different version of your application. After the green environment is ready and tested, production traffic is redirected from blue to green.
For more information on Blue/Green deployments, please see the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf

NEW QUESTION 20
As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon Elastic Block Store (EBS) PIOPS volume per instance and requires consistent I/O performance. Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?

  • A. Ensure that the I/O block sizes for the test are randomly selected.
  • B. Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
  • C. Ensure that snapshots of the Amazon EBS volumes are created as a backup.
  • D. Ensure that the Amazon EBS volume is encrypted.

Answer: B

Explanation:
During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance.
New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming).
However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable.
Option A is invalid because block sizes are predetermined and should not be randomly selected. Option C is invalid because this is part of continuous integration; volumes can be destroyed after the test, so snapshots should not be created unnecessarily.
Option D is invalid because encryption is a security feature and not normally part of load tests. For more information on EBS initialization, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html

NEW QUESTION 21
You have an asynchronous processing application using an Auto Scaling group and an SQS queue. The Auto Scaling group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling group size has maxed out, but the inbound job velocity did not increase. What is a possible issue?

  • A. Some of the new jobs coming in are malformed and unprocessable.
  • B. The routing tables changed and none of the workers can process events anymore.
  • C. Someone changed the IAM role policy on the instances in the worker group and broke permissions to access the queue.
  • D. The scaling metric is not functioning correctly.

Answer: A

Explanation:
This question is best answered by validating each option.
Option B is invalid because a route table change would affect all worker processes, and no jobs would have been completed.
Option C is invalid because if the IAM role was invalid then no jobs would be completed.
Option D is invalid because the scaling is happening; it's just that the jobs are not getting completed. For more information on scaling on demand, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html

NEW QUESTION 22
Which of the following are the basic stages of a CI/CD pipeline? Choose 3 answers from the options below.

  • A. Source Control
  • B. Build
  • C. Run
  • D. Production

Answer: ABD

Explanation:
A typical CI/CD pipeline moves a change from source control through a build stage and into production.
For more information on AWS continuous integration, please visit the below URL: https://d1.awsstatic.com/whitepapers/DevOps/practicing-continuous-integration-continuous-delivery-on-AWS.pdf

NEW QUESTION 23
......

100% Valid and Newest Version DOP-C01 Questions & Answers shared by Downloadfreepdf.net, Get Full Dumps HERE: https://www.downloadfreepdf.net/DOP-C01-pdf-download.html (New 116 Q&As)