Amazon-Web-Services SAP-C01 Training Materials 2021

All that matters here is passing the Amazon-Web-Services SAP-C01 exam, and all you need is a high score on the SAP-C01 AWS Certified Solutions Architect - Professional exam. The only thing you need to do is download the Examcollection SAP-C01 exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Check SAP-C01 free dumps before getting the full version:

NEW QUESTION 1
A company runs an ordering system on AWS using Amazon SQS and AWS Lambda, with each order received as a JSON message. Recently, the company had a marketing event that led to a tenfold increase in orders. With this increase, the following undesired behaviors started in the ordering system:
• Lambda failures while processing orders lead to queue backlogs.
• The same orders have been processed multiple times.
A Solutions Architect has been asked to solve the existing issues with the ordering system and add the following resiliency features:
• Retain problematic orders for analysis.
• Send notifications if errors go beyond a threshold value.
How should the Solutions Architect meet these requirements?

  • A. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a dead letter queue for messages that could not be processed, create an Amazon CloudWatch alarm on Lambda errors for notification.
  • B. Receive single messages with each Lambda invocation, put additional Lambda workers to poll the queue, delete messages after processing, increase the message timer for the messages, use Amazon CloudWatch Logs for messages that could not be processed, create a CloudWatch alarm on Lambda errors for notification.
  • C. Receive multiple messages with each Lambda invocation, use long polling when receiving the messages, log the errors from the message processing code using Amazon CloudWatch Logs, create a dead letter queue with AWS Lambda to capture failed invocations, create CloudWatch events on Lambda errors for notification.
  • D. Receive multiple messages with each Lambda invocation, add error handling to message processing code and delete messages after processing, increase the visibility timeout for the messages, create a delay queue for messages that could not be processed, create an Amazon CloudWatch metric on Lambda errors for notification.

Answer: A
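The dead-letter queue in the correct option is configured through the source queue's redrive policy, and the visibility timeout is what prevents duplicate processing while a message is in flight. A minimal sketch of the attribute map an SQS `SetQueueAttributes` call would take (the queue name, ARN, and numeric values are illustrative, not from the question):

```python
import json

# Hypothetical DLQ ARN for illustration only.
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:orders-dlq"

def order_queue_attributes(dlq_arn: str, max_receives: int = 3,
                           visibility_timeout_s: int = 300) -> dict:
    """Build the SQS attribute map for the main orders queue.

    After `max_receives` failed processing attempts, SQS moves the
    message to the dead-letter queue, so problematic orders are
    retained for analysis instead of being retried forever.
    """
    return {
        # Longer than the Lambda timeout, so a message is not delivered
        # again while it is still being processed (duplicate orders).
        "VisibilityTimeout": str(visibility_timeout_s),
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": str(max_receives),
        }),
    }

attrs = order_queue_attributes(DLQ_ARN)
print(attrs["VisibilityTimeout"])
```

In a real deployment these attributes would be passed to `sqs.set_queue_attributes`, and a CloudWatch alarm on the Lambda `Errors` metric would cover the notification requirement.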

NEW QUESTION 2
During an audit, a Security team discovered that a Development team was putting IAM user secret access keys in their code and then committing them to an AWS CodeCommit repository. The Security team wants to automatically find and remediate instances of this security vulnerability.
Which solution will ensure that the credentials are appropriately secured automatically?

  • A. Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to rotate the credentials.
  • B. Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and store them in AWS KMS.
  • C. Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials and notify the user.
  • D. Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in AWS IAM and notify the user.

Answer: D
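The Lambda-scan approach comes down to pattern-matching committed code for credentials. IAM user access key IDs have a well-known shape (20 uppercase alphanumerics starting with AKIA), which makes a first-pass scanner simple; this is an illustrative sketch, not a complete secret detector:

```python
import re

# IAM user access key IDs start with "AKIA"; the 16 trailing characters
# are uppercase letters or digits. Secret access keys (40 chars) are too
# generic to match reliably, so this sketch keys off the key ID.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_committed_keys(source: str) -> list[str]:
    """Return any IAM access key IDs found in a blob of committed code."""
    return ACCESS_KEY_RE.findall(source)

snippet = 'client = boto3.client("s3", aws_access_key_id="AKIAIOSFODNN7EXAMPLE")'
print(find_committed_keys(snippet))
```

In the full solution, a CodeCommit trigger would hand each new commit to a Lambda function running a check like this, which would then disable the flagged key via IAM and notify the committer.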

NEW QUESTION 3
A company with multiple accounts is currently using a configuration that does not meet the following security governance policies:
• Prevent ingress from port 22 to any Amazon EC2 instance
• Require billing and application tags for resources
• Encrypt all Amazon EBS volumes
A Solutions Architect wants to provide preventive and detective controls including notifications about a specific resource, if there are policy deviations.
Which solution should the Solutions Architect implement?

  • A. Create an AWS CodeCommit repository containing policy-compliant AWS CloudFormation templates. Create an AWS Service Catalog portfolio. Import the CloudFormation templates by attaching the CodeCommit repository to the portfolio. Restrict users across all accounts to items from the AWS Service Catalog portfolio. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule for deviations, and associate a CloudWatch alarm to send notifications when the TriggeredRules metric is greater than zero.
  • B. Use AWS Service Catalog to build a portfolio with products that are in compliance with the governance policies in a central account. Restrict users across all accounts to AWS Service Catalog products. Share the compliant portfolio to other accounts. Use AWS Config managed rules to detect deviations from the policies. Configure an Amazon CloudWatch Events rule to send a notification when a deviation occurs.
  • C. Implement policy-compliant AWS CloudFormation templates for each account and ensure that all provisioning is completed by CloudFormation. Configure Amazon Inspector to perform regular checks against resources. Perform policy validation and write the assessment output to Amazon CloudWatch Logs. Create a CloudWatch Logs metric filter to increment a metric when a deviation occurs. Configure a CloudWatch alarm to send notifications when the configured metric is greater than zero.
  • D. Restrict users and enforce least privilege access using AWS IAM. Consolidate all AWS CloudTrail logs into a single account. Send the CloudTrail logs to Amazon Elasticsearch Service (Amazon ES). Implement monitoring, alerting, and reporting using the Kibana dashboard in Amazon ES and with Amazon SNS.

Answer: B

NEW QUESTION 4
A company is running a large application on-premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache Cassandra for the database. The company wants to migrate the application to AWS to improve service reliability. The IT team also wants to reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make code changes to support the migration.
Which design is the LEAST complex to manage after the migration?

  • A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.
  • B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration.
  • C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.
  • D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.

Answer: C

NEW QUESTION 5
A company that is new to AWS reports it has exhausted its service limits across several accounts that are on the Basic Support plan. The company would like to prevent this from happening in the future.
What is the MOST efficient way of monitoring and managing all service limits in the company’s accounts?

  • A. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and provide notifications using Amazon SNS if the limits are close to exceeding the threshold.
  • B. Reach out to AWS Support to proactively increase the limits across all accounts. That way, the customer avoids creating and managing infrastructure just to raise the service limits.
  • C. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and programmatically increase the limits that are close to exceeding the threshold.
  • D. Use Amazon CloudWatch and AWS Lambda to periodically calculate the limits across all linked accounts using AWS Trusted Advisor, and use Amazon SNS for notifications if a limit is close to exceeding the threshold. Ensure that the accounts are using the AWS Business Support plan at a minimum.

Answer: D

Explanation:
https://github.com/awslabs/aws-limit-monitor https://aws.amazon.com/solutions/limit-monitor/
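The limit-monitor pattern linked above reduces to comparing current usage against each service limit and flagging anything that crosses a threshold. A hedged sketch of that core check (in the real solution the usage data comes from Trusted Advisor's service-limit checks; here the data shape and the 80% threshold are illustrative assumptions):

```python
def limits_near_exhaustion(usage: dict[str, tuple[int, int]],
                           threshold: float = 0.8) -> list[str]:
    """Return service names whose used/limit ratio meets the threshold.

    `usage` maps a service-limit name to a (used, limit) pair, a
    stand-in for what Trusted Advisor's limit checks report.
    """
    flagged = []
    for service, (used, limit) in usage.items():
        if limit > 0 and used / limit >= threshold:
            flagged.append(service)
    return sorted(flagged)

usage = {"ec2-on-demand": (19, 20), "vpc-per-region": (2, 5)}
print(limits_near_exhaustion(usage))  # ['ec2-on-demand']
```

A scheduled Lambda function running this comparison and publishing the flagged names to an SNS topic covers the notification half of the answer.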

NEW QUESTION 6
A company is running a high-user-volume media-sharing application on premises. It currently hosts about 400 TB of data with millions of video files. The company is migrating this application to AWS to improve reliability and reduce costs.
The Solutions Architecture team plans to store the videos in an Amazon S3 bucket and use Amazon CloudFront to distribute videos to users. The company needs to migrate this application to AWS within 10 days with the least amount of downtime possible. The company currently has 1 Gbps connectivity to the internet with 30 percent free capacity.
Which of the following solutions would enable the company to migrate the workload to AWS and meet all of the requirements?

  • A. Use multipart upload with an Amazon S3 client to parallel-upload the data to the Amazon S3 bucket over the internet. Use the throttling feature to ensure that the Amazon S3 client does not use more than 30 percent of available internet capacity.
  • B. Request an AWS Snowmobile with 1 PB capacity to be delivered to the data center. Load the data into the Snowmobile and send it back to have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.
  • C. Use an Amazon S3 client to transfer data from the data center to the Amazon S3 bucket over the internet. Use the throttling feature to ensure the Amazon S3 client does not use more than 30 percent of available internet capacity.
  • D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.

Answer: D

Explanation:
https://www.edureka.co/blog/aws-snowball-and-snowmobile-tutorial/
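The arithmetic behind ruling out the internet link is worth spelling out: pushing 400 TB over the free 30 percent of a 1 Gbps connection takes far longer than the 10-day window, even under best-case assumptions. A quick back-of-the-envelope check:

```python
def transfer_days(data_tb: float, link_gbps: float, free_fraction: float) -> float:
    """Days to push `data_tb` terabytes over the free share of a link,
    assuming perfect, sustained utilization (a best-case estimate)."""
    bits = data_tb * 1e12 * 8                   # decimal TB -> bits
    bits_per_s = link_gbps * 1e9 * free_fraction
    return bits / bits_per_s / 86_400           # seconds -> days

days = transfer_days(400, 1.0, 0.30)
print(round(days))  # roughly 123 days, an order of magnitude over the deadline
```

Since ~123 days vastly exceeds 10 days (and a single Snowmobile is oversized for 400 TB), multiple Snowball devices loaded in parallel are the only option that fits the window.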

NEW QUESTION 7
A company has implemented AWS Organizations. It has recently set up a number of new accounts and wants to deny access to a specific set of AWS services in these new accounts.
How can this be controlled MOST efficiently?

  • A. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM group, and add all IAM users to the group.
  • B. Create a service control policy that denies access to the services. Add all of the new accounts to a single organizational unit (OU), and apply the policy to that OU.
  • C. Create an IAM policy in each account that denies access to the services. Associate the policy with an IAM role, and instruct users to log in using their corporate credentials and assume the IAM role.
  • D. Create a service control policy that denies access to the services, and apply the policy to the root of the organization.

Answer: B
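A service control policy is an IAM-style JSON document attached to the OU. A minimal example denying all actions for a set of services (the service prefixes below are placeholders; substitute the actual set to block):

```python
import json

def deny_services_scp(service_prefixes: list[str]) -> dict:
    """Build a service control policy denying all actions for the
    given service prefixes (e.g. "redshift", "sqs")."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyBlockedServices",
            "Effect": "Deny",
            "Action": [f"{p}:*" for p in service_prefixes],
            "Resource": "*",
        }],
    }

scp = deny_services_scp(["redshift", "sqs"])
print(json.dumps(scp, indent=2))
```

Attaching this once to the OU covers every account placed in it, which is why the SCP approach scales better than per-account IAM policies.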

NEW QUESTION 8
A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times.
Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)

  • A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
  • B. Configure the target group health check to point at a simple HTML page instead of the product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
  • C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
  • D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
  • E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.

Answer: BE

NEW QUESTION 9
A company needs to run a software package that has a license that must be run on the same physical host for the duration of its use. The software package is only going to be used for 90 days. The company requires patching and restarting of all instances every 30 days.
How can these requirements be met using AWS?

  • A. Run a dedicated instance with auto-placement disabled.
  • B. Run the instance on a dedicated host with Host Affinity set to Host.
  • C. Run an On-Demand instance with a Reserved Instance to ensure consistent placement.
  • D. Run the instance on a licensed host with termination set for 90 days.

Answer: B

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-dedicated-hosts-work.html

NEW QUESTION 10
A company is currently running a production workload on AWS that is very I/O intensive. Its workload consists of a single tier with 10 c4.8xlarge instances, each with 2 TB gp2 volumes. The number of processing jobs has recently increased, and latency has increased as well. The team realizes that they are constrained on the IOPS. For the application to perform efficiently, they need to increase the IOPS by 3,000 for each of the instances.
Which of the following designs will meet the performance goal MOST cost effectively?

  • A. Change the type of Amazon EBS volume from gp2 to io1 and set provisioned IOPS to 9,000.
  • B. Increase the size of the gp2 volumes in each instance to 3 TB.
  • C. Create a new Amazon EFS file system and move all the data to this new file system. Mount this file system to all 10 instances.
  • D. Create a new Amazon S3 bucket and move all the data to this new bucket. Allow each instance to access this S3 bucket and use it for storage.

Answer: B
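The resize works because gp2 baseline performance scales at 3 IOPS per GiB, floored at 100 and capped at 16,000 IOPS. Treating the question's TB figures loosely as 1,000 GiB each (as the question does), a 2 TB volume gets ~6,000 IOPS and growing it to 3 TB adds exactly the required 3,000. The rule of thumb in code:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline: 3 IOPS per GiB, floored at 100, capped at 16,000."""
    return max(100, min(3 * size_gib, 16_000))

before = gp2_baseline_iops(2_000)    # the current 2 TB volume
after = gp2_baseline_iops(3_000)     # resized to 3 TB
print(before, after, after - before)  # 6000 9000 3000
```

Growing the volume is cheaper than switching to io1 at 9,000 provisioned IOPS, since io1 charges separately per provisioned IOPS on top of storage.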

NEW QUESTION 11
A company has an application behind a load balancer with enough Amazon EC2 instances to satisfy peak demand. Scripts and third-party deployment solutions are used to configure EC2 instances when demand increases or an instance fails. The team must periodically evaluate the utilization of the instance types to ensure that the correct sizes are deployed.
How can this workload be optimized to meet these requirements?

  • A. Use CloudFormer to create AWS CloudFormation stacks from the current resources. Deploy that stack by using AWS CloudFormation in the same region. Use Amazon CloudWatch alarms to send notifications about underutilized resources to provide cost-savings suggestions.
  • B. Create an Auto Scaling group to scale the instances, and use AWS CodeDeploy to perform the configuration. Change from a load balancer to an Application Load Balancer. Purchase a third-party product that provides suggestions for cost savings on AWS resources.
  • C. Deploy the application by using AWS Elastic Beanstalk with default options. Register for an AWS Support Developer plan. Review the instance usage for the application by using Amazon CloudWatch, and identify less expensive instances that can handle the load. Hold monthly meetings to review new instance types and determine whether Reserved Instances should be purchased.
  • D. Deploy the application as a Docker image by using Amazon ECS. Set up Amazon EC2 Auto Scaling and Amazon ECS scaling. Register for AWS Business Support and use Trusted Advisor checks to provide suggestions on cost savings.

Answer: D

NEW QUESTION 12
A Solutions Architect has been asked to look at a company’s Amazon Redshift cluster, which has quickly become an integral part of its technology and supports key business processes. The Solutions Architect is to increase the reliability and availability of the cluster and provide options to ensure that if an issue arises, the cluster can either continue operating or be restored within four hours.
Which of the following solution options BEST addresses the business need in the most cost-effective manner?

  • A. Ensure that the Amazon Redshift cluster has been set up to make use of Auto Scaling groups with the nodes in the cluster spread across multiple Availability Zones.
  • B. Ensure that the Amazon Redshift cluster creation has been templated using AWS CloudFormation so it can easily be launched in another Availability Zone and data populated from the automated Redshift backups stored in Amazon S3.
  • C. Use Amazon Kinesis Data Firehose to collect the data ahead of ingestion into Amazon Redshift and create clusters using AWS CloudFormation in another region and stream the data to both clusters.
  • D. Create two identical Amazon Redshift clusters in different regions (one as the primary, one as the secondary). Use Amazon S3 cross-region replication from the primary to the secondary region, which triggers an AWS Lambda function to populate the cluster in the secondary region.

Answer: B

Explanation:
https://aws.amazon.com/redshift/faqs/?nc1=h_ls
Q: What happens to my data warehouse cluster availability and data durability if my data warehouse cluster's Availability Zone (AZ) has an outage? If your Amazon Redshift data warehouse cluster's Availability Zone becomes unavailable, you will not be able to use your cluster until power and network access to the AZ are restored. Your data warehouse cluster's data is preserved so you can start using your Amazon Redshift data warehouse as soon as the AZ becomes available again. In addition, you can also choose to restore any existing snapshots to a new AZ in the same Region. Amazon Redshift will restore your most frequently accessed data first so you can resume queries as quickly as possible.

NEW QUESTION 13
A bank is re-architecting its mainframe-based credit card approval processing application to a cloud-native application on the AWS cloud.
The new application will receive up to 1,000 requests per second at peak load. There are multiple steps to each transaction, and each step must receive the result of the previous step. The entire request must return an authorization response within less than 2 seconds with zero data loss. Every request must receive a response. The solution must be Payment Card Industry Data Security Standard (PCI DSS)-compliant.
Which option will meet all of the bank’s objectives with the LEAST complexity and LOWEST cost while also meeting compliance requirements?

  • A. Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs multiple steps and returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application.
  • B. Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated Instances in a target group to process incoming requests. Use Auto Scaling to scale the cluster out/in based on average CPU utilization. Deploy a web service that processes all of the approval steps and returns a JSON object with the approval status.
  • C. Deploy the application on Amazon EC2 on Dedicated Instances. Use an Elastic Load Balancer in front of a farm of application servers in an Auto Scaling group to handle incoming requests. Scale out/in based on a custom Amazon CloudWatch metric for the number of inbound requests per second after measuring the capacity of a single instance.
  • D. Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with an Amazon SQS input queue. As each step completes, it writes its result to the next step’s queue. The final step returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent Lambdas to allow room for bursts of activity due to the new application.

Answer: B

NEW QUESTION 14
A company must deploy multiple independent instances of an application. The front-end application is internet accessible. However, corporate policy stipulates that the backends are to be isolated from each other and the internet, yet accessible from a centralized administration server. The application setup should be automated to minimize the opportunity for mistakes as new instances are deployed.
Which option meets the requirements and MINIMIZES costs?

  • A. Use an AWS CloudFormation template to create identical IAM roles for each region. Use AWS CloudFormation StackSets to deploy each application instance by using parameters to customize for each instance, and use security groups to isolate each instance while permitting access to the central server.
  • B. Create each instance of the application IAM roles and resources in separate accounts by using AWS CloudFormation StackSets. Include a VPN connection to the VPN gateway of the central administration server.
  • C. Duplicate the application IAM roles and resources in separate accounts by using a single CloudFormation template. Include VPC peering to connect the VPC of each application instance to a central VPC.
  • D. Use the parameters of the AWS CloudFormation template to customize the deployment into separate accounts. Include a NAT gateway to allow communication back to the central administration server.

Answer: A

NEW QUESTION 15
An online retailer needs to regularly process large product catalogs, which are handled in batches. These are sent out to be processed by people using the Amazon Mechanical Turk service, but the retailer has asked its Solutions Architect to design a workflow orchestration system that allows it to handle multiple concurrent Mechanical Turk operations, deal with the result assessment process, and reprocess failures.
Which of the following options gives the retailer the ability to interrogate the state of every workflow with the LEAST amount of implementation effort?

  • A. Trigger Amazon CloudWatch alarms based upon message visibility in multiple Amazon SQS queues (one queue per workflow stage) and send messages via Amazon SNS to trigger AWS Lambda functions to process the next step. Use Amazon ES and Kibana to visualize Lambda processing logs to see the workflow states.
  • B. Hold workflow information in an Amazon RDS instance with AWS Lambda functions polling RDS for status changes. Worker Lambda functions then process the next workflow step. Amazon QuickSight will visualize workflow states directly out of Amazon RDS.
  • C. Build the workflow in AWS Step Functions, using it to orchestrate multiple concurrent workflows. The status of each workflow can be visualized in the AWS Management Console, and historical data can be written to Amazon S3 and visualized using Amazon QuickSight.
  • D. Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through Mechanical Turk. Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states.

Answer: C

Explanation:
AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Instead of writing a Decider program, you define state machines in JSON. AWS customers should consider using Step Functions for new applications. If Step Functions does not fit your needs, then you should consider Amazon Simple Workflow (SWF). Amazon SWF provides you complete control over your orchestration logic, but increases the complexity of developing applications. You may write decider programs in the programming language of your choice, or you may use the Flow framework to use programming constructs that structure asynchronous interactions for you. AWS will continue to provide the Amazon SWF service, Flow framework, and support all Amazon SWF customers. https://aws.amazon.com/swf/faqs/
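A Step Functions state machine is declared in Amazon States Language, a JSON document. A toy two-state definition showing the Retry/Catch hooks that make failure reprocessing visible (the state names and Lambda ARN below are invented for illustration, not part of the question):

```python
import json

# Hypothetical worker function ARN.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:ProcessBatch"

def catalog_workflow_definition() -> dict:
    """Amazon States Language for a minimal submit-then-assess workflow."""
    return {
        "Comment": "Process one catalog batch through Mechanical Turk",
        "StartAt": "SubmitBatch",
        "States": {
            "SubmitBatch": {
                "Type": "Task",
                "Resource": LAMBDA_ARN,
                # Transient task failures are retried automatically.
                "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                           "MaxAttempts": 2}],
                "Next": "AssessResults",
            },
            "AssessResults": {
                "Type": "Task",
                "Resource": LAMBDA_ARN,
                # Unhandled errors loop back so the batch is reprocessed.
                "Catch": [{"ErrorEquals": ["States.ALL"],
                           "Next": "SubmitBatch"}],
                "End": True,
            },
        },
    }

print(json.dumps(catalog_workflow_definition())[:60])
```

Each execution of a definition like this is individually inspectable in the console, which is what gives the retailer per-workflow state visibility with minimal implementation effort.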

NEW QUESTION 16
A retail company has a custom .NET web application running on AWS that uses Microsoft SQL Server for the database. The application servers maintain a user's session locally.
Which combination of architecture changes is needed to ensure all tiers of the solution are highly available? (Select THREE.)

  • A. Refactor the application to store the user's session in Amazon ElastiCache. Use Application Load Balancers to distribute the load between application instances.
  • B. Set up the database to generate hourly snapshots using Amazon EBS. Configure an Amazon CloudWatch Events rule to launch a new database instance if the primary one fails.
  • C. Migrate the database to Amazon RDS for SQL Server. Configure the RDS instance to use a Multi-AZ deployment.
  • D. Move the .NET content to an Amazon S3 bucket. Configure the bucket for static website hosting.
  • E. Put the application instances in an Auto Scaling group. Configure the Auto Scaling group to create new instances if an instance becomes unhealthy.
  • F. Deploy Amazon CloudFront in front of the application tier. Configure CloudFront to serve content from healthy application instances only.

Answer: ACE

NEW QUESTION 17
A company has a website that enables users to upload videos. Company policy states the uploaded videos must be analyzed for restricted content. An uploaded video is placed in Amazon S3, and a message is pushed to an Amazon SQS queue with the video's location. A backend application pulls this location from Amazon SQS and analyzes the video.
The video analysis is compute-intensive and occurs sporadically during the day. The website scales with demand. The video analysis application runs on a fixed number of instances. Peak demand occurs during the holidays, so the company must add instances to the application during this time. All instances used are currently On-Demand Amazon EC2 T2 instances. The company wants to reduce the cost of the current solution.
Which of the following solutions is MOST cost-effective?

  • A. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Spot Instances to cover them while using Reserved Instances to cover peak demand. Use Amazon EC2 R4 and Amazon EC2 R5 Reserved Instances in an Auto Scaling group for the video analysis application.
  • B. Keep the website on T2 instances. Determine the minimum number of website instances required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of Amazon EC2 C4 and Amazon EC2 C5 Spot Instances.
  • C. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 C4 instances. Determine the minimum number of website instances required during off-peak times and use On-Demand Instances to cover them while using Spot capacity to cover peak demand. Use Spot Fleet for the video analysis application comprised of C4 and Amazon EC2 C5 instances.
  • D. Migrate the website to AWS Elastic Beanstalk and Amazon EC2 R4 instances. Determine the minimum number of website instances required during off-peak times and use Reserved Instances to cover them while using On-Demand Instances to cover peak demand. Use Spot Fleet for the video analysis application comprised of R4 and Amazon EC2 R5 instances.

Answer: B
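The intuition behind the chosen answer is pure arithmetic: Reserved Instances are cheapest for the always-on baseline, On-Demand covers the short peak, and Spot (at a deep discount, acceptable because queued video analysis tolerates interruption) suits the analysis fleet. With entirely made-up hourly prices (these are NOT current AWS rates), the comparison looks like:

```python
# Hypothetical hourly prices for one instance type, for illustration only.
ON_DEMAND = 0.10
RESERVED = 0.06   # assumes a roughly 40% discount for a 1-year commitment
SPOT = 0.03       # spot capacity often runs at a steep discount

def monthly_cost(baseline: int, peak_extra: int, peak_hours: int) -> float:
    """Reserved Instances for the 24/7 baseline fleet, On-Demand only
    for the extra instances during peak hours."""
    hours = 730  # approximate hours per month
    return baseline * RESERVED * hours + peak_extra * ON_DEMAND * peak_hours

mixed = monthly_cost(4, 6, 100)                      # RI baseline + OD peak
naive = 4 * ON_DEMAND * 730 + 6 * ON_DEMAND * 100    # everything On-Demand
print(mixed < naive)  # True: RIs beat On-Demand for steady load
```

The same logic favors Spot Fleet for the analysis workers, and compute-optimized C4/C5 instances fit the compute-intensive analysis better than burstable T2s.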

NEW QUESTION 18
A Solutions Architect is designing a network solution for a company that has applications running in a data center in Northern Virginia. The applications in the company’s data center require predictable performance to applications running in a virtual private cloud (VPC) located in us-east-1, and a secondary VPC in us-west-2 within the same account. The company data center is collocated in an AWS Direct Connect facility that serves the us-east-1 region. The company has already ordered an AWS Direct Connect connection and a cross-connect has been established.
Which solution will meet the requirements at the LOWEST cost?

  • A. Provision a Direct Connect gateway and attach the virtual private gateway (VGW) for the VPC in us-east-1 and the VGW for the VPC in us-west-2. Create a private VIF on the Direct Connect connection and associate it to the Direct Connect gateway.
  • B. Create private VIFs on the Direct Connect connection for each of the company’s VPCs in the us-east-1 and us-west-2 regions. Configure the company’s data center router to connect directly with the VPCs in those regions via the private VIFs.
  • C. Deploy a transit VPC solution using Amazon EC2-based router instances in the us-east-1 region. Establish IPsec VPN tunnels between the transit routers and virtual private gateways (VGWs) located in the us-east-1 and us-west-2 regions, which are attached to the company’s VPCs in those regions. Create a public VIF on the Direct Connect connection and establish IPsec VPN tunnels over the public VIF between the transit routers and the company’s data center router.
  • D. Order a second Direct Connect connection to a Direct Connect facility with connectivity to the us-west-2 region. Work with a partner to establish a network extension link over dark fiber from the Direct Connect facility to the company’s data center. Establish private VIFs on the Direct Connect connections for each of the company’s VPCs in the respective regions. Configure the company’s data center router to connect directly with the VPCs in those regions via the private VIFs.

Answer: A

Explanation:
https://aws.amazon.com/blogs/aws/new-aws-direct-connect-gateway-inter-region-vpc-access/

NEW QUESTION 19
A company is migrating its on-premises build artifact server to an AWS solution. The current system consists of an Apache HTTP server that serves artifacts to clients on the local network, restricted by the perimeter firewall. The artifact consumers are largely build automation scripts that download artifacts via anonymous HTTP, which the company will be unable to modify within its migration timetable.
The company decides to move the solution to Amazon S3 static website hosting. The artifact consumers will be migrated to Amazon EC2 instances located within both public and private subnets in a virtual private cloud (VPC).
Which solution will permit the artifact consumers to download artifacts without modifying the existing automation scripts?

  • A. Create a NAT gateway within a public subnet of the VPC. Add a default route pointing to the NAT gateway into the route table associated with the subnets containing consumers. Configure the bucket policy to allow the s3:ListBucket and s3:GetObject actions using the condition IpAddress and the condition key aws:SourceIp matching the elastic IP address of the NAT gateway.
  • B. Create a VPC endpoint and add it to the route table associated with subnets containing consumers. Configure the bucket policy to allow s3:ListBucket and s3:GetObject actions using the condition StringEquals and the condition key aws:sourceVpce matching the identification of the VPC endpoint.
  • C. Create an IAM role and instance profile for Amazon EC2 and attach it to the instances that consume build artifacts. Configure the bucket policy to allow the s3:ListBucket and s3:GetObject actions for the principal matching the IAM role created.
  • D. Create a VPC endpoint and add it to the route table associated with subnets containing consumers. Configure the bucket policy to allow s3:ListBucket and s3:GetObject actions using the condition IpAddress and the condition key aws:SourceIp matching the VPC CIDR block.

Answer: B
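The correct choice pairs a gateway VPC endpoint with a bucket policy keyed on the aws:sourceVpce condition, so anonymous requests are scoped by network path rather than by credentials. A minimal sketch of such a policy, expressed as a Python dict mirroring the JSON policy document (the bucket name and endpoint ID are hypothetical placeholders, not values from the question):

```python
import json

# Hypothetical placeholders, not values from the question.
BUCKET = "build-artifacts"
VPC_ENDPOINT_ID = "vpce-0123456789abcdef0"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAnonymousReadViaVpcEndpoint",
            "Effect": "Allow",
            # Consumers download anonymously, so the principal is "*";
            # the endpoint condition, not credentials, scopes access.
            "Principal": "*",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",    # s3:ListBucket applies to the bucket
                f"arn:aws:s3:::{BUCKET}/*",  # s3:GetObject applies to the objects
            ],
            # Only requests arriving through this gateway endpoint match.
            "Condition": {"StringEquals": {"aws:sourceVpce": VPC_ENDPOINT_ID}},
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that s3:ListBucket must target the bucket ARN while s3:GetObject targets the object ARNs; putting both actions on a single ARN is a common policy mistake.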

NEW QUESTION 20
A company is designing a new highly available web application on AWS. The application requires consistent and reliable connectivity from the application servers in AWS to a backend REST API hosted in the company’s on-premises environment. The backend connection between AWS and on-premises will be routed over an AWS Direct Connect connection through a private virtual interface. Amazon Route 53 will be used to manage private DNS records for the application to resolve the IP address on the backend REST API.
Which design would provide a reliable connection to the backend API?

  • A. Implement at least two backend endpoints for the backend REST API, and use Route 53 health checks to monitor the availability of each backend endpoint and perform DNS-level failover.
  • B. Install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first Direct Connect connection.
  • C. Install a second cross connect for the same Direct Connect connection from the same network carrier, and join both connections to the same link aggregation group (LAG) on the same private virtual interface.
  • D. Create an IPSec VPN connection routed over the public internet from the on-premises data center to AWS and attach it to the same virtual private gateway as the Direct Connect connection.

Answer: A
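DNS-level failover of the kind described in the correct choice is typically built from a PRIMARY/SECONDARY failover record pair, each tied to a health check. A minimal sketch of the record sets (the record name, IP addresses, and health check IDs are hypothetical placeholders):

```python
# Sketch of a Route 53 failover record pair for the backend REST API.
# All names, IPs, and IDs below are hypothetical placeholders.
def failover_record(name, ip, role, health_check_id):
    """Build one resource record set for PRIMARY/SECONDARY failover routing."""
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"backend-{role.lower()}",
        "Failover": role,                  # "PRIMARY" or "SECONDARY"
        "TTL": 60,                         # short TTL so failover propagates quickly
        "ResourceRecords": [{"Value": ip}],
        "HealthCheckId": health_check_id,  # health check monitoring this endpoint
    }

records = [
    failover_record("api.internal.example.com", "10.0.1.10", "PRIMARY", "hc-primary"),
    failover_record("api.internal.example.com", "10.0.2.10", "SECONDARY", "hc-secondary"),
]

# This batch could then be passed to Route 53's change_resource_record_sets API.
change_batch = {
    "Changes": [{"Action": "UPSERT", "ResourceRecordSet": r} for r in records]
}
```

One caveat: standard Route 53 health checkers probe from the public internet, so endpoints reachable only over Direct Connect generally need CloudWatch-alarm-backed (calculated) health checks instead of direct endpoint checks.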

NEW QUESTION 21
A company has been using a third-party provider for its content delivery network and recently decided to switch to Amazon CloudFront. The development team wants to maximize performance for the global user base. The company uses a content management system (CMS) that serves both static and dynamic content. The CMS is behind an Application Load Balancer (ALB), which is set as the default origin for the distribution. Static assets are served from an Amazon S3 bucket. The origin access identity (OAI) was created properly and the S3 bucket policy has been updated to allow the GetObject action from the OAI, but static assets are receiving a 404 error.
Which combination of steps should the Solutions Architect take to fix the error? (Select TWO.)

  • A. Add another origin to the CloudFront distribution for the static assets
  • B. Add a path based rule to the ALB to forward requests for the static assets
  • C. Add an RTMP distribution to allow caching of both static and dynamic content
  • D. Add a behavior to the CloudFront distribution for the path pattern and the origin of the static assets
  • E. Add a host header condition to the ALB listener and forward the header from CloudFront to add traffic to the allow list

Answer: AD
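The two correct steps work together: the distribution needs a second origin pointing at the S3 bucket, plus a cache behavior whose path pattern routes static-asset requests to that origin instead of the default ALB origin. A sketch of the relevant fragment of a distribution configuration (the bucket and ALB domain names, the OAI ID, and the "/static/*" pattern are hypothetical placeholders):

```python
# Sketch of the CloudFront changes: add an S3 origin and a cache behavior
# routing a path pattern to it. All names and IDs are hypothetical.
s3_origin = {
    "Id": "static-assets-s3",
    "DomainName": "assets-bucket.s3.amazonaws.com",
    "S3OriginConfig": {
        # The OAI lets CloudFront read the private bucket.
        "OriginAccessIdentity": "origin-access-identity/cloudfront/E2EXAMPLE"
    },
}

static_behavior = {
    "PathPattern": "/static/*",          # requests matching this go to S3
    "TargetOriginId": "static-assets-s3",
    "ViewerProtocolPolicy": "redirect-to-https",
}

distribution_update = {
    "Origins": [
        {"Id": "cms-alb", "DomainName": "cms-alb.example.com"},  # existing origin
        s3_origin,                                               # new origin (step A)
    ],
    "DefaultCacheBehavior": {"TargetOriginId": "cms-alb"},  # ALB stays the default
    "CacheBehaviors": [static_behavior],                    # new behavior (step D)
}
```

Without the behavior, every request, including those for static assets, falls through to the default ALB origin, which is why the S3-hosted assets return 404.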

NEW QUESTION 22
A company is migrating to the cloud. It wants to evaluate the configurations of virtual machines in its existing data center environment to ensure that it can size new Amazon EC2 instances accurately. The company wants to collect metrics, such as CPU, memory, and disk utilization, and it needs an inventory of what processes are running on each instance. The company would also like to monitor network connections to map communications between servers.
Which would enable the collection of this data MOST cost effectively?

  • A. Use AWS Application Discovery Service and deploy the data collection agent to each virtual machine in the data center.
  • B. Configure the Amazon CloudWatch agent on all servers within the local environment and publish metrics to Amazon CloudWatch Logs.
  • C. Use AWS Application Discovery Service and enable agentless discovery in the existing virtualization environment.
  • D. Enable AWS Application Discovery Service in the AWS Management Console and configure the corporate firewall to allow scans over a VPN.

Answer: A

NEW QUESTION 23
A company uses Amazon S3 to store documents that may only be accessible to an Amazon EC2 instance in a certain virtual private cloud (VPC). The company fears that a malicious insider with access to this instance could also set up an EC2 instance in another VPC to access these documents.
Which of the following solutions will provide the required protection?

  • A. Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint.
  • B. Use EC2 instance profiles and an S3 bucket policy to limit access to the role attached to the instance profile.
  • C. Use S3 client-side encryption and store the key in the instance metadata.
  • D. Use S3 server-side encryption and protect the key with an encryption context.

Answer: A

Explanation:
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
Endpoint connections cannot be extended out of a VPC. Resources on the other side of a VPN connection, VPC peering connection, AWS Direct Connect connection, or ClassicLink connection in your VPC cannot use the endpoint to communicate with resources in the endpoint service.
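Because the endpoint cannot be reached from outside its VPC, a bucket policy that denies every request not arriving through that specific endpoint shuts out EC2 instances in any other VPC. A minimal sketch of such a deny policy (the bucket name and endpoint ID are hypothetical placeholders):

```python
# Sketch of a bucket policy enforcing "this VPC endpoint only".
# Bucket name and endpoint ID are hypothetical placeholders.
deny_outside_endpoint = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::secret-docs",
                "arn:aws:s3:::secret-docs/*",
            ],
            # StringNotEquals inverts the check: any request NOT coming
            # through the named endpoint is denied, including requests from
            # EC2 instances the insider launches in another VPC.
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0a1b2c3d4e5f67890"}
            },
        }
    ],
}
```

An explicit Deny wins over any Allow, so even credentials copied from the legitimate instance cannot reach the bucket from outside the endpoint's VPC.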

NEW QUESTION 24
A company has a standard three-tier architecture using two Availability Zones. During the company’s off season, users report that the website is not working. The Solutions Architect finds that no changes have been made to the environment recently, the website is reachable, and it is possible to log in. However, when the Solutions Architect selects the “find a store near you” function, the maps provided on the site by a third-party RESTful API call do not work about 50% of the time after refreshing the page. The outbound API calls are made through Amazon EC2 NAT instances.
What is the MOST likely reason for this failure and how can it be mitigated in the future?

  • A. The network ACL for one subnet is blocking outbound web traffic. Open the network ACL and prevent administrators from making future changes through IAM.
  • B. The fault is in the third-party environment. Contact the third party that provides the maps and request a fix that will provide better uptime.
  • C. One NAT instance has become overloaded. Replace both EC2 NAT instances with a larger-sized instance and make sure to account for growth when making the new instance size.
  • D. One of the NAT instances failed. Recommend replacing the EC2 NAT instances with a NAT gateway.

Answer: D

Explanation:
A roughly 50% failure rate indicates that requests balanced evenly across the two Availability Zones fail whenever they are routed through the one AZ whose NAT instance has failed. The fix is to replace the EC2 NAT instances with the fully managed, highly available NAT gateway.
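The 50% figure follows directly from the topology: each private subnet's route table points at its own AZ's NAT instance, so when one of two NAT instances dies, the half of the traffic landing in that AZ loses outbound connectivity. A toy simulation of this reasoning (the two-AZ layout and even balancing are assumptions consistent with the question, not stated values):

```python
import random

# Assume two AZs, each routing outbound traffic through its own NAT
# instance, and the NAT instance in AZ "b" has failed.
nat_healthy = {"a": True, "b": False}

def api_call(az):
    """An outbound REST call succeeds only if that AZ's NAT instance is up."""
    return nat_healthy[az]

random.seed(0)
trials = 10_000
# Each page refresh lands on a server in a random AZ (even balancing).
successes = sum(api_call(random.choice("ab")) for _ in range(trials))
failure_rate = 1 - successes / trials
print(f"observed failure rate: {failure_rate:.1%}")  # close to 50%
```

A NAT gateway avoids this failure mode because it is a managed, redundant service within each AZ rather than a single EC2 instance.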

NEW QUESTION 25
......

100% Valid and Newest Version SAP-C01 Questions & Answers shared by Passcertsure, Get Full Dumps HERE: https://www.passcertsure.com/SAP-C01-test/ (New 179 Q&As)