AWS-Certified-Big-Data-Specialty Cram 2021

We provide real AWS-Certified-Big-Data-Specialty exam questions and answers in two formats: downloadable PDF and practice tests. The AWS-Certified-Big-Data-Specialty PDF is suitable for reading and printing, so you can print it and practice as many times as you like. With the help of our Amazon AWS-Certified-Big-Data-Specialty PDF and VCE material, you can pass the AWS-Certified-Big-Data-Specialty exam quickly and easily.

Free demo questions for Amazon AWS-Certified-Big-Data-Specialty Exam Dumps Below:

NEW QUESTION 1
A user is planning to set up infrastructure on AWS for the Christmas sales. The user is planning to use
Auto Scaling based on a schedule for proactive scaling. What advice would you give the user?

  • A. It is good to schedule now because if the user forgets later on it will not scale up
  • B. The scaling should be setup only one week before Christmas
  • C. Wait till end of November before scheduling the activity
  • D. It is not advisable to use schedule-based scaling

Answer: C
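For reference, schedule-based scaling of this kind is configured as a scheduled action on the Auto Scaling group. The sketch below only builds the request parameters (the group name and sizes are hypothetical), so it runs without AWS credentials; the actual boto3 call is shown commented out.

```python
from datetime import datetime, timezone

# Hypothetical group name and sizes; scheduled close to the event
# (late November), in line with the answer above.
scheduled_action = {
    "AutoScalingGroupName": "holiday-web-asg",
    "ScheduledActionName": "christmas-scale-out",
    "StartTime": datetime(2021, 11, 30, 0, 0, tzinfo=timezone.utc),
    "MinSize": 4,
    "MaxSize": 20,
    "DesiredCapacity": 10,
}

# import boto3
# boto3.client("autoscaling").put_scheduled_update_group_action(**scheduled_action)
print(sorted(scheduled_action))
```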

NEW QUESTION 2
The Trusted Advisor service provides insight regarding which four categories of an AWS account?

  • A. Security, fault tolerance, high availability, and connectivity
  • B. Security, access control, high availability, and performance
  • C. Performance, cost optimization, security, and fault tolerance
  • D. Performance, cost optimization, access control, and connectivity

Answer: C

NEW QUESTION 3
An Amazon EMR cluster using EMRFS has access to megabytes of data on Amazon S3, originating
from multiple unique data sources. The customer needs to query common fields across some of the data sets, perform interactive joins, and then display the results quickly.
Which technology is most appropriate to enable this capability?

  • A. Presto
  • B. MicroStrategy
  • C. Pig
  • D. R Studio

Answer: A

NEW QUESTION 4
What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Amazon EBS-backed instances can be stopped and restarted
  • B. Instance-store backed instances can be stopped and restarted
  • C. Auto scaling requires using Amazon EBS-backed instances
  • D. Virtual Private Cloud requires EBS backed instances

Answer: A

NEW QUESTION 5
An online retailer is using Amazon DynamoDB to store data related to customer transactions. The
items in the table contain several string attributes describing the transaction as well as a JSON attribute containing the shopping cart and other details corresponding to the transaction. Average item size is ~250KB, most of which is associated with the JSON attribute. The average customer generates
~3GB of data per month.
Customers access the table to display their transaction history and review transaction details as needed. Ninety percent of queries against the table are executed when building the transaction history view, with the other 10% retrieving transaction details. The table is partitioned on CustomerID and sorted on transaction date.
The client has very high read capacity provisioned for the table and experiences very even utilization, but complains about the cost of Amazon DynamoDB compared to other NoSQL solutions.
Which strategy will reduce the cost associated with the client’s read queries while not degrading quality?

  • A. Modify all database calls to use eventually consistent reads and advise customers that transaction history may be one second out-of-date.
  • B. Change the primary table to partition on TransactionID, create a GSI partitioned on customer and sorted on date, project small attributes into GSI and then query GSI for summary data and the primary table for JSON details.
  • C. Vertically partition the table: store base attributes in the primary table and create a foreign key reference to a secondary table containing the JSON data.
  • D. Query the primary table for summary data and the secondary table for JSON details.
  • E. Create an LSI sorted on date, project the JSON attribute into the index, and then query the primary table for summary data and the LSI for JSON details

Answer: C
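The trade-off behind the splitting strategies in options B through E comes down to read-capacity arithmetic: 90% of queries build the history view and do not need the ~250KB JSON attribute. A rough sketch of the math (the ~1KB summary size is an assumption):

```python
import math

def read_capacity_units(item_size_kb, eventually_consistent=False):
    """RCUs consumed by one read: 1 RCU covers 4 KB strongly
    consistent, or 8 KB eventually consistent."""
    units = math.ceil(item_size_kb / 4)
    return units / 2 if eventually_consistent else units

full_item = read_capacity_units(250)  # history view reading the whole ~250 KB item
summary   = read_capacity_units(1)    # reading a ~1 KB projected summary (assumed size)
print(full_item, summary)  # prints 63 1
```

So each history-view read of the full item is roughly 60x more expensive than a read that touches only small projected attributes, which is why keeping the JSON blob out of the hot read path cuts cost without degrading the experience.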

NEW QUESTION 6
A company generates a large number of files each month and needs to use AWS Import/Export to
move these files into Amazon S3 storage. To satisfy the auditors, the company needs to keep a record of which files were imported into Amazon S3.
What is a low-cost way to create a unique log for each import job?

  • A. Use the same log file prefix in the import/export manifest files to create a versioned log file in Amazon S3 for all imports
  • B. Use the log file prefix in the import/export manifest file to create a unique log file in Amazon S3 for each import
  • C. Use the log file checksum in the import/export manifest file to create a log file in Amazon S3 for each import
  • D. Use script to iterate over files in Amazon S3 to generate a log after each import/export job

Answer: B

NEW QUESTION 7
An enterprise customer is migrating to Redshift and is considering using dense storage nodes in its
Redshift cluster. The customer wants to migrate 50 TB of data. The customer’s query patterns involve performing many joins with thousands of rows. The customer needs to know how many nodes are needed in its target Redshift cluster. The customer has a limited budget and needs to avoid performing tests unless absolutely needed. Which approach should this customer use?

  • A. Start with many small nodes
  • B. Start with fewer large nodes
  • C. Have two separate clusters with a mix of small and large nodes
  • D. Insist on performing multiple tests to determine the optimal configuration

Answer: B
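For orientation, cluster sizing for a known data volume is largely arithmetic, which is why testing is not strictly required here. The per-node storage figures below are assumptions based on published dense storage (DS2) specs; verify current values before sizing a real cluster.

```python
import math

# Approximate usable storage per dense-storage node type (assumed
# figures; verify against current AWS documentation).
NODE_STORAGE_TB = {"ds2.xlarge": 2, "ds2.8xlarge": 16}

def nodes_needed(data_tb, node_type, headroom=0.3):
    """Node count with ~30% free space for sorts, vacuums, and growth.
    Multi-node Redshift clusters need at least 2 nodes."""
    usable = NODE_STORAGE_TB[node_type] * (1 - headroom)
    return max(2, math.ceil(data_tb / usable))

print(nodes_needed(50, "ds2.8xlarge"), nodes_needed(50, "ds2.xlarge"))
```

Fewer large nodes hold the 50 TB with far fewer network hops during the customer's join-heavy queries than a fleet of small nodes would.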

NEW QUESTION 8
An organization needs a data store to handle the following data types and access patterns:
• Faceting
• Search
• Flexible schema (JSON) and fixed schema
• Noise word elimination
Which data store should the organization choose?

  • A. Amazon Relational Database Service (RDS)
  • B. Amazon Redshift
  • C. Amazon DynamoDB
  • D. Amazon Elasticsearch Service

Answer: D

NEW QUESTION 9
When an EC2 instance that is backed by an S3-based AMI is terminated, what happens to the data on the root volume?

  • A. Data is unavailable until the instance is restarted
  • B. Data is automatically deleted
  • C. Data is automatically saved as an EBS snapshot
  • D. Data is automatically saved as an EBS volume

Answer: B

NEW QUESTION 10
A media advertising company handles a large number of real-time messages sourced from over 200
websites. Processing latency must be kept low. Based on calculations, a 60-shard Amazon Kinesis stream is more than sufficient to handle the maximum data throughput, even with traffic spikes. The company also uses an Amazon Kinesis Client Library (KCL) application running on Amazon Elastic Compute Cloud (EC2) instances managed by an Auto Scaling group. Amazon CloudWatch indicates an average of 25% CPU and a modest level of network traffic across all running servers.
The company reports a 150% to 200% increase in latency of processing messages from Amazon Kinesis during peak times. There are NO reports of delay from the sites publishing to Amazon Kinesis. What is the appropriate solution to address the latency?

  • A. Increase the number of shards in the Amazon Kinesis stream to 80 for greater concurrency
  • B. Increase the size of the Amazon EC2 instances to increase network throughput
  • C. Increase the minimum number of instances in the Auto Scaling group
  • D. Increase Amazon DynamoDB throughput on the checkpointing table

Answer: C
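Whichever remedy is chosen, it helps to quantify what the stream itself can do, since the stem says shard capacity is already more than sufficient. The per-shard figures below are the long-published Kinesis limits (1 MB/s or 1,000 records/s in, 2 MB/s out per shard):

```python
# Back-of-envelope aggregate capacity from the per-shard Kinesis limits.
def stream_capacity(shards):
    return {
        "ingress_mb_s": shards * 1,   # 1 MB/s write per shard
        "egress_mb_s": shards * 2,    # 2 MB/s read per shard
        "records_s": shards * 1000,   # 1,000 records/s write per shard
    }

for n in (60, 80):
    print(n, stream_capacity(n))
```

With shard capacity ample and CPU low, peak-time latency points at the consumer fleet (or its checkpoint table) rather than the stream, which is why adding shards would not help here.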

NEW QUESTION 11
A company has reproducible data that it wants to store on Amazon Web Services. The company may want to retrieve the data on a frequent basis. Which AWS storage option allows the customer to optimize storage costs while still achieving high availability for the data?

  • A. Amazon S3 Reduced Redundancy Storage
  • B. Amazon EBS Magnetic Volume
  • C. Amazon Glacier
  • D. Amazon S3 Standard Storage

Answer: A

NEW QUESTION 12
A data engineer is running a data warehouse (DWH) for a SaaS service on a 25-node Amazon Redshift cluster. The data engineer
needs to build a dashboard that will be used by customers. Five big customers represent 80% of usage, and there is a long tail of dozens of smaller customers. The data engineer has selected the dashboarding tool.
How should the data engineer make sure that the larger customer workloads do NOT interfere with the smaller customer workloads?

  • A. Apply query filters based on customer ID that can NOT be changed by the user, and apply distribution keys on customer ID
  • B. Place the largest customers into a single user group with a dedicated query queue and place the rest of the customers into a different query queue
  • C. Push aggregations into an RDS for Aurora instance.
  • D. Connect the dashboard application to Aurora rather than Redshift for faster queries
  • E. Route the largest customers to a dedicated Redshift cluster, and raise the concurrency of the multi-tenant Redshift cluster to accommodate the remaining customers

Answer: D
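The queue separation described in option B would be expressed through Redshift workload management (WLM). A minimal sketch of a wlm_json_configuration parameter value, with a hypothetical user group name:

```python
import json

# Two WLM queues: one reserved for the big-customer user group, one
# default queue for the long tail. Group name and concurrency numbers
# are assumptions for illustration.
wlm_config = [
    {"user_group": ["large_customers"], "query_concurrency": 5},
    {"query_concurrency": 10},  # default queue for everyone else
]
print(json.dumps(wlm_config))
```

Queries from users in the large_customers group land in the first queue, so a burst of heavy dashboard queries from a big tenant cannot starve the smaller tenants' queue.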

NEW QUESTION 13
An organization needs to design and deploy a large-scale data storage solution that will be highly durable and highly flexible with respect to the type and structure of data being stored. The data to be stored will be sent or generated from a variety of sources and must be persistently available for access and processing by multiple applications.
What is the most cost-effective technique to meet these requirements?

  • A. Use Amazon Simple Storage Service (S3) as the actual data storage system, coupled with appropriate tools for ingestion/acquisition of data and for subsequent processing and querying.
  • B. Deploy a long-running Amazon Elastic MapReduce (EMR) cluster with Amazon Elastic Block Store (EBS) volumes for persistent HDFS storage and appropriate Hadoop ecosystem tools for processing and querying.
  • C. Use Amazon Redshift with data replication to Amazon Simple Storage Service (S3) for comprehensive durable data storage, processing and querying.
  • D. Launch an Amazon Relational Database Service (RDS), and use the enterprise grade and capacity of the Amazon Aurora Engine for storage processing and querying.

Answer: A

NEW QUESTION 14
You have been asked to use your department’s existing continuous integration (CI) tool to test a
three-tier web architecture defined in an AWS CloudFormation template. The tool already supports AWS APIs and can launch new AWS CloudFormation stacks after polling version control. The CI tool reports on the success of the AWS CloudFormation stack creation by using the DescribeStacks API to look for the CREATE_COMPLETE status.
The architecture tiers defined in the template consist of:
• One load balancer
• Five Amazon EC2 instances running the web application
• One multi-AZ Amazon RDS instance
How would you implement this? Choose 2 answers

  • A. Define a WaitCondition and a WaitConditionHandle for the output of a UserData command that does sanity checking of the application’s post-install state
  • B. Define a CustomResource and write a script that runs architecture-level integration tests through the load balancer to the application and database for the state of multiple tiers
  • C. Define a WaitCondition and use a WaitConditionHandle that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned
  • D. Define a CustomResource that leverages the AWS SDK to run the DescribeStacks API call until the CREATE_COMPLETE status is returned
  • E. Define a UserDataHandle for the output of a UserData command that does sanity checking of the application’s post-install state and runs integration tests on the state of multiple tiers through load balancer to the application
  • F. Define a UserDataHandle for the output of a CustomResource that does sanity checking of the application’s post-install state

Answer: AF
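The WaitCondition/WaitConditionHandle pairing in answer A looks roughly like the template fragment below (resource names, count, and timeout are hypothetical). The handle's presigned URL is passed into UserData, and each instance signals it after its post-install sanity checks pass:

```python
# Template resources expressed as a Python dict for a runnable sketch;
# in a real template this would be YAML/JSON under "Resources".
template_fragment = {
    "AppWaitHandle": {"Type": "AWS::CloudFormation::WaitConditionHandle"},
    "AppWaitCondition": {
        "Type": "AWS::CloudFormation::WaitCondition",
        "DependsOn": "WebServerGroup",  # assumed name of the EC2 resource/group
        "Properties": {
            "Handle": {"Ref": "AppWaitHandle"},
            "Timeout": "600",  # seconds to wait for success signals
            "Count": "5",      # one signal per web instance
        },
    },
}
print(list(template_fragment))
```

Stack creation only reaches CREATE_COMPLETE once all five signals arrive, so the CI tool's existing DescribeStacks polling doubles as the test gate.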

NEW QUESTION 15
A company that manufactures and sells smart air conditioning units also offers add-on services so
that customers can see real-time dashboards in a mobile application or a web browser. Each unit sends its sensor information in JSON format every two seconds for processing and analysis. The company also needs to consume this data to predict possible equipment problems before they occur. A few thousand pre-purchased units will be delivered in the next couple of months. The company expects high market growth in the next year and needs to handle a massive amount of data and scale without interruption.
Which ingestion solution should the company use?

  • A. Write sensor data records to Amazon Kinesis Stream
  • B. Process the data using KCL applications for the end-consumer dashboard and anomaly detection workflows.
  • C. Batch sensor data to Amazon Simple Storage Service (S3) every 15 minutes.
  • D. Flow the data downstream to the end-consumer dashboard and to the anomaly detection application.
  • E. Write sensor data records to Amazon Kinesis Firehose with Amazon Simple Storage Service (S3) as the destination.
  • F. Consume the data with a KCL application for the end-consumer dashboard and anomaly detection.
  • G. Write sensor data records to Amazon Relational Database Service (RDS). Build both the end- consumer dashboard application on top of Amazon RDS.

Answer: A
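For context, a producer writing one sensor reading to a stream would build put_record parameters along these lines (stream and field names are assumptions). Using the unit's serial number as the partition key spreads units across shards while keeping each unit's readings ordered:

```python
import json
import time

# One reading from a hypothetical unit, serialized as the JSON payload
# described in the question.
reading = {"serial": "AC-000123", "ts": int(time.time()), "temp_c": 21.5}
put_params = {
    "StreamName": "ac-telemetry",          # assumed stream name
    "Data": json.dumps(reading).encode("utf-8"),
    "PartitionKey": reading["serial"],     # per-unit ordering
}
# import boto3
# boto3.client("kinesis").put_record(**put_params)
print(put_params["PartitionKey"])
```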

NEW QUESTION 16
You are deploying an application to collect votes for a very popular television show. Millions of users
will submit votes using mobile devices. The votes must be collected into a durable, scalable, and highly available data store for real-time public tabulation. Which service should you use?

  • A. Amazon DynamoDB
  • B. Amazon Redshift
  • C. Amazon Kinesis
  • D. Amazon Simple Queue Service

Answer: C

NEW QUESTION 17
You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned. An internet gateway is attached to the VPC, and the public route table is configured to send all internet-bound traffic to the internet gateway. Why is the internet unreachable from this instance?

  • A. The Internet gateway security group must allow all outbound traffic
  • B. The instance does not have a public IP address
  • C. The instance “Source/Destination check” property must be enabled
  • D. The instance security group must allow all inbound traffic

Answer: B

NEW QUESTION 18
You are managing the AWS account of a big organization. The organization has more than
1,000 employees and wants to provide access to various services for most of them. Which of the below mentioned options is the best possible solution in this case?

  • A. The user should create a separate IAM user for each employee and provide access to them as per the policy
  • B. The user should create an IAM role and attach STS to the role.
  • C. The user should attach that role to the EC2 instance and set up AWS authentication on that server
  • D. The user should create IAM groups as per the organization’s departments and add each user to the group for better access control
  • E. Attach an IAM role with the organization’s authentication service to authorize each user for various AWS services

Answer: D
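Option D's group-based model can be sketched with boto3 parameter shapes (group names are hypothetical; the managed-policy ARNs shown are real AWS managed policies, and the API calls are commented out so the sketch runs offline):

```python
# Map each department group to one managed policy for illustration.
groups = {
    "analytics": "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    "operations": "arn:aws:iam::aws:policy/AmazonEC2FullAccess",
}

# import boto3
# iam = boto3.client("iam")
plan = []
for name, policy_arn in groups.items():
    plan.append(("create_group", {"GroupName": name}))
    plan.append(("attach_group_policy",
                 {"GroupName": name, "PolicyArn": policy_arn}))
    # New hires are then added with add_user_to_group rather than
    # attaching policies user by user.
print(len(plan))
```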

NEW QUESTION 19
An Amazon Kinesis stream needs to be encrypted. Which approach should be used to accomplish this task?

  • A. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the producer
  • B. Use a partition key to segment the data by MD5 hash functions, which makes it indecipherable while in transit
  • C. Perform a client-side encryption of the data before it enters the Amazon Kinesis stream on the consumer
  • D. Use a shard to segment the data which has built-in functionality to make it indecipherable while in transit

Answer: A
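Note that partition-key hashing only routes records to shards; it does not protect the payload. Producer-side client encryption means the data is already ciphertext before put_record is called, so Kinesis only ever sees ciphertext. The sketch below shows the structure only; base64 is a stand-in, NOT encryption, and a real producer would use the AWS Encryption SDK or a KMS-generated data key:

```python
import base64
import json

def encrypt(plaintext: bytes) -> bytes:
    # PLACEHOLDER ONLY -- base64 is encoding, not encryption. A real
    # producer would encrypt here (e.g. AES via a KMS data key) before
    # the record ever leaves the producer.
    return base64.b64encode(plaintext)

record = json.dumps({"card_last4": "1234"}).encode("utf-8")
put_params = {
    "StreamName": "payments",          # assumed stream name
    "Data": encrypt(record),           # ciphertext in a real system
    "PartitionKey": "1234",
}
# The consumer decrypts after get_records; the stream itself never
# holds plaintext.
print(base64.b64decode(put_params["Data"]) == record)  # prints True
```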

NEW QUESTION 20
A user has launched an EC2 instance and deployed a production application on it. The user wants to guard against mistakes by the production team and avoid accidental termination. How can the user achieve this?

  • A. The user can set the DisableApiTermination attribute to avoid accidental termination
  • B. It is not possible to avoid accidental termination
  • C. The user can set the Deletion termination flag to avoid accidental termination
  • D. The user can set the InstanceInitiatedShutdownBehavior flag to avoid accidental termination

Answer: A
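The DisableApiTermination attribute is set per instance via ModifyInstanceAttribute; a parameter sketch (the instance ID is hypothetical, and the call is commented out so this runs offline):

```python
# Termination protection: while this attribute is true, TerminateInstances
# calls fail until it is flipped back to false. OS-initiated shutdown
# behavior is governed separately by InstanceInitiatedShutdownBehavior.
protect_params = {
    "InstanceId": "i-0123456789abcdef0",          # hypothetical ID
    "DisableApiTermination": {"Value": True},
}
# import boto3
# boto3.client("ec2").modify_instance_attribute(**protect_params)
print(protect_params["DisableApiTermination"]["Value"])
```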

NEW QUESTION 21
......

Thanks for reading the newest AWS-Certified-Big-Data-Specialty exam dumps! We recommend you try the PREMIUM DumpSolutions AWS-Certified-Big-Data-Specialty dumps in VCE and PDF here: https://www.dumpsolutions.com/AWS-Certified-Big-Data-Specialty-dumps/ (243 Q&As Dumps)