Increase your chances of passing the Amazon AWS-DEA-C01 exam on your first try. Practice with our free online AWS-DEA-C01 mock test, designed to help you prepare effectively and confidently.
A sales company uses AWS Glue ETL to collect, process, and ingest data into an Amazon S3 bucket. The AWS Glue pipeline creates a new file in the S3 bucket every hour. File sizes vary from 200 KB to 300 KB. The company wants to build a sales prediction model by using data from the previous 5 years. The historic data includes 44,000 files. The company builds a second AWS Glue ETL pipeline by using the smallest worker type. The second pipeline retrieves the historic files from the S3 bucket and processes the files for downstream analysis. The company notices significant performance issues with the second ETL pipeline. The company needs to improve the performance of the second pipeline. Which solution will meet this requirement MOST cost-effectively?
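For context, the usual fix for this small-file pattern is AWS Glue's built-in file grouping, which lets each read task batch many small S3 objects instead of opening one file per task. Below is a minimal PySpark sketch; the bucket path, file format, and group size are placeholder assumptions, not part of the question.

```python
# Minimal sketch (placeholder paths/sizes): read many small S3 files with
# Glue file grouping so each task processes a batch of files, not one each.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

historic = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://example-sales-bucket/historic/"],  # placeholder bucket
        "recurse": True,
        "groupFiles": "inPartition",   # batch small files together
        "groupSize": "134217728",      # target roughly 128 MB per group
    },
    format="json",                     # assumed file format
)
print(historic.count())
```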
As a data engineering consultant, you've been approached by a client who's experiencing difficulties with their Amazon DynamoDB setup. They've noticed that their query performance degrades as their dataset grows, despite having a primary key design that they believed would be optimal. They ask for your advice on how they can maintain consistent, fast query performance as their dataset scales.
Which of the following suggestions would be the most effective for maintaining query performance in DynamoDB at scale?
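One pattern worth reviewing here is access-path design: querying on a high-cardinality partition key keeps latency flat as the table grows, whereas a Scan reads the whole table. A minimal boto3 sketch follows, with a hypothetical table and key name.

```python
# Minimal sketch (hypothetical table/key names): Query on a high-cardinality
# partition key instead of Scan, so performance does not degrade with size.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("SalesOrders")  # placeholder name

response = table.query(
    KeyConditionExpression=Key("customer_id").eq("C-10042")  # placeholder key
)
for item in response["Items"]:
    print(item)
```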
A media company uses software as a service (SaaS) applications to gather data by using third-party tools. The company needs to store the data in an Amazon S3 bucket. The company will use Amazon Redshift to perform analytics based on the data.
Which AWS service or feature will meet these requirements with the LEAST operational overhead?
A data engineer needs to securely transfer 5 TB of data from an on-premises data center to an Amazon S3 bucket. Approximately 5% of the data changes every day. Updates to the data need to be regularly propagated to the S3 bucket. The data includes files that are in multiple formats. The data engineer needs to automate the transfer process and must schedule the process to run periodically.
Which AWS service should the data engineer use to transfer the data in the MOST operationally efficient way?
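AWS DataSync fits this pattern of scheduled, incremental on-premises-to-S3 transfers. A minimal boto3 sketch, where both location ARNs are placeholders for DataSync locations created beforehand:

```python
# Minimal sketch (placeholder ARNs): a scheduled DataSync task; on each run
# DataSync transfers only the files that changed since the last run.
import boto3

datasync = boto3.client("datasync")
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-onprem",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-s3",
    Name="nightly-onprem-to-s3",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # 02:00 UTC daily
)
print(task["TaskArn"])
```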
A growing fintech company runs an Amazon Redshift cluster equipped with dense compute (DC2) nodes. To accommodate their expanding user base, the company needs to dynamically adjust both read and write capacities based on varying workloads. The data engineer has been tasked with configuring the Redshift cluster to automatically add additional query processing power.
What action should the data engineer take to enable this functionality?
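The capability being described is Redshift concurrency scaling, which is enabled per WLM queue. A minimal boto3 sketch follows; the parameter group name is hypothetical, and the exact WLM JSON fields depend on the cluster's existing queue configuration.

```python
# Minimal sketch (placeholder parameter group; WLM JSON varies by cluster):
# enable concurrency scaling on a WLM queue via the parameter group.
import json
import boto3

redshift = boto3.client("redshift")
wlm = [{"query_concurrency": 5, "concurrency_scaling": "auto"}]
redshift.modify_cluster_parameter_group(
    ParameterGroupName="sales-wlm-params",
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm),
    }],
)
```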