DataEngPrep.tech

How would you prevent small file problems in S3 when loading data into Redshift?

SQL · Medium

Frequency: Low (asked at 1 company)
Category: SQL (487 questions; difficulty split: 130 easy / 271 medium / 86 hard)
Total bank: 1,863 questions across 7 categories
Asked at: Capco
Key concepts tested: ETL, partitioning, Spark

Why This Question Matters

This medium-level SQL question appears in data engineering interviews at companies like Capco. Though less common than staple SQL questions, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (ETL, partitioning, Spark) will help you answer variations of it confidently.

How to Approach This

Break this problem into components: identify the core trade-offs involved, then walk the interviewer through your reasoning step by step. Demonstrating awareness of edge cases and production considerations is what separates good answers from great ones.

Expert Answer

Small files in S3 slow down Redshift COPY: each file carries per-file open and commit overhead, and COPY parallelizes across slices, so thousands of tiny files waste that parallelism. Solutions:

(1) Coalesce before load: run a Spark/Glue job that merges files to a sensible size (e.g., ~128 MB each): df.coalesce(num_files).write.parquet("s3://bucket/prefix/").
(2) Use manifest files: COPY from a manifest that lists fewer, larger files: COPY orders FROM 's3://bucket/manifest.json' MANIFEST;
(3) Tune ETL output: in Glue, reduce the number of output partitions before writing (coalesce/repartition) so fewer, larger part files are produced.
(4) Buffer in Kinesis Firehose with size/interval thresholds so each delivered object is larger.
(5) Use Redshift Spectrum for ad-hoc queries against S3 without loading at all.

Best practice (per AWS guidance): make the file count a multiple of the cluster's slice count, with files roughly 1 MB–1 GB after compression.

Why it matters: design choices compound at scale; the wrong approach can cause 100× overhead. Profile before optimizing, validate on a sample before running the full load, and remember that suboptimal choices multiply their cost at billion-row scale.
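One way to script the manifest approach is to generate the COPY manifest from a listing of S3 objects. A minimal sketch (the bucket, keys, and sizes below are made-up placeholders; in practice the listing would come from boto3's list_objects_v2):

```python
import json

def build_copy_manifest(objects):
    """Build a Redshift COPY manifest from (s3_url, size_bytes) pairs.

    The MANIFEST format is a JSON object with an "entries" list;
    "mandatory": true makes COPY fail if a listed file is missing, and
    "meta"/"content_length" is required for ORC/Parquet loads.
    """
    return {
        "entries": [
            {
                "url": url,
                "mandatory": True,
                "meta": {"content_length": size},
            }
            for url, size in objects
        ]
    }

# Hypothetical merged files produced by the coalesce step.
files = [
    ("s3://my-bucket/merged/part-0000.parquet", 134217728),
    ("s3://my-bucket/merged/part-0001.parquet", 134217728),
]
manifest = build_copy_manifest(files)
print(json.dumps(manifest, indent=2))
```

Upload the resulting JSON to S3 and point COPY at it, e.g. COPY orders FROM 's3://my-bucket/manifest.json' IAM_ROLE '...' FORMAT AS PARQUET MANIFEST; (table and role names here are placeholders).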

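The file-sizing guidance can be turned into a small helper: pick a file count that is a multiple of the cluster's slice count while keeping each file near a target size. A sketch, with the 8-slice cluster and 128 MB target as illustrative assumptions:

```python
from math import ceil

def target_file_count(total_bytes, num_slices, target_file_bytes=128 * 1024**2):
    """Smallest multiple of num_slices that keeps files at or under the target size."""
    # Files needed so each stays at or under the target size.
    files = max(1, ceil(total_bytes / target_file_bytes))
    # Round up to a multiple of the slice count so every slice gets equal work.
    return ceil(files / num_slices) * num_slices

# e.g. 10 GB across an 8-slice cluster -> 80 files of ~128 MB each
print(target_file_count(10 * 1024**3, 8))  # -> 80
```

The result can then feed Spark's df.coalesce(n) or df.repartition(n) before the write.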
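For the Firehose option, the fix is configuration rather than code: raise the delivery stream's buffering hints so each object landing in S3 is as large as possible. A sketch of the relevant fragment (stream, bucket, and role names are placeholders; Firehose caps SizeInMBs at 128 and IntervalInSeconds at 900):

```python
# Buffering fragment for a Firehose ExtendedS3DestinationConfiguration.
# Larger buffers mean fewer, bigger objects delivered to S3.
extended_s3_config = {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",  # placeholder
    "BucketARN": "arn:aws:s3:::my-bucket",                      # placeholder
    "BufferingHints": {
        "SizeInMBs": 128,          # flush once 128 MB is buffered (the maximum)
        "IntervalInSeconds": 900,  # or after 15 minutes, whichever comes first
    },
}

# The fragment would be passed to boto3, e.g.:
# boto3.client("firehose").create_delivery_stream(
#     DeliveryStreamName="orders-stream",
#     ExtendedS3DestinationConfiguration=extended_s3_config,
# )
print(extended_s3_config["BufferingHints"])
```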



Related SQL Questions

  • Write an SQL query to find the second-highest salary from an employee table. (medium, free)
  • Demonstrate the difference between DENSE_RANK() and RANK(). (medium, free)
  • Discuss differences between ROW_NUMBER(), RANK(), and DENSE_RANK(), and provide examples from your projects. (medium, free)
  • Explain the differences between Data Warehouse, Data Lake, and Delta Lake. (medium, free)
  • Explain the differences between Repartition and Coalesce. When would you use each? (medium, free)


According to DataEngPrep.tech, this SQL interview question has been reported at 1 company. DataEngPrep.tech maintains a curated database of 1,863+ real data engineering interview questions across 7 categories, verified by industry professionals.
