
What is the small-file problem in Spark, and how do you solve it?

Spark/Big Data | Hard | 0.7 min read | Premium


Frequency: Low (asked at 2 companies)
Category: Spark/Big Data (452 questions)
Difficulty split in this category: 88 easy | 81 medium | 283 hard
Total bank: 1,863 questions across 7 categories
Asked at: Daniel Wellington, Incedo
Key concepts tested: partition, spark, window

Why This Question Matters

This hard-level Spark/Big Data question has been reported in data engineering interviews at companies such as Daniel Wellington and Incedo. While it comes up less often than core Spark questions, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (partition, spark, window) will help you answer variations of this question confidently.

How to Approach This

This is a senior-level question that tests architectural thinking. Lead with the high-level design, then drill into specifics. Discuss trade-offs explicitly; there is rarely a single correct answer. Show awareness of scale, fault tolerance, and operational complexity.

Expert Answer

Small-file problem: too many tiny files (KB–MB each) cause metadata explosion (S3/HDFS list operations), slow scans, and many small tasks.

Root causes: high write parallelism (many output partitions), over-partitioning by a high-cardinality key, and streaming appends with small micro-batches.

Why it hurts: S3 bills list and per-file read requests (LIST is roughly $0.005 per 1,000 requests), and query engines (Athena, Presto, Spark itself) must open every file, so both request cost and planning/scan latency grow with file count.

Solutions:
(1) Coalesce/repartition before write to reduce the number of output files (first sketch below).
(2) Compaction for Delta/Parquet tables, e.g. Delta Lake OPTIMIZE (second sketch below).
(3) Target output files of 128 MB–1 GB, aligned with the block size.
(4) Batch streaming writes with longer trigger intervals and auto-compaction so each micro-batch produces fewer, larger files (third sketch below).

Scalability: compaction itself costs compute; schedule it during low-traffic windows.

Cost implication: 10K small files versus 100 right-sized files can raise query cost by an order of magnitude.

Best practice: monitor the output file size distribution, set coalesce targets in CI, and use Delta OPTIMIZE ZORDER on hot filter columns.
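A minimal PySpark sketch of solution (1): pick an output file count from a target size of roughly 128 MB and coalesce before the write. The S3 paths and the 20 GB size estimate are illustrative assumptions, not part of the original answer.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-output").getOrCreate()

# Illustrative input: a dataset that currently lands as thousands of tiny files.
df = spark.read.parquet("s3://my-bucket/events/raw/")

# Pick an output file count from an estimated dataset size and a ~128 MB target.
# The 20 GB figure is an assumption; in practice, read it from the catalog or
# from the previous run's metrics.
approx_dataset_bytes = 20 * 1024**3
target_file_bytes = 128 * 1024**2
num_output_files = max(1, approx_dataset_bytes // target_file_bytes)

# coalesce() merges partitions without a full shuffle; use repartition() instead
# if the data is skewed or you also need to redistribute rows evenly.
(df.coalesce(int(num_output_files))
   .write.mode("overwrite")
   .parquet("s3://my-bucket/events/compacted/"))
```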

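A sketch of solution (2) and the ZORDER best practice, assuming the data is stored as a Delta table (OPTIMIZE and ZORDER are Delta Lake features, and the auto-compaction table properties shown are Databricks-flavored). The table name events and the column event_date are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files in a Delta table and co-locate data for a frequently
# filtered ("hot") column. Table and column names are illustrative.
spark.sql("OPTIMIZE events ZORDER BY (event_date)")

# Optionally enable optimized writes / auto-compaction so future writes
# produce fewer, larger files (Delta Lake / Databricks table properties).
spark.sql("""
    ALTER TABLE events SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact'   = 'true'
    )
""")
```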

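A sketch of solution (4) for Structured Streaming: a longer processing-time trigger lets each micro-batch accumulate more data, so the sink writes fewer, larger files. The schema, paths, and the 10-minute interval are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

# Minimal schema for the example; a real job would use the full event schema.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

events_stream = (spark.readStream
    .format("json")
    .schema(event_schema)
    .load("s3://my-bucket/events/incoming/"))

# A longer trigger interval means each micro-batch accumulates more data,
# so the sink writes fewer, larger files per partition.
query = (events_stream.writeStream
    .format("parquet")
    .option("path", "s3://my-bucket/events/stream/")
    .option("checkpointLocation", "s3://my-bucket/checkpoints/events/")
    .trigger(processingTime="10 minutes")
    .start())
```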


Related Spark/Big Data Questions

  • What is the difference between repartition and coalesce in Apache Spark? (medium, Free)
  • What is the difference between SparkSession and SparkContext in Spark? (hard, Free)
  • What is the difference between cache() and persist() in Spark? When would you use each? (medium, Free)
  • What is the difference between groupByKey and reduceByKey in Spark? (medium, Free)
  • What is the difference between narrow and wide transformations in Apache Spark? Explain with examples. (medium, Free)

