**Pro-Move**: Mention the impact of partition count on the small-files problem and on cloud storage costs. **Red Flag**: Saying "coalesce is always better" — this is wrong when you need to increase parallelism or fix skew.
This medium-level Spark/Big Data question appears frequently in data engineering interviews at companies like BCG, Citi, Dunnhumby, and 4 others. While less common than entry-level questions, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (partitioning, Python, Spark) will help you answer variations of this question confidently.
Break this problem into components. Identify the core trade-offs involved, then walk the interviewer through your reasoning step by step. Demonstrate awareness of edge cases and production considerations - this is what separates good answers from great ones. The expert answer includes a code example that demonstrates the implementation pattern.
**Repartition(n)**: Performs a full shuffle to redistribute data across exactly `n` partitions. Can increase or decrease the partition count. Uses hash partitioning by default — all rows are exchanged across the network.

**Coalesce(n)**: Merges existing partitions into fewer partitions without a full shuffle. Only decreases the partition count. Data is combined locally where possible; minimal network transfer.
Why It Matters (Architectural Logic): Shuffles are the most expensive operations in Spark—they drive network I/O, serialization, and memory pressure. Choosing the wrong method can 2–3x your job runtime and cost.
According to DataEngPrep.tech, this is one of the most frequently asked Spark/Big Data interview questions, reported at 7 companies.