DataEngPrep.tech

What is the difference between repartition and coalesce in Apache Spark?

Spark/Big Data · Medium · 1 min read · Free Sample

Frequency: Low (asked at 7 companies)
Category: Spark/Big Data (452 questions)
Difficulty split in this category: 88 easy | 81 medium | 283 hard
Total bank: 1,863 questions across 7 categories
Asked at: BCG, Citi, Dunnhumby, Fragma Data Systems, Mastercard, Puma, Snowflake
Interview Pro Tip

Pro move: Mention how partition count affects the small-files problem and cloud storage costs. Red flag: saying "coalesce is always better"; that's wrong when you need to increase parallelism or fix skew.

Key Concepts Tested
partition · python · spark

Why This Question Matters

This medium-level Spark/Big Data question appears in data engineering interviews at companies like BCG, Citi, Dunnhumby, and 4 others. While less common than staple questions, it tests deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (partitioning in Spark, PySpark usage) will help you answer variations of this question confidently.

How to Approach This

Break the problem into components: identify the core trade-offs, then walk the interviewer through your reasoning step by step. Demonstrating awareness of edge cases and production considerations is what separates good answers from great ones. The expert answer includes a code example that demonstrates the implementation pattern.

Expert Answer
198 words · Includes code

repartition(n): Performs a full shuffle to redistribute data across exactly n partitions. Can increase or decrease the partition count. Uses hash partitioning by default; all rows are exchanged across the network.

coalesce(n): Merges existing partitions into fewer partitions without a full shuffle. Can only decrease the partition count. Data is combined locally where possible, with minimal network transfer.

Why It Matters (Architectural Logic): Shuffles are the most expensive operations in Spark—they drive network I/O, serialization, and memory pressure. Choosing the wrong method can 2–3x your job runtime and cost.

Scalability Trade-offs:

  • repartition: Use when increasing parallelism (e.g., from 10 → 200 partitions before a heavy aggregation) or when data is severely skewed and you need even redistribution. Cost: a full shuffle.

  • coalesce: Use when reducing partitions before writes to avoid small files (e.g., coalesce(num_output_files)). Cost: minimal; risk of skew if you coalesce too aggressively.

Cost Implications: On a 1 TB dataset, repartition(1) would shuffle ~1 TB across the network; coalesce(1) avoids the shuffle entirely but funnels all data through a single task. Prefer coalesce when decreasing partitions to control output file count and cloud storage costs.

Example:

    # Increase parallelism before an expensive wide transform (full shuffle)
    df_repartitioned = df.repartition(200)

    # Reduce partition count before a write to avoid the small-files problem (no shuffle)
    df_coalesced = df.coalesce(10)

Related Study Guides

⚡ Apache Spark Interview Questions: Beginner to Advanced (22 min read)
A comprehensive guide to Spark interview questions covering RDDs, DataFrames, partitioning, shuffle optimization, and real-world performance tuning.

🏗️ System Design for Data Engineers: Complete Prep Guide (20 min read)
Learn how to approach system design interviews for data engineering roles, from pipeline architecture to streaming systems and data modeling.

📦 Amazon Data Engineer Interview Questions 2026 — Complete Guide (15 min read)
Everything you need to know about the Amazon data engineering interview loop: process, questions, and preparation strategy.

🧱 Databricks Interview Questions: SQL, Spark & More (16 min read)
Prepare for Databricks data engineer interviews with real questions about Delta Lake, Unity Catalog, Spark internals, and pipeline architecture.

🐍 Python for Data Engineering: Interview Questions & Answers (17 min read)
Essential Python interview questions for data engineers covering PySpark, pandas, file handling, API design, and ETL scripting patterns.

⚡ Fragma Data Systems Data Engineer Interview Questions & Answers (2026) (13 min read)
Practice the 65 most asked data engineering questions at Fragma Data Systems. Covers Spark/Big Data, Behavioral, Python/Coding and more.

⚡ Dunnhumby Data Engineer Interview Questions & Answers (2026) (9 min read)
Practice the 48 most asked data engineering questions at Dunnhumby. Covers Spark/Big Data, Python/Coding, General/Other and more.

⚡ Citi Data Engineer Interview Questions & Answers (2026) (8 min read)
Practice the 39 most asked data engineering questions at Citi. Covers Spark/Big Data, SQL, General/Other and more.

📘 BCG Data Engineer Interview Questions & Answers (2026) (8 min read)
Practice the 36 most asked data engineering questions at BCG. Covers Spark/Big Data, SQL, Cloud/Tools and more.

⚡ Spark Performance Tuning: 15 Interview Questions That Separate Senior Engineers from Juniors (2026) (20 min read)
Senior Spark interviews at Amazon, Databricks, and Meta focus on performance tuning, not API syntax. Master these 15 questions to prove you've run Spark at scale.

Related Spark/Big Data Questions

  • [hard] What is the difference between SparkSession and SparkContext in Spark? (Free)
  • [medium] What is the difference between cache() and persist() in Spark? When would you use each? (Free)
  • [medium] What is the difference between groupByKey and reduceByKey in Spark? (Free)
  • [medium] What is the difference between narrow and wide transformations in Apache Spark? Explain with examples. (Free)
  • [medium] What strategies can you use to handle skewed data in Spark? (Free)

According to DataEngPrep.tech, this Spark/Big Data interview question has been reported at 7 companies. DataEngPrep.tech maintains a curated database of 1,863+ real data engineering interview questions across 7 categories, verified by industry professionals.
