DataEngPrep.tech

What is the difference between narrow and wide transformations in Apache Spark? Explain with examples.

Spark/Big Data · Medium · 1 min read


Frequency: Low · Asked at 5 companies: Coforge, Delivery Hero, Dunnhumby, Fragma Data Systems, Nagarro
Category: Spark/Big Data (452 questions; difficulty split: 88 easy / 81 medium / 283 hard)
Total bank: 1,863 questions across 7 categories
Interview Pro Tip

Pro-Move: Explain stage boundaries and pipeline fusion. Red Flag: Not knowing which common ops (e.g., distinct, join) are wide; that is basic Spark knowledge.

Key Concepts Tested
join, partition, python, spark

Why This Question Matters

This medium-level Spark/Big Data question appears in data engineering interviews at companies like Coforge, Delivery Hero, Dunnhumby, and two others. While less common than staple questions, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (joins, partitioning, and the Python DataFrame API) will help you answer variations of this question confidently.

How to Approach This

Break this problem into components. Identify the core trade-offs involved, then walk the interviewer through your reasoning step by step. Demonstrate awareness of edge cases and production considerations; this is what separates good answers from great ones. The expert answer includes a code example that demonstrates the implementation pattern.

Expert Answer
177 words · Includes code

Narrow transformations: Each input partition maps to at most one output partition. No shuffle. Examples: map, filter, flatMap, mapPartitions.

Wide transformations: Require data from multiple input partitions to produce one output partition. Trigger shuffle. Examples: groupByKey, reduceByKey, join, distinct, repartition.

Architectural Logic (Why This Matters): Spark pipelines narrow transformations and executes them in a single stage. Wide transformations force a stage boundary—all prior work is materialized (shuffle write), then a new stage reads (shuffle read). Stage count and shuffle volume drive job latency and cost.

Scalability Trade-offs:

  • Minimize wide transforms: each adds network I/O and potential skew.
  • Order matters: filter (narrow) before join (wide) to reduce shuffle size.
  • broadcast() converts a shuffle join into a narrow op; use it when one side is small enough to fit in executor memory.

Cost Implications: A pipeline with 5 wide transforms means 5 shuffles; reordering it down to 2 wide transforms can cut runtime by 50%+. Partition pruning and predicate pushdown (both narrow) reduce data volume before expensive wide ops.

Examples:

    # Narrow: pipelineable within the same stage
    from pyspark.sql.functions import col, sum as spark_sum
    df.filter(col("age") > 18).select("id", "amount")

    # Wide: triggers a shuffle and a stage boundary
    df.groupBy("dept").agg(spark_sum("salary"))

Related Study Guides

  • Fragma Data Systems Data Engineer Interview Questions & Answers (2026): Practice the 65 most asked data engineering questions at Fragma Data Systems. Covers Spark/Big Data, Behavioral, Python/Coding and more. (13 min read)
  • Dunnhumby Data Engineer Interview Questions & Answers (2026): Practice the 48 most asked data engineering questions at Dunnhumby. Covers Spark/Big Data, Python/Coding, General/Other and more. (9 min read)
  • Spark Performance Tuning: 15 Interview Questions That Separate Senior Engineers from Juniors (2026): Senior Spark interviews at Amazon, Databricks, and Meta focus on performance tuning, not API syntax. Master these 15 questions to prove you've run Spark at scale. (20 min read)

Related Spark/Big Data Questions

  • [medium] What is the difference between repartition and coalesce in Apache Spark?
  • [hard] What is the difference between SparkSession and SparkContext in Spark?
  • [medium] What is the difference between cache() and persist() in Spark? When would you use each?
  • [medium] What is the difference between groupByKey and reduceByKey in Spark?
  • [medium] What strategies can you use to handle skewed data in Spark?

According to DataEngPrep.tech, this Spark/Big Data interview question has been reported at 5 companies. DataEngPrep.tech maintains a curated database of 1,863+ real data engineering interview questions across 7 categories, verified by industry professionals.
