Pro-Move: Explain stage boundaries and pipeline fusion. Red Flag: Not knowing which common operations (e.g., distinct, join) are wide; that is basic Spark knowledge.
This medium-level Spark/Big Data question appears frequently in data engineering interviews at companies like Coforge, Delivery Hero, Dunnhumby, and two others. It tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (joins, partitioning, Python) will help you answer variations of this question confidently.
Break this problem into components. Identify the core trade-offs involved, then walk the interviewer through your reasoning step by step. Demonstrate awareness of edge cases and production considerations; this is what separates good answers from great ones. The expert answer includes a code example that demonstrates the implementation pattern.
Narrow transformations: Each input partition maps to at most one output partition. No shuffle. Examples: map, filter, flatMap, mapPartitions.
Wide transformations: Require data from multiple input partitions to produce one output partition. Trigger shuffle. Examples: groupByKey, reduceByKey, join, distinct, repartition.
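To make the distinction concrete, here is a minimal PySpark sketch (the session setup, sample data, and variable names are illustrative assumptions, not part of the original answer): the mapValues and filter steps are narrow, while reduceByKey requires a shuffle.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("narrow-vs-wide").getOrCreate()
sc = spark.sparkContext

# Illustrative data: (user_id, amount) pairs spread over 4 partitions.
rdd = sc.parallelize([(1, 10.0), (2, 5.0), (1, 7.5), (3, 2.0)], numSlices=4)

# Narrow transformations: each output partition depends on exactly one
# input partition, so no data moves across the network.
doubled = rdd.mapValues(lambda amount: amount * 2)   # narrow
large = doubled.filter(lambda kv: kv[1] > 10)        # narrow

# Wide transformation: grouping by key may need rows from every input
# partition, so Spark shuffles the data and starts a new stage.
totals = large.reduceByKey(lambda a, b: a + b)       # wide, triggers shuffle

print(totals.collect())
```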
Architectural Logic (Why This Matters): Spark pipelines narrow transformations and executes them in a single stage. Wide transformations force a stage boundary—all prior work is materialized (shuffle write), then a new stage reads (shuffle read). Stage count and shuffle volume drive job latency and cost.
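You can observe this behavior in the query plan. The following DataFrame sketch (column names and sample rows are assumptions for illustration) pipelines the narrow filter and withColumn steps, while the groupBy/agg appears as an Exchange node in the physical plan, which is where Spark writes and reads the shuffle and cuts a new stage.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stage-boundaries").getOrCreate()

# Illustrative DataFrame; column names are assumed for the example.
df = spark.createDataFrame(
    [(1, "a", 10.0), (2, "b", 5.0), (1, "a", 7.5)],
    ["user_id", "segment", "amount"],
)

# Narrow ops (filter, withColumn) are fused and run in a single stage.
narrow = (
    df.filter(F.col("amount") > 1.0)
      .withColumn("amount_usd", F.col("amount") * 1.1)
)

# A wide op (groupBy + agg) inserts a shuffle before the aggregation.
wide = narrow.groupBy("user_id").agg(F.sum("amount_usd").alias("total_usd"))

# Look for "Exchange hashpartitioning(...)" in the physical plan: each
# Exchange corresponds to a shuffle write/read and a stage boundary.
wide.explain()
```

Counting the Exchange nodes in the plan (or the stages in the Spark UI) is a quick way to estimate how much shuffle a job will do before running it at scale.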
According to DataEngPrep.tech, this is one of the most frequently asked Spark/Big Data interview questions, reported at 5 companies. DataEngPrep.tech maintains a curated database of 1,863+ real data engineering interview questions across 7 categories, verified by industry professionals.