**Property**: `spark.sql.shuffle.partitions`. Defaults to 200. Controls how many partitions are produced after a shuffle (e.g., `groupBy`, `join`).
**Set**: at runtime with `spark.conf.set("spark.sql.shuffle.partitions", 400)`, or at submit time with `--conf spark.sql.shuffle.partitions=400`.
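A minimal runnable sketch of setting and observing this (the app name and data are illustrative; AQE is disabled here so the raw setting is directly visible):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("shuffle-partitions-demo")   # hypothetical app name
         .master("local[4]")
         .config("spark.sql.shuffle.partitions", 400)
         # Disable AQE so the configured count is observable as-is.
         .config("spark.sql.adaptive.enabled", "false")
         .getOrCreate())

df = spark.range(1_000_000)
agg = df.groupBy((F.col("id") % 1000).alias("bucket")).count()

# The post-shuffle stage uses the configured partition count.
print(agg.rdd.getNumPartitions())  # -> 400
```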
**Tuning**: 2–4x the total executor core count works for small-to-medium data. For large shuffles, go higher (e.g., 400–800), sizing so each partition lands around 100–200 MB. Too low = oversized partitions and disk spill; too high = per-task scheduling overhead. See the sizing sketch below.
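A back-of-envelope sizing pass, assuming hypothetical figures (100 GB of shuffle data, a ~150 MB target partition size, 64 executor cores):

```python
# Pick a partition count from shuffle volume, with a parallelism floor.
shuffle_bytes = 100 * 1024**3            # assumed ~100 GB of shuffle data
target_partition_bytes = 150 * 1024**2   # aim for ~150 MB per partition
total_cores = 64                         # assumed total executor cores

by_size = shuffle_bytes // target_partition_bytes  # ~682
by_cores = 2 * total_cores                         # 128, parallelism floor
partitions = max(by_size, by_cores)
print(partitions)  # ~682 -> round up to e.g. 700
```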
**AQE**: Adaptive Query Execution (Spark 3.x) can coalesce small post-shuffle partitions at runtime. Set the initial count high and let AQE merge it down to fit the actual data; see the config sketch below.
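A sketch of the relevant AQE knobs, continuing with the `spark` session from the earlier example (these are real Spark 3.x config keys; the specific values are illustrative):

```python
# Enable AQE and runtime coalescing of small post-shuffle partitions.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
# Start with a deliberately high initial count; AQE merges partitions
# down toward the advisory size based on observed shuffle statistics.
spark.conf.set("spark.sql.adaptive.coalescePartitions.initialPartitionNum", "1000")
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "128m")
```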
**Cost Implications**: The right partition count yields balanced tasks....