**Factors**: (1) **Total cores**: executors × cores per executor. (2) **Input partitions**: controlled by `spark.sql.files.maxPartitionBytes` (default 128 MB); scanning 1 TB yields roughly 8,192 input partitions. (3) **Shuffle partitions**: `spark.sql.shuffle.partitions` (default 200). (4) **Cluster size**: autoscaling min/max bounds. (5) **External limits**: Kafka partition count and other source parallelism.
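As a rough illustration of where each of these factors is set, here is a minimal PySpark sketch; the executor counts, config values, and input path are hypothetical, not taken from the original answer:

```python
from pyspark.sql import SparkSession

# Hypothetical cluster: 10 executors x 4 cores = 40 total cores.
spark = (
    SparkSession.builder
    .appName("parallelism-factors-demo")
    .config("spark.executor.instances", "10")                              # number of executors
    .config("spark.executor.cores", "4")                                   # cores per executor
    .config("spark.sql.files.maxPartitionBytes", str(128 * 1024 * 1024))  # input split size (128 MB)
    .config("spark.sql.shuffle.partitions", "200")                         # shuffle parallelism
    .getOrCreate()
)

# Input partitions: ~1 TB of large files / 128 MB splits is roughly 8,192 partitions.
df = spark.read.parquet("s3://bucket/1tb-dataset/")  # path is illustrative
print(df.rdd.getNumPartitions())
```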
**Formula**: Effective parallelism = min(partition_count, total_cores), and end-to-end runtime is bounded by the slowest stage.
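A quick back-of-the-envelope check of that formula, using illustrative numbers (a 1 TB scan at 128 MB splits against 10 executors × 4 cores, both assumed for the example):

```python
def effective_parallelism(partition_count: int, total_cores: int) -> int:
    # Tasks can only run as wide as the narrower of the two limits.
    return min(partition_count, total_cores)

input_partitions = (1 * 1024**4) // (128 * 1024**2)  # ~8,192 input partitions
total_cores = 10 * 4                                  # 40 cores

print(effective_parallelism(input_partitions, total_cores))  # 40 -> cores are the bottleneck here
```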
**Why It Matters**: Over-provisioning cores relative to available partitions is wasted capacity. …