The most frequently asked Spark questions in real data engineering interviews, sorted by frequency.
Apache Spark is central to data engineering roles. These questions cover RDD vs DataFrame internals, partitioning strategies, shuffle optimization, broadcast joins, Catalyst optimizer, Spark SQL, structured streaming, memory management, and real-world performance tuning patterns.
What is the difference between repartition and coalesce in Apache Spark?
What is the difference between SparkSession and SparkContext in Spark?
What is the difference between cache() and persist() in Spark? When would you use each?
What is the difference between groupByKey and reduceByKey in Spark?
What is the difference between narrow and wide transformations in Apache Spark? Explain with examples.
What strategies can you use to handle skewed data in Spark?
Can you explain the architecture of Apache Spark and its components?
Describe the difference between Spark RDDs, DataFrames, and Datasets.
Explain the concept of Broadcast Join in Spark. When should it be used?
Explain the difference between Spark's map() and flatMap() transformations.
How do you handle late-arriving data in Spark Structured Streaming?
How do you optimize Spark jobs for better performance? Mention at least 5 techniques.
How does Spark's Catalyst Optimizer work? Explain its stages.
What is the difference between Managed and External tables in Hive/Spark?
What is the small-file problem in Spark, and how do you solve it?
Architect an incremental load in ADF + Databricks with idempotency, late-arrival handling, and the cost/scalability implications of watermark vs. change data capture.
Architecturally, how do Job–Stage–Task boundaries in Spark's execution model impact cluster sizing and shuffle cost, and when would you deliberately collapse or split stages?
Architecturally, how would you justify or challenge Hadoop vs. a cloud-native data lake (S3 + EMR/Databricks) for a greenfield enterprise data platform? Discuss scalability ceilings, cost model trade-offs, and operational complexity.
Convert complex SQL (CTEs, window functions, subqueries) to production-grade PySpark. Discuss when to use spark.sql() vs. DataFrame API, and the implications for testability, partitioning, and execution predictability.
Design a cost-aware resource strategy for a Databricks workload with spiky and batch jobs. Explain Dynamic Resource Allocation, when to disable it, and how min/max executors and spot instances affect cost and SLAs.
Design a Delta table layout for mixed workload: point lookups by user_id, range scans by date, and full partition scans. Compare partitioning vs. Z-ordering—when to use each, and the rewrite cost trade-off.
Design a fault-tolerant Spark Streaming checkpoint strategy: what to persist, recovery semantics, and cost/scalability trade-offs with checkpoint frequency.
Design an anti-skew strategy for a join on a high-cardinality key with a long-tail distribution (e.g., a few keys hold 80% of rows). Cover salting, split-skew, AQE, and cost/operational trade-offs.
Explain how Adaptive Query Execution changes the economics of Spark tuning. What problems does it solve at runtime, and when might you still need manual intervention (e.g., salting, broadcast hints)?
Explain strategies for managing schema changes in PySpark over time.
Explain the benefits of using DataFrames over RDDs.
Explain the concept of checkpointing in Spark and why it is important.
Explain the difference between batch and streaming data processing in Data Fusion.
Explain the Medallion Architecture (Bronze, Silver, Gold layers).
Explain wide vs. narrow transformations and how they drive shuffle cost, failure domains, and pipeline design. When would you intentionally add a wide transformation, and how do you minimize its impact?
Given a streaming dataset from Kafka, how would you ingest the data in real time using Spark?
How do you drop columns with null values in PySpark?
How do you handle data skewness in Spark?
How do you optimize Spark jobs for performance?
How would you implement a sliding window aggregation in Spark Structured Streaming?
How would you read data from a web API using PySpark?
Implement a Spark job to find the top 10 most frequent words in a large text file.
Prioritize Spark optimizations by impact and effort. Discuss partitioning strategy, caching policy, join selection, shuffle reduction, and when each becomes a scalability or cost bottleneck.
Walk through the three AQE features in Spark 3.x (partition coalescing, join strategy switching, skew-join optimization): how they operate at shuffle boundaries, which configs enable them, and what happens when AQE cannot help.
What are the key components of the Spark execution model (Job, Stage, Task)?
What is Adaptive Query Execution (AQE) in Spark 3.x, and how does it improve performance?
What is broadcasting in Spark, and why is it used? Can you give an example of its use?
What is Spark's Catalyst Optimizer? Explain its stages.
What is the difference between Managed and External Tables in Databricks?
What is the difference between map and flatMap in Spark, and when would you use each?
What is the difference between repartition and coalesce in Spark?
What is the difference between Spark RDDs, DataFrames, and Datasets?
What is the purpose of the Bronze, Silver, and Gold layers in a data pipeline?
What role does executor memory play in Spark, and how is it divided?