Real interview questions from Databricks and companies using the Databricks ecosystem. Covers Spark, Delta Lake, lakehouse architecture, and more.
Databricks interviews focus heavily on Apache Spark internals, Delta Lake architecture, the medallion pattern (bronze/silver/gold layers), Unity Catalog for governance, Auto Loader for incremental ingestion, and performance tuning. These questions reflect what candidates are actually asked in Databricks and partner company interviews.
Tell me about yourself and your experience.
What is the difference between repartition and coalesce in Apache Spark?
What is the difference between SparkSession and SparkContext in Spark?
What are traits in Scala, and how are they different from classes?
What is the difference between cache() and persist() in Spark? When would you use each?
What is the difference between groupByKey and reduceByKey in Spark?
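The core of a strong answer is that reduceByKey performs map-side (partial) aggregation before the shuffle, while groupByKey ships every raw pair across the network. That combine-then-merge idea can be sketched in plain Python (no Spark dependency; the partition data is illustrative):

```python
from collections import defaultdict

# Why reduceByKey beats groupByKey: each "partition" combines values
# locally first, so far fewer records would cross the network.
def local_combine(partition):
    acc = defaultdict(int)
    for key, value in partition:  # map-side (partial) aggregation
        acc[key] += value
    return dict(acc)

def merge(combined_partitions):
    total = defaultdict(int)
    for part in combined_partitions:  # only pre-aggregated pairs "shuffle"
        for key, value in part.items():
            total[key] += value
    return dict(total)

partitions = [[("a", 1), ("b", 1), ("a", 1)], [("a", 1), ("b", 1)]]
result = merge(local_combine(p) for p in partitions)
# result == {"a": 3, "b": 2}
```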
What is the difference between narrow and wide transformations in Apache Spark? Explain with examples.
Explain the differences between a Data Lake and a Data Warehouse.
Explain the differences between Data Warehouse, Data Lake, and Delta Lake.
What is the difference between partitioning and bucketing in Spark, and when would you use bucketing?
What strategies can you use to handle skewed data in Spark?
Briefly introduce yourself and walk us through your journey as a Data Engineer so far.
Can you explain the architecture of Apache Spark and its components?
Describe the difference between Spark RDDs, DataFrames, and Datasets.
Explain the concept of Broadcast Join in Spark. When should it be used?
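A broadcast join is a map-side hash join: the small table is copied to every executor, so the large side is joined via local lookup with no shuffle. A minimal Spark-free sketch of that mechanism (the tables here are illustrative):

```python
# Small side fits in memory and is "broadcast"; large side is probed row
# by row against the local dict, exactly as each Spark task would.
small = {1: "electronics", 2: "books"}   # category_id -> category_name
large = [(101, 1), (102, 2), (103, 1)]   # (order_id, category_id) rows

joined = [(order_id, small[cat_id])
          for order_id, cat_id in large
          if cat_id in small]            # inner-join semantics
# joined == [(101, "electronics"), (102, "books"), (103, "electronics")]
```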
Explain the difference between Spark's map() and flatMap() transformations.
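The distinction is one-to-one versus one-to-many: map() yields exactly one output per input, while flatMap() may yield zero or more and flattens the results. The equivalent in plain Python comprehensions:

```python
lines = ["spark is fast", "spark scales"]

# map: one output element per input element (here, a list per line)
mapped = [line.split() for line in lines]
# [["spark", "is", "fast"], ["spark", "scales"]]

# flatMap: each input yields zero or more outputs, flattened into one sequence
flat_mapped = [word for line in lines for word in line.split()]
# ["spark", "is", "fast", "spark", "scales"]
```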
How do you handle late-arriving data in Spark Structured Streaming?
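The standard answer is withWatermark(): the watermark trails the maximum event time seen so far by a fixed delay, and events older than the watermark are dropped. The semantics can be sketched in plain Python (the 10-minute delay is an illustrative choice):

```python
from datetime import datetime, timedelta

DELAY = timedelta(minutes=10)  # allowed lateness, as in withWatermark("ts", "10 minutes")

def process(events):
    max_event_time = datetime.min
    accepted, dropped = [], []
    for ts, payload in events:
        max_event_time = max(max_event_time, ts)
        watermark = max_event_time - DELAY
        if ts >= watermark:
            accepted.append(payload)   # within the allowed lateness
        else:
            dropped.append(payload)    # arrived past the watermark
    return accepted, dropped
```

An event stamped 10:01 that arrives after one stamped 10:15 is dropped, because the watermark has already advanced to 10:05.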
How do you optimize Spark jobs for better performance? Mention at least 5 techniques.
How does Spark's Catalyst Optimizer work? Explain its stages.
What is the difference between a primary key and a unique key?
What is the difference between Managed and External tables in Hive/Spark?
What is the small-file problem in Spark, and how do you solve it?
Architect incremental load in ADF + Databricks with idempotency, late-arrival handling, and cost/scalability implications of watermark vs. change data capture.
Architecturally, how do Job–Stage–Task boundaries in Spark's execution model impact cluster sizing and shuffle cost, and when would you deliberately collapse or split stages?
Architecturally, how would you justify or challenge Hadoop vs. a cloud-native data lake (S3 + EMR/Databricks) for a greenfield enterprise data platform? Discuss scalability ceilings, cost model trade-offs, and operational complexity.
Convert complex SQL (CTEs, window functions, subqueries) to production-grade PySpark. Discuss when to use spark.sql() vs. DataFrame API, and the implications for testability, partitioning, and execution predictability.
Describe the data pipeline architecture you've worked with.
Design a cost-aware resource strategy for a Databricks workload with spiky and batch jobs. Explain Dynamic Resource Allocation, when to disable it, and how min/max executors and spot instances affect cost and SLAs.
Design a Delta table layout for mixed workload: point lookups by user_id, range scans by date, and full partition scans. Compare partitioning vs. Z-ordering—when to use each, and the rewrite cost trade-off.
Design a fault-tolerant Spark Streaming checkpoint strategy: what to persist, recovery semantics, and cost/scalability trade-offs with checkpoint frequency.
Design an anti-skew strategy for a join on a high-cardinality key with a long-tail distribution (e.g., a few keys hold 80% of rows). Cover salting, split-skew, AQE, and cost/operational trade-offs.
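Salting is usually the centerpiece of this answer: hot keys on the large side get a random salt suffix, and the small side is replicated once per salt so every salted key still finds a match. A minimal Spark-free sketch (the salt count and hot-key set are illustrative):

```python
import random

SALTS = range(4)            # number of salt buckets to spread a hot key over
HOT_KEYS = {"user_42"}      # the few keys holding most of the rows

def salt_large_side(rows):
    # Append a random salt only to hot keys; cold keys join as-is.
    return [((k + f"#{random.randrange(4)}") if k in HOT_KEYS else k, v)
            for k, v in rows]

def explode_small_side(rows):
    # Replicate each hot key once per salt so every salted variant matches.
    out = []
    for k, v in rows:
        if k in HOT_KEYS:
            out.extend((f"{k}#{s}", v) for s in SALTS)
        else:
            out.append((k, v))
    return out
```

The trade-off to mention: the small side grows by the salt factor, in exchange for splitting one giant join partition into several balanced ones.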
Explain how Adaptive Query Execution changes the economics of Spark tuning. What problems does it solve at runtime, and when might you still need manual intervention (e.g., salting, broadcast hints)?
Explain strategies for managing schema changes in PySpark over time.
Explain the concept of checkpointing in Spark and why it is important.
Explain the difference between Azure Data Factory (ADF) and Databricks.
Explain the Medallion Architecture (Bronze, Silver, Gold layers).
Explain wide vs. narrow transformations and how they drive shuffle cost, failure domains, and pipeline design. When would you intentionally add a wide transformation, and how do you minimize its impact?
Given a streaming dataset from Kafka, how would you ingest the data in real-time using Spark?
How do you drop columns with null values in PySpark?
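The PySpark pattern is: count nulls per column, then `df.drop(*cols_with_nulls)`. The counting logic itself, sketched over rows as plain dicts (the sample rows are illustrative):

```python
rows = [
    {"id": 1, "name": "a",  "email": None},
    {"id": 2, "name": None, "email": None},
]

columns = list(rows[0])
null_counts = {c: sum(r[c] is None for r in rows) for c in columns}
cols_to_drop = [c for c, n in null_counts.items() if n > 0]
cleaned = [{c: r[c] for c in columns if c not in cols_to_drop} for r in rows]
# cols_to_drop == ["name", "email"]; each cleaned row keeps only "id"
```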
How do you handle data skewness in Spark?
How do you optimize Spark jobs for performance?
How would you implement a sliding window aggregation in Spark Structured Streaming?
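The key insight interviewers probe for is that with window length 10 and slide 5, every event belongs to length/slide = 2 overlapping windows; Spark's window() performs this per-event assignment before aggregating. The assignment logic in plain Python (units are arbitrary time ticks):

```python
from collections import Counter

WINDOW, SLIDE = 10, 5  # window length and slide interval

def windows_for(ts):
    # Start of the latest window containing ts, then walk back by SLIDE.
    last_start = (ts // SLIDE) * SLIDE
    return [(s, s + WINDOW) for s in range(last_start, ts - WINDOW, -SLIDE)]

events = [2, 7, 12]
counts = Counter(w for ts in events for w in windows_for(ts))
# e.g. window (0, 10) contains events 2 and 7, so counts[(0, 10)] == 2
```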
How would you read data from a web API using PySpark?
Implement a Spark job to find the top 10 most frequent words in a large text file.
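A typical Spark answer is textFile → flatMap(split) → map to (word, 1) → reduceByKey(add) → take the top 10 by count. The same pipeline expressed in plain Python for reference (the tokenizer regex is an illustrative choice):

```python
import re
from collections import Counter

def top_words(lines, n=10):
    # flatMap + lowercase tokenize, then count and rank, mirroring
    # the reduceByKey + takeOrdered steps of the Spark version.
    words = (w for line in lines for w in re.findall(r"[a-z']+", line.lower()))
    return Counter(words).most_common(n)

lines = ["To be or not to be", "that is the question"]
# top_words(lines, 2) == [("to", 2), ("be", 2)]
```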
Prioritize Spark optimizations by impact and effort. Discuss partitioning strategy, caching policy, join selection, shuffle reduction, and when each becomes a scalability or cost bottleneck.
Retrieve the most recent sale_timestamp for each product (Latest Transaction).
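The usual SQL answer is GROUP BY product_id with MAX(sale_timestamp), or ROW_NUMBER() OVER (PARTITION BY product_id ORDER BY sale_timestamp DESC) when the whole latest row is needed. The grouped-max logic, sketched over illustrative tuples:

```python
# (product_id, sale_timestamp) rows; ISO-8601 strings compare chronologically.
sales = [("p1", "2024-01-01"), ("p2", "2024-01-05"), ("p1", "2024-03-02")]

latest = {}
for product_id, ts in sales:
    if product_id not in latest or ts > latest[product_id]:
        latest[product_id] = ts
# latest == {"p1": "2024-03-02", "p2": "2024-01-05"}
```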
Tell me about a time when you faced a challenging situation at work and how you handled it.
Walk through the three AQE features in Spark 3.x (shuffle-partition coalescing, join-strategy switching, skew-join handling)—how they operate at shuffle boundaries, which configs enable them, and what happens when AQE cannot help.
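For reference, the relevant Spark 3.x configs (coalescing and skew handling have their own flags; join-strategy switching is part of the AQE master switch, with its broadcast threshold tunable via `spark.sql.adaptive.autoBroadcastJoinThreshold`):

```properties
# Adaptive Query Execution master switch (default true since Spark 3.2)
spark.sql.adaptive.enabled=true
# Coalesce small shuffle partitions at runtime
spark.sql.adaptive.coalescePartitions.enabled=true
# Split and replicate skewed shuffle partitions in joins
spark.sql.adaptive.skewJoin.enabled=true
```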
What are the key components of AWS Glue, and how do they work together?
What are the key components of the Spark execution model (Job, Stage, Task)?