This hard-level Spark/Big Data question appears in data engineering interviews at companies like Meesho. Though asked less often than the fundamentals, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (ETL, optimization, partitioning) will help you answer variations of it confidently.
This is a senior-level question that tests architectural thinking. Lead with the high-level design, then drill into specifics. Discuss trade-offs explicitly; there is rarely one correct answer. Show awareness of scale, fault tolerance, and operational complexity. A strong answer includes a code example that demonstrates the implementation pattern.
**Section 1 — The Context (The 'Why')**
ETL pipelines combining Kafka and Spark Streaming must reconcile batch-oriented processing with continuous ingestion. The primary challenge is offset management: without checkpoints, a restarted Spark job either replays from the earliest offset or skips ahead to the latest, depending on its starting-offset configuration. Another failure mode is duplicate writes: at-least-once delivery produces duplicates whenever the sink is not idempotent. A naive approach processes data without bookmarks or merge keys, causing repeated inserts and incorrect aggregates.
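As a minimal sketch of checkpointed ingestion (assuming the spark-sql-kafka and delta-spark packages are on the classpath; the broker address, topic name, and paths below are illustrative placeholders, not details from the question):

```python
# Minimal sketch (PySpark): checkpointed Kafka ingestion into a Bronze table.
# The broker address, topic name, and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-bronze-etl").getOrCreate()

# startingOffsets applies only on the FIRST run; afterwards the checkpoint
# directory is the source of truth, so a restart resumes from the last
# committed offsets instead of replaying from the beginning or skipping data.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder
    .option("subscribe", "orders")                        # placeholder topic
    .option("startingOffsets", "earliest")
    .load()
)

query = (
    events.selectExpr("CAST(key AS STRING) AS key",
                      "CAST(value AS STRING) AS value")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/orders_bronze")   # placeholder path
    .trigger(processingTime="1 minute")                   # micro-batch cadence
    .start("/delta/bronze/orders")                        # placeholder path
)
```

Because the checkpoint directory stores committed offsets, every restart resumes from the last committed batch rather than replaying or skipping data.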
**Section 2 — The Diagram**
```
[Kafka Topics] --> [Consumer Group]
                         |
                         v
           [Spark Structured Streaming]
              (micro-batch / trigger)
                         |
                         v
           [Delta Bronze] --> [Silver]
                         |
                         v
           [Gold Marts] --> [dbt / BI]
```
**Section 3 — Component Logic**
- **Kafka Topics**: partitions keyed by entity ID ensure that related events land in the same partition, preserving per-entity ordering.
- **Consumer Group**: coordinates offset commits; each partition is consumed by exactly one consumer in the group.
- **Spark Structured Streaming**: runs micro-batches (e.g., 1–5 min triggers). We use it over DStreams because Structured Streaming supports event time, watermarking, and exactly-once semantics via checkpoint plus an idempotent sink.
- **Delta Bronze**: appends raw data as ingested.
- **Silver**: applies MERGE on (pk, batch_id) for idempotency, so reprocessing a batch produces identical results (see the sketch after this list).
- **Gold marts**: aggregated tables serving BI.
- **Fan-out**: multiple consumers (stream + batch) can read the same Kafka topic independently.
- **Incremental processing**: bookmarks in Glue or checkpoint paths in Spark enable incremental runs without full replay.
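A sketch of the Silver MERGE-for-idempotency step using foreachBatch, assuming Delta Lake's Python API and an existing Silver table; the paths are placeholders, while `pk` and `batch_id` mirror the merge keys named above:

```python
# Sketch: Bronze-to-Silver upsert keyed on (pk, batch_id) via foreachBatch.
# Assumes delta-spark and an existing Silver table; all paths are placeholders.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

def upsert_to_silver(micro_batch_df, batch_id):
    # Stamp each row with its micro-batch id so a replayed batch matches
    # the rows it wrote before instead of inserting duplicates.
    staged = micro_batch_df.withColumn("batch_id", F.lit(batch_id))
    silver = DeltaTable.forPath(spark, "/delta/silver/orders")  # placeholder
    (
        silver.alias("t")
        .merge(staged.alias("s"),
               "t.pk = s.pk AND t.batch_id = s.batch_id")
        .whenMatchedUpdateAll()      # replayed batch: rewrite identical rows
        .whenNotMatchedInsertAll()   # new batch: insert
        .execute()
    )

bronze_stream = spark.readStream.format("delta").load("/delta/bronze/orders")

query = (
    bronze_stream.writeStream
    .foreachBatch(upsert_to_silver)
    .option("checkpointLocation", "/chk/orders_silver")  # placeholder path
    .start()
)
```

Keying the MERGE on (pk, batch_id) means a replayed micro-batch updates its own earlier rows rather than inserting duplicates, which is what turns at-least-once delivery into effectively exactly-once results at the sink.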