This hard-level Spark/Big Data question comes up in data engineering interviews at companies like Expedia. Though less common than the staples, it tests deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (joins, optimization, partitioning) will help you answer variations of this question confidently.
This is a senior-level question that tests architectural thinking. Lead with the high-level design, then drill into specifics. Discuss trade-offs explicitly: there is rarely one correct answer. Show awareness of scale, fault tolerance, and operational complexity. The expert answer includes a code example that demonstrates the implementation pattern.
Section 1 — The Context (The 'Why')
Real-time streaming pipelines face a fundamental tension: events arrive continuously at high velocity while downstream consumers demand low latency, yet the system must guarantee no data loss during broker failures or consumer restarts. A naive approach—writing directly to a database per event—collapses under load; checkpointing to local disk loses state on executor preemption. The primary challenge is achieving exactly-once semantics across a distributed, fault-prone pipeline while maintaining sub-minute latency SLAs.
Section 2 — The Diagram
```
[Events] --> [Kinesis/Kafka]
                   |
                   v
            [Spark/Flink] --> [Delta Lake]
                   |
                   v
     [Bronze] --> [Silver] --> [Gold]
                   |
                   v
        [BI | ML | Dashboards]
```
Section 3 — Component Logic
- **Ingestion buffer (Kinesis/Kafka):** events land here first regardless of downstream processing speed, enabling backpressure handling at the source. We choose a partitioned stream because parallelism equals throughput; partitioning by business key preserves order within a partition.
- **Stream processor (Spark/Flink):** executes micro-batches or continuous processing. We choose Spark Structured Streaming for SQL familiarity and Glue integration, or Flink for true event-time windowing and lower latency. The processor must implement exactly-once semantics via idempotent sinks: a Delta Lake merge on (primary_key, batch_id) ensures duplicate events from retries produce one output row.
- **Medallion layers (Bronze/Silver/Gold):** Bronze is raw and append-only; Silver applies deduplication and merge; Gold contains business aggregates.
- **Skew mitigation:** salting on high-cardinality join keys prevents hotspot partitions.
- **Lifecycle:** TTL policies on Bronze move cold data to Glacier after retention windows.
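The idempotent-sink guarantee can be sketched in plain Python. This is a toy stand-in: in production the same semantics come from a Delta Lake `MERGE` keyed on `(primary_key, batch_id)` inside Structured Streaming's `foreachBatch`; the `IdempotentSink` class and its method names below are purely illustrative.

```python
class IdempotentSink:
    """Keeps one row per (primary_key, batch_id), so batch replays are no-ops."""

    def __init__(self):
        self.rows = {}  # (primary_key, batch_id) -> payload

    def upsert(self, primary_key, batch_id, payload):
        # MERGE semantics: matched rows are overwritten, unmatched rows inserted.
        self.rows[(primary_key, batch_id)] = payload

    def write_batch(self, batch_id, events):
        for event in events:
            self.upsert(event["id"], batch_id, event["value"])


sink = IdempotentSink()
batch = [{"id": "order-1", "value": 100}, {"id": "order-2", "value": 250}]

sink.write_batch(batch_id=7, events=batch)
sink.write_batch(batch_id=7, events=batch)  # retry after a failure: replayed

# Despite two deliveries, each (key, batch) pair yields exactly one row.
assert len(sink.rows) == 2
```

The point the sketch makes is that the sink, not the delivery mechanism, enforces exactly-once: the stream may deliver a batch twice after a failure, but a merge keyed on `(primary_key, batch_id)` makes the second delivery a harmless overwrite.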
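The salting idea can also be illustrated with a small sketch. The function name and `NUM_SALTS` value are assumptions for illustration; in Spark the same effect is achieved by appending a salt column to the join key on the skewed side.

```python
import hashlib

NUM_SALTS = 4  # fan-out factor; tune to the observed skew (assumed value)

def salted_key(business_key: str, event_id: str) -> str:
    # Derive a stable salt from the event id so retries hash identically.
    salt = int(hashlib.md5(event_id.encode()).hexdigest(), 16) % NUM_SALTS
    return f"{business_key}#{salt}"

# A single hot key ("user-42") now spreads across up to NUM_SALTS partitions
# instead of landing every row on one hotspot.
keys = {salted_key("user-42", f"evt-{i}") for i in range(1000)}
assert keys <= {f"user-42#{s}" for s in range(NUM_SALTS)}
```

Note the cost of this trick: the other side of the join must replicate each of its keys once per salt value (`user-42#0` … `user-42#3`) so that every salted row still finds its match.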