Blast Radius
Kafka fail: replay from committed offsets. Flink fail: restore from the last checkpoint. Redis fail: dashboard serves stale data until refilled. Event-time: late data still lands in its correct window.
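A minimal replay sketch for the Kafka failure case, assuming a confluent-kafka consumer and a hypothetical "events" topic with three partitions: rewind each partition to the offset at a recovery timestamp so downstream processing picks up from that point.

```python
# Sketch: replay a Kafka topic from a point in time after a downstream failure.
# Assumptions: confluent-kafka client; topic "events" with 3 partitions; a dedicated
# replay consumer group. All names and the timestamp are illustrative.
from confluent_kafka import Consumer, TopicPartition

RECOVERY_TS_MS = 1_700_000_000_000  # replay everything at/after this event timestamp

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "replay-group",
    "enable.auto.commit": False,      # commit manually only after reprocessing succeeds
})

# Ask the broker for the earliest offset at/after the recovery timestamp, per partition.
partitions = [TopicPartition("events", p, RECOVERY_TS_MS) for p in range(3)]
offsets = consumer.offsets_for_times(partitions, timeout=10.0)
consumer.assign(offsets)              # start consuming from the recovered offsets

def process(msg) -> None:
    # Stand-in for the real (idempotent) downstream write.
    print(msg.topic(), msg.partition(), msg.offset())

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        raise RuntimeError(msg.error())
    process(msg)
    consumer.commit(message=msg)
```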
Senior Terminology
Exactly-once semantics: Kafka transactions + Flink checkpoints. Event-time windows: handle late data. Backpressure: shows up as consumer lag. Idempotency: dedup on retry.

Each component has a distinct role: the ingestion layer buffers events and applies backpressure when consumers lag; the processing layer implements idempotency via merge keys (pk, batch_id) to prevent duplicates on retry (sketched below). Exactly-once semantics require a transactional producer plus an idempotent sink; without both, at-least-once delivery produces duplicates. Data skew is mitigated by salting high-cardinality keys and by partitioning strategies that align with query filters. Fan-out patterns let multiple consumers read the same source without duplicating data. TTL policies on the raw layer manage retention and cost.

In production, monitor consumer lag as a backpressure indicator, validate schema at ingest, and route poison messages to a DLQ. Data flows left to right in the diagram; partition count should equal or exceed consumer count so every consumer gets work. For Kafka pipelines, RF=3 with min.insync.replicas=2 survives a single broker failure without data loss. Checkpoint to durable storage (S3), not local disk. These choices (idempotency, checkpointing, replication factor) determine fault tolerance and data correctness.
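A minimal sketch of the idempotent-sink pattern above, assuming a Delta Lake Silver table and a micro-batch DataFrame carrying pk and batch_id columns (table path and column names are illustrative): an insert-only MERGE keyed on (pk, batch_id) makes a retried batch a no-op.

```python
# Sketch: idempotent write keyed on (pk, batch_id) so a retried batch cannot duplicate rows.
# Assumptions: delta-spark is installed and the target table already exists at this path.
from delta.tables import DeltaTable

def write_batch(spark, batch_df):
    target = DeltaTable.forPath(spark, "s3://my-lake/silver/events")  # hypothetical path
    (
        target.alias("t")
        .merge(
            batch_df.alias("s"),
            "t.pk = s.pk AND t.batch_id = s.batch_id",  # merge key = natural key + batch id
        )
        .whenNotMatchedInsertAll()  # insert only unseen rows; retries of the same batch do nothing
        .execute()
    )
```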
Section 4 — The Trade-offs (The 'Senior' part)
CAP Theorem: We choose AP for streams and eventual-consistency layers; CP for the ledger and warehouse, where correctness matters. During a network partition, AP systems remain available; CP systems may reject writes to preserve consistency.
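As a sketch of what the CP-leaning versus AP-leaning choice looks like at the Kafka client level (broker address, topic names, and values are assumptions): the ledger path pairs acks=all with idempotence, while the analytics path relaxes acks for availability and latency.

```python
# Sketch: CP-leaning vs AP-leaning producer settings (confluent-kafka; names are illustrative).
from confluent_kafka import Producer

# CP-leaning: a write is acknowledged only once the in-sync replica set has it.
ledger_producer = Producer({
    "bootstrap.servers": "broker:9092",
    "acks": "all",                # wait for the full ISR (pairs with min.insync.replicas=2)
    "enable.idempotence": True,   # no duplicates on client retry
})

# AP-leaning: keep accepting writes even if replication is degraded.
metrics_producer = Producer({
    "bootstrap.servers": "broker:9092",
    "acks": "1",                  # leader-only ack; may lose data during failover
})

ledger_producer.produce("ledger.transactions", key=b"txn-1", value=b"{}")
metrics_producer.produce("analytics.clicks", value=b"{}")
ledger_producer.flush()
metrics_producer.flush()
```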
Cost vs. Performance: Glue ~$0.44/DPU-hr; MSK ~$0.21/broker-hr; S3 ~$0.023/GB-month; Kinesis ~$0.015/shard-hr. Right-size: bursty jobs favor serverless; sustained workloads favor EMR.
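A back-of-the-envelope comparison using the rates above (the EMR per-node figure is an illustrative assumption, not a quoted price): a job that is busy one hour a day is cheaper on serverless Glue, while a sustained 8+ hour workload tips toward a right-sized EMR cluster.

```python
# Sketch: rough daily-cost comparison using the per-hour rates quoted above.
# The EMR node rate (EC2 + EMR uplift) is an illustrative assumption.
GLUE_DPU_HR = 0.44
EMR_NODE_HR = 0.192 + 0.048   # hypothetical m5.xlarge + EMR surcharge, per node-hour

def glue_cost(dpus: int, hours: float) -> float:
    return dpus * hours * GLUE_DPU_HR

def emr_cost(nodes: int, hours: float) -> float:
    return nodes * hours * EMR_NODE_HR

# Bursty: 1 busy hour/day. Glue bills only that hour; an always-on 5-node cluster bills 24.
print(glue_cost(10, 1.0), emr_cost(5, 24.0))   # ~$4.40/day vs ~$28.80/day
# Sustained: 8 busy hours/day keeps the cluster earning its keep.
print(glue_cost(10, 8.0), emr_cost(5, 24.0))   # ~$35.20/day vs ~$28.80/day
```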
Blast Radius: A primary component failure affects only its direct downstream. Kafka broker loss: leader election from the ISR in under 10s. Spark failure: replay from the last checkpoint. Sink failure: idempotent retry. A DLQ isolates poison messages. The system self-heals via replication and consumer rebalance.
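A minimal DLQ sketch, assuming confluent-kafka and hypothetical "events" / "events.dlq" topics: a record that fails processing is parked on the DLQ with its error attached, so one poison message never blocks the partition.

```python
# Sketch: route poison messages to a DLQ instead of blocking the partition.
# Topic names, group id, and the handle() body are illustrative assumptions.
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "orders-processor",
    "enable.auto.commit": False,
})
dlq = Producer({"bootstrap.servers": "broker:9092"})
consumer.subscribe(["events"])

def handle(payload: dict) -> None:
    payload["amount"] * 1   # stand-in for real processing; raises on malformed input

while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    if msg.error():
        continue                                    # transport error; client will retry
    try:
        handle(json.loads(msg.value()))
    except Exception as exc:
        # Poison message: park it with context and keep the partition flowing.
        dlq.produce("events.dlq", key=msg.key(), value=msg.value(),
                    headers={"error": str(exc)})
        dlq.poll(0)                                 # serve delivery callbacks
    consumer.commit(message=msg)                    # commit either way; the DLQ owns the bad record
```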
Section 5 — Pro-Tip
Pro-Move: Event-time windows with allowed lateness for late data; exactly-once via idempotent sinks; partition by key; Redis for hot serving plus Delta for history.
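A minimal Structured Streaming sketch of the pro-move (topic, schema, and paths are assumptions): event-time windows with a watermark for late data, keyed aggregation, and an append to Delta with a durable S3 checkpoint.

```python
# Sketch: event-time windowed aggregation with late-data handling, checkpointed to S3.
# Kafka topic, schema, and storage paths are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json(
        "json",
        "order_id STRING, region STRING, amount DOUBLE, event_time TIMESTAMP",
    ).alias("e"))
    .select("e.*")
)

agg = (
    events
    .withWatermark("event_time", "10 minutes")            # accept data up to 10 min late
    .groupBy(F.window("event_time", "5 minutes"), "region")
    .agg(F.sum("amount").alias("revenue"))
)

query = (
    agg.writeStream.format("delta")
    .outputMode("append")                                  # emit only finalized windows
    .option("checkpointLocation", "s3://my-lake/checkpoints/orders_agg")  # durable, not local disk
    .start("s3://my-lake/gold/orders_agg")
)
```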
Red Flag: Processing-time windows.

From a Principal Engineer perspective, the key differentiators are operational rigor (defined SLAs, runbooks, chaos testing) and cost consciousness (right-sizing, reserved capacity, and incremental processing to minimize compute). The failure modes we guard against include partition events (Kafka ISR, consumer rebalance), poison messages (DLQ with alerting), and offset loss (S3 checkpoint). Interview red flags include missing idempotency (duplicates on retry), no DLQ (one bad record blocks the pipeline), and checkpointing to ephemeral storage (state lost on preemption).

Production systems require monitoring of consumer lag, data freshness SLOs, and cost per record processed. Schema evolution should be additive-only with Schema Registry; partitioning strategies must align with query filters (date, region); blast radius is contained through replication, circuit breakers, and graceful degradation. When choosing between CP and AP: ledger and warehouse layers favor consistency; streams and caches favor availability. Cost optimization: Glue for bursty jobs under 2 hours; EMR for sustained 8+ hour workloads. Always quantify improvements: latency reduction, cost savings, volume handled.

Data skew mitigation via salting and AQE prevents hotspot tasks; exactly-once semantics require idempotent sinks; fan-out patterns enable multiple consumers without duplication. TTL policies on Bronze reduce storage cost; incremental processing cuts compute by 90% versus full scans. A replication factor of three with min.insync.replicas=2 ensures durability; partition count should meet or exceed consumer count (extra consumers sit idle); event-time beats processing-time because it handles late arrivals correctly. Medallion architecture separates raw from curated; quality gates at Silver prevent bad data from propagating; conformed dimensions enable cross-mart consistency.

In interviews, demonstrate production experience by citing specific metrics: P95 latency, cost per million events, recovery time objective. Avoid generic answers; tie each design choice to a measurable outcome. The trade-off between consistency and availability is made per component: choose CP for financial transactions, AP for analytics. Scale testing should cover 10x peak load; runbooks should document failure recovery steps. Blue-green deployments enable zero-downtime schema evolution; view abstraction with COALESCE supports additive column migration (see the sketch below). For real-time systems, define SLOs before building; lag under five minutes and freshness under one hour are common targets. Correlation IDs in log records enable end-to-end tracing when debugging production incidents. Reserve capacity for traffic spikes; implement circuit breakers to prevent cascading failures across dependent services. Document design decisions and their trade-offs for future maintainability. This demonstrates production-grade system design thinking.
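A minimal sketch of the view-abstraction idea for additive column migration (database, table, and column names are hypothetical): readers query the view, which prefers the new column and falls back to the old one until backfill completes, so the change ships with zero downtime.

```python
# Sketch: additive schema evolution behind a view. Consumers query orders_v, not the table,
# so adding customer_id_v2 and backfilling it never breaks readers. Names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("ALTER TABLE warehouse.orders ADD COLUMNS (customer_id_v2 STRING)")

spark.sql("""
    CREATE OR REPLACE VIEW analytics.orders_v AS
    SELECT
        order_id,
        COALESCE(customer_id_v2, customer_id) AS customer_id,  -- prefer new column, fall back to old
        amount,
        event_time
    FROM warehouse.orders
""")
```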