
Briefly explain the architecture of Kafka.

System Design/Architecture · Hard · 3 min read


Frequency: Low (asked at 2 companies: Delivery Hero, Grover)
Category: System Design/Architecture (179 questions; difficulty split: 15 easy / 6 medium / 158 hard)
Total bank: 1,863 questions across 7 categories

Key Concepts Tested
join, optimization, partition, window

Why This Question Matters

This hard-level System Design/Architecture question has been reported in data engineering interviews at companies like Delivery Hero and Grover. While less common than staple questions, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (join, optimization, partition) will help you answer variations of this question confidently.

How to Approach This

This is a senior-level question that tests architectural thinking. Lead with the high-level design, then drill into specifics. Discuss trade-offs explicitly; there is rarely one correct answer. Show awareness of scale, fault tolerance, and operational complexity. The expert answer includes a code example that demonstrates the implementation pattern.

Expert Answer
602 words · Includes code

Section 1 — The Context (The 'Why')
Kafka must handle millions of events per second while guaranteeing durability, ordering within partitions, and consumer group coordination. Failures include broker loss, consumer rebalance storms, and retention vs. storage cost trade-offs. A naive single-broker design loses data and cannot scale reads.

Section 2 — The Diagram

[Producers] --> [Brokers]
                    |
                    v
        [Topics / Partitions]
                    |
                    v
         [Consumer Groups]
        (offsets | rebalance)

Section 3 — Component Logic
Producers push to brokers with acks=all for durability. Brokers store partitions and replicate them; the ISR (in-sync replica) set guarantees durability, and combined with the idempotent producer it yields exactly-once semantics on the produce path. Partitioning strategies determine key grouping: partition by user_id for per-key ordering, or randomly for load balance. Consumer groups enable fan-out patterns: each group consumes the stream independently. Backpressure handling is implicit: slow consumers lag, and producers can block or drop. TTL policies (retention.ms) control storage cost vs. replay window. For data skew mitigation, avoid hot keys (e.g., a single default value standing in for missing IDs) in the partition key.
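
A minimal sketch of these mechanics with the confluent-kafka Python client (the broker address, topic, group name, and the process() step are illustrative assumptions, not part of the original answer):

```python
from confluent_kafka import Consumer, Producer

# Producer: acks=all waits for all in-sync replicas; idempotence makes
# broker-side retries safe (no duplicates on the produce path).
producer = Producer({
    "bootstrap.servers": "localhost:9092",  # illustrative broker address
    "acks": "all",
    "enable.idempotence": True,
})

# Keying by user_id routes every event for that user to one partition,
# which is what preserves per-user ordering.
producer.produce("events", key=b"user-42", value=b'{"action": "click"}')
producer.flush()

# Consumer: each group.id receives its own copy of the stream (fan-out).
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics",           # illustrative group name
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,       # commit only after processing
})
consumer.subscribe(["events"])

msg = consumer.poll(5.0)
if msg is not None and not msg.error():
    process(msg.value())               # hypothetical processing step
    consumer.commit(message=msg)       # at-least-once delivery
consumer.close()
```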

Section 4 — The Trade-offs (The 'Senior' part)

  • CAP Theorem: Kafka favors AP. Brokers remain available; during broker failure, a new leader is elected from the ISR. We sacrifice strong consistency during rebalance for availability.
  • Cost vs. Performance: MSK: $0.21/broker-hr. Confluent Cloud: $0.11/GB in. Self-hosted EC2: ~$0.17/hr. Replication factor 3 means 3x storage; 7-day retention balances cost vs. replay window.
  • Blast Radius: Broker failure: ISR leader election in <10s. With min.insync.replicas=2, one broker failure is tolerated. Consumer failure: rebalance, with lag on that partition. No data loss with acks=all and RF=3. (A topic-creation sketch with these settings follows below.)
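
Connecting the bullets above to configuration, a minimal sketch of creating a topic with these durability and retention settings via confluent-kafka's admin API (topic name and partition count are illustrative assumptions):

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# RF=3 tolerates one broker loss; min.insync.replicas=2 keeps acks=all
# producers writing while a single replica is down.
topic = NewTopic(
    "events",                    # illustrative topic name
    num_partitions=12,           # illustrative sizing
    replication_factor=3,
    config={
        "min.insync.replicas": "2",
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # 7-day replay window
    },
)

futures = admin.create_topics([topic])
futures["events"].result()  # raises on failure (e.g., topic already exists)
```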
Section 5 — Pro-Tip
**Pro-Move**: 'We set min.insync.replicas=2 so one broker can fail without blocking producers.' **Red Flag**: Ignoring replication factor or consumer lag monitoring.

Supplemental (Senior Context):

  • Observability: In production, monitor partition skew, consumer lag, and merge duration. Use correlation IDs for traceability across pipeline stages. Establish SLOs per stage: ingest latency, transform duration, serve freshness. Alert on lag exceeding 1hr or an error rate above 1%. Measure and iterate on latency percentiles, cost per record, and error rate; the principal-engineer tip is to quantify before and after optimizations.
  • Schema and data quality: Prefer additive schema changes only; use Schema Registry for streaming to enforce compatibility. Consider data contract tests in CI to catch breaking changes early. Data quality gates at each layer prevent bad-data propagation, and lineage tracking enables impact analysis.
  • Partitioning: Review partition key choice: avoid high-cardinality keys that cause explosion; use composite keys (date, tenant) for balanced distribution. Mitigate data skew via salting for high-cardinality joins. Partitioning strategies must align with query patterns for pruning; optimize for the common case (most queries filter by date). Partition evolution: add new partitions without rewriting.
  • Delivery semantics: Exactly-once semantics require a replayable source and an idempotent sink; use idempotency keys for replay and watermark-based deduplication. Backpressure handling prevents slow consumers from blocking producers, and fan-out patterns allow multiple consumers without re-processing. Incremental processing reduces compute versus full refresh.
  • Cost: Budget 10-20% overhead for replication, checkpoint storage, and DLQ. Right-size resources: profile before scaling; over-provisioning wastes budget. Use lifecycle policies, spot instances, and partition pruning. Glue versus EMR: Glue for bursty sub-2hr jobs; EMR for sustained 8hr+ jobs, saving ~60%. MSK for Kafka; S3 for lake storage. Retention policies balance cost and compliance.
  • Operability and resilience: Document runbooks for common failures (broker restart, consumer rebalance, sink timeout) and design for operability with dashboards and alerts. Avoid tight coupling between stages. Self-heal via orchestration retries; idempotent sinks ensure consistency. If the primary fails, downstream goes stale but there is no data loss with replay. Test failure injection (kill executors, a broker, a sink) to validate recovery, and test at scale with production-size samples. Cold-start mitigation: pre-warm connections and cache dimension lookups. CAP trade-off: AP for ingest and transform; CP for serve when BI needs accuracy. Blast radius is bounded by partition and consumer group.
  • Red flag: describing architecture without trade-offs. Always document trade-offs.
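
As a concrete instance of the consumer-lag monitoring called out above, a minimal sketch that computes per-partition lag with the confluent-kafka client (broker, group, and topic names are illustrative assumptions):

```python
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics",   # the group whose lag we want to inspect
})

topic = "events"
partitions = [
    TopicPartition(topic, p)
    for p in consumer.list_topics(topic).topics[topic].partitions
]

# Compare the group's committed offset with the latest offset per partition.
for tp in consumer.committed(partitions, timeout=10):
    _, high = consumer.get_watermark_offsets(tp, timeout=10)
    # A negative offset means the group has not committed yet.
    lag = high - tp.offset if tp.offset >= 0 else high
    print(f"partition {tp.partition}: lag={lag}")

consumer.close()
```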


Related System Design/Architecture Questions

  • (hard) What architecture are you following in your current project, and why?
  • (easy) CDC During Migration: explain approaches for real-time Change Data Capture
  • (hard) Describe the data pipeline architecture you've worked with.
  • (hard) Explain the trade-offs between batch and real-time data processing. Provide examples of when each is appropriate.
  • (hard) Can you explain the trade-offs you made during the design process?

According to DataEngPrep.tech, this System Design/Architecture interview question has been reported at 2 companies. DataEngPrep.tech maintains a curated database of 1,863+ real data engineering interview questions across 7 categories, verified by industry professionals.
