
Architect incremental load in ADF + Databricks with idempotency, late-arrival handling, and cost/scalability implications of watermark vs. change data capture.

Spark/Big Data · Medium · 1 min read · Premium

Interview Pro Tip

Red Flag: a watermark in ADF without an idempotent sink, since reruns can double-count. Pro-Move: use Delta CDF + MERGE for true CDC; combine streaming for near-real-time with batch for backfill in a unified pipeline.

Key Concepts Tested
Partitioning

Why This Question Matters

This medium-level Spark/Big Data question appears in data engineering interviews at companies like Deloitte and Incedo. While less common than staple Spark questions, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (partitioning, watermarks, CDC) will help you answer variations of this question confidently.

How to Approach This

Break this problem into components. Identify the core trade-offs involved, then walk the interviewer through your reasoning step by step. Demonstrate awareness of edge cases and production considerations; this is what separates good answers from great ones.

Expert Answer

Pattern: process only new/changed data by tracking the last processed boundary (a watermark) or by using change data capture (CDC).

ADF approach: maintain a watermark via a Lookup activity or stored procedure that stores max(modified_date); filter the source with WHERE modified_date > @lastRun; parameterize the pipeline on lastRun; write to the sink. Use triggers for scheduling.
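
To make that handshake concrete, here is a rough Python rendering of what the Lookup → Copy → Stored Procedure activities do. This is a sketch, not ADF itself: the connection string and the etl.watermark / dbo.orders table names are hypothetical stand-ins.

```python
import pyodbc

# Rough stand-in for the ADF Lookup -> Copy -> Stored Procedure flow.
conn = pyodbc.connect("DSN=source_db")  # placeholder connection
cur = conn.cursor()

# Lookup activity: fetch the last processed boundary.
cur.execute("SELECT last_modified FROM etl.watermark WHERE table_name = 'orders'")
last_run = cur.fetchone()[0]

# Copy activity: pull only rows past the watermark
# (ADF injects @lastRun as a pipeline parameter).
cur.execute("SELECT * FROM dbo.orders WHERE modified_date > ?", last_run)
rows = cur.fetchall()  # in ADF this streams to the sink instead
# ... write `rows` to the sink here ...

# Stored-procedure activity: advance the watermark only after the sink
# write succeeds, so a failed run can simply be re-run.
cur.execute(
    "UPDATE etl.watermark SET last_modified = "
    "(SELECT MAX(modified_date) FROM dbo.orders) WHERE table_name = 'orders'"
)
conn.commit()
```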

Databricks approach: Delta Lake MERGE INTO for upserts; Change Data Feed (CDF) for CDC, reading the feed and filtering on _commit_version. Pattern: (1) read max(modified) from the target, (2) extract source rows WHERE modified > last_modified, (3) MERGE INTO the target on the business key.
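
A minimal PySpark sketch of that three-step pattern plus the CDF variant, assuming hypothetical tables bronze.orders / silver.orders and business key order_id (names are illustrative, not part of the question):

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# (1) Read the last processed boundary from the target.
last_modified = spark.table("silver.orders").agg(F.max("modified_date")).first()[0]

# (2) Extract only source rows past the watermark.
incoming = spark.table("bronze.orders").where(
    F.col("modified_date") > F.lit(last_modified)
)

# (3) Upsert on the business key; re-running the same batch is safe.
(
    DeltaTable.forName(spark, "silver.orders").alias("t")
    .merge(incoming.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# CDC variant: read the Change Data Feed from the last applied commit version.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 42)  # placeholder for the stored version
    .table("bronze.orders")
    .where(F.col("_change_type").isin("insert", "update_postimage"))
)
```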

Architectural choices: a watermark (time-based) is simpler and cheaper, but misses out-of-order or late arrivals unless you add a lookback window. CDC captures every change with higher fidelity, at higher operational cost (DB triggers, Debezium, etc.).
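
The lookback mitigation is small, continuing the PySpark sketch above; the 48-hour window is an arbitrary assumption to be tuned to the source's observed lateness:

```python
from datetime import timedelta

# Rewind the watermark by a buffer so late-arriving rows are re-read;
# the idempotent MERGE above makes the overlap harmless.
LOOKBACK = timedelta(hours=48)
late_tolerant = spark.table("bronze.orders").where(
    F.col("modified_date") > F.lit(last_modified - LOOKBACK)
)
```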

Scalability: incremental processing reduces scan size, so cost scales with the delta, not the full table. But the watermark table/lookup becomes a hotspot at scale; consider partitioning or caching it. For very high churn, an occasional full scan may be cheaper than complex CDC.

Cost implications: Watermark = one extra query per run (cheap). CDC = log shipping, storage, and processing overhead. Delta OPTIMIZE + Z-order after incremental loads improves read performance but adds compaction cost.
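The compaction step mentioned above is a single Databricks SQL statement; table and key are the same hypothetical names as before:

```python
# Compact small files from frequent incremental loads and co-locate rows
# on the common filter key: cheaper reads, at some write-time cost.
spark.sql("OPTIMIZE silver.orders ZORDER BY (order_id)")
```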

Idempotency: use surrogate keys; MERGE handles updates; design so the same batch can be re-run without producing duplicates.
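
One concrete idempotency trick, sketched with the same hypothetical names: deduplicate the batch on the business key before the MERGE (Delta MERGE errors if multiple source rows match one target row), so replaying an identical extract cannot insert or update twice:

```python
from pyspark.sql.window import Window

# Keep only the newest row per business key within the batch; MERGE then
# makes a replay of the identical batch a no-op.
w = Window.partitionBy("order_id").orderBy(F.col("modified_date").desc())
deduped = (
    incoming.withColumn("rn", F.row_number().over(w))
    .where("rn = 1")
    .drop("rn")
)
```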


Related Spark/Big Data Questions

  • Medium: What is the difference between repartition and coalesce in Apache Spark? (Free)
  • Hard: What is the difference between SparkSession and SparkContext in Spark? (Free)
  • Medium: What is the difference between cache() and persist() in Spark? When would you use each? (Free)
  • Medium: What is the difference between groupByKey and reduceByKey in Spark? (Free)
  • Medium: What is the difference between narrow and wide transformations in Apache Spark? Explain with examples. (Free)
