DataEngPrep.tech

What is the difference between cache() and persist() in Spark? When would you use each?

Spark/Big Data · Medium · 0.7 min read

Frequency: Low (asked at 5 companies)
Category: Spark/Big Data (452 questions)
Difficulty split in this category: 88 easy / 81 medium / 283 hard
Total bank: 1,863 questions across 7 categories
Asked at: Accenture, Coforge, Freecharge, Impetus, Yash Technologies
Interview Pro Tip

Pro-Move: Tie the storage level to cluster sizing and cost. Red Flag: Caching and never unpersisting, which leaks memory and wastes spend.

Key Concepts Tested
partition, spark

Why This Question Matters

This medium-level Spark/Big Data question appears in data engineering interviews at companies like Accenture, Coforge, Freecharge, and two others. While less common than the staples, it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (partition, spark) will help you answer variations of this question confidently.

How to Approach This

Break the problem into components: identify the core trade-offs, then walk the interviewer through your reasoning step by step. Demonstrating awareness of edge cases and production considerations is what separates good answers from great ones.

Expert Answer

cache(): On DataFrames/Datasets, equivalent to persist(MEMORY_AND_DISK). Stores partitions in memory; spills to disk if memory is insufficient. (Note that on the older RDD API, cache() defaults to MEMORY_ONLY instead.)

persist(storage_level): Explicit control over storage: MEMORY_ONLY, MEMORY_AND_DISK, MEMORY_ONLY_SER, MEMORY_AND_DISK_SER, DISK_ONLY.

Architectural Logic (Why It Matters): Caching trades memory/disk for recomputation cost. The right choice depends on reuse count, data size, serialization overhead, and cluster resources.

Scalability & Cost Trade-offs:

  • MEMORY_ONLY: Fastest access; no serialization. Risk of eviction under memory pressure—partial recompute. Use for smaller datasets with high reuse.

  • MEMORY_ONLY_SER: ~2–4x less memory; CPU cost for serialization. Better for large caches when memory is constrained.

  • MEMORY_AND_DISK: Fault-tolerant—spills to disk if evicted. Avoids full recompute. Default for cache().

  • DISK_ONLY: When memory is severely limited; slower but predictable.

Cost Implications: Caching a 500GB DataFrame in MEMORY_ONLY on 100 executors with 8GB each means eviction thrashing. Use MEMORY_AND_DISK or MEMORY_ONLY_SER instead. Always unpersist() when done to free resources and avoid unnecessary cluster cost.
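The 500GB example can be checked with back-of-the-envelope arithmetic. This rough sketch assumes Spark's default spark.memory.fraction of 0.6 and ignores off-heap storage and executor overhead:

```python
# Can a 500 GB DataFrame fit in MEMORY_ONLY across 100 executors
# with 8 GB of heap each? Storage can borrow the whole unified
# memory region (heap * spark.memory.fraction) only when execution is idle.
dataset_gb = 500
executors = 100
heap_gb = 8
memory_fraction = 0.6                   # default spark.memory.fraction

unified_region_gb = executors * heap_gb * memory_fraction
fits = dataset_gb <= unified_region_gb

print(f"unified region: {unified_region_gb:.0f} GB, fits: {fits}")
# 500 GB exceeds the ~480 GB region, so MEMORY_ONLY would evict and
# recompute partitions; MEMORY_AND_DISK or a serialized level is safer.
```

In practice the usable cache is even smaller, since execution memory competes for the same unified region during shuffles and joins.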


    Related Spark/Big Data Questions

• What is the difference between repartition and coalesce in Apache Spark? (medium)
• What is the difference between SparkSession and SparkContext in Spark? (hard)
• What is the difference between groupByKey and reduceByKey in Spark? (medium)
• What is the difference between narrow and wide transformations in Apache Spark? Explain with examples. (medium)
• What strategies can you use to handle skewed data in Spark? (medium)

