Red Flag: Listing techniques without prioritization or 'it depends.' Pro-Move: 'Spark UI showed 80% time in shuffle—we fixed skew with salting; next bottleneck was scan, so we added partition pruning'—shows systematic debugging.
This hard-level Spark/Big Data question appears in data engineering interviews at companies like Fragma Data Systems, Presidio, and Swiggy. It is less common than entry-level questions, but it tests the deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (joins, optimization, partitioning) will help you answer variations of it confidently.
This is a senior-level question that tests architectural thinking. Lead with the high-level design, then drill into specifics. Discuss trade-offs explicitly; there is rarely a single correct answer. Show awareness of scale, fault tolerance, and operational complexity.
1) Broadcast joins for small tables: avoids the shuffle entirely.
2) Predicate pushdown: filter at the source (Parquet/ORC) to reduce the scan.
3) Partition tuning: set spark.sql.shuffle.partitions to roughly 2-4x the total cores, and match partition columns to filter/join keys.
4) Cache only what is reused; unpersist when done to free memory.
5) Prefer Spark SQL and built-in functions over UDFs so Catalyst can optimize.
6) Skew handling: salted keys or AQE's skew join.
7) Kryo serialization for RDD workloads; avoid the Java default.
8) Coalesce before writing to avoid small files.
Why: each technique addresses a different bottleneck (shuffle, scan, GC, serialization). Cost: wrong configs can 10x runtime and cost. Best practice: profile first in the Spark UI, fix the largest bottleneck, and iterate. Code sketches of several of these techniques follow below.
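A minimal PySpark sketch of techniques 1-3, assuming hypothetical Parquet paths and column names (orders, dim_country, country_code, order_date); the 200-partition setting assumes a cluster with roughly 50-100 cores:

```python
# Sketch of techniques 1-3; paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# 3) Partition tuning: ~2-4x the total executor cores.
spark.conf.set("spark.sql.shuffle.partitions", "200")

# 2) Predicate pushdown: filtering right after read lets Spark push the
# predicate into the Parquet scan itself.
orders = (
    spark.read.parquet("s3://bucket/orders")  # hypothetical path
         .filter(col("order_date") >= "2024-01-01")
)

dim_country = spark.read.parquet("s3://bucket/dim_country")  # small table

# 1) Broadcast join: ship the small dimension to every executor,
# so the large side is never shuffled.
joined = orders.join(broadcast(dim_country), "country_code")
joined.explain()
```

Verify the effect in the physical plan: explain() should show a BroadcastHashJoin and PushedFilters on the Parquet scan.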
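For technique 6, one common salting pattern, useful on Spark 2.x or when AQE alone doesn't resolve a hot key; all table and column names below are illustrative, not from the original answer:

```python
# Sketch of technique 6: salting a skewed join key.
from pyspark.sql import SparkSession
from pyspark.sql.functions import array, explode, floor, lit, rand

spark = SparkSession.builder.appName("skew-sketch").getOrCreate()

# Toy data: in practice `events` would be heavily skewed on user_id and
# `users` too large to broadcast.
events = spark.createDataFrame(
    [(1, "click"), (1, "view"), (2, "click")], ["user_id", "action"]
)
users = spark.createDataFrame([(1, "US"), (2, "DE")], ["user_id", "country"])

SALT_BUCKETS = 16

# Large side: append a random salt so one hot key spreads over 16 partitions.
events_salted = events.withColumn(
    "salt", floor(rand() * SALT_BUCKETS).cast("int")
)

# Other side: replicate each row once per salt value so every
# (user_id, salt) pair still finds its match.
users_salted = users.withColumn(
    "salt", explode(array(*[lit(i) for i in range(SALT_BUCKETS)]))
)

joined = events_salted.join(users_salted, ["user_id", "salt"]).drop("salt")

# On Spark 3.x, AQE can split oversized shuffle partitions automatically:
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
```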
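Finally, a sketch tying together caching discipline (4), Kryo serialization (7), and coalescing output files (8); the output path and target file count are placeholders:

```python
# Sketch of techniques 4, 7, 8. Kryo must be set before the session starts.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("write-sketch")
    # 7) Kryo: faster, more compact serialization for RDD shuffles and caching.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

df = spark.range(1_000_000)

# 4) Cache only because df is reused by two actions below; release it after.
df.cache()
total = df.count()
sample = df.limit(10).collect()
df.unpersist()

# 8) Coalesce before write: collapse to a few output files without a shuffle.
df.coalesce(8).write.mode("overwrite").parquet("/tmp/out")  # placeholder path
```

Note that coalesce(n) narrows partitions without a shuffle, so partitions can stay uneven; repartition(n) costs a shuffle but balances them.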
According to DataEngPrep.tech, this is one of the most frequently asked Spark/Big Data interview questions, reported at three companies.