DataEngPrep.tech

An existing job running longer suddenly: how to analyze the issue?

General/Other · medium · 0.4 min read · Premium

Frequency: Low — asked at 1 company (Citi)
Category: General/Other (243 questions)
Difficulty split in this category: 151 easy | 43 medium | 49 hard
Total bank: 1,863 questions across 7 categories
Interview Pro Tip

Pro-Move: "We had a job slow down 3×; the Spark UI showed one partition taking 2 hours. Adding salting brought it back to 30 minutes." Red Flag: Restarting the job or adding resources without diagnosing only masks the problem.
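The skew the pro tip describes ("one partition at 2 hr") can be confirmed before touching the job. Below is a minimal pure-Python sketch of the idea, assuming hash partitioning like Spark's default partitioner; the hot-key data is synthetic. In a real job you would read the same signal from per-task durations and input sizes in the Spark UI, or from counting rows per partition.

```python
# Minimal sketch: detect partition skew by hashing keys into buckets,
# mimicking how a hash partitioner distributes rows across partitions.
from collections import Counter

def partition_counts(keys, num_partitions=8):
    """Count rows landing in each partition under hash partitioning."""
    counts = Counter(hash(k) % num_partitions for k in keys)
    return [counts.get(p, 0) for p in range(num_partitions)]

# Synthetic data: 90% of rows share one hot key (hypothetical example).
keys = ["hot"] * 900 + [f"k{i}" for i in range(100)]

sizes = partition_counts(keys)
# One partition dwarfs the rest: that is the straggler task you would
# see as a single long bar in the Spark UI's stage timeline.
print(sorted(sizes, reverse=True))
```

The same check in PySpark would be `df.rdd.glom().map(len).collect()` on a small sample; a single oversized partition confirms skew rather than overall volume growth.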

Key Concepts Tested
partitioning, Spark

Why This Question Matters

This medium-level General/Other question appears in data engineering interviews at companies like Citi. Though asked less often than the core staples, it tests deeper understanding that distinguishes strong candidates. Mastering the underlying concepts (partitioning, Spark) will help you answer variations of this question confidently.

How to Approach This

Break the problem into components, identify the core trade-offs, then walk the interviewer through your reasoning step by step. Show awareness of edge cases and production considerations; that is what separates good answers from great ones.

Expert Answer

**Situation**: A job that previously ran in 2 hours now takes 6.
**Task**: Find the root cause before changing anything.
**Analysis**: (1) Data volume growth: compare input size against previous runs. (2) Skew: the Spark UI shows uneven partition/task durations. (3) Resource contention: other jobs competing for the cluster. (4) Source throttling: database or API rate limits. (5) Partition pruning: are expected partitions missing?
**Actions**: Add partitions; apply salting for skew; increase resources; fix source limits.
**Result**: Document the findings in a runbook and add monitoring.
**Best practice**: Compare run metrics (input GB, partition count) and use the Spark UI to spot skew.
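The salting action from the answer can be sketched in a few lines. This is a pure-Python illustration, not Spark code: the salt fans a hot key out across sub-keys so the work spreads over several partitions instead of one. Key names and bucket counts are hypothetical.

```python
# Minimal sketch of key salting: spread a hot key across N sub-keys so
# hash partitioning splits its rows over N partitions instead of one.
import random
from collections import Counter

random.seed(42)  # deterministic for the illustration
SALT_BUCKETS = 8

def salted_key(key, num_salts=SALT_BUCKETS):
    """Append a random salt so identical keys fan out across partitions."""
    return f"{key}#{random.randrange(num_salts)}"

def partition_counts(keys, num_partitions=8):
    counts = Counter(hash(k) % num_partitions for k in keys)
    return [counts.get(p, 0) for p in range(num_partitions)]

# Synthetic skewed input: 90% of rows share one hot key.
keys = ["hot"] * 900 + [f"k{i}" for i in range(100)]

before = partition_counts(keys)
after = partition_counts([salted_key(k) for k in keys])
# The largest partition shrinks sharply once the hot key is salted.
print("max before:", max(before), "max after:", max(after))
```

Note that in a real Spark join you must also replicate the other side of the join across all salt values (e.g. explode the dimension table with salts 0..N-1) so matches still line up; salting only one side drops rows.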



Related General/Other Questions

  • hard — Have you worked on Data Warehousing projects? (Free)
  • medium — How would you read data from a web API? What steps would you follow after reading the data? (Free)
  • hard — Retrieve the most recent sale_timestamp for each product (Latest Transaction). (Free)
  • hard — What is the difference between OLTP and OLAP? (Free)
  • medium — What is the difference between SQL and NoSQL databases? (Free)


