**Why Unix in Data Eng:** File staging, cleanup, and lightweight transforms before data hits Spark. `find`, `awk`, `sed`, `xargs`, and `cron` are ubiquitous and carry no runtime dependencies.
**Use Cases:** `find /data -name '*.csv' -mtime +7 | xargs gzip` to compress stale files (prefer `-print0 | xargs -0` when filenames may contain spaces). `awk -F',' '{print $1,$3}'` for column extraction (set `OFS=','` to keep the output comma-delimited). `sed` for bulk replace. Cron for scheduling: `0 2 * * * /scripts/ingest.sh`. See the sketch below.
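A minimal sketch of these patterns; the paths (`/data`, `/scripts/ingest.sh`) and filenames are hypothetical, and the in-place `sed -i` is the GNU form:

```bash
# Compress CSVs older than 7 days; NUL separators survive spaces in names
find /data -name '*.csv' -mtime +7 -print0 | xargs -0 gzip

# Extract columns 1 and 3, preserving comma delimiters in the output
awk -F',' -v OFS=',' '{print $1, $3}' /data/sales.csv > /data/sales_slim.csv

# Bulk replace in place (GNU sed; BSD/macOS needs `sed -i ''`)
sed -i 's/INACTIVE/DISABLED/g' /data/statuses.csv

# Cron entry (add via `crontab -e`): run ingestion daily at 02:00,
# appending stdout and stderr to a log file
# 0 2 * * * /scripts/ingest.sh >> /var/log/ingest.log 2>&1
```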
**Scalability:** Shell suits single-node work and pre-processing; once data is distributed, move to Spark. Best practices: `set -e` (exit on error), validate paths before touching them, and log to a file (see the sketch below)....
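A production-style wrapper illustrating those practices; the paths and the `log` helper are hypothetical, and `set -euo pipefail` is a common stricter variant of `set -e`:

```bash
#!/usr/bin/env bash
# ingest.sh -- illustrative skeleton; paths are placeholders
set -euo pipefail                      # exit on error, unset var, or failed pipe

DATA_DIR="/data/incoming"
LOG_FILE="/var/log/ingest.log"

# Timestamped logging to a file
log() { printf '%s %s\n' "$(date '+%F %T')" "$*" >> "$LOG_FILE"; }

# Validate inputs before doing any work
[ -d "$DATA_DIR" ] || { log "ERROR: $DATA_DIR does not exist"; exit 1; }

log "ingest start"
# Compress week-old CSVs; -r (GNU xargs) skips the run if nothing matched
find "$DATA_DIR" -name '*.csv' -mtime +7 -print0 | xargs -0 -r gzip
log "ingest complete"
```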