**getmerge** (HDFS to local): `hadoop fs -getmerge /input/dir /local/merged.txt`
**In-HDFS merge** (cat + put): `hadoop fs -cat /input/dir/* | hadoop fs -put - /output/merged`
**Spark** (preferred for large data): `spark.read.text("/input/*").coalesce(1).write.text("/output")` — `coalesce(1)` yields exactly one output file; for large inputs, coalesce to a partition count that keeps files at a sane size (see the sketch after this list).
**Why Avoid a Single Huge File**: (1) No parallelism for downstream reads. (2) A file that arrives as one input split (e.g., non-splittable gzip) can feed only one map task. (3) Memory pressure if the file is read entirely into one process. …
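A minimal PySpark sketch of the size-based variant: it totals the input bytes through the Hadoop `FileSystem` API, then coalesces to roughly 128 MB output files rather than one. The `/input/dir` and `/output/dir` paths and the 128 MB target are illustrative assumptions, and `spark._jsc` / `spark._jvm` are internal PySpark handles rather than public API.

```python
import math

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("merge-small-files").getOrCreate()

# Sum the input size via Hadoop's FileSystem API (reached through the JVM gateway).
hadoop_conf = spark._jsc.hadoopConfiguration()
in_path = spark._jvm.org.apache.hadoop.fs.Path("/input/dir")  # assumed path
fs = in_path.getFileSystem(hadoop_conf)
total_bytes = fs.getContentSummary(in_path).getLength()

# Aim for ~128 MB per output file instead of a single huge one (assumed target).
target_bytes = 128 * 1024 * 1024
num_files = max(1, math.ceil(total_bytes / target_bytes))

(spark.read.text("/input/dir")
      .coalesce(num_files)          # cap the number of output files
      .write.mode("overwrite")
      .text("/output/dir"))
```

`coalesce` avoids a full shuffle when reducing the partition count; if the input partitioning is badly skewed, `repartition(num_files)` trades a shuffle for evenly sized output files.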