**Option A (import-all-tables with filter)**:
```bash
# -P prompts for the DB password; --exclude-tables skips audit/scratch
# tables that should not be copied into HDFS
sqoop import-all-tables --connect jdbc:mysql://host:3306/mydb \
--username user -P --warehouse-dir /user/hadoop/import \
--exclude-tables audit_log,temp --num-mappers 4
```
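Before running Option A, it can be worth confirming exactly which tables the sweep will pick up (`import-all-tables` also expects each table to have a single-column primary key unless `--autoreset-to-one-mapper` is passed). Sqoop's `list-tables` tool prints every table in the source database; the connection details below simply mirror the example above:
```bash
sqoop list-tables --connect jdbc:mysql://host:3306/mydb \
--username user -P
```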
**Option B (table-by-table, recommended for prod)**:
```bash
for t in table1 table2 table3 table4 table5; do
  sqoop import --connect jdbc:mysql://host/mydb --table "$t" \
    --target-dir /user/hadoop/import/"$t" --num-mappers 4 \
    --split-by id   # use the primary key, or another evenly distributed numeric column
done
```
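For unattended production runs, a small amount of shell hardening goes a long way: fail fast on the first error and read the password from a file instead of an interactive `-P` prompt. The sketch below makes those two changes; the password-file path and log file name are illustrative assumptions, not part of the original answer.
```bash
#!/usr/bin/env bash
set -euo pipefail   # abort the whole run if any single import fails

for t in table1 table2 table3 table4 table5; do
  echo "$(date) importing $t" | tee -a sqoop_import.log   # illustrative log file
  sqoop import --connect jdbc:mysql://host/mydb --table "$t" \
    --username user --password-file /user/hadoop/.mysql.pass \
    --target-dir /user/hadoop/import/"$t" --num-mappers 4 \
    --split-by id
done
```
Note that re-running the loop against an existing target directory will fail unless `--delete-target-dir` is added or an incremental strategy (`--incremental append --check-column id --last-value ...`) is used instead of full reloads.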