Two distinct operations:

1. **Drop columns that are entirely null** (no non-null values): `null_cols = [c for c in df.columns if df.filter(col(c).isNotNull()).count() == 0]; df = df.drop(*null_cols)`. **Caveat:** each `count()` triggers a full scan, which is expensive on large tables.
2. **Drop rows with null in specified columns:** `df.dropna(subset=["col1", "col2"])`.

**Scalability:** the per-column null check is O(partitions × columns) and can be costly; consider sampling, or inferring nullability from the schema or a sample.