**Code**:
```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Lazy: no data is read yet.
df = spark.read.parquet("/path/to/data")

# Compound predicate built from column objects; the optimizer can push
# both conditions down to the Parquet scan.
filtered = df.filter((col("status") == "active") & (col("amount") > 100))

# count() is an action: it triggers the actual read and filter.
count = filtered.count()
```
**Why**: The equality and range predicates are pushed down to the Parquet reader, so row groups whose statistics cannot match are skipped entirely. `count()` is an action; everything before it is a lazy transformation. Column objects combined with `&`/`|` express compound conditions that the optimizer can analyze, unlike opaque Python functions.
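You can confirm the pushdown by inspecting the physical plan. A minimal sketch, assuming the `filtered` DataFrame from the snippet above and Spark 3.0+ (for `mode="formatted"`); the exact `PushedFilters` output shown in the comment is illustrative:

```python
# Print the formatted physical plan; the Parquet scan node lists
# the predicates it received under `PushedFilters`.
filtered.explain(mode="formatted")
# Expect something like:
#   PushedFilters: [IsNotNull(status), IsNotNull(amount),
#                   EqualTo(status,active), GreaterThan(amount,100)]
```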
**Scalability Trade-offs**: Pushing filters to the source means the job scales with the size of the matching data rather than the full dataset. Avoid Python UDFs in filter predicates: Spark cannot push them down, and every row must be serialized to a Python worker just to be evaluated.
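To make the contrast concrete, here is a sketch of the anti-pattern next to the native equivalent, assuming the `df` DataFrame from the snippet above:

```python
from pyspark.sql.functions import col, udf
from pyspark.sql.types import BooleanType

# Anti-pattern: a Python UDF in the predicate. Spark treats the UDF as
# opaque, so nothing is pushed to Parquet and every row round-trips
# through a Python worker.
is_large = udf(lambda amount: amount is not None and amount > 100, BooleanType())
slow = df.filter(is_large(col("amount")))

# Preferred: the equivalent native expression, which the optimizer
# pushes down to the Parquet scan.
fast = df.filter(col("amount") > 100)
```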
**Cost Implications**: Pushdown means less I/O: only matching row groups and the referenced columns are read from Parquet, which directly lowers scan time and, on cloud object storage, the bytes billed.
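Column pruning compounds the savings, since Parquet is columnar and unselected columns are never read from disk. A sketch, again assuming the `df` above; `user_id` is a hypothetical extra column kept for downstream use:

```python
from pyspark.sql.functions import col

# Select only the columns the job needs before any wide operations;
# the Parquet reader then skips all other columns entirely.
slim = (
    df.select("user_id", "status", "amount")
      .filter((col("status") == "active") & (col("amount") > 100))
)
```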