Write a PySpark job that calculates the number of unique users who logged in per day, but exclude any logins from inactive users listed in a separate file.
**Code**:
```python
from pyspark.sql.functions import countDistinct, to_date

# Assumes an active SparkSession is already bound to `spark`
# (as in spark-shell, pyspark, or a notebook).
logins = spark.read.parquet("/logins")

# The inactive-users file holds one user_id per line.
inactive = spark.read.text("/inactive_users").selectExpr("value AS user_id")

# left_anti keeps only login rows whose user_id has no match in `inactive`.
active_logins = logins.join(inactive, "user_id", "left_anti")

daily_unique = (
    active_logins
    .groupBy(to_date("login_ts").alias("date"))
    .agg(countDistinct("user_id").alias("unique_users"))
)
```
**Why left_anti**: a left anti join returns only the rows from the left side (logins) that have **no** matching `user_id` on the right side (inactive users). It filters without adding any columns from the right table, so inactive users' logins are dropped cleanly.
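The same semantics can be illustrated without a Spark cluster using plain Python (the sample rows and dates below are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Hypothetical login records and inactive-user set.
logins = [
    {"user_id": "a", "login_ts": "2024-05-01 09:00:00"},
    {"user_id": "b", "login_ts": "2024-05-01 10:00:00"},
    {"user_id": "a", "login_ts": "2024-05-02 08:30:00"},
    {"user_id": "c", "login_ts": "2024-05-02 11:15:00"},
]
inactive = {"b"}

# left_anti equivalent: keep rows whose user_id is NOT in the inactive set.
active_logins = [row for row in logins if row["user_id"] not in inactive]

# groupBy(to_date(...)) + countDistinct equivalent:
# collect the distinct users seen on each date, then count them.
users_per_day = defaultdict(set)
for row in active_logins:
    users_per_day[row["login_ts"][:10]].add(row["user_id"])

daily_unique = {day: len(users) for day, users in users_per_day.items()}
print(daily_unique)  # {'2024-05-01': 1, '2024-05-02': 2}
```

User `b` is excluded entirely, and user `a`'s two logins on different days each count once per day, which is exactly what `countDistinct("user_id")` per date gives in the Spark version.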