```python
from pyspark.sql import Window
from pyspark.sql.functions import avg, col, row_number

# Employees under 30 only — filter before aggregating or joining
df = spark.table("employees").filter("age < 30")

# Average salary per department, computed over the filtered set
dept_avg = df.groupBy("dept_id").agg(avg("salary").alias("dept_avg_sal"))

# Keep employees earning above their department's average
df2 = df.join(dept_avg, "dept_id").filter(col("salary") > col("dept_avg_sal"))

# Rank by salary within each department and take the top 3
window = Window.partitionBy("dept_id").orderBy(col("salary").desc())
(df2.withColumn("rn", row_number().over(window))
    .filter("rn <= 3")
    .select("dept_id", "employee_id", "salary", "age")
    .show())
```

**Why**: Filter early. Applying `age < 30` before the aggregation and the join shrinks the data that every subsequent stage has to shuffle.
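To sanity-check the transformation logic without a Spark cluster, the same pipeline can be sketched in plain Python over a handful of hypothetical rows (the sample data and the `(dept_id, employee_id, salary, age)` tuple layout below are assumptions for illustration, not from the original):

```python
from collections import defaultdict

# Hypothetical sample rows: (dept_id, employee_id, salary, age)
rows = [
    (1, 101, 90000, 28), (1, 102, 60000, 25), (1, 103, 80000, 29),
    (1, 104, 70000, 27), (2, 201, 50000, 24), (2, 202, 55000, 26),
    (3, 301, 40000, 35),  # dropped by the age filter
]

# Step 1: keep employees under 30
under_30 = [r for r in rows if r[3] < 30]

# Step 2: average salary per department, over the filtered set
by_dept = defaultdict(list)
for dept_id, _, salary, _ in under_30:
    by_dept[dept_id].append(salary)
dept_avg = {d: sum(s) / len(s) for d, s in by_dept.items()}

# Step 3: keep above-average earners, then top 3 per dept by salary
above_avg = [r for r in under_30 if r[2] > dept_avg[r[0]]]
top3 = {}
for dept_id in sorted({r[0] for r in above_avg}):
    ranked = sorted((r for r in above_avg if r[0] == dept_id),
                    key=lambda r: -r[2])[:3]
    top3[dept_id] = [(r[1], r[2]) for r in ranked]

print(top3)
# → {1: [(101, 90000), (103, 80000)], 2: [(202, 55000)]}
```

Dept 1's under-30 average is 75000, so only employees 101 and 103 survive the above-average filter; dept 3 disappears entirely because its only employee fails the age filter.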