Spark & Big Data questions from Infosys data engineering interviews.
These Spark and big data questions are sourced from Infosys data engineering interviews; each includes an expert-level answer.
What is the difference between SparkSession and SparkContext in Spark?
Architecturally, how would you justify or challenge Hadoop vs. a cloud-native data lake (S3 + EMR/Databricks) for a greenfield enterprise data platform? Discuss scalability ceilings, cost model trade-offs, and operational complexity.
How would you read data from a web API using PySpark?
What is broadcasting in Spark, and why is it used? Can you give an example of its use?
What is the difference between map and flatMap in Spark, and when would you use each?
What is the purpose of the Bronze, Silver, and Gold layers in a data pipeline?
What role does executor memory play in Spark?
When and how do you use Broadcast Join?
Why is SparkSession used in Spark 2.0 and later versions?
Write a Python script to find the count of each word in a text file using Spark.
Write the PySpark code to find the second highest salary in each department.
Can you explain the concept of incremental loading in Sqoop and how to use it for job processing?
Can you explain the concept of mappers in Spark, and how are they used in data transformations?
How would you move a file to another path in Databricks File System (DBFS)?
How would you read data from an RDBMS using Spark? Provide the syntax.
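The JDBC read syntax, sketched as a configuration fragment: every connection detail below (host, database, table, credentials) is a placeholder, and the matching JDBC driver jar must be on the Spark classpath (e.g. via `spark-submit --jars`), so this is not runnable as-is.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jdbc-read").getOrCreate()

# All connection details are placeholders; substitute your own.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://db-host:3306/sales_db")
      .option("dbtable", "orders")
      .option("user", "etl_user")
      .option("password", "secret")
      .option("driver", "com.mysql.cj.jdbc.Driver")
      # Optional: parallelize the read across a numeric column
      .option("partitionColumn", "order_id")
      .option("lowerBound", "1")
      .option("upperBound", "1000000")
      .option("numPartitions", "8")
      .load())
```

Without the partitioning options, Spark reads the whole table through a single connection; the four options together split the read into `numPartitions` range queries on `partitionColumn`.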
What Hadoop command would you use to merge multiple files into one?
What is YARN, and how does it manage resources in a Hadoop ecosystem?
What is the difference between managed and external tables in Hive or Spark SQL?
What performance tuning techniques do you apply in both Sqoop and Spark to optimize their execution?