**Simple**: `s3.upload_file('data.parquet', 'my-bucket', 'path/data.parquet')`; boto3 switches to a multipart upload automatically once the file exceeds the default 8 MB threshold. **From a DataFrame**: `df.to_parquet('s3://bucket/path/data.parquet', engine='pyarrow')` writes straight to S3 when `s3fs` is installed. **Programmatic**: read the table with pyarrow, serialize it into an in-memory `BytesIO` buffer, then call `s3.put_object(Body=buffer.getvalue())`. **Large files**: `upload_file` handles multipart uploads automatically; a single `put_object` call is capped at 5 GB, so for larger objects use `upload_file` and tune the threshold and chunk size with `TransferConfig`. **Encryption**: pass `ServerSideEncryption='aws:kms'` to `put_object`, or supply it via `ExtraArgs` to `upload_file`. A sketch of each pattern is shown below.
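A minimal sketch of these patterns using boto3, pandas, and pyarrow. The bucket name, object key, and local file name are placeholders; the DataFrame route assumes `s3fs` is installed, and every call assumes AWS credentials are already configured.

```python
import io

import boto3
import pandas as pd
import pyarrow.parquet as pq
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
BUCKET = "my-bucket"          # placeholder bucket name
KEY = "path/data.parquet"     # placeholder object key

# 1. Simple upload: boto3 switches to multipart above the configured threshold.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # 8 MB (the default)
    multipart_chunksize=64 * 1024 * 1024,  # larger parts for very large files
)
s3.upload_file("data.parquet", BUCKET, KEY, Config=config)

# 2. Straight from a DataFrame: pandas writes to S3 via s3fs when given an s3:// URI.
df = pd.read_parquet("data.parquet")
df.to_parquet(f"s3://{BUCKET}/{KEY}", engine="pyarrow", index=False)

# 3. Programmatic: serialize a pyarrow Table into an in-memory buffer, then put_object.
table = pq.read_table("data.parquet")
buffer = io.BytesIO()
pq.write_table(table, buffer)
s3.put_object(Bucket=BUCKET, Key=KEY, Body=buffer.getvalue())

# 4. Server-side encryption with KMS (pass the same keys via ExtraArgs to upload_file).
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=buffer.getvalue(),
    ServerSideEncryption="aws:kms",
    # SSEKMSKeyId="alias/my-key",  # optional; omit to use the account's default aws/s3 key
)
```

For objects above the 5 GB single-PUT limit, `upload_file` with a `TransferConfig` is the safer default, since it handles part sizing and retries rather than leaving that to a single `put_object` call.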
According to DataEngPrep.tech, this Cloud/Tools interview question has been reported at 1 company; the site maintains a curated database of 1,863+ real data engineering interview questions across 7 categories.