
In this episode of Data Engineering Central, I sit down with the founder of DataFlint, Daniel Aronovich, to talk about the realities of working with Apache Spark, distributed data systems, and the future of data engineering.
We start with his early journey into tech—how he first discovered large-scale data systems and the lessons he learned from working with real-world Spark workloads.
The conversation then turns toward the future of data engineering, particularly the growing role of AI in software development and data infrastructure. We discuss why generic AI coding assistants often struggle with complex distributed systems, whether AI will eventually be able to automatically optimize data pipelines, and how the role of the data engineer may evolve in the coming years.
We also cover plenty of career advice for new and aspiring data professionals.
We also discuss the origin of DataFlint, a tool designed to help engineers better understand and optimize Spark workloads by analyzing execution plans, logs, and runtime context.
If you work with Spark, large-scale data pipelines, or modern data platforms, this conversation will give you a deeper look into how the data engineering landscape is evolving.
Thanks for reading Data Engineering Central! This post is public, so feel free to share it.
By Data Engineering in Real Life