
A new software product usually starts with a single database. That database manages the tables for user accounts and basic transactions.
When a product becomes popular, the database grows in size. There are more transactions and more users. A company grows around that product, and the company starts to accumulate more data in different sources. Analytics systems, time series databases, and logging tools start to generate data.
Moving this data around between systems starts to become complicated. Apache Kafka is often used as a system for moving data between these different systems, performing transformations, and generating aggregations and summaries of these large quantities of data.
Robin Moffatt works at Confluent, and has written numerous articles about how to move data between systems and design effective workflows for data pipelines. Robin joins the show to talk about modern data platforms and databases, and the patterns for using Kafka to connect those systems to each other.
If you are interested in learning more about how companies are using Kafka, the Kafka Summit in San Francisco takes place September 30th – October 1st. Companies like LinkedIn, Uber, and Netflix will be talking about how they use Kafka. Full disclosure: Confluent (the company where Robin works) is a sponsor of Software Engineering Daily.