Data By the Bay is the first Data Grid conference matrix with 6 vertical application areas spanned by multiple horizontal data pipelines, platforms, and algorithms. We are unifying data science and data engineering, showing what really works to run businesses at scale.
Monday, May 16 • 9:05am - 9:45am
Keynote: Building a Real-time Streaming Platform Using Kafka Streams and Kafka Connect


Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of continuously changing data in real time? The answer is stream processing, and one system that has become a core hub for streaming data is Apache Kafka.
This presentation will give a brief introduction to Apache Kafka and describe its usage as a platform for streaming data. It will explain how Kafka serves as a foundation for both streaming data pipelines and applications that consume and process real-time data streams. It will introduce some of the newer components of Kafka that help make this possible, including Kafka Connect, a framework for capturing continuous data streams, and Kafka Streams, a lightweight stream processing library. Finally, it will describe the lessons learned by companies like LinkedIn in building massive streaming data architectures.
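To make the Kafka Streams part of the abstract concrete, here is a minimal sketch of a stream processing application using the Kafka Streams library. The broker address, application ID, and the topic names "input-events" and "processed-events" are placeholders, and the code uses the StreamsBuilder API from later Kafka releases rather than the exact API available at the time of the talk.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamingExample {
    public static void main(String[] args) {
        // Basic configuration; broker address and topic names are placeholders.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-stream-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Build a simple topology: read a stream of events, transform each record, write it back out.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("input-events");
        events.mapValues(value -> value.toUpperCase()) // trivial per-record transformation
              .to("processed-events");

        // Start the stream processing application.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Shut down cleanly on JVM exit.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because Kafka Streams is just a library, this runs as an ordinary Java application against an existing Kafka cluster; the continuous data feeding "input-events" would typically be captured by a Kafka Connect source connector.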

Speakers

Jay Kreps

Co-founder and CEO, Confluent
Jay Kreps is the co-founder and CEO of Confluent, a company backing the popular Apache Kafka messaging system. Prior to founding Confluent, he was the lead architect for data infrastructure at LinkedIn. He is among the original authors of several open source projects including Project Voldemort (a key-value store), Apache Kafka (a distributed messaging system) and Apache Samza (a stream processing system).


Monday May 16, 2016 9:05am - 9:45am
Gardner
