Krakow, Poland, 31 May - 2 June 2023
Apache Pinot is not the first database optimized for analytical queries, so why has it found its way into game-changing applications at companies like LinkedIn, Stripe, and Uber, and why is it being embraced by people building real-time, event-driven systems? With so many databases to choose from, and so many ways to process streaming data in real time, why this database, and why in these use cases? To put it simply: because it's fast.
Pinot can ingest more than a million events per second directly from Kafka, making it a natural fit for streaming systems. But our goal is to expose insights about these events to users immediately, with query latencies low enough that Pinot queries can directly serve UI features in the browser and on mobile devices. To do this, Pinot's reads must be fast, not just its Kafka ingestion.
In this talk, we'll look at how the Pinot read path scales out, dividing query processing among arbitrarily many individual nodes. We'll spend the bulk of the time exploring Pinot's fascinating set of indexing strategies. There is no real magic in making reads fast: we either need to scan less or scan faster, and Pinot's indexes artfully help it do both.
Come to this talk to dive into some Pinot internals, learn how distributed column-oriented databases are built, and see how Pinot specifically optimizes reads, making it the choice of more and more leading real-time, user-facing analytics applications.
Tim is a teacher, author, and technology leader with StarTree, where he serves as the VP of Developer Relations. He is a regular speaker at conferences and a presence on YouTube explaining complex technology topics in an accessible way. He tweets as @tlberglund, blogs every few years at http://timberglund.com, and lives in Littleton, CO, USA. He has three grown children and two grandchildren, with a third on the way.