March 18, 2026, 10:00am PT | 1:00pm ET
As personalization, real-time inference, and feature-rich machine learning (ML) workloads grow more complex and more closely tied to revenue, platform teams are being asked to serve orders of magnitude more contextual data with strict latency guarantees and zero tolerance for downtime. Unfortunately, most real-time data stacks were never designed for this reality.
The result? Unpredictable tail latency, cascading failures under load, painful scaling exercises, and spiraling infrastructure costs.
In this webinar, DragonflyDB co-founder and CEO Oded Poncz will break down why real-time context has become the new data primitive for intelligent systems, and why platform engineers are at the center of making this work.
We’ll explore the architectural gaps in traditional real-time infrastructure, what modern AI/ML workloads actually demand from the platform layer, and how purpose-built systems like Dragonfly enable teams to scale context predictably, efficiently, and without operational compromise.
You’ll walk away with a clear understanding of how to design and operate a real-time data layer that can support modern intelligent systems.
Highlights
1. Learn why real-time context at scale is a hard requirement for modern intelligent systems.
2. Understand why real-time context must be not only fast but also predictable, even under heavy load.
3. Explore how systems can serve increasingly large contextual datasets without exponential cost or operational complexity.
4. Discover why legacy architectures struggle to meet the needs of context-rich AI/ML applications, and why modern context demands modern architecture.
By registering, you consent to receiving email communication from The New Stack and Dragonfly. You may opt out at any time.
Offered Free by: The New Stack + Dragonfly