
What Hundreds of SREs Said About the Future of Observability at KubeCon 2025

November 20, 2025
By Bob Quillin

KubeCon 2025 felt different.

This year, SREs didn’t walk the expo floor looking for dashboards, traces, or cost-cutting hacks. They walked it searching for answers, and many of those conversations started with our two core offerings:

  • Dstl8, our AI-native observability platform built around edge distillation and continuous AI
  • Gonzo, our open-source real-time log TUI that gives engineers instant clarity in their terminal

And when they saw “Practical AI for SREs” on our booth wall, they stopped, pointed, and said: “That’s me.”

  • The conversations were more focused and energetic.
  • The pains were familiar.
  • And the desire for practical AI – AI that helps them cut through log noise instead of creating more of it – came through repeatedly.

Across three days in Atlanta, we spoke with hundreds of SREs, platform engineers, developers, and DevOps leaders. Here’s what we heard, what resonated, and what this year’s KubeCon signals for where observability is heading next.


The Industry Shift Was Clear: Cloud-Native Is Becoming AI-Native


One of the strongest confirmations came not only from our booth traffic but from the broader industry narrative. As SiliconANGLE captured in their KubeCon coverage:

“Cloud-native and AI-native development are merging… it’s really an incredible place we’re in right now.” – Chris Aniszczyk, Chief Technology Officer, CNCF

 - SiliconANGLE, “AI Leads Platform Engineering Revival,” guest column by Jason English, Intellyx


That was exactly the energy at KubeCon. Cloud-native gave us Kubernetes, elasticity, portability, and automation at scale. But it also created oceans of telemetry, deeply distributed systems, and an overwhelming amount of operational data. The next logical step is what SiliconANGLE calls “AI-native” infrastructure – systems that don’t just run workloads efficiently but actually understand them.

This is the emerging platform engineering revival: operational platforms augmented with AI that can interpret, summarize, and reason about what’s happening – in real time.

And across the show floor, SREs were asking for precisely that.


What SREs Told Us: Pain Points Are Universal


From small teams to hyperscalers, the same themes surfaced again and again:

  • Log overload – too much noise, not enough signal
  • Dashboard fatigue – slow, stale, and rarely helpful in the moment
  • Up-skilling teams – managers want to level up their SRE teams and teach them “how to fish”
  • Developer-ready tools – help product engineers who aren’t versed in observability
  • Complex queries – only a few experts can write them
  • Surprise cost spikes in traditional log-based tools
  • AI trust concerns – “Can I bring my own model?” was common
  • Limited tracing adoption – beyond auto-instrumentation, usage remains shallow

Everyone is drowning in telemetry and searching for clarity, not more collection.


From Cloud-Native to AI-Native: A New Operational Model


A decade ago, cloud-native reshaped how teams built and deployed software. But as environments grew more distributed and noisy, simply collecting more telemetry stopped delivering more insight.

KubeCon 2025 made it clear: we’re entering the AI-native era, built on the shoulders of Kubernetes and cloud-native infrastructure.

“If there’s one thing I can take away from this conference, it’s that this now 10-year-old cloud native community is going to be just fine, and continue to innovate new ways to stay ahead of the pressures of AI development, and whatever comes next.” – Jason English, Intellyx

 - SiliconANGLE, "AI Leads Platform Engineering Revival"


In this new model:

  • Telemetry isn’t just captured – it’s interpreted
  • Logs aren’t just stored – they’re distilled
  • Incidents aren’t just detected – they’re explained
  • Platforms don’t just run workloads – they reason about them

AI-native systems bring meaning closer to the source, turning raw signals into understanding.

This shift aligns with what we wrote in Inverting the Observability Pyramid, where we described how traditional, top-heavy observability stacks are collapsing under their own weight. The pyramid is flipping – or more accurately, being distilled.

Instead of pushing all telemetry upward into giant lakes of raw data, teams are beginning to refine signal at the edge and surface only what matters.
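As a rough illustration of that idea (a sketch of edge distillation in general, not Dstl8’s actual implementation), a small distiller running near the workload can collapse raw log lines into templates and forward only pattern counts instead of every line. The `template` and `distill` helpers below are hypothetical names introduced for this example:

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Collapse variable parts of a log line (hex ids, numbers) into placeholders."""
    line = re.sub(r"\b[0-9a-f]{8,}\b", "<id>", line)
    line = re.sub(r"\d+", "<n>", line)
    return line

def distill(lines, top=3):
    """Return the most frequent log templates with counts,
    instead of forwarding every raw line upstream."""
    counts = Counter(template(l) for l in lines)
    return counts.most_common(top)

logs = [
    "GET /orders/1234 200 in 12ms",
    "GET /orders/5678 200 in 9ms",
    "GET /orders/9012 200 in 15ms",
    "ERROR db connection refused host=10.0.0.7",
]
# Three request lines collapse into one template; the error stays visible.
print(distill(logs))
```

Four raw lines become two templates with counts – the kind of reduction that makes it feasible to surface signal at the edge rather than ship everything to a central store.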


Gonzo: When Simplicity Meets Practitioner Need


One of the clearest patterns we saw at KubeCon was how strongly practitioners gravitated toward simple tools that deliver immediate clarity.

In a sea of configuration-heavy dashboards and slow exploratory UIs, SREs were drawn to Gonzo because it was fast, focused, open-source, and alive. It gave them real-time signal in their command line where they work every day.

And that reaction mapped perfectly to themes we covered in our recent log-analysis series.

In “Surface the Right Insights: Why Logs Still Matter (and Why Now),” we highlighted a core truth that came up again and again in booth conversations:

“Logs remain the most complete signal we have – but the real challenge is turning that signal into insight.”

- Surface the Right Insights: Why Logs Still Matter (and Why Now)


SREs at KubeCon echoed this sentiment. They weren’t asking for bigger dashboards. They weren’t asking for more log storage. They wanted interpretation. They wanted meaning. They wanted answers.

That’s also the theme of “Making Logs Conversational with Gonzo,” where we wrote:

“The gap isn’t in the data; it’s in the interpretation. The goal is to converse with your application and infrastructure telemetry in real time.”

- Find the Right Answers: Making Logs Conversational with Gonzo


That line could have been a direct quote from half the SREs we met. Gonzo clicked with practitioners because it embodies that philosophy:

  • Instant access to real-time patterns
  • A TUI that feels natural for ops and platform engineers
  • A workflow that matches how incidents unfold
  • A low-friction way to make logs feel navigable, not overwhelming

We also heard a theme we captured in Part 1 of the series:

“The problem isn’t that we have logs. The problem is that we collect too much of them, too late, at too high a cost for effective log analysis.”

- Surface the Right Insights: Why Logs Still Matter (and Why Now)


That’s why Gonzo resonated as a front door to insight – a practical, lightweight entry point into understanding what’s happening right now, without the baggage of conventional log pipelines. It reminded us of something we phrased in the Gonzo series:

“Logs are the diary of your systems. The goal is to distill what matters from it and eliminate the noise.”


That framing showed up over and over at KubeCon. SREs weren’t excited about dashboards. They weren’t excited about ingestion pipelines. They were excited about tools that give them signal in context, not more data to manage.

Gonzo delivered exactly that – and the reaction validated that the industry is ready for logs to become conversational, interpretable, and real-time, not just collected and stored.


Dstl8: Edge Distillation & Multi-Layer, Continuous AI


If Gonzo represented the immediate, hands-on clarity practitioners need, Dstl8 represented the deeper shift happening in observability – the move from storing data to understanding it.

While still in preview mode with a growing set of SREs and platform engineers (sign up here), Dstl8 prompted the bigger questions:

  • “Wait, so you don’t need to store all the logs?”
  • “How does it analyze everything in real time?”
  • “How does the continuous AI layer actually reason about incidents?”

The message that resonated most strongly:

“You don’t need to store all your logs to understand what’s happening.”


That single idea – distilling logs at the edge and passing only the meaningful insights forward – flipped the traditional observability model on its head. It echoed the themes of Inverting the Observability Pyramid, where we wrote about the collapse of top-heavy observability models.

Dstl8 gave people a window into what AI-native observability looks like:

  • Real-time sentiment analysis, categorization, and pattern detection
  • Summaries instead of raw streams
  • Semantic context from the moment logs are created
  • A continuous reasoning layer that stays active, not reactive
  • An emphasis on understanding rather than collecting

Gonzo pulled people in because it’s practical – it works where they work – and then Dstl8 showed them the future. Together, the two products illustrated a broader industry shift: from cloud-native data collection to AI-native interpretation that’s designed for everyday SRE use.


Continuous AI & Edge Intelligence: Where Observability Is Heading


In our VMblog Q&A leading into the event, we talked about how edge distillation and continuous AI form the next layer of observability:

“Edge distillation and continuous AI transform Kubernetes monitoring.” 

- VMblog Q&A, KubeCon 2025


KubeCon validated that direction.

Across dozens of conversations, SREs expressed interest in:

  • Real-time interpretation at the edge, not after-the-fact aggregation
  • Summaries, sentiment, categorizations, pattern detection, and semantic understanding of logs
  • Architectures built on signal clarity, not data hoarding
  • AI that is transparent, explainable, and controllable

The direction of travel is clear: observability is shifting from collect everything to understand what’s important.

This is the heart of the AI-native transition.


What’s Next for the Industry


Several broad industry trends emerged that we expect to define observability, AI for SREs, and platform engineering over the next 12–24 months:

1. Real-time log interpretation outpaces traditional storage models

Teams want to know what’s happening now, not after they dig through terabytes of raw data, hack together new queries, or puzzle through stale dashboards.

2. Edge intelligence becomes essential

It’s increasingly impractical to centralize all telemetry; with OpenTelemetry, smarter computation can now move to the edge.
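As one hedged sketch of what that can look like in practice, an OpenTelemetry Collector running near the workload can drop low-value records before anything leaves the node, here using the stock `filter` processor. The endpoint and pipeline names are placeholders, not a recommended production configuration:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  # Drop DEBUG/TRACE records at the edge; only INFO and above travel upstream.
  filter/drop_debug:
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'

exporters:
  otlp:
    endpoint: backend.example.com:4317  # placeholder backend address

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [filter/drop_debug]
      exporters: [otlp]
```

Filtering is only the simplest form of edge intelligence, but it shows the shape of the shift: decide what matters where the data is produced, not after it lands in a central store.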

3. Summaries and structured insights become primary outputs

Engineers want context, sentiment, and explanations – not just events.

4. Practitioners choose tools that feel immediate

Speed, simplicity, and clarity are more powerful than tens to hundreds of dashboards.

5. Observability moves from dashboards to decisions

The future isn’t about visualizing everything – it’s about understanding what’s important.

6. Open-source accelerates the AI-native revolution

OSS serves as a community-driven laboratory for new ideas and a launch point for AI-native observability tooling.


Conclusion: Observability Is Entering Its AI-Native Era


KubeCon 2025 revealed something important:

  • SREs don’t want more dashboards.
  • They don’t want more queries.
  • And they definitely don’t want more log storage.

They want clarity. They want interpretation. They want systems that understand their infrastructure – and help them understand it, too.

This year in Atlanta, the industry signaled a clear inflection point: observability is moving from visualization to insight, from storage to distillation, from cloud-native to AI-native.

And the SRE community is more than ready for what comes next.

For media inquiries, please contact
press@controltheory.com