In recent years, observability has evolved from being a set of technical practices to becoming a real tool for understanding complex systems. With the proliferation of distributed architectures, heterogeneous components and ever-increasing data volumes, organisations need not only to be able to “see” what is happening, but also to quickly understand what it means.
That’s where artificial intelligence comes in: not as a superstructure, but as a cognitive accelerator that makes signals understandable and immediately usable.
Why AI is revolutionising observability
AI technologies – especially generative ones – bring a new dimension to observability platforms: the ability to understand complexity.
It’s not just about analysing logs, metrics, or traces, but about finding correlations, meanings, and causes that, just a few years ago, would have taken hours of manual analysis.
In its report Use Generative AI to Enhance Observability, Gartner® states that “Heads of I&O are expected to deliver resilient and cost-effective services using increasingly complex modern architectures. They must leverage GenAI to improve the efficiency and value of their observability platforms, enabling them to be more effective”.
In our opinion, Gartner is basically saying that Infrastructure & Operations leaders should use GenAI to improve their ability to make sense of data signals (not just collect them), reducing time between events and actions, and increasing the return on investment in observability tools.
This leads to an important conclusion: AI is evolving from a supporting function into a structural component in the management of increasingly complex situations.
Currently, AI allows us to:
- ask questions in natural language and obtain contextualised answers;
- get operational summaries and likely scenarios;
- recognise abnormal patterns in advance;
- connect events across different domains;
- generate hypotheses without requiring manual analysis.
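To make one of these capabilities concrete, here is a minimal sketch, not tied to any specific platform and with purely illustrative data, of how “recognising abnormal patterns” can start from something as simple as a statistical baseline over recent samples:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], value: float, z_threshold: float = 3.0) -> bool:
    """Flag a metric sample that deviates strongly from its recent baseline."""
    if len(history) < 2:
        return False  # not enough data to estimate a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

# A latency series (ms) that is stable around 120, then spikes.
latencies = [118, 121, 119, 122, 120, 117, 121, 119]
print(is_anomalous(latencies, 450))  # → True: the spike sits far outside the baseline
print(is_anomalous(latencies, 119))  # → False: within normal variation
```

Real platforms use far richer models than a z-score, but the principle is the same: “abnormal” is always defined relative to some learned notion of “normal”, which is exactly where the paradox discussed below begins.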
We are moving from assistants that “answer” questions to components that participate in the operational process (triage, context enrichment, remediation proposals, guided or automated execution), with an increasing level of autonomy and decision-making responsibility.
The machine learning paradox: when AI learns in the wrong direction
The integration of AI into observability introduces a contradiction that many organisations underestimate: the ability to interpret data quickly can become a liability if the system learns from the wrong behaviours.
A real-life example: an organisation configures AI to analyse alerts from a poorly designed legacy system, where “normal alerts” are actually symptoms of architectural problems that the team has learned to ignore. The AI learns this dysfunctional normality and begins to reduce the priority or “hide” signals that actually need to be addressed at their root cause.
The result? An observability system that looks more efficient but actually perpetuates technical debt and makes it harder to spot.
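This failure mode is easy to reproduce in miniature. In the following sketch (the numbers and the `learned_threshold` helper are ours, purely illustrative), a baseline learned from a degraded system quietly absorbs a real regression that a healthy baseline would flag:

```python
from statistics import mean, stdev

def learned_threshold(history: list[float], z: float = 3.0) -> float:
    """Alerting threshold derived purely from observed history."""
    return mean(history) + z * stdev(history)

# Error rates (%) from a degraded legacy system: 4-6% was "normal",
# so both the team and the model learned to tolerate it.
dysfunctional_history = [4.8, 5.2, 4.5, 5.9, 5.1, 4.7, 5.5, 6.0]

# Error rates from a healthy system, where errors stayed near 0.2%.
healthy_history = [0.15, 0.22, 0.18, 0.25, 0.20, 0.17, 0.21, 0.19]

current_error_rate = 6.5  # a real degradation worth investigating

# → False: the regression sits inside the learned "normal" and is suppressed
print(current_error_rate > learned_threshold(dysfunctional_history))
# → True: the same value is immediately flagged against a healthy baseline
print(current_error_rate > learned_threshold(healthy_history))
```

The model is not malfunctioning; it is faithfully reproducing the dysfunctional behaviour it was trained on. That is precisely why the questions below matter.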
These are real issues that organisations need to confront:
- What do we consider “normal”, and who makes that decision?
- How do we prevent AI from reinforcing practices that accept degradation?
- What feedback loops, thresholds and human controls do we put in place to avoid confusing apparent stability with the correct functioning of the system?
We feel Gartner confirms this trend in its report with the following statement: “LLM use cases in observability are evolving from GenAI-based assistants toward agentic AI-driven operational response”.
This means that the direction is clear: AI will move from simple assistance to the ability to propose and, eventually, validate intelligent operational actions.
From data to meaning: AI as an interpretative tool
Those who work with modern systems know that an increase in data does not automatically lead to an increase in understanding. In fact, quite often the opposite is true.
There are three main problems that frequently recur:
- too many signals and too little context;
- analysis times incompatible with how quickly systems change;
- difficulty in turning technical insights into business decisions.
AI acts precisely at this point of conflict: it connects, summarises and explains.
It does not eliminate complexity, but makes it readable.
And, above all, it provides information that is useful not only to technical specialists, but also to product teams, operations and business functions.
We believe Gartner confirms this point when it states that GenAI makes it possible to “democratize observability insights, boosting productivity for expert users while unlocking self-service access for less technical stakeholders”.
In this context, to our understanding “democratising” does not mean simplifying: it means making insights accessible and understandable even to those who do not work with logs and traces on a daily basis, without detracting from the expertise of specialists. However, for this to work, a clear framework is required: reliable sources, shared taxonomies, and a level of governance that prevents plausible but incorrect answers.
A non-obvious insight: AI doesn’t reduce the need for expertise, it just redistributes it
There is a widespread narrative that AI in observability would reduce dependence on expensive specialists, making systems “interpretable by everyone”. The reality is more complex and more interesting.
AI does not remove the need for deep technical expertise: it redefines it. What changes is where expertise is needed and how it is applied.
Before AI in observability, the majority of expertise was focused on manual reading and correlation activities:
- rebuilding causal chains between metrics, logs and traces;
- explaining the context to non-technical teams;
- translating events into outcomes and priorities.
With AI in observability, expertise focuses more on what determines the quality of interpretation:
- defining ontologies, rules, context and boundaries (what AI can and cannot conclude);
- curating data quality, “golden” signals and feedback loops;
- validating hypotheses and proposed actions, especially in ambiguous or rare cases.
To sum up: AI can broaden access to understanding, but understanding still needs to be a technical and organisational responsibility, not just a platform feature.
What changes for organisations (and why it matters now)
The combination of AI and observability is driving three tangible changes in how organisations operate:
Faster decision-making
AI reduces the time needed to understand an event, identify the cause and take corrective action.
Greater alignment between systems and business
Technical signals are transformed into language that is understandable even to those who have to make strategic decisions.
Greater digital resilience
The ability to anticipate critical scenarios allows organisations to act before those scenarios become reality.
This is not a marginal development: it is a paradigm shift.
Spindox’s contribution: vision, method and practical application of AI to observability
In this changing landscape, it is not enough to simply understand the technology: you need to know how to apply it strategically, by aligning it to the real objectives of organisations.
This is where Spindox’s contribution comes in, gained through daily work with advanced observability platforms and the application of AI to decision-making and operational processes.
How Spindox applies AI to observability
Spindox integrates AI across three complementary dimensions:
- Smart understanding of complex systems
Thanks to its experience in distributed systems modelling and data management, Spindox uses AI to create interpretative maps that help organisations look at systems as ecosystems, not just as collections of metrics.
- Translation of signals into business information
Spindox’s consulting approach aims to build a common language between technology and decision-making.
AI – integrated into observability pipelines – facilitates this transition by translating technical data into understandable operational information.
- Evolving roadmaps rather than isolated implementations
AI is incorporated into observability paths supported by proprietary methodologies such as SS4O – Spindox Standards for Observability, which enable companies to increase the value generated by the combination of AI and observability over time.
SS4O is an integrated framework of methods, tools and procedures for adopting observability in a structured and measurable way. It makes it possible to measure observability coverage and maturity, define evolution roadmaps based on real metrics and priorities, and link observability practices to business results, making the value generated clear. It is not just a technical standard: it is a consulting approach that integrates observability into decision-making processes and governance. Thanks to its technology-agnostic integration and strategic analysis capabilities, Spindox helps organisations move from reactive observability control to proactive observability focused on prevention, optimisation and decision-making.
In this way, technology does not simply remain a set of features but becomes part of an evolutionary design.
A glimpse into the future
The introduction of AI capabilities – and, potentially, agentic AI – will radically change the relationship between observability and operations: platforms will become more autonomous, more contextual, and more capable of adapting to system variability.
However, this development brings up a question that many organisations would rather not face: if your systems become automatically interpretable, are you still sure you understand what they are interpreting?
This is the difference between observability that accelerates understanding and observability that replaces it. It is the difference between a system that makes specialists more effective and one that makes them superfluous, until the moment when the AI makes a mistake and no one knows the reason.
The governance, method and vision we are talking about are not methodological ornaments: they are the answer to a fundamental question. Who is responsible for what is understood when interpretation is delegated to machines?
Spindox works with organisations not only to integrate AI into observability, but to ensure that this integration enhances, rather than erodes, their ability to understand their systems.
If you want to read Gartner’s full report Use Generative AI to Enhance Observability you can find it here.
Disclaimer
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner, Inc. Use Generative AI to Enhance Observability. Martin Caren, Matt Crossley. 15 September 2025.