Datadog Expands LLM Observability with New Capabilities to Monitor Agentic AI, Accelerate Development and Improve Model Performance

Datadog (NASDAQ: DDOG) has unveiled new AI monitoring capabilities to help organizations better manage and evaluate their AI investments. The company introduced three key features: AI Agent Monitoring, LLM Experiments, and AI Agents Console. AI Agent Monitoring, now generally available, provides interactive mapping of agent decision paths and debugging tools. LLM Experiments, in preview, enables testing and validation of LLM application changes. AI Agents Console, also in preview, offers visibility into both in-house and third-party agent behavior. These tools address a critical market need, as only 25% of AI initiatives currently deliver promised ROI. The new capabilities aim to provide comprehensive observability, testing, and governance for AI systems, helping organizations measure performance, optimize costs, and ensure security compliance.
Positive
  • Addresses a critical market gap in AI observability and monitoring
  • Enables organizations to measure and optimize ROI on AI investments
  • Provides comprehensive visibility into both in-house and third-party AI agents
  • Partnership with major AI companies like Mistral AI and Anthropic demonstrates market validation
Negative
  • Products like LLM Experiments and AI Agents Console are still in preview phase
  • Enters a highly competitive market space in AI monitoring

Insights

Datadog expands its product suite with AI agent monitoring solutions, positioning strategically in the high-growth AI observability market.

Datadog's introduction of three new AI monitoring capabilities represents a significant strategic expansion of its LLM Observability product suite. The timing is excellent as organizations rapidly deploy AI agents but struggle with visibility into their behavior and ROI justification—only 25% of AI initiatives are currently delivering promised returns.

The new AI Agent Monitoring tool (now generally available) provides a crucial solution by mapping agent decision paths, allowing engineering teams to identify performance issues and optimize complex agent systems. This addresses a critical gap in the market where companies lack visibility into AI system behavior.

The LLM Experiments feature (in preview) enables testing and validation of prompt changes or model swaps against real production data, helping quantify improvements in accuracy, throughput, and cost. This functionality accelerates development cycles while protecting against performance regressions.

The AI Agents Console (in preview) provides centralized governance of both in-house and third-party AI agents, offering visibility into usage patterns, impact metrics, and potential security risks. This is increasingly important as organizations incorporate external AI agents from companies like OpenAI, Salesforce, and Anthropic into critical workflows.

Notably, Datadog has secured endorsements from key AI players Mistral AI and Anthropic, highlighting the product's relevance and potential market traction. The strategic partnerships also position Datadog well within the AI ecosystem.

These capabilities directly address the accountability gap in AI investments by providing measurable performance metrics and ROI validation tools—a pain point explicitly acknowledged in the announcement. By targeting this underserved need, Datadog is positioning itself at the intersection of two high-growth areas: observability and AI infrastructure.

AI Agent Monitoring, LLM Experiments and AI Agents Console help organizations measure and justify agentic AI investments

New York, New York--(Newsfile Corp. - June 10, 2025) - Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new agentic AI monitoring and experimentation capabilities to give organizations end-to-end visibility, rigorous testing capabilities, and centralized governance of both in-house and third-party AI agents. Presented at DASH, Datadog's annual observability conference, the new capabilities include AI Agent Monitoring, LLM Experiments and AI Agents Console.

The rise of generative AI and autonomous agents is transforming how companies build and deliver software. But with this innovation comes complexity. As companies race to integrate AI into their products and workflows, they face a critical gap: most organizations lack visibility into how their AI systems behave, what their agents are doing, and whether they are delivering real business value.

Datadog is addressing this gap by bringing observability best practices to the AI stack. Part of Datadog's LLM Observability product, these new capabilities allow companies to monitor agentic systems, run structured LLM experiments, and evaluate usage patterns and the impact of both custom and third-party agents. This enables teams to deploy quickly and safely, accelerate iteration and improvements to their LLM applications, and prove impact.

"A recent study found only 25 percent of AI initiatives are currently delivering on their promised ROI—a troubling stat given the sheer volume of AI projects companies are pursuing globally," said Yrieix Garnier, VP of Product at Datadog. "Today's launches aim to help improve that number by providing accountability for companies pushing huge budgets toward AI projects. The addition of AI Agent Monitoring, LLM Experiments and AI Agents Console to our LLM Observability suite gives our customers the tools to understand, optimize and scale their AI investments."

Now generally available, Datadog's AI Agent Monitoring instantly maps each agent's decision path—inputs, tool invocations, calls to other agents and outputs—in an interactive graph. Engineers can drill down into latency spikes, incorrect tool calls or unexpected behaviors like infinite agent loops, and correlate them with quality, security and cost metrics. This simplifies the debugging of complex, distributed and non-deterministic agent systems, resulting in optimized performance.
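The decision-path tracing described above can be sketched conceptually. The following is an illustrative Python sketch, not Datadog's actual SDK; all names (`Span`, `trace_agent_run`, `repeated_tool_calls`) are hypothetical, and it shows only the idea of capturing an agent run as a span tree and flagging suspicious repetition such as a looping tool call:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """One node in an agent's decision path: the agent itself, a tool call, or an LLM call."""
    name: str
    kind: str                      # "agent", "tool", or "llm"
    start: float = field(default_factory=time.monotonic)
    duration: float = 0.0
    children: list = field(default_factory=list)

    def finish(self):
        self.duration = time.monotonic() - self.start

def trace_agent_run():
    """Simulate one agent request and capture its decision path as a span tree."""
    root = Span("triage-agent", "agent")
    # The agent calls two tools, one of them twice -- a pattern worth surfacing.
    for step in ("search-docs", "summarize", "search-docs"):
        child = Span(step, "tool")
        child.finish()
        root.children.append(child)
    root.finish()
    return root

def repeated_tool_calls(span, threshold=2):
    """Flag tools invoked at least `threshold` times within one run (a loop hint)."""
    counts = {}
    for child in span.children:
        counts[child.name] = counts.get(child.name, 0) + 1
    return [name for name, n in counts.items() if n >= threshold]

root = trace_agent_run()
print(repeated_tool_calls(root))  # "search-docs" is flagged: called twice in one run
```

A real monitoring product would correlate such spans with latency, quality, security and cost metrics; the sketch only captures the structural idea of an interactive decision-path graph.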

"Agents represent the evolution beyond chat assistants, unlocking the potential of generative AI. As we equip these agents with more tools, comprehensive observability is essential to confidently transition use cases into production. Our partnership with Datadog ensures teams have the visibility and insights needed to deploy agentic solutions at scale," said Timothée Lacroix, Co-founder & CTO at Mistral AI.

In preview, Datadog launched LLM Experiments to test and validate the impact of prompt changes, model swaps or application changes on the performance of LLM applications. The tool works by running and comparing experiments against datasets created from real production traces (input/output pairs) or uploaded by customers. This allows users to quantify improvements in response accuracy, throughput and cost—and guard against regressions.

"AI agents are quickly graduating from concept to production. Applications powered by Claude 4 are already helping teams handle real-world tasks in many domains, from customer support to software development and R&D," said Michael Gerstenhaber, VP of Product at Anthropic. "As these agents take on more responsibility, observability becomes key to ensuring they behave safely, deliver value, and stay aligned with user and business goals. We're very excited about Datadog's new LLM Observability capabilities that provide the visibility needed to scale these systems with confidence."

Moreover, as organizations embed external AI agents—such as OpenAI's Operator, Salesforce's Agentforce, Anthropic's Claude-powered assistants or IDE copilots—into critical workflows, they need to understand these agents' behavior, how they are being used, and what permissions they hold across multiple systems in order to optimize their agent deployments. To address this, Datadog unveiled AI Agents Console in preview, which allows organizations to establish and maintain visibility into in-house and third-party agent behavior, measure agent usage, impact and ROI, and proactively check for security and compliance risks.
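The governance idea behind such a console can be sketched as an agent inventory with permission checks. This is an illustrative Python sketch under invented assumptions (the registry shape, the `risky` rule and the permission strings are all hypothetical), not a description of the actual product:

```python
# A minimal inventory of deployed agents: origin plus the permissions they hold.
agents = [
    {"name": "support-bot",  "origin": "in-house",    "permissions": {"read:tickets"}},
    {"name": "ide-copilot",  "origin": "third-party", "permissions": {"read:code", "write:code"}},
    {"name": "report-agent", "origin": "third-party", "permissions": {"read:metrics"}},
]

SENSITIVE = frozenset({"write:code", "write:prod"})

def risky(agent):
    """An agent is flagged if it holds any permission from the sensitive set."""
    return bool(agent["permissions"] & SENSITIVE)

# Surface third-party agents with sensitive permissions for compliance review.
flagged = [a["name"] for a in agents if a["origin"] == "third-party" and risky(a)]
print(flagged)  # only the copilot with write access is flagged
```

A production console would of course layer usage, impact and ROI metrics on top of such an inventory; the sketch only shows the core visibility-and-policy check.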

To learn more about Datadog's latest AI Observability capabilities, please visit: https://www.datadoghq.com/product/llm-observability/.

AI Agent Monitoring, LLM Experiments and AI Agents Console were announced during the keynote at DASH, Datadog's annual conference. The replay of the keynote is available here. During DASH, Datadog also announced launches in Applied AI, AI Security, Log Management and released its Internal Developer Portal.

About Datadog

Datadog is the observability and security platform for cloud applications. Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, user experience monitoring, cloud security and many other capabilities to provide unified, real-time observability and security for our customers' entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior and track key business metrics.

Forward-Looking Statements

This press release may include certain "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption "Risk Factors" and elsewhere in our Securities and Exchange Commission filings and reports, including the Annual Report on Form 10-K filed with the Securities and Exchange Commission on May 6, 2025, as well as future filings and reports by us. Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this release as a result of new information, future events, changes in expectations or otherwise.

Contact
Dan Haggerty
press@datadoghq.com

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/255068

FAQ

What new AI monitoring features did Datadog (DDOG) announce in June 2025?

Datadog announced three new features: AI Agent Monitoring (generally available), LLM Experiments (in preview), and AI Agents Console (in preview) to provide end-to-end visibility and testing capabilities for AI systems.

How does Datadog's AI Agent Monitoring work?

AI Agent Monitoring maps each agent's decision path, including inputs, tool invocations, calls to other agents, and outputs in an interactive graph, allowing engineers to debug issues and optimize performance.

What problem is Datadog's new AI monitoring solution addressing?

It addresses the fact that only 25% of AI initiatives deliver promised ROI, helping organizations measure, optimize, and justify their AI investments through comprehensive monitoring and testing capabilities.

What is the purpose of Datadog's LLM Experiments feature?

LLM Experiments allows users to test and validate the impact of prompt changes, model swaps, or application changes on LLM performance using real production traces or customer-uploaded datasets.

How does Datadog's AI Agents Console benefit organizations?

AI Agents Console helps organizations maintain visibility into both in-house and third-party agent behavior, measure usage and ROI, and monitor security and compliance risks across their AI deployments.