
Datadog's Platform Expands to Support Monitoring and Troubleshooting of Generative AI Applications

Rhea-AI Impact: Low
Rhea-AI Sentiment: Neutral
Tags: AI
Rhea-AI Summary
Datadog announces new capabilities to monitor and troubleshoot generative AI-based applications. The observability solutions include integrations for the AI stack and a complete solution for LLM observability. The LLM observability features a model catalog, model performance monitoring, and model drift detection.
Positive
  • Datadog's new observability capabilities for generative AI-based applications are likely to enhance customer experience and improve the performance of AI models.
  • The integrations for the AI stack, including AI infrastructure, compute, embeddings, data management, model serving and deployment, and orchestration framework, provide a comprehensive solution for deploying AI applications.
  • The LLM observability solution, currently in private beta, offers features like model catalog, model performance monitoring, and model drift detection, enabling organizations to detect and resolve real-world application problems.
  • These new capabilities can help organizations monitor and improve their LLM-based applications, making them more cost-efficient and ensuring positive end-user experiences.
Negative
  • None.

Datadog adds observability for Large Language Models and generative AI application components

SAN FRANCISCO, Aug. 3, 2023 /PRNewswire/ -- Datadog, Inc. (NASDAQ: DDOG), the monitoring and security platform for cloud applications, today announced new capabilities that help customers monitor and troubleshoot issues in their generative AI-based applications.

Generative AI-based features such as AI assistants and copilots are quickly becoming an important part of all software product roadmaps. While there is a lot of promise in these emerging capabilities, deploying them in customer-facing applications brings many challenges, including cost, availability and accuracy.

The tech stacks used in generative AI are evolving quickly, with new application frameworks, models, vector databases, service chains and supporting technologies seeing rapid adoption. To keep up, organizations need observability solutions that can adapt and evolve along with their AI stacks.

Today, Datadog announced a broad set of generative AI observability capabilities to help teams deploy LLM-based applications to production with confidence and help them troubleshoot health, cost and accuracy in real time.

These capabilities include integrations for the end-to-end AI stack:

  • AI infrastructure and compute: NVIDIA, CoreWeave, AWS, Azure and Google Cloud
  • Embeddings and data management: Weaviate, Pinecone and Airbyte
  • Model serving and deployment: TorchServe, Vertex AI and Amazon SageMaker
  • Model layer: OpenAI and Azure OpenAI
  • Orchestration framework: LangChain
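Under the hood, integrations like these typically report custom metrics to the Datadog Agent over the DogStatsD protocol, a plain-text datagram format sent over UDP. As a minimal, stdlib-only sketch of that idea, the snippet below hand-builds DogStatsD datagrams for a hypothetical LLM call; the metric names (`llm.request.latency`, `llm.request.count`) and tags are illustrative assumptions, not Datadog's actual integration schema.

```python
import socket
import time

def dogstatsd_datagram(metric, value, mtype, tags=None):
    # DogStatsD wire format: "name:value|type|#tag1:val,tag2"
    # where type is "c" (count), "g" (gauge), "h" (histogram), etc.
    payload = f"{metric}:{value}|{mtype}"
    if tags:
        payload += "|#" + ",".join(tags)
    return payload

def emit(payload, host="127.0.0.1", port=8125):
    # The Datadog Agent listens for DogStatsD on UDP port 8125 by default.
    # UDP is fire-and-forget, so this is harmless even with no Agent running.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload.encode("utf-8"), (host, port))
    finally:
        sock.close()

# Time a (stand-in) model call and report latency and request count.
start = time.monotonic()
# ... call the model here, e.g. an OpenAI chat completion ...
latency_ms = int((time.monotonic() - start) * 1000)
emit(dogstatsd_datagram("llm.request.latency", latency_ms, "h", ["model:gpt-4"]))
emit(dogstatsd_datagram("llm.request.count", 1, "c", ["model:gpt-4"]))
```

In practice, Datadog's official client libraries and integrations handle this formatting, buffering and tagging automatically; the point here is only to show the shape of the telemetry such integrations produce.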

Additionally, Datadog released in beta a complete solution for LLM observability. It brings together data from applications, models and various integrations so engineers can quickly detect and resolve real-world application problems such as model cost spikes, performance degradations, drift and hallucinations, helping ensure positive end-user experiences.

LLM observability includes:

  • Model catalog: Monitor and alert on model usage, costs and API performance.
  • Model performance: Identify model performance issues based on different data characteristics provided out of the box, such as prompt and response lengths, API latencies and token counts.
  • Model drift: Categorize prompts and responses into clusters, enabling performance tracking and drift detection over time.
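The drift-detection idea above can be illustrated with a toy example: assign each prompt to a cluster, compute the share of traffic per cluster in a baseline window and a current window, and flag drift when the distributions diverge. This sketch uses naive keyword matching and a hypothetical topic map purely for illustration; a real system (Datadog's included) would cluster prompt embeddings rather than keywords.

```python
from collections import Counter

# Hypothetical topic clusters; real systems derive clusters from embeddings.
TOPICS = {
    "billing": {"invoice", "charge", "refund"},
    "auth": {"login", "password", "token"},
}

def classify(prompt):
    """Assign a prompt to the first topic whose keywords it mentions."""
    words = set(prompt.lower().split())
    for topic, keywords in TOPICS.items():
        if words & keywords:
            return topic
    return "other"

def cluster_shares(prompts):
    """Fraction of prompts falling into each cluster."""
    counts = Counter(classify(p) for p in prompts)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def drift_score(baseline, current):
    """Total variation distance between two cluster distributions (0 = identical)."""
    topics = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(t, 0) - current.get(t, 0)) for t in topics)

baseline = cluster_shares(["refund my charge", "reset my password"])
current = cluster_shares(["refund please", "invoice issue",
                          "charge dispute", "login help"])
score = drift_score(baseline, current)  # rises as the prompt mix shifts
```

Tracking this score over time windows is one simple way to surface the kind of prompt-distribution drift the feature list describes.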

"It's essential for teams to measure the time and resources they are investing in their AI models, especially as tech stacks continue to modernize," said Yrieix Garnier, VP of Product at Datadog. "These latest LLM monitoring capabilities and integrations for the AI stack will help organizations monitor and improve their LLM-based applications and capabilities while also making them more cost efficient."

Datadog's AI/LLM integrations are now generally available. To learn more, please visit: https://www.datadoghq.com/blog/ai-integrations.
Datadog's LLM observability solution is in private beta. To learn more, please visit: https://www.datadoghq.com/blog/dash-2023-new-feature-roundup/

About Datadog

Datadog is the observability and security platform for cloud applications. Our SaaS platform integrates and automates infrastructure monitoring, application performance monitoring, log management, real-user monitoring, and many other capabilities to provide unified, real-time observability and security for our customers' entire technology stack. Datadog is used by organizations of all sizes and across a wide range of industries to enable digital transformation and cloud migration, drive collaboration among development, operations, security and business teams, accelerate time to market for applications, reduce time to problem resolution, secure applications and infrastructure, understand user behavior, and track key business metrics.

Forward-Looking Statements

This press release may include certain "forward-looking statements" within the meaning of Section 27A of the Securities Act of 1933, as amended, or the Securities Act, and Section 21E of the Securities Exchange Act of 1934, as amended including statements on the benefits of new products and features. These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies and prospects, which are based on the information currently available to us and on assumptions we have made. Actual results may differ materially from those described in the forward-looking statements and are subject to a variety of assumptions, uncertainties, risks and factors that are beyond our control, including those risks detailed under the caption "Risk Factors" and elsewhere in our Securities and Exchange Commission filings and reports, including the Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission on May 5, 2023, as well as future filings and reports by us. Except as required by law, we undertake no duty or obligation to update any forward-looking statements contained in this release as a result of new information, future events, changes in expectations or otherwise.

Contact
Dan Haggerty
press@datadoghq.com

View original content to download multimedia: https://www.prnewswire.com/news-releases/datadogs-platform-expands-to-support-monitoring-and-troubleshooting-of-generative-ai-applications-301892193.html

SOURCE Datadog, Inc.
