STOCK TITAN

Cognizant Launches Secure AI Services to Help Enterprises Safely Scale Agentic Systems

Rhea-AI Impact
(Neutral)
Rhea-AI Sentiment
(Neutral)
Tags
AI

Cognizant (NASDAQ: CTSH) launched Cognizant Secure AI Services on May 7, 2026, to help enterprises secure, govern and scale AI and agentic systems across build and run-time environments. The offering centers on a secure Agent Development Lifecycle, Cognizant Neuro Cybersecurity, and Cognizant Trust for continuous traceability and compliance.

The service targets risks such as model tampering, poisoned prompts, deepfake-driven fraud and unsafe agent behavior, and Cognizant says it is already working with 250+ global enterprises in regulated industries.


AI-generated analysis. Not financial advice.

Positive

  • New product launch: Cognizant Secure AI Services
  • Three core foundations: ADLC, Neuro Cybersecurity, Cognizant Trust
  • Addresses risks: deepfakes, model tampering, poisoned prompts
  • Engaged with 250+ global enterprises in regulated industries
  • Build and run-time monitoring for traceability and audit evidence

Negative

  • None.

News Market Reaction – CTSH


On the day this news was published, CTSH gained 1.19%, reflecting a mild positive market reaction.

Data tracked by StockTitan Argus on the day of publication.

Key Figures

  • Enterprise AI clients: 250+ global enterprises (already working with clients on AI security and deployment)
  • Foundational pillars: 3 foundations (secure ADLC, Neuro Cybersecurity, and Responsible AI via Cognizant Trust)

Market Reality Check

Last close: $51.68
Volume: 6,243,551, slightly below the 20-day average of 6,746,543 (relative volume 0.93). Normal.
Technical: Shares at $51.33 trade well below the $70.69 200-day MA and sit just above the 52-week low of $50.81, far from the $87.03 52-week high.

Peers on Argus


CTSH fell 1.04% while key peers were mixed: FIS, WIT and LDOS declined, CDW dropped sharply, and BR rose. With no peers in the momentum scanner and mixed directions, the move appears more stock-specific than a broad sector rotation.

Previous AI Reports

5 past events · Latest: Apr 30 (Positive)
Date | Event | Sentiment | Move | Catalyst
Apr 30 | AI partnership (sports) | Positive | -3.3% | Expanded AI services partnership with Aston Martin Aramco Formula One™ Team.
Apr 23 | AI patents granted | Positive | -6.3% | AI Lab received three new U.S. patents, lifting totals to 65 and 88.
Apr 22 | Agentic retail CX launch | Positive | -3.4% | Launched Agentic Retail CX using Gemini Enterprise to enhance contact center AI.
Apr 21 | OpenAI Codex partnership | Positive | +0.3% | Selected by OpenAI to scale Codex for enterprise software engineering clients.
Apr 21 | AI workforce training | Positive | +0.3% | Launched Skillspring platform to accelerate workforce AI readiness and training.
Pattern Detected

Recent AI-tagged announcements have generally been positive in tone but often met with negative or muted next-day moves, suggesting a tendency for market reactions to underwhelm relative to upbeat AI narratives.

Recent Company History

Over the past few weeks, Cognizant has issued multiple AI-focused updates, including partnerships with OpenAI and Google Cloud, retail-focused agentic CX offerings, workforce AI training via Skillspring, and an Aston Martin Aramco Formula One™ AI services deal. Despite these innovation-heavy releases, several drew negative next-day price reactions. Today’s Secure AI Services launch continues this pattern of expanding Cognizant’s AI stack and agentic capabilities across enterprises.

Historical Comparison


In the last five AI-tagged releases, CTSH averaged a -2.47% next-day move, often negative despite upbeat AI initiatives. Today’s Secure AI Services launch fits into this ongoing AI expansion theme.

Recent AI news shows Cognizant broadening its AI stack from patents and workforce training to Codex-based engineering, retail agentic CX, and now Secure AI Services for governing and securing agentic systems.

Market Pulse Summary

Analysis

This announcement introduces Cognizant Secure AI Services to help enterprises secure and govern agentic AI systems, emphasizing "provable trust" across build and run time. It complements recent AI partnerships and offerings, reinforcing Cognizant’s broader AI strategy with 250+ global enterprises. Investors may watch how quickly the secure ADLC, Neuro Cybersecurity, and Responsible AI foundations translate into large-scale deployments and measurable client adoption.

Key Terms

agentic systems
"secure, govern and scale AI and agentic systems across their operations"
Agentic systems are software or machines designed to pursue goals and make decisions autonomously, adapting to new information and carrying out actions without continuous human direction. For investors they matter because such systems can increase efficiency and create new business opportunities but also bring operational risk, legal and regulatory exposure, and reputational stakes—like hiring an independent worker who can deliver value but needs clear oversight and controls.
provable trust
"move from assumed trust toward "provable trust" – an approach grounded in evidence"
Provable trust is a system where claims about assets, transactions or controls are backed by verifiable digital records anyone can check, rather than by a promise from a single party. Like a tamper-evident receipt or a public log you can audit yourself, it lets investors confirm that what a company or platform says is true, reducing the need to rely solely on third parties and lowering the risk of hidden errors or fraud.
agent development lifecycle (ADLC)
"A secure Agent Development Lifecycle (ADLC) that embeds protection across design"
A sequence of stages used to design, build, test, deploy and maintain autonomous software agents or AI assistants, covering planning, data preparation, model training, safety checks, live operation and ongoing updates. Think of it like the product life cycle for a robotic employee: careful design and testing up front, then monitoring and fixes while it works. Investors care because a robust lifecycle reduces operational failures, regulatory or safety risks, development costs and downtime, and speeds reliable delivery of revenue-generating or cost-saving automation.
responsible AI
"Responsible AI, a continuous trust and assurance layer delivered through Cognizant Trust™"
Responsible AI means designing, testing and running automated decision systems so they are safe, fair, explainable and follow laws and privacy rules. For investors it matters because responsible AI reduces the chances of costly errors, regulatory fines, data breaches or reputational damage, and helps ensure products perform reliably over time—think of it as safety checks and clear instructions you put on a new machine before letting it operate at scale.
identity and access management
"span model security, data protection, AI DevOps security, identity and access management"
Identity and access management is the set of tools and processes that verify who a person or device is and control what digital resources they can use, like a company’s system for issuing security badges and setting which doors each badge opens. It matters to investors because strong identity controls reduce the chance of costly data breaches, regulatory fines and downtime, protect customer trust, and can lower IT costs and operational risk.
deepfake-driven fraud
"risks organizations face today, from deepfake-driven fraud and model tampering"
Deepfake-driven fraud is the use of realistic synthetic audio, video, or images made by artificial intelligence to impersonate executives, spokespeople, or partners in order to deceive stakeholders and steal money, data, or sensitive access. Like a convincing mask or forged signature, these fakes can trigger false instructions, market-moving rumors, or unauthorized transfers, creating direct financial loss, reputational damage, regulatory exposure, and sudden stock volatility that investors need to monitor.
generative AI
"securing autonomous agents and generative AI systems operating across enterprise workflows"
Generative AI is a type of computer technology that can create new content, like text, images, or music, on its own. It’s important because it can produce realistic and useful material quickly, which could change how we create art, write stories, or even develop new products. Think of it as a smart robot that can invent and produce things almost like a human.


New offering delivers AI-powered defense across the enterprise and supports the practice of provable trust for AI systems

TEANECK, N.J., May 7, 2026 /PRNewswire/ -- Cognizant (NASDAQ: CTSH) announced the launch of Cognizant Secure AI Services, a new integrated offering designed to help enterprises secure, govern and scale AI and agentic systems across their operations.

As AI systems move into enterprise-wide deployment, organizations are embedding AI into decision-making, automation, customer engagement and core workflows. Increasingly, these systems bring autonomous and agentic capabilities that can reason, act and interact with enterprise data, APIs and external applications. While this shift has the potential to unlock transformative value, it also introduces new security, governance and run-time risks that traditional cybersecurity models were not designed to address.

Traditional security was built for deterministic software. AI systems are probabilistic and context-driven, and they can be manipulated in ways legacy tools were never designed to detect. Manipulated models, poisoned prompts and corrupted agent behavior can trigger confidently wrong actions at scale.

The offering is designed to help enterprises move from assumed trust toward "provable trust" – an approach grounded in evidence, traceability and continuous assurance. Cognizant engineers trust twice, first at build time, by securing models, data and pipelines before deployment, and then at run time, by monitoring AI behavior in production to detect manipulation, help manage and mitigate unsafe actions and preserve audit‑supporting evidence.

"AI is fundamentally changing how enterprise systems behave," said Vishal Salvi, Global Head of Cognizant's Cybersecurity Service Line. "These systems are adaptive, context-driven and increasingly autonomous – and securing them requires continuous assurance across build and run-time environments. With Cognizant Secure AI Services, we are helping enterprises engineer trust into AI systems from day one and to sustain that trust as those systems evolve."

Cognizant Secure AI Services is built on three foundations:

  • A secure Agent Development Lifecycle (ADLC) that embeds protection across design, build, test, deploy and change of AI systems;
  • Cognizant Neuro® Cybersecurity, a consolidated control plane that unifies AI and enterprise signals for threat response, correlation and audit-supporting evidence;
  • Responsible AI, a continuous trust and assurance layer delivered through Cognizant Trust™ that provides traceability, policy enforcement and supports compliance alignment based on client-defined requirements as AI systems scale.

Together, these capabilities span model security, data protection, AI DevOps security, identity and access management, agent behavior controls and generative AI risk management, aiming to enable enterprises to secure AI systems across their stages of operation.

Cognizant is already working with 250+ global enterprises across regulated industries to assess, secure and operationalize digital transformation programs, including AI deployments. Early engagements address some of the most consequential risks organizations face today, from deepfake-driven fraud and model tampering to securing autonomous agents and generative AI systems operating across enterprise workflows, while establishing the governance and audit frameworks, in collaboration with clients, required to scale AI responsibly in regulated environments.

Arjun Chauhan, Practice Director, Everest Group, said: "In today's rapidly evolving landscape, organizations are increasingly looking for a more holistic approach to AI security that moves beyond siloed solutions. There is a growing need for unified frameworks that can address risks across both the build phase and the run-and-operate lifecycle. Additionally, the ability to integrate best-of-breed technologies into a cohesive, operationalized model is becoming critical to drive real-world impact. Platforms that offer a strong, unified cybersecurity foundation, while seamlessly extending to AI-specific security capabilities, are likely to be positioned well to deliver scalable and enterprise-ready outcomes."

Built for enterprise integration, supporting regulatory alignment and operational resilience, Cognizant Secure AI Services helps organizations scale AI with the practice of provable trust. For more information, visit Cognizant Secure AI Services.

About Cognizant
Cognizant (Nasdaq: CTSH) is an AI Builder and technology services provider, bridging the gap between AI investment and enterprise value by building full-stack AI solutions for our clients. Our deep industry, process and engineering expertise enables us to build an organization's unique context into technology systems that amplify human potential, drive tangible outcomes and keep global enterprises ahead in a fast-changing world. See how at www.cognizant.com or @cognizant. 

For more information, contact:

U.S.

Name Ben Gorelick

Email benjamin.gorelick@cognizant.com


Europe / APAC

Name Sarah Douglas

Email sarah.douglas@cognizant.com


India

Name Vipin Nair

Email Vipin.nair@cognizant.com

 

View original content to download multimedia: https://www.prnewswire.com/news-releases/cognizant-launches-secure-ai-services-to-help-enterprises-safely-scale-agentic-systems-302765310.html

SOURCE Cognizant Technology Solutions

FAQ

What is Cognizant Secure AI Services (CTSH) launched May 7, 2026?

It is a new integrated offering to secure and scale enterprise AI systems. According to the company, it combines a secure Agent Development Lifecycle, Cognizant Neuro Cybersecurity and Cognizant Trust for continuous assurance, traceability and policy enforcement across build and run-time.

How does Cognizant Secure AI Services help with AI governance for CTSH clients?

It provides traceability, policy enforcement and continuous assurance for AI systems. According to the company, Cognizant Trust supports compliance alignment based on client-defined requirements and preserves audit-supporting evidence across operations.

What core capabilities are included in Cognizant Secure AI Services (CTSH)?

The service includes an Agent Development Lifecycle, a unified Neuro Cybersecurity control plane, and a Responsible AI trust layer. According to the company, these enable model security, data protection, identity controls, agent behavior controls and generative AI risk management.

Which enterprise risks does Cognizant Secure AI Services address for CTSH customers?

It targets risks like deepfake-driven fraud, model tampering, poisoned prompts and unsafe autonomous agent actions. According to the company, the offering detects manipulation, mitigates unsafe actions and preserves evidentiary audit trails in production.

How widely is Cognizant deploying Secure AI Services among regulated clients (CTSH)?

Cognizant says it is working with over 250 global enterprises across regulated industries. According to the company, early engagements focus on securing agentic systems and establishing governance and audit frameworks to scale AI responsibly.