STOCK TITAN

Enterprises Rush into GenAI Without Security Foundations, New Ponemon Study Finds

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Neutral

OpenText (NASDAQ: OTEX) and Ponemon released a global report on March 23, 2026, finding rapid GenAI adoption but weak security and governance foundations.

Key findings: 52% of enterprises have fully or partially deployed GenAI; 79% lack full AI maturity in cybersecurity; only 41% have AI-specific data privacy policies. The study surveyed 1,878 IT and security practitioners worldwide.


Positive

  • 52% of enterprises report full or partial GenAI deployment
  • Survey sampled 1,878 global IT and security practitioners
  • 51% report AI reduces time to detect anomalies

Negative

  • 79% of organizations have not reached full AI maturity in cybersecurity
  • Only 41% have AI-specific data privacy policies
  • 62% say minimizing model and bias risks is very or extremely difficult
  • 58% find prompt/input risks very or extremely difficult to minimize

News Market Reaction – OTEX


On the day this news was published, OTEX gained 2.04%, reflecting a moderate positive market reaction.

Data tracked by StockTitan Argus on the day of publication.

Key Figures

  • GenAI deployment: 52% of enterprises have fully or partially deployed GenAI
  • AI maturity: 1 in 5 enterprises report reaching AI maturity in cybersecurity activities
  • Not fully mature: 79% of organizations have not reached full AI maturity in cybersecurity
  • AI privacy policies: 41% of organizations have AI-specific data privacy policies in place
  • Risk-based governance: 43% of respondents have adopted a risk-based AI governance approach
  • Model and bias risks: 62% of respondents say it is very or extremely difficult to minimize model and bias risks
  • Prompt/input risks: 58% of respondents say prompt or input risks are very or extremely difficult to minimize
  • Survey sample size: 1,878 global IT and IT security practitioners surveyed in November 2025

Market Reality Check

Last close: $22.25
Volume: 3,561,346, which is 61% above the 20-day average of 2,211,009, indicating elevated trading into this AI governance release.
Technicals: trading about 28% below the 200-day moving average of $31.37, 43.43% below the 52-week high, and 3.01% above the 52-week low.

Peers on Argus


OTEX was down 0.18% pre-news with elevated volume, while close peers were mixed: NICE (+0.16%), PEGA (+2.09%), CVLT (+0.43%), ESTC (-1.28%), SRAD (+0.82%). Momentum scanner only flagged DSGX (-4.19%) and not OTEX, supporting a stock-specific context rather than a broad sector move.

Common Catalyst AI and cybersecurity feature in peer headlines (e.g., CVLT AI threat detection), aligning with OTEX’s AI governance and security focus.

Historical Context

5 past events · Latest: Feb 10 (Positive)
Date | Event | Sentiment | Move | Catalyst
Feb 10 | Buyback increase | Positive | +2.8% | Expanded Fiscal 2026 share repurchase authorization to US$500M under NCIB.
Feb 05 | Quarterly earnings | Positive | +10.0% | Reported Q2 FY2026 results with dividend and CEO appointment plus asset sales.
Feb 02 | Asset divestiture | Positive | -3.1% | Agreed to sell Vertica for US$150M to focus on core cloud and AI products.
Jan 29 | CEO appointment | Positive | -3.6% | Named Ayman Antoun as CEO to drive cloud, digital modernization and enterprise AI.
Jan 15 | Debt redemption | Positive | -2.4% | Nabors redeemed 7.500% notes and reduced net debt; sector-level capital structure move.
Pattern Detected

Recent OTEX news often saw positive reactions to financial/shareholder actions, while strategic shifts and leadership changes showed more mixed or negative reactions.

Recent Company History

Over the past few months, OTEX announced a larger US$500M share repurchase program, Q2 FY2026 results with $1.327B revenue and dividend continuity, and divestitures of Vertica and eDOCS to refocus on core cloud and enterprise AI. Leadership transitioned toward Ayman Antoun as CEO, emphasizing cloud and AI growth. Market reactions were positive to buybacks and earnings but negative around divestitures and CEO news. The current AI security and governance study fits the ongoing narrative of positioning around enterprise AI and cybersecurity.

Market Pulse Summary

Analysis

This announcement highlights widespread gaps in AI security and governance, with 79% of organizations not yet fully mature in AI cybersecurity and only 41% reporting AI-specific data privacy policies. For OTEX, the study reinforces its narrative around enterprise AI and cybersecurity just after divestitures and leadership changes. Investors may track how these insights translate into product adoption, recurring revenue, and margins, while watching future disclosures on AI-related offerings, customer uptake, and progress in aligning AI tools with robust governance frameworks.

Key Terms

GenAI
"more than half of enterprises (52%) have fully or partially deployed GenAI"
Generative AI (genai) is a type of artificial intelligence designed to create new content, such as text, images, or music, that resembles human-produced work. It matters to investors because it has the potential to transform industries by automating tasks, enhancing creativity, and enabling new products and services, which can influence company performance and market opportunities.
Agentic AI
"Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI"
Agentic AI refers to computer systems that can make their own decisions and take actions without needing someone to tell them what to do each time. It's like giving a robot a degree of independence to solve problems or achieve goals on its own, which matters because it could change how we work and interact with technology in everyday life.
AI maturity
"Only 1 in 5 enterprises report reaching AI maturity – where AI in cybersecurity activities is fully deployed and security risks are assessed"
A company's AI maturity is a measure of how developed and reliable its artificial intelligence capabilities are—covering data, models, tools, teams, repeatable processes, and oversight—from one‑off experiments to production systems used across the business. Investors care because higher AI maturity makes the technology more likely to cut costs, create new revenue streams and avoid costly mistakes or regulatory problems; think of it as the difference between a prototype and a dependable, road‑ready product.
AI governance
"fewer than half (43%) of respondents have adopted a risk-based AI governance approach"
AI governance is the set of rules, oversight and practices a company uses to design, test, deploy and monitor artificial intelligence tools so they behave reliably, safely and fairly. Think of it as a vehicle’s maintenance schedule, road rules and driver checks for software: it reduces the chance of costly errors, legal penalties or reputation damage and helps investors judge whether a business can scale AI responsibly and protect value.
Model and bias risks
"62% of respondents say it is very difficult or extremely difficult to minimize model and bias risks"
Model and bias risks are the dangers that automated models, algorithms, or statistical tools give misleading or unfair results because of faulty assumptions, poor data, or built‑in prejudices. For investors, these risks matter because they can produce wrong valuations, bad trading signals, underestimated losses, or regulatory and reputational problems—like relying on a GPS that confidently sends you the wrong way because its map is outdated or missing key streets.
Prompt or input risks
"Fifty-eight percent (58%) say prompt or input risks are very or extremely difficult"
Prompt or input risks are the dangers that arise from the words, data or instructions fed into automated systems, models or decision processes; like giving a calculator the wrong numbers or a recipe with missing steps, the output can be misleading, biased, or incorrect. For investors this matters because flawed inputs can produce bad signals or decisions, expose confidential information, trigger regulatory trouble, harm reputation or cause financial loss, so knowing how inputs are created and checked is key to judging reliability.
Threat detection
"AI falls short in threat detection as bias and reliability risks persist"
Threat detection is the process of spotting signs that something harmful or unwanted is happening or about to happen to an organization—such as a cyberattack, fraud, data leak, product safety issue, or regulatory noncompliance. Think of it as a building’s smoke detector: it alerts managers early so they can stop damage, preserve operations and reputation, and avoid costly fines or loss of investor trust.
AI autonomy
"may be limiting effectiveness and AI autonomy due to governance and maturity gaps"
AI autonomy is the degree to which a software system can make decisions and take actions without human supervision, ranging from simple rule-following automation to independent agents that plan and execute complex tasks. Investors care because higher autonomy can boost productivity and cut labor costs like a self-managing employee, but it also changes revenue prospects, operational risk, and regulatory exposure if the system makes mistakes or requires oversight.

AI-generated analysis. Not financial advice.

Global research from OpenText and Ponemon shows strong security foundations are critical to scaling Enterprise AI

WATERLOO, ON, March 23, 2026 /PRNewswire/ -- OpenText™ (NASDAQ: OTEX) (TSX: OTEX) today released a new global report, "Managing Risks and Optimizing the Value of AI, GenAI & Agentic AI," developed in partnership with the Ponemon Institute. The research revealed that, while more than half of enterprises (52%) have fully or partially deployed GenAI, security and governance are falling behind.

This gap highlights a growing challenge for the industry as organizations are adopting generative AI quickly, but many are doing so without the governance and security foundations needed to manage its risks.

"AI maturity isn't just about adopting AI tools—it's about doing it responsibly," said Muhi Majzoub, EVP, Product & Engineering. "Security and governance are foundational to getting real value from AI. When they're built into AI systems from the start, organizations can operate with greater transparency, monitor systems continuously, and trust the outcomes AI delivers."

Only 1 in 5 enterprises report reaching AI maturity – where AI in cybersecurity activities is fully deployed and security risks are assessed – and fewer than half (43%) have adopted a risk-based strategy to govern AI systems. As AI systems become more autonomous and embedded in critical operations, closing this maturity gap will be essential for ensuring trust, compliance, and long-term business value.

AI Security and Governance are Lagging

According to the survey, significant gaps exist between the pace of AI deployment and the practices needed to govern and secure it effectively.

  • Nearly 8 in 10 organizations (79%) have not yet reached full AI maturity in cybersecurity, where systems are fully deployed and security risks are assessed.
  • Only 41% of organizations have AI-specific data privacy policies in place.
  • A majority (62%) of respondents say it is very or extremely difficult to minimize model and bias risks (such as breaches of ethical and responsible AI principles) in language model development.
  • Fewer than half (43%) of respondents have adopted a risk-based AI governance approach that addresses AI-related risks like bias, security threats, or ethical issues.
    • Fifty-eight percent (58%) say prompt or input risks (e.g., misleading, inaccurate, or harmful responses) are very or extremely difficult to minimize.
    • Over half of respondents (56%) also report challenges in managing user risks, including the unintended spread of misinformation.
  • Nearly six in ten respondents (59%) say AI makes it more difficult to comply with privacy and security regulations, yet only 41% report having AI-specific data privacy policies in place.

Without Trust and Explainability, AI Is Failing to Deliver Results and Requires Human Oversight

Many organizations are deploying AI to improve efficiency, including within security operations. Yet reported challenges around trust, reliability, and explainability suggest the very tools designed to enhance security may be limiting effectiveness and AI autonomy due to governance and maturity gaps.

  • AI falls short in threat detection as bias and reliability risks persist:
    • Just 51% of respondents say AI is effective in reducing the time to detect anomalies or emerging threats. Fewer than half (48%) rate AI as effective in threat detection and hunting for deeper insights and reducing manual workload.
    • AI model and bias risks are limiting effectiveness. Nearly two-thirds (62%) of respondents say it is very difficult or extremely difficult to minimize model and bias risks, including unfair or discriminatory outputs.
    • Operational reliability also presents a challenge, with 45% of respondents citing errors in AI decision rules as a top barrier to effectiveness, while 40% report errors in data inputs ingested by AI.
  • Fully autonomous AI still far from reach:
    • Fewer than half of organizations (47%) say their AI models can learn robust norms and make safe decisions autonomously, reflecting tempered confidence as AI models take on more independence.
    • As a result, more than half of respondents (51%) say human oversight is needed in AI governance due to the speed at which attackers can adapt.

"The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start," said Majzoub. "As AI becomes embedded in day-to-day operations, organizations need secure information management as the foundation: clear governance frameworks, policy-based controls, and continuous monitoring that ensure AI systems remain trustworthy and compliant. Just as important is aligning AI with the right data, security practices, and oversight from the outset so innovation can scale responsibly and deliver measurable business value."

Survey Methodology

The Ponemon Institute independently surveyed 1,878 IT and IT security practitioners across North America, Asia-Pacific, Europe, the Middle East, Africa, and Latin America. The study captured input from organizations of varying sizes and industries, including financial services, healthcare, technology, energy, and manufacturing. The research was conducted in November 2025. Respondents included executives, decision-makers, and practitioners across IT security, engineering, infrastructure, risk and compliance, and other roles involved in AI and security strategy.

Copyright ©2026 Open Text. OpenText is a trademark or registered trademark of Open Text. This list of trademarks is not exhaustive. Other trademarks, registered trademarks, product names, company names, brands and service names mentioned herein are the property of Open Text or their respective owners. All rights reserved. For more information, visit: https://www.opentext.com/about/copyright-information

About OpenText 

OpenText™ is a global leader in secure information management for AI, helping organizations protect, govern, and activate their data with confidence. Our technologies turn data into information with context to form the knowledge base for AI. Learn more at www.opentext.com

Cautionary Statement Regarding Forward-Looking Statements 

Certain statements in this press release may contain words considered forward-looking statements or information under applicable securities laws. These statements are based on OpenText's current expectations, estimates, forecasts and projections about the operating environment, economies and markets in which the company operates. These statements are subject to important assumptions, risks and uncertainties that are difficult to predict, and the actual outcome may be materially different. OpenText's assumptions, although considered reasonable by the company at the date of this press release, may prove to be inaccurate and consequently its actual results could differ materially from the expectations set out herein. For additional information with respect to risks and other factors which could occur, see OpenText's Annual Report on Form 10-K, Quarterly Reports on Form 10-Q and other securities filings with the SEC and other securities regulators. Readers are cautioned not to place undue reliance upon any such forward-looking statements, which speak only as of the date made. Unless otherwise required by applicable securities laws, OpenText disclaims any intention or obligations to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise. Further, readers should note that we may announce information using our website, press releases, securities law filings, public conference calls, webcasts and the social media channels identified on the Investors section of our website (https://investors.opentext.com). Such social media channels may include the Company's or our executive's blog, X, formerly known as Twitter, account or LinkedIn account. The information posted through such channels may be material. Accordingly, readers should monitor such channels in addition to our other forms of communication. 

OTEX-G

View original content to download multimedia: https://www.prnewswire.com/news-releases/enterprises-rush-into-genai-without-security-foundations-new-ponemon-study-finds-302721434.html

SOURCE Open Text Corporation

FAQ

What did OpenText (OTEX) and Ponemon report on March 23, 2026 about GenAI adoption?

They found rapid GenAI adoption but weak governance: 52% have deployed GenAI. According to the company, the global survey of 1,878 practitioners shows security and governance lagging behind deployment.

How common is AI maturity in cybersecurity according to OpenText's March 23, 2026 report (OTEX)?

AI maturity is uncommon: 79% have not reached full AI maturity in cybersecurity. According to the company, only about one in five enterprises report reaching AI maturity with assessed security risks.

What percent of organizations have AI-specific data privacy policies in the OTEX/Ponemon study?

Only 41% of organizations report having AI-specific data privacy policies. According to the company, this gap contributes to regulatory and compliance challenges as AI becomes more embedded.

What risks limit AI effectiveness in security per the OpenText (OTEX) study?

Model bias and input risks are major limits: 62% cite model/bias difficulty and 58% cite prompt/input risks. According to the company, these issues reduce trust, explainability, and automation.

How might OpenText's March 23, 2026 findings affect enterprise security priorities for OTEX customers?

The report highlights governance and monitoring gaps that may shift priorities to secure information management and oversight. According to the company, organizations need policy-based controls and continuous monitoring to scale AI responsibly.
Open Text Corp

NASDAQ:OTEX


OTEX Stock Data

Market cap: 5.53B
Shares outstanding: 245.24M
Industry: Software - Application
SIC: Services-Computer Integrated Systems Design
Headquarters: Ontario, Canada