
As Agentic AI Gains Traction, 86% of Enterprises Anticipate Heightened Risks, Yet Only 2% of Companies Meet Responsible AI Gold Standards

Rhea-AI Impact: Low
Rhea-AI Sentiment: Neutral

Infosys (NYSE:INFY) released a comprehensive study on Responsible AI (RAI) implementation, revealing significant gaps in enterprise readiness. The research, surveying over 1,500 business executives across six countries, found that while 78% of companies view RAI as a growth driver, only 2% have adequate RAI controls.

Key findings show that 95% of executives reported AI-related incidents in the past two years, with 77% experiencing financial losses and 53% facing reputational damage. The study identified that RAI leaders experienced 39% lower financial losses and 18% lower incident severity compared to others.

The research emphasizes the urgent need for organizations to shift from reactive compliance to proactive strategic implementation of RAI, particularly as 86% of executives believe agentic AI will introduce new risks and compliance challenges.


Positive

  • 78% of senior leaders recognize RAI as a revenue growth driver
  • RAI leaders experienced 39% lower financial losses from AI incidents
  • 83% believe future AI regulations will boost AI initiatives
  • Companies identified as RAI leaders show 18% lower severity from AI incidents

Negative

  • 95% of executives reported AI-related incidents in past two years
  • Only 2% of companies meet full RAI capability standards
  • 77% of organizations reported financial losses from AI incidents
  • Companies are underinvesting in RAI by 30% on average
  • 53% of organizations suffered reputational damage from AI incidents

Insights

Infosys positions itself as a thought leader in responsible AI governance while highlighting alarming enterprise implementation gaps.

Infosys's research reveals a concerning responsible AI readiness gap that positions the company strategically in the growing AI governance market. The study found 95% of enterprises experienced AI-related incidents in the past two years, with 77% reporting financial losses and 53% suffering reputational damage. Most alarming is that only 2% of companies meet Infosys's responsible AI implementation standards.

The research carries significant strategic implications for Infosys. By publishing these findings, the company establishes itself as a thought leader in responsible AI governance—a crucial differentiator as enterprises navigate increasing AI risks. The report effectively creates market demand for Infosys's AI governance consulting services by highlighting the financial and reputational consequences of poor AI implementation.

This market positioning is particularly timely as 86% of executives anticipate heightened risks from agentic AI. Infosys showcases its expertise through its AI3S (Scan, Shield, Steer) framework and recommendations for establishing centralized responsible AI offices—services the company is well-positioned to provide.

For Infosys investors, this research represents a strategic business development initiative rather than just academic research. The findings create urgency around responsible AI adoption while simultaneously positioning Infosys as having the expertise to help enterprises implement these necessary safeguards—potentially driving significant consulting and implementation revenue as AI adoption continues accelerating.

With 95% of enterprises facing incidents, Infosys research reveals wide gap between AI adoption and responsible AI readiness, exposing most enterprises to reputational risks and financial loss

BENGALURU, India, Aug. 14, 2025 /PRNewswire/ -- Infosys Knowledge Institute (IKI), the research arm of Infosys (NSE: INFY), (BSE: INFY), (NYSE: INFY), a global leader in next-generation digital services and consulting, today unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI. The report, Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, UK, US, and New Zealand. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss.

The report analyzed the effects of risks from poorly implemented AI, such as privacy violations, ethical violations, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions. It found that 77% of organizations reported financial loss and 53% suffered reputational impact from such AI-related incidents.

Key findings include:

AI risks are widespread and can be severe

  • 95% of C-suite and director-level executives report AI-related incidents in the past two years.
  • 39% characterize the damage experienced from such AI issues as "severe" or "extremely severe."
  • 86% of executives aware of agentic AI believe it will introduce new risks and compliance issues.

Responsible AI (RAI) capability is patchy and inefficient for most enterprises

  • Only 2% of companies (termed "RAI leaders") met the full standards set in the Infosys RAI capability benchmark, termed "RAISE BAR," while 15% ("RAI followers") met three-quarters of the standards.
  • The "RAI leader" cohort experienced 39% lower financial losses and 18% lower severity from AI incidents.
  • Leaders do several things better to achieve these results, including developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and having a clear incident response plan.

Executives view RAI as a growth driver

  • 78% of senior leaders see RAI as aiding their revenue growth and 83% say that future AI regulations would boost, rather than inhibit, the number of future AI initiatives.
  • However, on average, companies believe they are underinvesting in RAI by 30%.

With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions:

  • Learn from the leaders: Study the practices of high-maturity RAI organizations who have already faced diverse incident types and developed robust governance.
  • Blend product agility with platform governance: Combine decentralized product innovation with centralized RAI guardrails and oversight.
  • Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within preapproved data and systems (see the illustrative sketch after this list).
  • Establish a proactive RAI office: Create a centralized function to monitor risk, set policy, and scale governance with tools like Infosys' AI3S (Scan, Shield, Steer).
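
The report does not prescribe a particular implementation, but the guardrail recommendation above can be pictured as an allowlist layer that every agent action must pass through before it touches data or downstream systems. The following Python sketch is purely illustrative; the names (ALLOWED_DATA_SOURCES, guarded_action, GuardrailViolation) are hypothetical assumptions for this example and are not drawn from the report or from Infosys tooling such as AI3S.

    # Illustrative sketch only: the allowlists, exception, and helper below are
    # hypothetical and not part of the Infosys report or any Infosys product.
    # Idea shown: every agent action is checked against centrally owned
    # allowlists ("preapproved data and systems") and logged before it runs.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Centrally maintained allowlists, owned by the RAI office.
    ALLOWED_DATA_SOURCES = {"crm_readonly", "product_catalog"}
    ALLOWED_SYSTEMS = {"email_draft", "ticket_create"}


    class GuardrailViolation(Exception):
        """Raised when an agent requests a resource outside the approved scope."""


    @dataclass
    class GuardrailLog:
        """Minimal incident log so violations can be monitored centrally."""
        entries: list = field(default_factory=list)

        def record(self, agent: str, resource: str, allowed: bool) -> None:
            self.entries.append({
                "time": datetime.now(timezone.utc).isoformat(),
                "agent": agent,
                "resource": resource,
                "allowed": allowed,
            })


    LOG = GuardrailLog()


    def guarded_action(agent: str, data_source: str, system: str) -> str:
        """Run an agent action only if the data source and target system are preapproved."""
        for resource, allowlist in ((data_source, ALLOWED_DATA_SOURCES), (system, ALLOWED_SYSTEMS)):
            allowed = resource in allowlist
            LOG.record(agent, resource, allowed)
            if not allowed:
                raise GuardrailViolation(f"{agent} attempted to use unapproved resource: {resource}")
        # Placeholder for the real agent call, executed inside the approved scope.
        return f"{agent} read {data_source} and acted via {system}"


    if __name__ == "__main__":
        print(guarded_action("support_agent", "crm_readonly", "ticket_create"))  # permitted
        try:
            guarded_action("support_agent", "payroll_db", "ticket_create")       # blocked and logged
        except GuardrailViolation as err:
            print("Blocked:", err)

In a real platform, the allowlists and the violation log would be owned by the centralized RAI office described in the final recommendation, so policy changes propagate to every agent without changes to product code.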

Balakrishna D.R., EVP – Global Services Head, AI and Industry Verticals, Infosys, said, "Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales and new regulations come into force."

Jeff Kavanaugh, Head of Infosys Knowledge Institute, Infosys, said, "Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era."

To read the full report, please visit here.

Methodology

Infosys conducted an anonymous online survey of 1,502 business executives across industries in Australia, France, Germany, New Zealand, the United Kingdom, and the United States, as well as qualitative interviews with 40 senior executives.

About Infosys

Infosys is a global leader in next-generation digital services and consulting. Over 320,000 of our people work to amplify human potential and create the next opportunity for people, businesses, and communities. We enable clients in 59 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer clients, as they navigate their digital transformation powered by cloud and AI. We enable them with an AI-first core, empower the business with agile digital at scale and drive continuous improvement with always-on learning through the transfer of digital skills, expertise, and ideas from our innovation ecosystem. We are deeply committed to being a well-governed, environmentally sustainable organization where diverse talent thrives in an inclusive workplace.

Visit www.infosys.com to see how Infosys (NSE, BSE, NYSE: INFY) can help your enterprise navigate your next.

Safe Harbor

Certain statements in this release concerning our future growth prospects, or our future financial or operating performance, are forward-looking statements intended to qualify for the 'safe harbor' under the Private Securities Litigation Reform Act of 1995, which involve a number of risks and uncertainties that could cause actual results or outcomes to differ materially from those in such forward-looking statements. The risks and uncertainties relating to these statements include, but are not limited to, risks and uncertainties regarding the execution of our business strategy, increased competition for talent, our ability to attract and retain personnel, increase in wages, investments to reskill our employees, our ability to effectively implement a hybrid work model, economic uncertainties and geo-political situations, technological disruptions and innovations such as artificial intelligence ("AI"), generative AI, the complex and evolving regulatory landscape including immigration regulation changes, our ESG vision, our capital allocation policy and expectations concerning our market position, future operations, margins, profitability, liquidity, capital resources, our corporate actions including acquisitions, and cybersecurity matters. Important factors that may cause actual results or outcomes to differ from those implied by the forward-looking statements are discussed in more detail in our US Securities and Exchange Commission filings including our Annual Report on Form 20-F for the fiscal year ended March 31, 2025. These filings are available at www.sec.gov. Infosys may, from time to time, make additional written and oral forward-looking statements, including statements contained in the Company's filings with the Securities and Exchange Commission and our reports to shareholders. The Company does not undertake to update any forward-looking statements that may be made from time to time by or on behalf of the Company unless it is required by law.

 

PDF: https://mma.prnewswire.com/media/2750567/Responsible_AI_Radar_infographic.pdf

Logo: https://mma.prnewswire.com/media/633365/4364085/Infosys_Logo.jpg

View original content to download multimedia: https://www.prnewswire.com/news-releases/as-agentic-ai-gains-traction-86-of-enterprises-anticipate-heightened-risks-yet-only-2-of-companies-meet-responsible-ai-gold-standards-302530047.html

SOURCE Infosys

FAQ

What percentage of companies meet Infosys's Responsible AI standards in 2025?

According to the study, only 2% of companies meet the full standards set in the Infosys RAI capability benchmark, while 15% meet three-quarters of the standards.

How many enterprises reported AI-related incidents according to Infosys's 2025 study?

95% of C-suite and director-level executives reported experiencing AI-related incidents in the past two years.

What percentage of organizations faced financial losses from AI incidents in the Infosys study?

77% of organizations reported financial losses from AI-related incidents, while 53% suffered reputational impact.

How many executives believe agentic AI will introduce new risks?

86% of executives who are aware of agentic AI believe it will introduce new risks and compliance issues.

What is the average underinvestment in Responsible AI according to INFY's research?

Companies are underinvesting in Responsible AI by an average of 30%, despite 78% viewing it as a revenue growth driver.