
Execs use responsible AI to drive growth, prevent risks

By CIO Dive
August 19, 2025



Dive Brief:

  • Business leaders see responsible AI as a lever to mitigate deployment risks, prevent further fallout and drive business growth, according to an Infosys report published Thursday. The company surveyed 1,500 senior executives.
  • Almost all respondents (95%) experienced at least one type of “problematic incident” from their use of enterprise AI, primarily resulting in direct financial loss to the business. The average company reported financial losses of about $800,000 over two years, Infosys found.
  • More than three-quarters of senior business leaders view responsible AI practices as leading to positive business outcomes. A small minority (7%) feel that responsible AI practices hold back growth. On average, business leaders believe they are underinvesting in responsible practices by around 30%.

Dive Insight:

Enterprises rushed into AI deployment plans while the hype haze was thick. Now that the risks are clearer, business leaders are looking for ways to remediate. 

The definition of responsible AI can vary from organization to organization but often centers on fairness, transparency, accountability, privacy, security and the reliability of systems. While beefing up AI governance provides CIOs with a path forward, not all enterprises have embarked on that route. 

“A lot of organizations have not yet set up a robust, responsible AI program,” Traci Gusher, AI and data leader at EY Americas, told CIO Dive. Some CIOs are still trying to decipher how to mitigate bias where possible, prevent model drift and protect applications from security threats, Gusher said. 

The stakes are getting higher. At a time when business leaders across industries are exploring AI agents that can complete tasks without human intervention, a lack of sound governance amplifies potential risks.

“As a result, senior leaders are saying, ‘I don’t think we have the company policies in place to go big using agentic,’” Gusher said. 

Forward-thinking organizations are already figuring out how to best protect the enterprise from AI agent-driven risks. 

“Multiagent systems and agents collaborating with one another … it’s going to come with big governance challenges,” said Stijn Christiaens, co-founder and chief data citizen at Collibra, a data governance platform provider. 

Standards authorities are beginning to address the challenge. The National Institute of Standards and Technology included single agent and multiagent use cases as potential subjects for its forthcoming series of Control Overlays for Securing AI Systems. On Thursday, NIST requested public feedback to inform upcoming guidance and created a Slack channel to collect sentiment for the development of the overlays.

“NIST right now is what most organizations are basing their AI programs and governance on,” Gusher said. “It’s the best example we have of what feels and looks right.”





PTechHub

A tech news platform delivering fresh perspectives, critical insights, and in-depth reporting — beyond the buzz. We cover innovation, policy, and digital culture with clarity, independence, and a sharp editorial edge.

Copyright © 2025 | Powered By Porpholio
