Dive Brief:
- Securing AI has become a top priority for CIOs, according to a Logicalis report published Monday. The report, which surveyed more than 1,000 CIOs globally, found more than a quarter see AI as a significant source of risk, placing it nearly on par with traditional threats such as malware, ransomware and phishing.
- Employee misuse of AI is compounding concerns, with 57% of CIOs saying staff are putting data security at risk. Despite the mounting risk, AI governance measures remain limited, with just 37% of organizations saying they have visibility into the AI tools in use.
- The challenges posed by the advent of AI are significant enough that nearly half of respondents in the Logicalis report said they wish AI had “not been invented.”
Dive Insight:
While traditional threats remain the dominant concern for CIOs, AI is being increasingly cited as a risk as business leaders grapple with critical issues such as shadow AI, app sprawl and lack of oversight.
Security teams, already strained, are losing ground in the face of increased blind spots, with more than one-third reporting a reduced ability to detect breaches and worsening incident response times.
At the same time, internal misuse of AI is introducing new workforce challenges. Two-thirds of respondents say employee training on AI risk management is insufficient, while 94% of CIOs report a cybersecurity skills shortage.
Upskilling efforts and spending on post-breach remediation are rising in tandem with the threat, but more must be done to shift security posture from reactive to preventative.
“AI is a powerful force in cybersecurity, but without the right skills and governance, it can create more vulnerabilities than protection,” said Bob Bailkoski, CEO of Logicalis Group, in the report. “CIOs have the challenging task of defending their organisations against AI-driven threats, but also from the risks posed by the very AI tools meant to safeguard them.”
In response, Bailkoski suggests CIOs integrate governance and transparency into AI initiatives from their inception to ensure long-term stability.
The findings align with broader industry concerns. Recent reporting from Cloud Security Alliance and Thales similarly highlighted how unstructured data and poorly governed AI pipelines are expanding the enterprise attack surface, with 68% of companies surveyed saying a majority of their data remains unprotected.
Reliable data, AI-specific upskilling and improved governance were similarly identified as key tools in mitigating risk.
Initiatives such as Project Glasswing, Anthropic’s recently announced effort to identify and remediate software vulnerabilities, further underscore a growing push to scale governance alongside tech adoption. The initiative launched alongside partners including AWS, Apple, Broadcom, Cisco and Google, which will use Anthropic’s Claude Mythos Preview model to find and fix flaws.