AI security: Balancing innovation with protection | Computer Weekly

By Computer Weekly
June 3, 2025


Remember the scramble for USB blockers because staff kept plugging in mysterious flash drives? Or the sudden surge in blocking cloud storage because employees were sharing sensitive documents through personal Dropbox accounts? Today, we face a similar scenario with unauthorised AI use, but this time, the stakes are potentially higher.

The challenge isn’t just about data leakage anymore, although that remains a significant concern. We’re now navigating territory where AI systems can be compromised, manipulated, or even “gamed” to influence business decisions. While malicious AI manipulation is not yet widespread, the potential for such attacks exists and grows with our increasing reliance on these systems. As Bruce Schneier aptly asked at the RSA Conference earlier this year, “Did your chatbot recommend a particular airline or hotel because it’s the best deal for you, or because the AI company got a kickback?”

Just as shadow IT emerged from employees seeking efficient solutions to daily challenges, unauthorised AI use stems from the same human desire to work smarter, not harder. When the marketing team feeds corporate data into ChatGPT, their intent is not malicious; they’re simply trying to write better copy faster. Similarly, developers using unofficial coding assistants are often attempting to meet tight deadlines. However, each interaction with an unauthorised and unvetted AI system introduces potential exposure points for sensitive data.

The real risk lies in the potent combination of two factors: the ease with which employees can access powerful AI tools, and the implicit trust many place in AI-generated outputs. We must address both. While the possibility of AI system compromise might seem remote, the more immediate risk comes from employees making decisions based on AI-generated content without proper verification. Think of AI as an exceptionally confident intern: helpful and full of suggestions, but requiring oversight and verification.

Forward-thinking organisations are moving beyond simple restriction policies. Instead, they’re developing frameworks that embrace AI’s value while incorporating necessary and appropriate safeguards. This involves providing secure, authorised AI tools that meet employee needs while implementing verification processes for AI-generated outputs. It’s about fostering a culture of healthy scepticism and encouraging employees to trust but verify, regardless of how authoritative an AI system might seem.

Education plays a crucial role, but not through fear-based training about AI risks. Instead, organisations need to help employees understand the context of AI use – how these systems work, their limitations, and the critical importance of verification. This includes teaching simple and practical verification techniques and establishing clear escalation pathways for when AI outputs seem suspicious or unusual.

The most effective approach combines secure tools with smart processes. Organisations should provide vetted and approved AI platforms, while establishing clear guidelines for data handling and output verification. This isn’t about stifling innovation – it’s about enabling it safely. When employees understand both the capabilities and constraints of AI systems, they are better equipped to use them responsibly.

Looking ahead, the organisations that will succeed in securing their AI initiatives aren’t those with the strictest policies – they’re those that best understand and work with human behaviour. Just as we learned to secure cloud storage by providing viable alternatives to personal Dropbox accounts, we’ll secure AI by empowering employees with the right tools while maintaining organisational security.

Ultimately, AI security is about more than protecting systems – it’s about safeguarding decision-making processes. Every AI-generated output should be evaluated through the lens of business context and common sense. By fostering a culture where verification is routine and questions are encouraged, organisations can harness AI’s benefits while mitigating its risks.

Like the brakes on an F1 car that enable it to drive faster, security isn’t about hindering work: it’s about facilitating it safely. We must never forget that human judgement remains our most valuable defence against manipulation and compromise.

Javvad Malik is lead security awareness advocate at KnowBe4


