AI security: Balancing innovation with protection

By Computer Weekly
June 3, 2025


Remember the scramble for USB blockers because staff kept plugging in mysterious flash drives? Or the sudden surge in blocking cloud storage because employees were sharing sensitive documents through personal Dropbox accounts? Today, we face a similar scenario with unauthorised AI use, but this time, the stakes are potentially higher.

The challenge isn’t just about data leakage anymore, although that remains a significant concern. We’re now navigating territory where AI systems can be compromised, manipulated, or even “gamed” to influence business decisions. While malicious AI manipulation is not yet widely evident, the potential for such attacks exists and grows with our increasing reliance on these systems. As Bruce Schneier aptly questioned at the RSA Conference earlier this year, “Did your chatbot recommend a particular airline or hotel because it’s the best deal for you, or because the AI company got a kickback?”

Just as shadow IT emerged from employees seeking efficient solutions to daily challenges, unauthorised AI use stems from the same human desire to work smarter, not harder. When the marketing team feeds corporate data into ChatGPT, their intent is not malicious; they’re simply trying to write better copy faster. Similarly, developers using unofficial coding assistants are often attempting to meet tight deadlines. However, each interaction with an unauthorised and unvetted AI system introduces potential exposure points for sensitive data.
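To make those exposure points concrete, here is a minimal sketch of the kind of pre-submission filter an organisation might place in front of an external AI service. It is purely illustrative: the regular expressions are simplistic placeholders, and a production deployment would rely on a proper data loss prevention (DLP) service rather than this sort of ad hoc check.

import re

# Illustrative patterns only: a real DLP layer would use far more
# robust detection than these simple regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings before a prompt leaves the organisation."""
    found = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub("[REDACTED-" + label.upper() + "]", prompt)
    return prompt, found

clean, hits = redact("Email alice@example.com the key sk-abc123def456ghi789")
if hits:
    print("Redacted before submission:", hits)
print(clean)

Even a crude filter like this changes the default from “whatever the employee pastes goes out” to “sensitive patterns are caught and logged”, which is the posture the frameworks discussed below aim to institutionalise.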

The real risk lies in the potent combination of two factors: the ease with which employees can access powerful AI tools, and the implicit trust many place in AI-generated outputs. We must address both. While the possibility of AI system compromise might seem remote, the more immediate risk comes from employees making decisions based on AI-generated content without proper verification. Think of AI as an exceptionally confident intern: helpful and full of suggestions, but requiring oversight and verification.

Forward-thinking organisations are moving beyond simple restriction policies. Instead, they’re developing frameworks that embrace AI’s value while incorporating necessary and appropriate safeguards. This involves providing secure, authorised AI tools that meet employee needs while implementing verification processes for AI-generated outputs. It’s about fostering a culture of healthy scepticism and encouraging employees to trust but verify, regardless of how authoritative an AI system might seem.

Education plays a crucial role, but not through fear-based training about AI risks. Instead, organisations need to help employees understand the context of AI use – how these systems work, their limitations, and the critical importance of verification. This includes teaching simple and practical verification techniques and establishing clear escalation pathways for when AI outputs seem suspicious or unusual.

The most effective approach combines secure tools with smart processes. Organisations should provide vetted and approved AI platforms, while establishing clear guidelines for data handling and output verification. This isn’t about stifling innovation – it’s about enabling it safely. When employees understand both the capabilities and constraints of AI systems, they are better equipped to use them responsibly.
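As a sketch of what output verification might look like in practice, the snippet below routes AI-generated drafts to human review based on a risk category. The categories, the AIOutput structure, and the review step are hypothetical illustrations under assumed internal conventions, not a prescribed standard.

from dataclasses import dataclass

# Hypothetical high-stakes categories; each organisation would define its own.
HIGH_STAKES = {"financial", "legal", "customer-facing", "security"}

@dataclass
class AIOutput:
    text: str
    category: str        # assigned by the requester or an upstream classifier
    source_model: str

def requires_human_review(output: AIOutput) -> bool:
    """Trust but verify: high-stakes content never ships unreviewed."""
    return output.category in HIGH_STAKES

draft = AIOutput(text="Refund policy update ...",
                 category="customer-facing",
                 source_model="approved-internal-llm")
if requires_human_review(draft):
    print("Queued for human sign-off before publication")
else:
    print("Low-stakes output: spot-check per policy")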

Looking ahead, the organisations that will succeed in securing their AI initiatives aren’t those with the strictest policies – they’re those that best understand and work with human behaviour. Just as we learned to secure cloud storage by providing viable alternatives to personal Dropbox accounts, we’ll secure AI by empowering employees with the right tools while maintaining organisational security.

Ultimately, AI security is about more than protecting systems – it’s about safeguarding decision-making processes. Every AI-generated output should be evaluated through the lens of business context and common sense. By fostering a culture where verification is routine and questions are encouraged, organisations can harness AI’s benefits while mitigating its risks.

Like the brakes on an F1 car, which enable it to be driven faster, security isn’t about hindering work: it’s about facilitating it safely. We must never forget that human judgement remains our most valuable defence against manipulation and compromise.

Javvad Malik is lead security awareness advocate at KnowBe4


