Better governance is required for AI agents | Computer Weekly

By Computer Weekly
June 30, 2025


AI agents are one of the most widely deployed types of GenAI initiative in organisations today. There are many good reasons for their popularity, but they can also pose a real threat to IT security.

That’s why CISOs need to keep a close eye on every AI agent deployed in their organisation. These might be outward-facing agents, such as chatbots designed to help customers track their orders or consult their purchase histories. Or they might be internal agents designed for specific tasks, such as walking new recruits through an onboarding process or helping financial staff spot anomalies that could indicate fraudulent activity.

Thanks to recent advances in AI, and natural language processing (NLP) in particular, these agents have become extremely adept at responding to user messages in ways that closely mimic human conversation. But to perform at their best and provide highly tailored, accurate responses, they must not only handle personal information and other sensitive data, but also be closely integrated with internal company systems, partner systems, third-party data sources and, often, the wider internet.

Whichever way you look at it, all this makes AI agents an organisational vulnerability hotspot.

Managing emerging risks 

So how might AI agents pose a risk to your organisation? For a start, an agent might inadvertently be granted access during development to internal data it should never share. Agents should instead have access only to essential data, share it only with those authorised to see it, and do so across secure communication channels with comprehensive data management mechanisms in place.
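That least-privilege principle can be made concrete with an explicit allowlist mapping each agent to the datasets it may read, so that anything not granted is denied by default. The sketch below is illustrative only: the agent names and the ACL mapping are hypothetical, not a reference to any particular product.

```python
# Hypothetical least-privilege check for agent data access.
# An agent may read only the datasets explicitly granted to it.
AGENT_DATA_ACL = {
    "order-tracker": {"orders", "shipping_status"},
    "onboarding-helper": {"hr_policies"},
}

def can_access(agent: str, dataset: str) -> bool:
    """Deny by default: unknown agents and ungranted datasets are refused."""
    return dataset in AGENT_DATA_ACL.get(agent, set())
```

The deny-by-default shape matters: an agent added without an ACL entry can read nothing until someone deliberately grants it access.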

Additionally, agents could be based on underlying AI and machine learning models containing vulnerabilities. If exploited by hackers, these could lead to remote code execution and unauthorised data access.

In other words, vulnerable agents might be lured into interactions with hackers in ways that lead to profound risks. The responses delivered by an agent, for example, could be manipulated by malicious inputs that interfere with its behaviour. A prompt injection of this kind can direct the underlying language model to ignore previous rules and directions and adopt new, harmful ones. Similarly, malicious inputs might also be used by hackers to launch attacks on underlying databases and web services.
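A first, crude line of defence against prompt injection is to screen user input for known override phrasings before it ever reaches the model. This sketch is purely illustrative: the pattern list and function name are assumptions, and real deployments layer model-side guardrails and policy controls on top of any such filter.

```python
import re

# Illustrative phrasings seen in prompt-injection attempts; a real
# deployment would maintain and tune a far larger, evolving list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) (rules|instructions|directions)",
    r"disregard your (system|safety) prompt",
    r"you are now",
]

def looks_like_injection(user_message: str) -> bool:
    """Return True if the message matches a known override phrasing."""
    text = user_message.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Pattern matching of this kind catches only the clumsiest attacks, which is exactly why the real-time monitoring discussed below is still needed behind it.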

The message to my fellow CISOs and security professionals should be clear: rigorous assessment and real-time monitoring are as essential to AI and GenAI initiatives, especially agents handling interactions with customers, employees and partners, as they are to any other form of corporate IT.

Don’t let AI agents become your blind spot 

I’d suggest that the best place to start is a comprehensive audit of existing AI and GenAI assets, including agents. This should produce an exhaustive inventory of every such asset in the organisation, along with the data sources, application programming interfaces (APIs) and integrations associated with each.

Does an agent interface with HR, accounting or inventory systems, for example? Does the underlying model that powers its interactions rely on third-party data, or on data scraped from the internet? Who is interacting with the agent? What types of conversation is the agent authorised to have with different types of user, and what are those users authorised to ask of it?
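The audit questions above can be captured in a simple inventory record, so every agent's data sources and integrations sit in one catalogued place. The field names here are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAuditRecord:
    """One inventory entry per deployed agent (illustrative schema)."""
    name: str
    owner_team: str
    data_sources: list = field(default_factory=list)  # e.g. "orders", "HR"
    apis: list = field(default_factory=list)          # integrated APIs
    audience: str = "internal"                        # "internal" | "external"
    uses_third_party_data: bool = False

def uncatalogued_sources(record: AgentAuditRecord, approved: set) -> list:
    """Data sources the agent touches that the audit has not approved."""
    return [source for source in record.data_sources if source not in approved]
```

Diffing each record against an approved-source list turns the one-off audit into a repeatable check rather than a snapshot that goes stale.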

It should go without saying that where organisations are building new AI applications from the ground up, CISOs and their teams should work directly with the AI team from the earliest stages to ensure that privacy, security and compliance objectives are rigorously applied.

Post-deployment, the IT security team should have search, observability and security technologies in place to continuously monitor an agent’s activities and performance. These should be used to spot anomalies in traffic flows, user behaviours and the types of information shared – and to halt those exchanges abruptly where there are grounds for suspicion.
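As a toy illustration of the anomaly spotting described above, a baseline of per-minute request counts can flag traffic that sits far outside the historical spread. The three-standard-deviation threshold is an arbitrary illustrative choice; production observability stacks provide far richer detectors.

```python
from statistics import mean, stdev

def is_traffic_anomaly(history: list, current: int, threshold: float = 3.0) -> bool:
    """Flag the current request count if it lies far outside the baseline.

    `history` is a list of past per-minute request counts; the threshold
    is expressed in standard deviations from the historical mean.
    """
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu  # flat baseline: any change is suspicious
    return abs(current - mu) / sigma > threshold
```

The same shape of check applies equally to the other signals mentioned here, such as volumes of sensitive fields shared per session.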

Comprehensive logging doesn’t just enable IT security teams to detect abuse, fraud and data breaches; it also helps them find the fastest and most effective remediations. Without it, agents could be engaging in regular interactions with wrongdoers, leading to long-term data exfiltration or exposure.

A new frontline for security and governance

CISOs and their teams must also keep an eye out for so-called shadow AI. Just as employees once adopted consumer-grade software-as-a-service tools to get work done, many are now adopting AI-enabled tools without the sanction or oversight of the organisational IT team.

The onus is on IT security teams to detect and expose shadow AI wherever it emerges. That means identifying unauthorised tools, assessing the security risks they pose, and taking swift action. If the risks clearly outweigh the productivity benefits, those tools should be blocked. Where possible, teams should also guide employees toward safer, sanctioned alternatives that meet the organisation’s security standards.

Finally, it’s worth cautioning that although interacting with an AI agent may feel like a regular human conversation, agents don’t have the human ability to exercise discretion, judgement, caution or conscience in those interactions. That’s why clear governance is essential, and users must also be aware that anything shared with an agent could be stored, surfaced or exposed in ways they didn’t intend.


