RSAC rewind: Agentic AI, governance gaps and insider threats | Computer Weekly

By Computer Weekly
May 29, 2025


This year’s RSAC Conference drew record numbers: nearly 44,000 attendees, 730 speakers, 650 exhibitors and 400 media members. And as one of those who attended and spoke with countless organisations, partners and CISO peers, I can safely say that practically every single person there had something to say about the use or abuse of artificial intelligence (AI) in cyber security.

We all expected AI to dominate the discussion. But we didn’t anticipate how deeply it would embed into every company update or overview, strategy session, customer conversation and even hallway and happy hour chats. As is often the case, the line between reality and hype can quickly blur. In an attempt to provide a sense of clarity at this particular moment in time, here is a breakdown of three key topics at the conference:

Full-blown AI adoption in cyber security, whether we’re ready for it or not

We have unofficially transitioned from a proof-of-concept phase to aggressive implementation. In fact, 90% of organisations are either currently adopting generative AI for security, or are planning to do so, according to research from the Cloud Security Alliance (CSA). The vast majority of IT and security professionals feel that these technologies can improve their skill sets and support their roles, while freeing them up for more rewarding, valuable assignments.

On the flip side, cyber criminals are also making abundant use of this ever-evolving innovation – to the point in which AI-enhanced malware ranks as a top risk for enterprise leaders, according to Gartner. This sets up a modern-day Spy vs. Spy scenario in which the good guys and bad guys battle it out in a technology arms race, with the stakes getting increasingly higher and the precarious potential for unleashed, harmful AI growing more likely.

The term “agentic AI,” for example, loomed large on the minds of many conference attendees. Simply defined, this refers to AI systems that act autonomously to pursue goals and solve problems without constant human guidance or oversight. It is difficult, however, to determine whether the concept signals genuine innovation or just repackaged marketing speak.
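The definition above can be made concrete with a minimal sketch. The loop below is purely illustrative, not any vendor’s framework or API: an agent repeatedly picks an action, executes it, and self-evaluates against its goal, with no human in the loop between steps. The tool names and the goal-check heuristic are assumptions invented for the example.

```python
def run_agent(goal, tools, max_steps=5):
    """Conceptual agentic loop: plan -> act -> observe, without human oversight.

    A real agent would use a model to plan; here we simply try each tool
    in order and stop once the observation appears to satisfy the goal.
    """
    history = []
    for _ in range(max_steps):
        # Plan: pick the first tool not yet tried (stand-in for real planning).
        tried = {action for action, _ in history}
        action = next((name for name in tools if name not in tried), None)
        if action is None:
            break
        # Act autonomously, then record the observation.
        observation = tools[action](goal)
        history.append((action, observation))
        # Self-evaluate: crude check that the goal shows up in the result.
        if goal in str(observation):
            break
    return history

# Hypothetical tools, for illustration only.
tools = {
    "search": lambda g: f"results mentioning {g}",
    "summarise": lambda g: f"summary of {g}",
}
```

Whether wrapping such a loop around a language model counts as genuine innovation or repackaged orchestration is exactly the debate the conference left unresolved.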

For now, security leaders should focus on the users and ask to what extent they are taking part in Shadow AI, and how they are deploying AI applications. In our own research, we’ve found that most generative AI (GenAI) usage in the enterprise (72%) is currently attributed to shadow IT.

We know that AI, left alone, will swiftly spread into any and all forms of usage. It’s already starting to resemble the rapidly expanding universe of cloud adoption of years past. Reaching this level of AI ubiquity requires deeper questions – and answers – about integration, accountability and governance. Which brings us to our next conference topic.

Gaps in enterprise AI governance

Too often, AI governance committees are narrowly fixated on privacy and security concerns, rather than broader considerations such as legal liability, licensing exposure, cost, technology overlap rationalisation and appropriate use. As a result, organisations are approving AI tools without conducting full risk evaluations, including intellectual property and third-party risks such as code contributions.

For now, leaders seem to prioritise safe operation using local models, outright blocks, incident response and detection, along with other short-term use cases. But they must shift from this approach to a state of broader, enterprise-focused AI planning that is guided by strategic, organisational goals, and not merely functional execution.

Proliferating insider threats

These threats, of course, are older than cyber security itself. Think of the embezzling finance employee in the 1950s, or the factory worker who surreptitiously slipped company property into his pocket. There was plenty of chatter onsite about the widespread scam in which top tech firms in the US have been tricked into hiring remote IT workers who happen to be North Korean cyber operatives.

This speaks to the need for closer alignment among HR, legal and security teams to detect forged employment documents and eliminate hiring platform vulnerabilities. Unfortunately, there aren’t enough ongoing conversations about these emerging threats, with HR, legal, and security teams more likely to collaborate on compliance requirements and reactive, after-the-fact incident investigations.

Throughout its existence, the RSAC Conference has reflected the present state of cyber security, with impactful trends and challenges conveyed amid the cacophony of booths, presentations, demonstrations and conversations. This most recent conference has proved no exception, especially when it comes to new patterns in AI and insider threats.

That said, a consistent thread has emerged over the years: The need for proactive accountability, guidance and governance.

With this, security leaders won’t entirely mitigate the damaging outcomes of AI or ill-willed insiders. But they’ll take major steps in containing them. Hopefully in a few months, when we arrive at Black Hat, we’ll be talking more about how organisations are now able to do that more consistently and successfully.

James Robinson is chief information security officer at secure access service edge (SASE) and zero-trust specialist Netskope.
