Ptechhub
Why AI agents are triggering a rethink of enterprise identity | Computer Weekly

By Computer Weekly
April 28, 2026


We understand many organisations are still in the early stages of AI maturity, focusing on governance and basic controls around new technologies. One of the biggest challenges in this journey is integrating automation and AI securely into existing enterprise systems. As AI-driven attack surfaces expand, identity becomes a foundational control for securing automation and, critically, for limiting blast radius when things go wrong. Mistakes will happen; the goal of modern identity design is to ensure the impact is contained and recoverable.

The rapid rise of AI agents is pushing identity controls away from a “bouncer at the door” analogy and toward continuous, context‑aware evaluation throughout systems and processes. Traditionally, once a user or service authenticated and received a token, that token could be replayed freely until expiry, sometimes for hours or days, without the platform rechecking whether anything important had changed about the subject’s standing. This model no longer holds.

AI is not just adding a new user type to identity and access management (IAM), it is forcing organisations to redesign identity as a continuous control plane for humans, workloads, and agents alike.

In a continuous evaluation model, a valid token is still necessary but no longer sufficient on its own. When a token is presented, centrally defined policies should confirm that the subject and its context still meet all requirements at that moment. These checks can include whether the identity is still active, whether it has been flagged as high risk, whether the IP address or location has changed unexpectedly, whether device posture has degraded, and whether new threat intelligence suggests compromise. Evaluating these signals at the edge can significantly reduce the window for identity abuse. This approach applies equally to human users, machine workloads, and the emerging hybrid identities created by agentic AI acting either autonomously or on behalf of a user (human in the loop).
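The continuous‑evaluation idea above can be sketched in a few lines. This is an illustrative model only: the signal names, and the rule that any single degraded signal denies access, are assumptions for the sketch, not a specific product's behaviour.

```python
from dataclasses import dataclass


@dataclass
class Context:
    """Signals re-evaluated each time a token is presented (names are illustrative)."""
    identity_active: bool
    risk_flagged: bool
    location_changed: bool
    device_posture_ok: bool
    threat_intel_hit: bool


def evaluate(token_valid: bool, ctx: Context) -> bool:
    """A valid token is necessary but not sufficient: every contextual signal must still pass."""
    if not token_valid:
        return False
    return (ctx.identity_active
            and not ctx.risk_flagged
            and not ctx.location_changed
            and ctx.device_posture_ok
            and not ctx.threat_intel_hit)
```

The key design point is that the decision happens at presentation time, on current signals, rather than being baked into the token at issuance.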

To address this, enterprises need to treat users, machine workloads, and large language model (LLM)‑driven agents as first‑class identities, governed under a unified zero‑trust model. That means least privilege by default, short lived credentials, explicit delegation, and end‑to‑end auditability rather than allowing agents to become convenient but ungoverned circumventions around established controls.

So, what does this evolving world of identity look like in practice?

Centralised identity remains the starting point: think of your Microsoft Entra tenant. The next step is edge verification and continuous validation throughout the lifetime of a session or workflow. This becomes especially important for long‑running agentic processes: if an agent runs a large task for hours, or continuously, what happens if the underlying account is locked, its risk posture changes, or its permissions should be reduced mid‑execution?
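One answer to the long‑running‑task question is to re‑validate before every step rather than once at the start. A minimal sketch, assuming a hypothetical `recheck` hook that consults the identity provider:

```python
def run_agent_task(steps, recheck):
    """Re-validate the agent's identity before each step of a long-running task.

    `steps` is a list of callables; `recheck` is a hypothetical hook that returns
    True while the underlying account is still active and within policy.
    """
    results = []
    for step in steps:
        # If the account is locked or its risk posture changes mid-run,
        # the workflow halts instead of coasting on a stale token.
        if not recheck():
            raise PermissionError("identity no longer valid; halting workflow")
        results.append(step())
    return results
```

In production this hook would map to something like continuous access evaluation signals from the identity platform; the point of the sketch is only the control flow.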

Emerging concepts separate claims, authentication, authorisation, and ongoing assurance; we already see this pattern in federated standards. For non‑human identity, it means explicit workload identities instead of long‑lived static secrets. For authorisation, it means externalising fine‑grained policy from applications into policy‑as‑code, because classic role‑based access control (RBAC) alone does not scale to modern software‑as‑a‑service (SaaS) sprawl, complex resource graphs, and dynamic entitlements. Identity is treated as a living entity with continuously monitored “vital signs”, rather than a directory entry revisited only during periodic reviews.
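Externalising policy from application code can be as simple as a table of named rules evaluated over subject and resource attributes. The action names, attributes, and rules below are invented for illustration; real deployments typically use a dedicated policy engine and language rather than inline lambdas.

```python
# Policy-as-code sketch: authorisation decided by attributes and relationships,
# not by role checks scattered through application code. All names are illustrative.
POLICY = {
    "read:report": lambda s, r: s["dept"] == r["owner_dept"] or "auditor" in s["roles"],
    "delete:report": lambda s, r: "admin" in s["roles"] and s["mfa"],
}


def authorise(action: str, subject: dict, resource: dict) -> bool:
    """Deny by default: unknown actions and failed rules both return False."""
    rule = POLICY.get(action)
    return bool(rule and rule(subject, resource))
```

Because the rules live in one place outside the applications, they can be versioned, reviewed, and changed without redeploying every service that enforces them.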

AI agents make this shift inevitable. When an agent acts, organisations need clear answers to fundamental questions: did the agent act autonomously, or was it instructed by a human? If a human initiated the action, is the agent operating with its own service identity or with explicitly delegated user permissions (on behalf of)? What happens when an agent holds broader permissions than the requesting user to complete a workflow, and how do you prevent that from becoming a persistent privilege escalation path?

A cleaner architectural pattern is to treat the human user, the agent runtime, the downstream tool or application programming interface (API), and any delegated token as separate but linked identities – a chain of identity. The LLM itself is typically a component in that chain, not the final authority. This model allows organisations to express who initiated an action, what runtime executed it, what permissions were delegated, what resource the token was intended for, and whether access can be evaluated and revoked while any workflow is still running.
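The chain‑of‑identity pattern can be modelled as linked records whose effective permissions are the intersection of what each link was delegated, and where revoking any link kills the whole chain. This is a sketch of the pattern under those two assumed rules; the link kinds and scope names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Link:
    kind: str                     # "human" | "agent_runtime" | "tool" (illustrative)
    subject: str
    delegated_scopes: tuple = ()
    revoked: bool = False


@dataclass
class IdentityChain:
    """Separate but linked identities: initiator -> agent runtime -> downstream tool."""
    links: list

    def effective_scopes(self) -> set:
        # No link can exceed what was explicitly delegated earlier in the chain,
        # so an over-permissioned agent cannot escalate on a user's behalf.
        scopes = set(self.links[0].delegated_scopes)
        for link in self.links[1:]:
            scopes &= set(link.delegated_scopes)
        return scopes

    def is_live(self) -> bool:
        # Revoking any single link mid-workflow invalidates the whole chain.
        return all(not link.revoked for link in self.links)
```

Note how this answers the privilege‑escalation question from the previous paragraph: even if the agent runtime holds broader permissions than the requesting user, the intersection rule caps the workflow at the user's delegated scopes.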

In this model, RBAC still has a place, but it is no longer enough on its own. Modern authorisation increasingly relies on context, attributes, relationships, and external policy engines. Clear distinctions between delegation and impersonation ensure agents act with explicit, time‑bound authority rather than implicit trust.

Ultimately, AI agents are turning identity from a one‑time checkpoint into a continuous control loop. This evolution aligns closely with zero‑trust principles and newer identity standards designed to propagate changes across users, workloads, devices, sessions, and applications in near real time. Organisations that adopt this model will be better positioned to scale AI safely, without sacrificing security, compliance, or user experience.

Jacob Connell is AI and automation engineer at Quorum Cyber.


