Answers to key questions about AI in IT security

By Computer Weekly
February 3, 2026


The buzz around agentic artificial intelligence (AI) systems, AI agents, autonomous security operations centres, and everything in between is louder than ever, sparking conversations across industries.

IT providers are hyping capabilities – some that are available here and now, and many more that are anticipated sometime in the future. Discerning the difference between current and future capabilities is confusing for many. To bring a little clarity, below is a breakdown of common questions Forrester is asked about generative AI (GenAI) and how it relates to security.

What is generative AI?

Generative AI is a type of artificial intelligence that is incredibly good at identifying the next most likely token in a complex sequence. This is one reason why it handles human language so well, and why other, earlier iterations of machine learning did not.
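A toy Python sketch of that core mechanic – repeatedly picking the most likely next token – with probabilities invented purely for illustration (real models learn distributions over enormous vocabularies from vast corpora):

```python
# Toy sketch of next-token prediction. The probabilities here are
# invented for illustration; real LLMs learn these distributions
# from vast training corpora.
TOY_MODEL = {
    ("the", "alert"): {"was": 0.6, "fired": 0.3, "is": 0.1},
    ("alert", "was"): {"triaged": 0.5, "escalated": 0.4, "closed": 0.1},
    ("was", "triaged"): {"automatically": 0.7, "manually": 0.3},
}

def next_token(context):
    """Greedily pick the most likely next token for a two-token context."""
    dist = TOY_MODEL.get(context)
    return max(dist, key=dist.get) if dist else "<end>"

tokens = ["the", "alert"]
while (tok := next_token((tokens[-2], tokens[-1]))) != "<end>":
    tokens.append(tok)

print(" ".join(tokens))  # -> the alert was triaged automatically
```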

Human language is extremely complex. GenAI can mimic the qualities of its training data, and the most popular models on the market are trained on vast amounts of human language.

In security tools, we see three common use cases for generative AI:

  1. Content creation (creating incident summaries, converting query languages).
  2. Knowledge articulation (chatbots for threat research, product documentation).
  3. Behaviour modelling (triage and investigation agents).

What are GenAI chatbots most useful for in security?

AI chatbots such as Claude, Gemini, ChatGPT – or the security equivalents, including Microsoft Security Copilot, Google Gemini, Charlotte AI, and Purple AI – are powered by large language models (LLMs). As such, they can respond to open-ended questions, create nuanced language, provide contextually aware replies and adapt to topics, especially security topics, without needing explicit programming for each scenario.

While this capability is novel, practitioners don’t use it often. When they do, it’s most useful for asking questions about product documentation or researching particular threats and vulnerabilities; outside of those tasks, there is little reason to open the chatbot.
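For those documentation and threat-research questions, the interaction is typically a single prompt and response. A minimal sketch, assuming an OpenAI-compatible endpoint (vendor copilots expose their own interfaces, and the model name here is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; vendor copilots use their own models
    messages=[
        {"role": "system",
         "content": "You are a security research assistant."},
        {"role": "user",
         "content": "Summarise the known exploitation paths for "
                    "CVE-2024-3400 and suggest detection opportunities."},
    ],
)
print(response.choices[0].message.content)
```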

What is considered table stakes for GenAI capabilities in security tools?

Outside of the chatbot use case, there are a few common ways that GenAI is implemented in security tools today. In most cases, they are directly integrated into the analyst experience. Most often, this looks like the following (a sketch of the summarisation case follows the list):

  • Summarisation: providing a summary of alerts, vulnerabilities and risks.
  • Report writing: writing up reports on threat intelligence, incidents, the latest risks, and so on.
  • Code writing: generating patches, exploits, queries or other code.
  • Script analysis: understanding and explaining code or a script.
  • Language translation: translating between natural languages, query languages or code.
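As an illustration of the summarisation case, here is a minimal sketch of GenAI invoked inside a product function (the alert fields and model name are illustrative). Note that this is the “invocation of an LLM in a function” pattern discussed below, not an AI agent:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an OpenAI-compatible endpoint

def summarise_alert(alert: dict) -> str:
    """Turn a raw alert into a short analyst-facing summary.

    This is GenAI embedded in a product feature, not an AI agent:
    one prompt, one response, no state or multi-step behaviour.
    """
    prompt = (
        "Summarise this security alert in two sentences for a SOC "
        "analyst, highlighting the affected asset and suspected "
        "technique:\n" + json.dumps(alert, indent=2)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

alert = {
    "rule": "Possible credential dumping",
    "host": "WS-0142",
    "process": "rundll32.exe comsvcs.dll MiniDump",
    "severity": "high",
}
print(summarise_alert(alert))
```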

What are AI agents used for in security?

The past year and a half has seen a true step change in GenAI use cases for security. The introduction of AI agents, particularly for triage and investigation, is paving the way for major changes to how practitioners work.

AI agents are narrowly focused tools that follow strict instructions to carry out specific tasks. The agent is limited in what it can do, and it reacts to defined triggers, such as receiving a specific alert or indicator of compromise to evaluate.

It’s very important to note that invoking AI in a function is not the same thing as an AI agent. For example, if a supplier has a feature in its product that builds an incident summary using generative AI, that is not necessarily an AI agent. It could simply be an invocation of an LLM in a particular function.

A specific focus and task, the ability to manage state (performing multiple steps while maintaining memory), and encapsulation are what differentiate an AI agent from an invocation in a function.

There are many AI agents on the market today, like those from CrowdStrike, ReliaQuest, Intezer and Red Canary, among others. These AI agents are task agents – they accomplish specific tasks, often within the incident response process. Task agents are very good at doing one particular thing because they are trained on specific data and are given a series of prompts that are tested and validated to ensure that they accomplish the correct task each time.

For example, a triage agent for phishing may have built-in prompts that tell it to evaluate any email it is given: extract all indicators of compromise, check their reputation, then provide a verdict and a summary of its findings.
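A minimal sketch of what such a task agent’s skeleton might look like, with the defined trigger, fixed steps and state described above. The IOC extraction and reputation lookup are stubs where a real product would call an LLM and threat-intelligence services:

```python
from dataclasses import dataclass, field

@dataclass
class TriageState:
    """State carried across the agent's steps (its working memory)."""
    iocs: list = field(default_factory=list)
    reputations: dict = field(default_factory=dict)
    verdict: str = "undetermined"

class PhishingTriageAgent:
    """Hypothetical task agent: one narrow job, a defined trigger, fixed steps."""

    def handle(self, email: dict) -> TriageState:
        """Trigger: a suspicious email arrives in the triage queue."""
        state = TriageState()
        state.iocs = self.extract_iocs(email)                    # step 1
        state.reputations = {i: self.check_reputation(i)         # step 2
                             for i in state.iocs}
        state.verdict = ("malicious"                             # step 3
                         if "malicious" in state.reputations.values()
                         else "benign")
        return state

    def extract_iocs(self, email: dict) -> list:
        # Stub: a real agent would prompt an LLM or run parsers to pull
        # URLs, domains, hashes and sender infrastructure from the email.
        return email.get("domains", [])

    def check_reputation(self, ioc: str) -> str:
        # Stub: a real agent would query threat-intelligence services here.
        return "malicious" if ioc.endswith(".evil.example") else "clean"

agent = PhishingTriageAgent()
result = agent.handle({"domains": ["login.evil.example"]})
print(result.verdict)  # -> malicious
```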

Early data shows that, through thorough training, rigorous testing and iterative improvement of their prompts, triage agents like these have been very successful at automatically resolving false positives in specific cases.

Importantly, combining use case-specific (triage, investigation, and so on) and domain-specific (endpoint, identity, email, and so on) task agents must come before attempts to solve bigger problems, such as building an AI that can complete the entire incident response lifecycle.

It’s a lot like the transition we faced when moving to the cloud – instead of building a monolith, building microservices produced a more scalable, reliable and accurate outcome. Similarly, task agents that are specific to the use case they accomplish and the domain they are built for deliver better results. This leads us to the next phase: agentic AI.

What is agentic AI used for in security tools?

Agentic AI is a system of AI task agents working together and communicating to accomplish a broader goal. The agents communicate via agent-to-agent interactions.

An agentic system for security operations could look like a combination of triage agents, investigation agents and response agents. For example, an agentic system could orchestrate a phishing triage agent to validate a true positive phishing attack, then work with an endpoint triage agent and an endpoint investigation agent to verify that the phishing attack landed on an endpoint and escalated privileges.

From there, the agents can provide context to an endpoint response agent, which will then provide the analyst with the information they need to make an informed decision for response.
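A hypothetical sketch of that hand-off, with each agent as a callable that enriches a shared case context and passes it on. The names and fields are illustrative, not any vendor’s agent-to-agent protocol:

```python
from typing import Callable

Context = dict
Agent = Callable[[Context], Context]

def phishing_triage(ctx: Context) -> Context:
    ctx["phishing_verdict"] = "true_positive"   # validated phishing attack
    return ctx

def endpoint_triage(ctx: Context) -> Context:
    if ctx.get("phishing_verdict") == "true_positive":
        ctx["endpoint_hit"] = "WS-0142"         # attack landed on this host
    return ctx

def endpoint_investigation(ctx: Context) -> Context:
    if "endpoint_hit" in ctx:
        ctx["privilege_escalation"] = True      # confirmed escalation
    return ctx

def endpoint_response(ctx: Context) -> Context:
    # Surfaces findings so the analyst makes the final response decision.
    ctx["recommendation"] = "isolate host, reset credentials"
    return ctx

PIPELINE: list = [phishing_triage, endpoint_triage,
                  endpoint_investigation, endpoint_response]

case: Context = {"email_id": "msg-88412"}
for agent in PIPELINE:
    case = agent(case)

print(case["recommendation"])  # -> isolate host, reset credentials
```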

Don’t trust the hype: This is a work in progress and far from ready today

While agentic systems may sound like a panacea, right now, security tools are not able to deliver across use cases and domains. Most are limited to a handful of use cases and a handful of domains (if that), and many of these capabilities are not generally available.

And even those that are generally available still have limitations. Getting the right data at the right time to do triage and investigation well is still difficult. Getting Model Context Protocol (MCP) servers to work together well and securely is difficult, and far from seamless. Furthermore, ensuring that AI agents deliver trusted and accurate output consistently is not a solved problem – it is very difficult to ensure the quality of a non-deterministic system at scale.


Allie Mellen is a principal analyst at Forrester.


