Terrorist potential of generative AI ‘purely theoretical’ | Computer Weekly

By Computer Weekly
July 17, 2025


Generative artificial intelligence (GenAI) systems could assist terrorists in disseminating propaganda and preparing for attacks, according to the UK’s terror advisor, but the level of the threat remains “purely theoretical” without further evidence of its use in practice.

In his latest annual report, Jonathan Hall, the government’s independent reviewer of terrorism legislation, warned that while GenAI systems have the potential to be exploited by terrorists, how effective the technology will be in this context, and what to do about it, is currently an “open question”.

Commenting on the potential for GenAI to be deployed in a terror group’s propaganda activities, for example, Hall explained how the technology could significantly speed up the production of propaganda and amplify its dissemination, enabling terrorists to create easily shareable images, narratives and messaging with far fewer resources or constraints.

However, he also noted that terrorists “flooding” the information environment with AI-generated content is not a given, and that uptake could vary between groups because such content risks undermining their messaging.

“Depending on the importance of authenticity, the very possibility that text or image has been AI-generated may undermine the message. Reams of spam-like propaganda may prove a turn-off,” he said, adding that some terror groups, such as al-Qaeda, which “place a premium on authentic messages from senior leaders”, may avoid it and be reluctant to delegate propaganda functions to a bot.

“Conversely, it may be boom time for extreme right-wing forums, anti-Semites and conspiracy theorists who revel in creative nastiness.”

Similarly, on the technology’s potential to be used in attack planning, Hall said that while it has the potential to be of assistance, it is an open question as to how helpful current generative AI systems will be to terror groups in practice.

“In principle, GenAI is available to research key events and locations for targeting purposes, suggest methods of circumventing security and provide tradecraft on using or adapting weapons or terrorist cell-structure,” he said.

“Access to a suitable chatbot could dispense with the need to download online instructional material and make complex instructions more accessible … [while] GenAI could provide technical advice on avoiding surveillance or making knife-strikes more lethal, rather than relying on a specialist human contact.”

However, he added that “gains may be incremental rather than dramatic” and likely more relevant to lone attackers than organised groups.

Hall further added that while GenAI could be used to “extend attack methodology” – for example, via the identification and synthesis of harmful biological or chemical agents – this would also require the attacker to have prior expertise, skills and access to labs or equipment.

“GenAI’s effectiveness here has been doubted,” he said.

A similar point was made in the first International AI Safety Report, which was produced by a global cohort of nearly 100 artificial intelligence experts in the wake of the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in 2023.

It said that while new AI models can create step-by-step guides for creating pathogens and toxins that surpass PhD-level expertise, potentially lowering the barriers to developing biological or chemical weapons, it remains a “technically complex” process, meaning the “practical utility for novices remains uncertain”.

A further risk identified by Hall is the use of AI in the process of online radicalisation via chatbots, where he said the one-to-one interactions between the human and machine could create “a closed loop of terrorist radicalisation … most relevantly for lonely and unhappy individuals already disposed towards nihilism or looking for extreme answers and lacking real-world or online counterbalance”.

However, he noted that even if a model has no guardrails and has been trained on data “sympathetic to terrorist narratives”, the outputs will depend largely on what the user asks it.

Potential solutions?

In terms of legal solutions, Hall highlighted the difficulty of preventing GenAI from being used to assist terrorism, noting that “upstream liability” for those involved in the development of these systems is limited, as models can be used so broadly for many different, unpredictable purposes.

Instead, he suggested introducing “tools-based liability”, which would target AI tools specifically designed to aid terrorist activities.

Hall said while the government should consider legislating against the creation or possession of computer programs designed to stir up racial or religious hatred, he acknowledged that it would be difficult to prove that programs were specifically designed for this purpose.

He added that while developers could be prosecuted under UK terror laws if they did indeed create a terrorism-specific AI model or chatbot, “it seems unlikely that GenAI tools will be created specifically for generating novel forms of terrorist propaganda – it is far more likely that the capabilities of powerful general models will be harnessed”.

“I can foresee immense difficulties in proving that a chatbot [or GenAI model] was designed to produce narrow terrorism content. The better course would be an offence of making … a computer program specifically designed to stir up hatred on the grounds of race, religion or sexuality.”

In his reflections, Hall acknowledged that it remains to be seen exactly how AI will be used by terrorists and that the situation remains “purely theoretical”.

“Some will say, plausibly, that there is nothing new to see. GenAI is just another form of technology and, as such, it will be exploited by terrorists, like vans,” he said. “Without evidence that the current legislative framework is inadequate, there is no basis for adapting or extending it to deal with purely theoretical use cases. Indeed, the absence of GenAI-enabled attacks could suggest the whole issue is overblown.”

Hall added that even if some form of regulation is needed to avoid future harms, it could be argued that criminal liability is the least suitable option, especially given the political imperative to harness AI as a force for economic growth and other public benefits.

“Alternatives to criminal liability include transparency reporting, voluntary industry standards, third-party auditing, suspicious activity reporting, licensing, bespoke solutions like AI-watermarking, restrictions on advertising, forms of civil liability, and regulatory obligations,” he said.

While Hall expressed uncertainty around the extent to which terror groups would adopt generative AI, he concluded that the most likely effect of the technology was a general “social degradation” promoted by the spread of online disinformation.

“Although remote from bombs, shootings or blunt-force attacks, poisonous misrepresentations about government motives or against target demographics could lay the foundations for polarisation, hostility and eventual real-world terrorist violence,” he said. “But there is no role for terrorism legislation here because any link between GenAI-related content and eventual terrorism would be too indirect.”

While not covered in the report, Hall did acknowledge there could be further “indirect impacts” of GenAI on terrorism, as it could lead to widespread unemployment and create an unstable social environment “more conducive to terrorism”.
