Ptechhub
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs
No Result
View All Result
PtechHub
No Result
View All Result

How CISOs can adapt cyber strategies for the age of AI | Computer Weekly

By Computer Weekly by By Computer Weekly
August 11, 2025
Home Uncategorized
Share on FacebookShare on Twitter


The age of artificial intelligence, and in particular, generative AI, has arrived with remarkable speed. Enterprises are embedding AI across functions: from customer service bots and document summarisation engines to AI-driven threat detection and decision support tools.

But as adoption accelerates, CISOs are now facing a new class of digital asset in the form of the AI model, that merges intellectual property, data infrastructure, critical business logic and potential attack surface into one complex, evolving entity.

Traditional security measures may no longer be enough to cope in this new reality. In order to safeguard enterprise operations, reputation and data integrity in an AI-first world, security leaders may need to rethink their cyber security strategies.

‘Living digital assets’

First and foremost, AI systems and GenAI models should be treated as living digital assets. Unlike static data or fixed infrastructure, these models continuously evolve through retraining, fine-tuning and exposure to new prompts and data inputs.

This means that a model’s behaviour, decision-making logic and potential vulnerabilities can shift over time, often in opaque ways.

CISOs must therefore apply a mindset of continuous governance, scrutiny and adaptation. AI security is not simply a subset of data security or application security; it is its own domain requiring purpose-built governance, monitoring and incident response capabilities.

A critical step is redefining how organisations classify data within the AI lifecycle.

Traditionally, data security policies have focused on protecting structured data at rest, in transit or in use. However, with AI, model inputs, such as user prompts or retrieved knowledge and outputs, such as generated content or recommendations, must also be treated as critical assets.

Not only do these inputs and outputs carry the risk of data leakage, they can also be manipulated in ways that poison models, skew outputs or expose sensitive internal logic. Applying classification labels, access controls and audit trails across training data, inference pipelines and generated results is therefore essential to managing these risks.

Supply chain risk management

The security perimeter also expands when enterprises rely on third-party AI tools or APIs. Supply chain risk management needs a fresh lens when AI models are developed externally or sourced from open platforms.

Vendor assessments must go beyond the usual checklist of encryption standards and breach history. Instead, they should require visibility into training data sources, model update mechanisms and security testing results. CISOs should push vendors to demonstrate adherence to secure AI development practices, including bias mitigation, adversarial robustness and provenance tracking.

Without this due diligence, organisations risk importing opaque black boxes that may behave unpredictably; or worse, maliciously, under adversarial pressure.

Internally, establishing a governance framework that defines acceptable AI use is paramount. Enterprises should determine who can use AI, for what purposes and under which constraints.

These policies should be backed by technical controls, from access gating and API usage restrictions to logging and monitoring. Procurement and development teams should also adopt explainability and transparency as core requirements. More broadly, it is simply not enough for an AI system to perform well; stakeholders must understand how and why it reaches its conclusions, particularly when these conclusions influence high-stakes decisions.

Turning to zero-trust

From an infrastructure standpoint, CISOs that embed zero-trust principles into the architecture supporting AI systems will help future-proof operations.

This means segmenting development environments, enforcing least-privilege access to model weights and inference endpoints and continuously verifying both human and machine identities throughout the AI pipeline.

Many AI workloads, especially those trained on sensitive internal data, are attractive targets for espionage, insider threats and exfiltration. Identity-aware access control and real-time monitoring can help ensure that only authorised and authenticated actors can interact with critical AI resources.

AI-safe training

One of the most significant emerging vulnerabilities lies in the end-user interaction with GenAI tools. While these tools promise productivity gains and innovation, they can also become conduits for data loss, hallucinated outputs as well as the basis for social engineering. Employees may unknowingly paste sensitive information into public AI chatbots or act on flawed AI-generated advice without understanding its limitations.

CISOs should help counter this with comprehensive training programmes that go beyond generic cyber security awareness. Staff should be educated on AI-specific threats such as prompt injection attacks, model bias and synthetic identity creation. They must also be taught to verify AI outputs and avoid blind trust in machine-generated content.

Incident response

Organisations can also extend their own incident response by integrating AI threat scenarios into their incident response playbooks.

Responding to a data breach caused by prompt leakage or an AI hallucination that misinforms decision-making requires different protocols than a conventional malware incident, so tabletop exercises should be updated to include simulations of model manipulation, adversarial input attacks and the theft of AI models or training datasets, for example.

Preparedness is key: if AI systems are central to business operations, then threats to those systems must be treated with equal urgency as those targeting networks or endpoints.

Enterprise-approved platforms

In parallel, organisations should implement technical safeguards to limit the use of public GenAI tools in sensitive contexts. Whether through web filtering, browser restrictions or policy enforcement, businesses must guide employees towards enterprise-approved AI platforms that have been vetted for compliance, security and data residency. Shadow AI, or the unauthorised use of GenAI tools, poses a growing risk and must be tackled with the same rigour as shadow IT.

Insider threat

Finally, insider threat management must evolve. AI development teams often possess elevated access to sensitive datasets and proprietary model architectures.

These privileges, if abused, could lead to significant intellectual property theft or inadvertent exposure. Behavioural analytics, strong activity monitoring and enforced separation of duties are vital to reducing this risk. As AI becomes more deeply embedded into the business, the human risks surrounding its development and deployment cannot be overlooked.

In the AI era, the role of the CISO is undergoing profound change. While safeguarding systems and data are of course core to the role, now security leaders must help their organisations ensure that AI itself is trustworthy, resilient and aligned with organisational values.

This requires a shift in both mindset and strategy, recognising AI not just as a tool, but as a strategic asset that must be secured, governed and respected. Only then can enterprises harness the full potential of AI safely, confidently and responsibly.

Martin Riley is chief technology officer at Bridewell Consulting.



Source link

By Computer Weekly

By Computer Weekly

Next Post
⚡ Weekly Recap: BadCam Attack, WinRAR 0-Day, EDR Killer, NVIDIA Flaws, Ransomware Attacks & More

⚡ Weekly Recap: BadCam Attack, WinRAR 0-Day, EDR Killer, NVIDIA Flaws, Ransomware Attacks & More

Recommended.

Coveo Announces Date of Fiscal First Quarter 2026 Conference Call

Coveo Announces Date of Fiscal First Quarter 2026 Conference Call

July 10, 2025
SatSure et Dhruva Space concluent une alliance stratégique pour fournir des solutions complètes d’observation de la Terre en tant que service (EOaaS)

SatSure et Dhruva Space concluent une alliance stratégique pour fournir des solutions complètes d’observation de la Terre en tant que service (EOaaS)

July 1, 2025

Trending.

⚡ Weekly Recap: Oracle 0-Day, BitLocker Bypass, VMScape, WhatsApp Worm & More

⚡ Weekly Recap: Oracle 0-Day, BitLocker Bypass, VMScape, WhatsApp Worm & More

October 6, 2025
Cloud Computing on the Rise: Market Projected to Reach .6 Trillion by 2030

Cloud Computing on the Rise: Market Projected to Reach $1.6 Trillion by 2030

August 1, 2025
Stocks making the biggest moves midday: Autodesk, PayPal, Rivian, Nebius, Waters and more

Stocks making the biggest moves midday: Autodesk, PayPal, Rivian, Nebius, Waters and more

July 14, 2025
The Ultimate MSP Guide to Structuring and Selling vCISO Services

The Ultimate MSP Guide to Structuring and Selling vCISO Services

February 19, 2025
Translators’ Voices: China shares technological achievements with the world for mutual benefit

Translators’ Voices: China shares technological achievements with the world for mutual benefit

June 3, 2025

PTechHub

A tech news platform delivering fresh perspectives, critical insights, and in-depth reporting — beyond the buzz. We cover innovation, policy, and digital culture with clarity, independence, and a sharp editorial edge.

Follow Us

Industries

  • AI & ML
  • Cybersecurity
  • Enterprise IT
  • Finance
  • Telco

Navigation

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Subscribe to Our Newsletter

  • About
  • Advertise
  • Privacy & Policy
  • Contact

Copyright © 2025 | Powered By Porpholio

No Result
View All Result
  • News
  • Industries
    • Enterprise IT
    • AI & ML
    • Cybersecurity
    • Finance
    • Telco
  • Brand Hub
    • Lifesight
  • Blogs

Copyright © 2025 | Powered By Porpholio