How to Steer AI Adoption: A CISO Guide

By The Hacker News
February 12, 2025

CISOs are finding themselves more involved in AI teams, often leading the cross-functional effort and shaping AI strategy. But there aren’t many resources to guide them on what their role should look like or what they should bring to these meetings.

We’ve pulled together a framework for security leaders to help push AI teams and committees further in their AI adoption—providing them with the necessary visibility and guardrails to succeed. Meet the CLEAR framework.

If security teams want to play a pivotal role in their organization’s AI journey, they should adopt the five steps of CLEAR to show immediate value to AI committees and leadership:

  • C – Create an AI asset inventory
  • L – Learn what users are doing
  • E – Enforce your AI policy
  • A – Apply AI use cases
  • R – Reuse existing frameworks

If you’re looking for a solution to help take advantage of GenAI securely, check out Harmonic Security.

Alright, let’s break down the CLEAR framework.

Create an AI Asset Inventory

A foundational requirement across regulatory and best-practice frameworks—including the EU AI Act, ISO 42001, and NIST AI RMF—is maintaining an AI asset inventory.

Despite its importance, organizations struggle with manual, unsustainable methods of tracking AI tools.

Security teams can take six key approaches to improve AI asset visibility:

  1. Procurement-Based Tracking – Effective for monitoring new AI acquisitions but fails to detect AI features added to existing tools.
  2. Manual Log Gathering – Analyzing network traffic and logs can help identify AI-related activity, though it falls short for SaaS-based AI (a log-scan sketch follows this list).
  3. Cloud Security and DLP – CASB and DLP solutions such as Netskope offer some visibility, but enforcing policies remains a challenge.
  4. Identity and OAuth – Reviewing access logs from providers like Okta or Entra can help track AI application usage.
  5. Extending Existing Inventories – Classifying AI tools by risk keeps them aligned with enterprise governance, but AI adoption often moves faster than manual classification.
  6. Specialized Tooling – Continuous monitoring tools, such as Harmonic Security, detect AI usage, including personal and free accounts, ensuring comprehensive oversight.
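
To make the log-gathering approach concrete, here is a minimal sketch that bootstraps an inventory from a proxy log. It assumes a CSV export with a `host` column, and the domain-to-tool map is illustrative, not an exhaustive or maintained list.

```python
import csv
from collections import Counter

# Illustrative (not exhaustive) map of GenAI domains to tool names.
# A real deployment would pull from a maintained feed, not a hardcoded dict.
GENAI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "perplexity.ai": "Perplexity",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per AI tool in a proxy log (assumed CSV with a 'host' column)."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            for domain, tool in GENAI_DOMAINS.items():
                # Match the domain itself and any subdomain of it.
                if host == domain or host.endswith("." + domain):
                    hits[tool] += 1
    return hits

if __name__ == "__main__":
    # "proxy.csv" is a placeholder path for your proxy or firewall log export.
    for tool, count in scan_proxy_log("proxy.csv").most_common():
        print(f"{tool}: {count} requests")
```

Even a crude scan like this gives the AI committee a first list of tools to classify, which the later approaches can then refine.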

Learn: Shift to Proactive Identification of AI Use Cases

Security teams should proactively identify AI applications that employees are using instead of blocking them outright—users will find workarounds otherwise.

By tracking why employees turn to AI tools, security leaders can recommend safer, compliant alternatives that align with organizational policies. This insight is invaluable in AI team discussions.
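
As a concrete illustration, the sketch below rolls up observed AI usage by declared purpose and suggests a sanctioned alternative where one exists. The event records and the alternatives catalog are hypothetical placeholders for whatever telemetry and approved-tool list you actually maintain.

```python
from collections import defaultdict

# Hypothetical usage observations: (user, tool, declared purpose). In practice
# these would come from the inventory and monitoring sources described earlier.
events = [
    ("alice", "ChatGPT", "summarize customer emails"),
    ("bob", "ChatGPT", "draft marketing copy"),
    ("carol", "Claude", "summarize customer emails"),
]

# Hypothetical catalog of sanctioned, compliant alternatives per purpose.
SANCTIONED_ALTERNATIVES = {
    "summarize customer emails": "approved internal summarizer (no data egress)",
}

# Group usage by why employees reached for AI, not just which tool they used.
by_purpose = defaultdict(list)
for user, tool, purpose in events:
    by_purpose[purpose].append((user, tool))

for purpose, usages in by_purpose.items():
    alt = SANCTIONED_ALTERNATIVES.get(purpose)
    action = f"recommend {alt}" if alt else "flag for targeted AI literacy training"
    print(f"{purpose}: {len(usages)} user(s) -> {action}")
```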

Second, once you know how employees are using AI, you can deliver better-targeted training. These programs will become increasingly important as the EU AI Act rolls out, since it mandates that organizations provide AI literacy programs:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems…”

Enforce an AI Policy

Most organizations have implemented AI policies, yet enforcement remains a challenge. Many simply issue the policy and hope employees follow it. While this approach avoids friction, it provides little enforcement or visibility, leaving organizations exposed to security and compliance risks.

Typically, security teams take one of two approaches:

  1. Secure Browser Controls – Some organizations route AI traffic through a secure browser to monitor and manage usage. This approach covers most generative AI traffic but has drawbacks—it often restricts copy-paste functionality, driving users to alternative devices or browsers to bypass controls.
  2. DLP or CASB Solutions – Others leverage existing Data Loss Prevention (DLP) or Cloud Access Security Broker (CASB) investments to enforce AI policies. These solutions can help track and regulate AI tool usage, but traditional regex-based methods often generate excessive noise (see the sketch after this list). Additionally, site categorization databases used for blocking are frequently outdated, leading to inconsistent enforcement.
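
To see why regex-only inspection gets noisy, consider a naive card-number rule and one cheap way to cut its false positives. This is a minimal illustrative sketch; the pattern and sample prompts are made up.

```python
import re

# Naive "credit card" rule: any 13-16 digit run. This is the kind of pattern
# that makes regex-only DLP noisy: order IDs and tracking numbers match too.
CARD_RE = re.compile(r"\b\d{13,16}\b")

def luhn_ok(num: str) -> bool:
    """Luhn checksum: one cheap filter to cut false positives on card-like numbers."""
    total = 0
    for i, ch in enumerate(reversed(num)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

prompts = [
    "Summarize the dispute on card 4111111111111111",       # test card; passes Luhn
    "Order 1234567890123 shipped late, draft an apology",   # harmless; fails Luhn
]

for p in prompts:
    for match in CARD_RE.findall(p):
        verdict = "likely card (block)" if luhn_ok(match) else "false positive (allow)"
        print(f"{match}: {verdict}")
```

The bare regex flags both prompts; adding context-aware checks like the Luhn test is how modern engines reduce the noise that drives users to bypass controls.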

Striking the right balance between control and usability is key to successful AI policy enforcement.

And if you need help building a GenAI policy, check out our free generator: GenAI Usage Policy Generator.

Apply AI Use Cases for Security

Most of this discussion is about securing AI, but let’s not forget that the AI team also wants to hear about cool, impactful AI use cases across the business. What better way to show you care about the AI journey than to actually implement them yourself?

AI use cases for security are still in their infancy, but security teams are already seeing benefits in detection and response, DLP, and email security. Documenting these wins and bringing them to AI team meetings can be powerful, especially when paired with KPIs for productivity and efficiency gains.
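
As one illustrative example, the sketch below sends a SIEM alert to an internal LLM gateway for a two-sentence triage summary. The endpoint URL and response schema are hypothetical; substitute whatever sanctioned service your organization runs, and measure time-to-triage before and after to build the KPI evidence mentioned above.

```python
import json
import requests  # third-party; pip install requests

# Hypothetical internal LLM gateway; swap in your sanctioned endpoint.
# Nothing here is a specific vendor API.
LLM_GATEWAY = "https://llm-gateway.internal.example/v1/complete"

def triage_alert(alert: dict) -> str:
    """Ask the gateway for a short triage summary of a SIEM alert."""
    prompt = (
        "Summarize this SIEM alert in two sentences and suggest one next step:\n"
        + json.dumps(alert, indent=2)
    )
    resp = requests.post(LLM_GATEWAY, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["completion"]  # assumed response schema

alert = {
    "rule": "Impossible travel",
    "user": "jsmith",
    "src_ips": ["203.0.113.7", "198.51.100.22"],
    "window_minutes": 12,
}
print(triage_alert(alert))
```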

Reuse Existing Frameworks

Instead of reinventing governance structures, security teams can integrate AI oversight into existing frameworks like NIST AI RMF and ISO 42001.

A practical example is NIST CSF 2.0, which now includes the “Govern” function, covering:

  • Organizational AI risk management strategies
  • Cybersecurity supply chain considerations
  • AI-related roles, responsibilities, and policies

Given this expanded scope, NIST CSF 2.0 offers a robust foundation for AI security governance.
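
One lightweight way to reuse what you already have is to record how each CLEAR activity maps onto the CSF 2.0 Govern (GV) categories in your existing governance tooling. The mapping below is an illustrative reading, not an official NIST crosswalk.

```python
# Illustrative mapping of CLEAR activities onto NIST CSF 2.0 Govern (GV)
# categories. One possible reading, not an official NIST crosswalk.
CLEAR_TO_CSF_GOVERN = {
    "Create an AI asset inventory": "GV.OC - Organizational Context",
    "Learn what users are doing":   "GV.RM - Risk Management Strategy",
    "Enforce your AI policy":       "GV.PO - Policy",
    "Apply AI use cases":           "GV.RR - Roles, Responsibilities, and Authorities",
    "Reuse existing frameworks":    "GV.OV - Oversight",
}

for activity, category in CLEAR_TO_CSF_GOVERN.items():
    print(f"{activity:32s} -> {category}")
```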

Take a Leading Role in AI Governance for Your Company

Security teams have a unique opportunity to take a leading role in AI governance by remembering CLEAR:

  • Creating AI asset inventories
  • Learning user behaviors
  • Enforcing AI policies
  • Applying AI use cases for security
  • Reusing existing frameworks

By following these steps, CISOs can demonstrate value to AI teams and play a crucial role in their organization’s AI strategy.

To learn more about overcoming GenAI adoption barriers, check out Harmonic Security.

This article is a contributed piece from one of our valued partners.
