
Proliferation of on-premise GenAI platforms is widening security risks | Computer Weekly

By Computer Weekly
August 4, 2025


The three months to the end of May this year saw a 50% spike in the use of generative artificial intelligence (GenAI) platforms among enterprise end users, according to a report. While security teams work to facilitate the safe adoption of software-as-a-service (SaaS) AI frameworks such as Azure OpenAI, Amazon Bedrock and Google Vertex AI, unsanctioned on-premise shadow AI now accounts for half of AI application adoption in the enterprise and is compounding security risks.

The study, compiled by data protection and threat prevention platform supplier Netskope, examined the growing shift among users towards on-premise GenAI platforms, which they are mostly using to build their own AI agents and applications.

These platforms, which include tools such as Ollama, LM Studio and Ramalama, are now the fastest-growing category of shadow AI, due to their relative ease of use and flexibility, said Netskope. But, in using them to expedite their projects, employees are granting the platforms access to enterprise data stores and leaving the doors wide open to data leakage or outright theft.

“The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using GenAI platforms and where they are building and deploying them,” said Ray Canzanese, director of Netskope Threat Labs.

“Security teams don’t want to hamper employee end users’ innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP [data loss prevention] policies to incorporate real-time user coaching elements.”

Probably the most popular way to use GenAI locally is to deploy a large language model (LLM) interface, which enables interaction with various models from the same “store front”.
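To illustrate the pattern, a locally deployed Ollama instance listens on port 11434 by default and exposes a plain REST API, so any script on the network can submit prompts to whichever models are pulled into the same front end. A minimal sketch (the model name is illustrative, and the call assumes Ollama is running locally):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Summarise this quarter's sales figures.")
# request.urlopen(req) would return the model's completion as JSON
```

Nothing in this exchange is authenticated by default, which is precisely the exposure the report describes.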

Ollama is the most popular of these frameworks by some margin. However, unlike the most widely used SaaS options, it does not include inbuilt authentication, which means users must go out of their way to deploy it behind a reverse proxy or a private access solution that is appropriately secured with fit-for-purpose authentication. This is not an easy ask for the average user.
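In practice, that means fronting the unauthenticated Ollama endpoint with a proxy that enforces credentials and TLS. A minimal nginx sketch, assuming Ollama on its default port 11434, an htpasswd file already provisioned, and illustrative hostnames and paths:

```nginx
server {
    listen 443 ssl;
    server_name ollama.internal.example.com;   # illustrative hostname

    ssl_certificate     /etc/nginx/tls/ollama.crt;
    ssl_certificate_key /etc/nginx/tls/ollama.key;

    location / {
        auth_basic           "Ollama - authorised users only";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:11434;
    }
}
```

Even this basic setup is more than most end users will configure unprompted, which is the report's point.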


Furthermore, while Azure OpenAI, Bedrock, Vertex AI and other SaaS platforms provide guardrails against model abuse, Ollama users must take steps themselves to prevent misuse.

Netskope said that while on-premise GenAI does have some benefits – for example, it can help organisations leverage pre-existing investment in GPU resources, or help them build tools that better interact with their other on-premise systems and datasets – these may well be outweighed by the fact that organisations using them bear sole responsibility for the security of their GenAI infrastructure, a responsibility a SaaS provider would otherwise share.

Netskope’s analysts are now tracking approximately 1,550 distinct GenAI SaaS applications, which its customers can easily identify by running focused searches for unapproved apps and personal logins within its platform for activity classed as “generative AI”. Another way to track usage is to monitor who is accessing AI marketplaces such as Hugging Face.

Besides identifying the use of such tools, IT and security leaders should consider formulating and enforcing policies that restrict employee access to approved services, blocking unapproved ones, implementing DLP to account for data sharing in GenAI tools, and adopting real-time user coaching to nudge users towards approved tools and sensible practice.

Adopting continuous monitoring of GenAI use and conducting an inventory of local GenAI infrastructure against frameworks provided by the likes of NIST, OWASP and Mitre is also advisable.
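An inventory of local GenAI infrastructure can start with something as simple as probing hosts for the default ports of common local LLM servers – Ollama defaults to 11434 and LM Studio's local server to 1234 (assuming those defaults have not been changed). A minimal sketch:

```python
import socket

# Default listening ports of common local LLM servers (assumes defaults unchanged)
DEFAULT_PORTS = {"ollama": 11434, "lm-studio": 1234}

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def inventory(host: str) -> dict[str, bool]:
    """Map each known local LLM server to whether its default port is open on host."""
    return {name: port_open(host, port) for name, port in DEFAULT_PORTS.items()}
```

A real deployment would sweep address ranges and feed results into the asset register being checked against the NIST, OWASP and Mitre frameworks.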

“Agentic shadow AI is like a person coming into your office every day, handling data, taking actions on systems, and all while not being background-checked or having security monitoring in place,” warned the report’s authors.


