Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection

The Hacker News
December 26, 2025


Dec 26, 2025Ravie LakshmananAI Security / DevSecOps

A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection.

LangChain Core (i.e., langchain-core) is the foundational Python package of the LangChain ecosystem, providing the base interfaces and model-agnostic abstractions for building applications powered by LLMs.

The vulnerability, tracked as CVE-2025-68664, carries a CVSS score of 9.3 out of 10.0. Security researcher Yarden Porat has been credited with reporting the vulnerability on December 4, 2025. It has been codenamed LangGrinch.

“A serialization injection vulnerability exists in LangChain’s dumps() and dumpd() functions,” the project maintainers said in an advisory. “The functions do not escape dictionaries with ‘lc’ keys when serializing free-form dictionaries.”

“The ‘lc’ key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data.”

According to Cyata researcher Porat, the crux of the problem has to do with the two functions failing to escape user-controlled dictionaries containing “lc” keys. The “lc” marker represents LangChain objects in the framework’s internal serialization format.

“So once an attacker is able to make a LangChain orchestration loop serialize and later deserialize content including an ‘lc’ key, they would instantiate an unsafe arbitrary object, potentially triggering many attacker-friendly paths,” Porat said.
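The confusion Porat describes can be sketched with a deliberately simplified toy serializer (this is illustrative pseudo-LangChain, not the library's actual code; the Greeting class and function names are hypothetical). Because free-form user dictionaries containing an "lc" key are not escaped, they round-trip into real objects:

```python
import json

class Greeting:
    """Stand-in for any class the deserializer is willing to construct."""
    def __init__(self, text):
        self.text = text

def toy_dumps(obj):
    # Known objects get an "lc" envelope marking them as serialized objects.
    if isinstance(obj, Greeting):
        return json.dumps({"lc": 1, "type": "Greeting", "kwargs": {"text": obj.text}})
    # BUG (mirroring the advisory): user dicts with an "lc" key are NOT escaped,
    # so they are indistinguishable from the envelope above.
    return json.dumps(obj)

def toy_loads(data):
    obj = json.loads(data)
    # Anything carrying the "lc" marker is trusted as a serialized object.
    if isinstance(obj, dict) and obj.get("lc") == 1:
        return Greeting(**obj["kwargs"])  # attacker picks the constructor args
    return obj

# Attacker-controlled "plain data" that mimics the internal envelope:
user_data = {"lc": 1, "type": "Greeting", "kwargs": {"text": "injected!"}}
roundtripped = toy_loads(toy_dumps(user_data))
print(type(roundtripped).__name__)  # plain user data came back as an object
```

The essence of the flaw is that one serialize/deserialize cycle silently promotes attacker data into an object instantiation.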

This could have various outcomes, including:

  • Secret extraction from environment variables when deserialization is performed with "secrets_from_env=True" (previously the default)
  • Instantiation of classes within pre-approved trusted namespaces, such as langchain_core, langchain, and langchain_community
  • Potentially even arbitrary code execution via Jinja2 templates

What’s more, the escaping bug enables the injection of LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata via prompt injection.

The patch released by LangChain introduces restrictive new defaults in load() and loads() via an allowlist parameter, "allowed_objects," which lets users specify exactly which classes may be serialized or deserialized. In addition, Jinja2 templates are blocked by default, and "secrets_from_env" now defaults to "False," disabling automatic secret loading from the environment.
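The allowlist approach described in the patch can be sketched as follows (a minimal illustration of the mitigation pattern, not the real langchain-core implementation; the registry and class names are hypothetical, though the "allowed_objects" parameter name follows the advisory):

```python
import json

class Greeting:
    def __init__(self, text):
        self.text = text

# Only classes registered here can ever be constructed.
REGISTRY = {"Greeting": Greeting}

def safe_loads(data, allowed_objects=()):
    obj = json.loads(data)
    if isinstance(obj, dict) and obj.get("lc") == 1:
        cls_name = obj.get("type")
        # Restrictive default: with an empty allowlist, nothing deserializes.
        if cls_name not in allowed_objects or cls_name not in REGISTRY:
            raise ValueError(f"deserialization of {cls_name!r} not permitted")
        return REGISTRY[cls_name](**obj.get("kwargs", {}))
    return obj

payload = json.dumps({"lc": 1, "type": "Greeting", "kwargs": {"text": "hi"}})

ok = safe_loads(payload, allowed_objects=("Greeting",))  # explicitly allowed
print(ok.text)

try:
    safe_loads(payload)  # default: empty allowlist, so this is rejected
except ValueError as e:
    print("blocked:", e)
```

Flipping the default from "construct anything marked with 'lc'" to "construct nothing unless explicitly allowed" is what turns the envelope marker from an attack surface into an opt-in feature.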

The following versions of langchain-core are affected by CVE-2025-68664 –

  • >= 1.0.0, < 1.2.5 (Fixed in 1.2.5)
  • < 0.3.81 (Fixed in 0.3.81)

It’s worth noting that there exists a similar serialization injection flaw in LangChain.js that also stems from not properly escaping objects with “lc” keys, thereby enabling secret extraction and prompt injection. This vulnerability has been assigned the CVE identifier CVE-2025-68665 (CVSS score: 8.6).

It impacts the following npm packages –

  • @langchain/core >= 1.0.0, < 1.1.8 (Fixed in 1.1.8)
  • @langchain/core < 0.3.80 (Fixed in 0.3.80)
  • langchain >= 1.0.0, < 1.2.3 (Fixed in 1.2.3)
  • langchain < 0.3.37 (Fixed in 0.3.37)

Given the criticality of the vulnerability, users are advised to update to a patched version as soon as possible.

“The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized/deserialized in streaming operations,” Porat said. “This is exactly the kind of ‘AI meets classic security’ intersection where organizations get caught off guard. LLM output is an untrusted input.”
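One defensive pattern consistent with that advice is to treat LLM output as untrusted and escape any reserved "lc" keys in free-form fields before they reach a serializer. The sketch below is illustrative only; the field names follow the article, and the escape marker is a hypothetical choice:

```python
def escape_untrusted(value):
    """Recursively rename reserved 'lc' keys in attacker-reachable data
    (e.g. metadata, additional_kwargs, response_metadata) so they can
    never be mistaken for a serialized-object envelope."""
    if isinstance(value, dict):
        return {("__escaped_lc__" if k == "lc" else k): escape_untrusted(v)
                for k, v in value.items()}
    if isinstance(value, list):
        return [escape_untrusted(v) for v in value]
    return value

# A response_metadata dict an attacker shaped via prompt injection:
llm_response_metadata = {"lc": 1, "type": "constructor", "nested": {"lc": 2}}
print(escape_untrusted(llm_response_metadata))
```

Escaping at the trust boundary, before serialization, is the same idea the upstream fix applies inside dumps() and dumpd() itself.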



Source link

Tags: computer security, cyber attacks, cyber news, cyber security news, cyber security news today, cyber security updates, cyber updates, data breach, hacker news, hacking news, how to hack, information security, network security, ransomware malware, software vulnerability, the hacker news

Copyright © 2025 | Powered By Porpholio
