Description

LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
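The core flaw is that a file path taken from a deserialized config dict is used directly, with only a suffix check. A minimal sketch of the kind of validation the patch adds (function name and base-directory parameter are hypothetical, not the actual patched API):

```python
from pathlib import Path


def safe_template_path(base_dir: str, template_path: str) -> Path:
    """Resolve a template path from a prompt config and ensure it stays
    inside base_dir.

    Sketch of a traversal mitigation, assuming the loader is given a
    trusted base directory for prompt files. Not the actual patch.
    """
    base = Path(base_dir).resolve()
    # Joining with an absolute path replaces `base` entirely in pathlib,
    # and ".." components can walk out of it, so resolve and re-check.
    candidate = (base / template_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"template path escapes base directory: {template_path}")
    # The extension check alone (all the vulnerable versions did) is not
    # enough: '../../home/user/.ssh/notes.txt' would pass a .txt filter.
    if candidate.suffix != ".txt":
        raise ValueError("only .txt templates are allowed")
    return candidate
```

Rejecting anything that resolves outside the trusted directory, rather than filtering on extension, closes both the `..`-traversal and absolute-path-injection cases described above.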

INFO

Published Date :

2026-03-31T02:01:49.320Z

Last Modified :

2026-03-31T18:04:59.283Z

Source :

GitHub_M

AFFECTED PRODUCTS

The following products are affected by the CVE-2026-34070 vulnerability.

Vendors         Products
Langchain
  • Langchain
Langchain-ai
  • Langchain

CVSS Vulnerability Scoring System

[CVSS chart omitted] The vector comprises the following metrics: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality Impact, Integrity Impact, Availability Impact.