Description

LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before version 1.83.7, the POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process. The endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. This issue has been patched in version 1.83.7.
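The advisory does not name the template engine, but the failure mode it describes is classic server-side template injection (SSTI): rendering a user-supplied template in an unsandboxed engine lets attribute chains reach Python internals and execute code. The sketch below is illustrative only, using Jinja2 as a representative engine and a well-known payload shape; it is not LiteLLM's actual code.

```python
# Illustrative SSTI sketch (not LiteLLM's code): rendering untrusted
# templates without a sandbox vs. with one, using Jinja2 as an example.
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment, SecurityError

# A classic payload: walk from a built-in template global (`cycler`)
# through __init__.__globals__ to the `os` module, then run a command.
payload = "{{ cycler.__init__.__globals__.os.popen('id').read() }}"

# An unsandboxed environment resolves the attribute chain and would
# execute the shell command inside the rendering process:
unsafe = Environment()
# unsafe.from_string(payload).render()  # would run `id` on the host

# A sandboxed environment refuses access to underscore attributes,
# so the same template fails with SecurityError instead of executing.
safe = SandboxedEnvironment()
try:
    safe.from_string(payload).render()
    print("rendered (sandbox failed!)")
except SecurityError as exc:
    print(f"blocked by sandbox: {exc}")
```

Sandboxing alone is a mitigation, not a complete fix; the patched release also narrows what the endpoint will accept.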

INFO

Published Date: 2026-05-08T03:36:58.648Z
Last Modified: 2026-05-08T14:36:57.479Z
Source: GitHub_M
AFFECTED PRODUCTS

The following products are affected by CVE-2026-42203.

Vendor: Berriai
Product: Litellm
CVSS Vulnerability Scoring System

[Per-metric values were presented in a chart not reproduced here. The listed CVSS v4.0 vector components were: Attack Vector, Attack Complexity, Attack Requirements, Privileges Required, User Interaction, and Confidentiality/Integrity/Availability impacts for both the Vulnerable System and the Subsequent System.]