Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.5.5 up to but not including 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter that is used in code before it is properly validated against the chat template. With crafted chat_template_kwargs values, an attacker can block the API server's processing for long periods of time, delaying all other requests. This issue has been patched in version 0.11.1.
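For context, here is a minimal sketch of how a client supplies chat_template_kwargs to the affected endpoint. The endpoint path and the chat_template_kwargs parameter name come from the advisory above; the model name, server address, and the kwargs contents are illustrative assumptions only, not a working exploit.

```python
import requests

# Sketch of the request shape. In affected versions (0.5.5 up to 0.11.1),
# the chat_template_kwargs dict is forwarded to chat-template rendering
# before it is validated against the chat template.
payload = {
    "model": "my-model",  # assumed model name served by this instance
    "messages": [{"role": "user", "content": "hi"}],
    "chat_template_kwargs": {
        # Hypothetical key/value; the advisory does not specify which
        # kwargs an attacker would use to stall processing.
        "some_template_variable": "attacker-controlled value",
    },
}

# Default vLLM OpenAI-compatible server address is assumed here.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json=payload,
    timeout=30,
)
print(resp.status_code)
```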
INFO
Published Date: 2025-11-21T01:21:29.546Z
Last Modified: 2025-11-21T01:21:29.546Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2025-62426 vulnerability.
No data.
CVSS Vulnerability Scoring System
[Chart: per-vector CVSS values for Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality Impact, Integrity Impact, and Availability Impact. No data.]