Description
vLLM is an inference and serving engine for large language models (LLMs). Versions 0.1.0 up to, but not including, 0.19.0 contain a denial-of-service vulnerability in the vLLM OpenAI-compatible API server. Because the `n` parameter in the `ChatCompletionRequest` and `CompletionRequest` Pydantic models has no upper-bound validation, an unauthenticated attacker can send a single HTTP request with an astronomically large `n` value. Expanding that request blocks the Python asyncio event loop entirely and causes an immediate out-of-memory crash by allocating millions of request-object copies on the heap before the request even reaches the scheduling queue. This vulnerability is fixed in version 0.19.0.
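The class of fix described above can be sketched with Pydantic field constraints. This is a minimal illustration, not vLLM's actual code: the field set and the chosen limit (128) are assumptions made for the example.

```python
from pydantic import BaseModel, Field

class ChatCompletionRequest(BaseModel):
    """Illustrative request model with a bounded `n` parameter.

    The ge/le constraints make Pydantic reject out-of-range values at
    parse time, before any per-choice request objects are allocated.
    The limit of 128 is an assumption for this sketch.
    """
    model: str
    n: int = Field(default=1, ge=1, le=128)

# A normal request parses fine:
ok = ChatCompletionRequest(model="demo", n=4)
print(ok.n)

# An astronomically large n is rejected before any allocation happens:
try:
    ChatCompletionRequest(model="demo", n=10**9)
except Exception as exc:
    print("rejected:", type(exc).__name__)
```

Because validation fails inside the Pydantic parser, the oversized request never reaches the event loop or the scheduling queue, which is exactly the failure path the advisory describes.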
INFO
Published Date: 2026-04-06T15:40:03.448Z
Last Modified: 2026-04-07T14:17:12.597Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2026-34756 vulnerability.
| Vendors | Products |
|---|---|
| Vllm-project | vLLM |