Description
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to (but not including) 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error, and vLLM returns that error message verbatim to the client, leaking a heap address. This leak reduces an ASLR brute-force from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
INFO
Published Date: 2026-02-02T21:09:53.265Z
Last Modified: 2026-02-03T15:42:57.155Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2026-22778 vulnerability.
| Vendors | Products |
|---|---|
| Vllm | |
| Vllm-project | |