Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_1|>, <|image_1|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
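The advisory does not reproduce the vulnerable code, but the pattern it describes is easy to sketch. The following is a minimal, hypothetical illustration (the function names and token IDs below are not vLLM's actual identifiers): concatenating Python lists with + allocates a new list and copies the entire accumulated result at every step, so expanding one placeholder per step costs O(n) and the whole pass costs O(n²), whereas building the list in place with extend/append is linear.

    # Minimal sketch of the quadratic pattern described above.
    # Hypothetical names; not vLLM's actual preprocessing code.

    import time

    def expand_placeholders_quadratic(token_ids, placeholder_id, repeat_len):
        new_ids = []
        for tok in token_ids:
            if tok == placeholder_id:
                # `+` allocates a fresh list and copies the whole accumulated
                # prefix, so each placeholder costs O(len(new_ids)).
                new_ids = new_ids + [placeholder_id] * repeat_len
            else:
                new_ids = new_ids + [tok]
        return new_ids

    def expand_placeholders_linear(token_ids, placeholder_id, repeat_len):
        new_ids = []
        for tok in token_ids:
            if tok == placeholder_id:
                # extend() appends in place: amortized O(1) per emitted token.
                new_ids.extend([placeholder_id] * repeat_len)
            else:
                new_ids.append(tok)
        return new_ids

    if __name__ == "__main__":
        # A crafted input: many placeholders, each expanded to many tokens.
        ids = [0] * 50 + [1] * 1000          # token ID 1 marks a placeholder
        t = time.perf_counter()
        expand_placeholders_quadratic(ids, placeholder_id=1, repeat_len=200)
        print(f"quadratic: {time.perf_counter() - t:.3f}s")
        t = time.perf_counter()
        expand_placeholders_linear(ids, placeholder_id=1, repeat_len=200)
        print(f"linear:    {time.perf_counter() - t:.3f}s")

With inputs like the one above, the quadratic version performs on the order of 10⁸ element copies while the linear version performs roughly 2×10⁵ appends, which is why an attacker who controls the number of placeholders in a request can exhaust CPU time with a modest payload.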
INFO
Published Date: 2025-04-30T00:24:53.750Z
Last Modified: 2025-04-30T00:24:53.750Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2025-46560 vulnerability.
No data.
REFERENCES
Here you will find a curated list of external links that provide in-depth information about CVE-2025-46560.