Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.7.0 up to but not including 0.19.0, the VideoMediaIO.load_base64() method in vllm/multimodal/media/video.py splits video/jpeg data URLs on commas to extract individual JPEG frames, but does not enforce a frame-count limit. The num_frames parameter (default: 32), which is enforced in the load_bytes() code path, is completely bypassed in the video/jpeg base64 path. An attacker can send a single API request containing thousands of comma-separated base64-encoded JPEG frames, causing the server to decode every frame into memory and crash with an out-of-memory (OOM) error. This vulnerability is fixed in 0.19.0.
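The missing check can be illustrated with a minimal sketch. This is not vLLM's actual code; the function name `load_jpeg_frames` and its structure are hypothetical, but it shows the pattern described above: a comma-split of base64 JPEG frames that caps the frame count the way the `load_bytes()` path caps it via `num_frames`, instead of decoding an unbounded attacker-controlled list.

```python
import base64

DEFAULT_NUM_FRAMES = 32  # mirrors the default limit cited in the advisory


def load_jpeg_frames(data: str, num_frames: int = DEFAULT_NUM_FRAMES) -> list[bytes]:
    """Hypothetical sketch: split comma-separated base64 JPEG frames,
    rejecting payloads that exceed the frame limit.

    Without the length check, a single request with thousands of frames
    would be fully decoded into memory, which is the OOM condition the
    CVE describes.
    """
    parts = data.split(",")
    if len(parts) > num_frames:
        raise ValueError(
            f"too many frames: got {len(parts)}, limit is {num_frames}"
        )
    return [base64.b64decode(p, validate=True) for p in parts]
```

The vulnerable behavior is equivalent to skipping the `len(parts)` check, so every frame in the payload is decoded regardless of size; the fix in 0.19.0 enforces the same limit on this path.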

INFO

Published Date: 2026-04-06T15:38:53.201Z
Last Modified: 2026-04-06T18:36:31.152Z
Source: GitHub_M
AFFECTED PRODUCTS

The following products are affected by the CVE-2026-34755 vulnerability.

Vendor: Vllm-project
Product: Vllm