Description
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.11.1, vLLM contained a critical remote code execution vector in the config class Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, this config class resolves the mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class, fetching and executing Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend repo silently runs the backend repo's code on the victim host. This vulnerability is fixed in version 0.11.1.
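To make the attack shape concrete, here is a minimal sketch, assuming hypothetical repository and class names, of the pattern the description outlines: an auto_map entry from a frontend repo's config.json being resolved with Hugging Face's get_class_from_dynamic_module without the trust_remote_code flag ever being consulted. This is an illustration of the technique, not the actual vLLM source.

```python
# Minimal sketch of the attack shape; repo and class names are hypothetical.
#
# A benign-looking "frontend" repo ships a config.json whose auto_map points
# at a separate, malicious "backend" repo, e.g.:
#
#   {
#     "auto_map": {
#       "AutoConfig": "attacker/backend-repo--evil_config.EvilConfig"
#     }
#   }
#
# The vulnerable pattern then resolves that mapping unconditionally:
from transformers.dynamic_module_utils import get_class_from_dynamic_module


def load_sub_config(sub_config: dict, repo_id: str):
    """Resolve an auto_map entry the way the vulnerable config class did."""
    class_ref = sub_config["auto_map"]["AutoConfig"]
    # Downloads evil_config.py from the backend repo and imports it, so any
    # module-level code runs on the victim host. Nothing here checks the
    # trust_remote_code flag the caller passed to get_config(...).
    config_cls = get_class_from_dynamic_module(class_ref, repo_id)
    # Immediately instantiating the returned class executes its __init__ too.
    return config_cls(**{k: v for k, v in sub_config.items() if k != "auto_map"})
```

Per the description, 0.11.1 fixes this; a patched loader would need to check trust_remote_code before invoking the dynamic-module machinery at all, rather than resolving the auto_map entry unconditionally as above.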
INFO
Published Date: 2025-12-01T22:45:42.566Z
Last Modified: 2025-12-02T14:14:58.324Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2025-66448 vulnerability.
| Vendors | Products |
|---|---|
| Vllm-project | Vllm |