Description
vLLM is an inference and serving engine for large language models (LLMs). In affected versions (0.10.1 and later, prior to 0.18.0), two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. A malicious model repository can therefore achieve remote code execution even on servers where trust in remote code has been explicitly disabled. Version 0.18.0 patches the issue.
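To make the bug class concrete, here is a minimal, hypothetical sketch. The function names and call sites are illustrative (the advisory does not quote the vLLM source), but the Hugging Face `transformers` `AutoConfig.from_pretrained` API and its `trust_remote_code` parameter, which gates execution of repository-supplied Python code, are real.

```python
from transformers import AutoConfig

def load_subcomponent_vulnerable(repo_id: str):
    # VULNERABLE PATTERN: the flag is hardcoded, so Python code shipped
    # inside a malicious model repository executes during loading even
    # if the server was started with --trust-remote-code=False.
    return AutoConfig.from_pretrained(repo_id, trust_remote_code=True)

def load_subcomponent_patched(repo_id: str, trust_remote_code: bool):
    # PATCHED PATTERN: propagate the user's explicit setting instead of
    # overriding it, restoring the security opt-out.
    return AutoConfig.from_pretrained(repo_id, trust_remote_code=trust_remote_code)
```

A fix of this kind amounts to the second pattern: threading the user-supplied flag through to each sub-component load rather than overriding it.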
INFO
Published Date: 2026-03-26T23:56:53.579Z
Last Modified: 2026-03-27T13:52:33.526Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2026-27893 vulnerability.
| Vendors | Products |
|---|---|
| Vllm | Vllm |
| Vllm-project | Vllm |
CVSS Vulnerability Scoring System
[CVSS vector chart not captured in text; metrics displayed: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality Impact, Integrity Impact, Availability Impact.]