Description

llama.cpp is a C/C++ library for inference of several LLM models. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama_vocab::impl::token_to_piece() (llama.cpp/src/vocab.cpp) casts a very large size_t token length to int32_t, causing the length check (if (length < (int32_t) size)) to be bypassed. memcpy is then still called with the original oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. This issue has been patched in version b5662.
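
The snippet below is a minimal, self-contained sketch of the bug class described above, not the upstream llama.cpp source: the helper names (try_copy_vulnerable, try_copy_checked), the 64-byte buffer, and the exact form of the hardened comparison are illustrative assumptions. It shows how casting an attacker-controlled size_t length to int32_t can truncate or wrap it, defeat the (length < (int32_t) size) bounds check, and leave memcpy free to copy the full oversized length; the checked variant instead compares in the unsigned domain after rejecting negative lengths.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Vulnerable shape: casting the attacker-controlled size_t to int32_t can
    // truncate/wrap it to a small or negative value, so the bounds check passes
    // even though `size` is far larger than `length`, and memcpy then writes
    // past the end of `buf`. Shown for comparison only; never called below.
    static int32_t try_copy_vulnerable(char * buf, int32_t length,
                                       const char * token, size_t size) {
        if (length < (int32_t) size) {
            return -(int32_t) size;      // "buffer too small" signal
        }
        memcpy(buf, token, size);        // still uses the full size_t value
        return (int32_t) size;
    }

    // Hardened shape: reject negative lengths, then compare in the unsigned
    // domain so an oversized size cannot slip past the check.
    static int32_t try_copy_checked(char * buf, int32_t length,
                                    const char * token, size_t size) {
        if (length < 0 || (size_t) length < size) {
            return size > (size_t) INT32_MAX ? INT32_MIN : -(int32_t) size;
        }
        memcpy(buf, token, size);
        return (int32_t) size;
    }

    int main() {
        char buf[64];
        const char piece[] = "token";

        // A token length of 2^32 truncates to 0 when converted to int32_t on
        // typical 64-bit platforms, so the vulnerable check sees 64 < 0,
        // concludes the buffer is big enough, and would memcpy ~4 GiB.
        size_t  evil_size = (size_t) UINT32_MAX + 1;   // needs 64-bit size_t
        int32_t truncated = (int32_t) evil_size;

        printf("vulnerable check: %d < %d is %s -> memcpy(%zu bytes) would run\n",
               (int32_t) sizeof(buf), truncated,
               ((int32_t) sizeof(buf) < truncated) ? "true (copy rejected)"
                                                   : "false (check bypassed)",
               evil_size);

        // The hardened helper refuses the oversized copy...
        int32_t rc = try_copy_checked(buf, (int32_t) sizeof(buf), piece, evil_size);
        printf("checked helper returned %d (copy refused)\n", rc);

        // ...but still handles a normal-sized token.
        rc = try_copy_checked(buf, (int32_t) sizeof(buf), piece, sizeof(piece));
        printf("copied %d bytes: \"%s\"\n", rc, buf);

        (void) try_copy_vulnerable;      // silence unused-function warnings
        return 0;
    }

The demo deliberately never calls the vulnerable helper with the oversized length; it only prints the comparison outcome, since actually performing the memcpy would corrupt memory. The fix shipped in b5662 may differ in detail from try_copy_checked; the point is that the comparison must not be done on a truncated signed value.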

INFO

Published Date: 2025-06-17T20:04:40.893Z

Last Modified: 2025-06-18T13:41:11.407Z

Source: GitHub_M

AFFECTED PRODUCTS

The following products are affected by the CVE-2025-49847 vulnerability.

Vendor: Ggml
Products:
  • Llama.cpp

CVSS Vulnerability Scoring System

CVSS base metrics charted on the original page: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality Impact, Integrity Impact, Availability Impact.