Description
llama.cpp enables inference of several LLM models in C/C++. Prior to release b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The multiplication of attacker-controlled tensor extents wraps around the 64-bit size type, so `ggml_nbytes` returns a significantly smaller size than the tensor actually requires (e.g., 4 MB instead of exabytes). This leads to a heap-based buffer overflow when the application subsequently processes the tensor, and the resulting memory corruption potentially allows remote code execution (RCE). Release b7824 contains a fix.
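The sketch below is a minimal, self-contained illustration of this overflow class: an unchecked product of attacker-controlled extents that wraps modulo 2^64 and defeats a downstream size check. The variable names, extents, and check are assumptions for illustration, not the actual `ggml_nbytes` implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of the integer-overflow class described above.
 * Assumes a 64-bit size_t; names and values are hypothetical. */
int main(void) {
    /* Attacker-controlled tensor extents, as if parsed from a
     * crafted GGUF file: one dimension close to 2^62 elements. */
    uint64_t ne[4]   = { (1ULL << 62) + (1ULL << 20), 1, 1, 1 };
    size_t type_size = 4; /* e.g. 4-byte FP32 elements */

    /* Naive byte-size computation: 4 * (2^62 + 2^20) == 2^64 + 2^22,
     * which wraps modulo 2^64 and truncates to 2^22. */
    size_t nbytes = type_size;
    for (int i = 0; i < 4; i++)
        nbytes *= (size_t) ne[i];

    /* Prints 4194304 (4 MiB) instead of the true ~16 EiB. */
    printf("computed nbytes = %zu bytes\n", nbytes);

    /* A validation check comparing nbytes against the mapped file or
     * allocation size now passes with the tiny wrapped value, but any
     * loop iterating over the logical 2^62 elements writes far past
     * the allocation: a heap-based buffer overflow. */
    return 0;
}
```

This is why the 4 MB figure in the description is plausible: the wrapped product lands on a small residue that easily fits inside a legitimate-looking GGUF file, passing validation while the logical tensor remains astronomically large.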
INFO
Published Date: 2026-03-24T00:01:40.989Z
Last Modified: 2026-03-25T03:55:51.679Z
Source: GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2026-33298 vulnerability.
| Vendors | Products |
|---|---|
| Ggml | llama.cpp |