Description
llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0, so an unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required, only TCP access to the RPC server port. This issue has been patched in version b8492.
INFO
Published Date :
2026-04-01T16:59:59.967Z
Last Modified :
2026-04-02T03:56:11.820Z
Source :
GitHub_M
AFFECTED PRODUCTS
The following products are affected by the CVE-2026-34159 vulnerability.
| Vendors | Products |
|---|---|
| Ggml | llama.cpp |
REFERENCES
Here you will find a curated list of external links that provide in-depth information about CVE-2026-34159.