Description

llama.cpp is a C/C++ library for inference of several LLM models. Prior to release b8146, gguf_init_from_file_impl() in gguf.cpp is vulnerable to an integer overflow that leads to an undersized heap allocation; a subsequent fread() then writes 528+ bytes of attacker-controlled data past the buffer boundary. This is a bypass of a similar bug in the same file, CVE-2025-53630, whose fix overlooked some code paths. The vulnerability is fixed in b8146.

INFO

Published Date :

2026-03-12T16:39:37.463Z

Last Modified :

2026-03-14T03:55:24.463Z

Source :

GitHub_M

AFFECTED PRODUCTS

The following products are affected by the CVE-2026-27940 vulnerability.

Vendor: Ggml
Product: Llama.cpp

CVSS Vulnerability Scoring System

[Chart: per-metric CVSS vector values for Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality Impact, Integrity Impact, and Availability Impact]