Description

llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.

INFO

Published Date :

2025-07-10T19:32:45.296Z

Last Modified :

2025-07-10T20:31:07.240Z

Source :

GitHub_M
AFFECTED PRODUCTS

The following products are affected by the CVE-2025-53630 vulnerability.

No data.

REFERENCES

Here you will find a curated list of external links that provide in-depth information about CVE-2025-53630.

CVSS Vulnerability Scoring System

[CVSS vector chart not captured. Metrics listed: Attack Vector, Attack Complexity, Attack Requirements, Privileges Required, User Interaction, and Confidentiality/Integrity/Availability for the vulnerable system (VS) and subsequent system (SS); no values were recorded.]