Description

llama.cpp is a C/C++ inference engine for several LLM models. In commit 55d4206c8 and earlier, the n_discard parameter is parsed directly from JSON input in the llama.cpp server's completion endpoints without validation that it is non-negative. When a negative value is supplied and the context fills up, llama_memory_seq_rm/llama_memory_seq_add receive a reversed range and a negative offset, causing out-of-bounds memory writes in the token evaluation loop. This deterministic memory corruption can crash the process or enable remote code execution (RCE). No fix was available at the time of publication.

INFO

Published Date: 2026-01-07T23:37:59.886Z
Last Modified: 2026-01-08T19:15:28.709Z
Source: GitHub_M

AFFECTED PRODUCTS

The following products are affected by the CVE-2026-21869 vulnerability.

Vendor    Product
Ggml      llama.cpp