CVE-2026-33298

Publication date 24 March 2026

Last updated 25 March 2026


Ubuntu priority

CVSS 3 severity score

7.8 · High


Description

llama.cpp provides inference of several LLM models in C/C++. Prior to release b7824, an integer overflow in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The overflow causes `ggml_nbytes` to return a far smaller size than the tensor actually requires (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This memory corruption can potentially be leveraged for remote code execution (RCE). Release b7824 contains a fix.

Status

Package    Ubuntu release        Status
llama.cpp  26.04 LTS (resolute)  Needs evaluation
llama.cpp  25.10 (questing)      Needs evaluation
llama.cpp  24.04 LTS (noble)     Not in release
llama.cpp  22.04 LTS (jammy)     Not in release

Severity score breakdown

Parameter Value
Base score 7.8 · High
Attack vector Local
Attack complexity Low
Privileges required None
User interaction Required
Scope Unchanged
Confidentiality impact High
Integrity impact High
Availability impact High
Vector CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
