5 Spring AI CVEs Disclosed April 27, 2026: Roundup and EOL Risk
Vector store injection, cross-tenant memory exfiltration, and a tighter Spring Boot 3.5 EOL window for Spring AI teams

On April 27, 2026, the Spring security team disclosed five new vulnerabilities in Spring AI, including two High severity CVEs in vector store handling. All five affect Spring AI 1.0.x and 1.1.x and are fixed in 1.0.6 and 1.1.5 respectively. The disclosure pattern matters: these are not infrastructure CVEs in Spring Boot. They are AI-specific bugs in the components Spring AI exists to provide, and they land roughly six weeks before Spring Boot 3.5 (which both Spring AI 1.0 and 1.1 target) reaches end-of-life on June 30, 2026.
The five CVEs:
- CVE-2026-40967: VectorStore FilterExpressionConverter injection (High, CVSS 8.6)
- CVE-2026-40978: SQL injection in CosmosDBVectorStore.doDelete() (High, CVSS 8.8)
- CVE-2026-40966: VectorStoreChatMemoryAdvisor cross-tenant memory exfiltration (Medium, CVSS 5.9)
- CVE-2026-40980: OOM via attacker-controlled PDF in ForkPDFLayoutTextStripper (Medium, CVSS 6.5)
- CVE-2026-40979: ONNX model cache in world-writable /tmp directory (Medium, CVSS 6.1)
This post continues our March 18 analysis of CVE-2026-22729 and CVE-2026-22730. At the time, we framed those two CVEs as a preview of what awaits Spring AI teams once Spring Boot 3.5 reaches EOL; the April 27 batch is part of that trend.
CVE-2026-40967: VectorStore FilterExpressionConverter injection (High, CVSS 8.6)
What it is
Spring AI's FilterExpressionConverter implementations translate filter expression objects into the native query language of each vector store backend. In several converters, keys and values are not properly escaped before being interpolated into the generated query. An attacker who can supply a filter expression can therefore alter the structure of the query that runs against the vector store.
CVSS vector and what it means
NVD scores this CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:L. The PR:N component is the part to flag: this is exploitable without authentication when filter expressions are accepted from anonymous clients. The AC:L component indicates low attack complexity: exploitation does not depend on conditions outside the attacker's control. Confidentiality impact is High; integrity and availability are Low.
Who is affected
Applications that use a Spring AI VectorStore implementation and pass user-supplied input through to filterExpression on a query. If your filter expressions are constructed entirely server-side from trusted constants, this vulnerability does not apply. If any component of the filter is derived from a request body, query string, or other user input, you are in scope.
This pattern is common in production RAG (retrieval-augmented generation) deployments where the application narrows results based on tenant ID, user permissions, document classification, or other request-scoped attributes.
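Beyond upgrading, a defense-in-depth option is to validate any request-derived filter component before it ever reaches a converter. The sketch below is plain Java, not Spring AI API; the `FilterValueGuard` class and its allowlist pattern are hypothetical illustrations of the boundary check, with the alphabet chosen as an assumption you would tune to your own metadata values.

```java
import java.util.regex.Pattern;

// Hypothetical boundary guard: allow only simple identifier-like values in
// user-influenced filter components. The patched converters escape values
// internally; this is an additional allowlist check at the request boundary.
public class FilterValueGuard {
    // Letters, digits, dash, underscore only: no quotes, parens, or operators
    // that could alter the structure of a generated vector store query.
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9_-]{1,64}");

    public static String requireSafe(String userValue) {
        if (userValue == null || !SAFE.matcher(userValue).matches()) {
            throw new IllegalArgumentException("Rejected unsafe filter value");
        }
        return userValue;
    }

    public static void main(String[] args) {
        System.out.println(requireSafe("tenant-42"));              // passes through
        try {
            requireSafe("x' OR classification != 'restricted");    // injection attempt
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

The allowlist approach (reject everything except a known-safe alphabet) is deliberately stricter than escaping, which is appropriate for tenancy and classification keys that are identifiers by construction.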
CVE-2026-40978: SQL Injection in CosmosDBVectorStore.doDelete() (High, CVSS 8.8)
What it is
A SQL injection vulnerability in Spring AI's CosmosDBVectorStore.doDelete() implementation. Document IDs passed to delete operations are interpolated into a SQL query without sufficient sanitization, allowing an attacker who controls the document ID input to execute arbitrary SQL against the underlying Cosmos DB.
CVSS vector and what it means
NVD scores this CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H. The PR:L component matters: this requires that the attacker have low privileges, not zero. In practice this means the vulnerability is exploitable from any authenticated context where the application accepts user-supplied document IDs in a delete operation. All three impact metrics are High, reflecting that successful SQL injection in a vector store typically allows reading, modifying, and destroying its contents.
Who is affected
Applications that use CosmosDBVectorStore and route user-supplied input as document IDs into delete calls. Applications that mint document IDs server-side (for example, UUIDs assigned at ingest time and never exposed to users) are not affected by this vector even if they otherwise use CosmosDBVectorStore.
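For applications that do mint UUIDs at ingest time, a cheap interim guard is to reject any incoming ID that does not round-trip as a UUID before it reaches the delete call. This is a plain-Java sketch under that assumption; the `DocumentIdGuard` class name is hypothetical.

```java
import java.util.UUID;

// Hypothetical boundary check for delete endpoints, assuming document IDs
// are server-minted UUIDs: anything that fails to parse as a UUID is
// rejected before it can reach a vector store delete operation.
public class DocumentIdGuard {
    public static String requireUuid(String id) {
        // UUID.fromString throws IllegalArgumentException on malformed input;
        // re-serializing normalizes case and guarantees no stray characters.
        return UUID.fromString(id).toString();
    }

    public static void main(String[] args) {
        System.out.println(requireUuid("123e4567-e89b-12d3-a456-426614174000"));
        try {
            requireUuid("doc'; DELETE FROM c WHERE true --");   // injection attempt
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

Note that `UUID.fromString` is lenient about short groups, so this is a pre-filter rather than a complete defense; the real remediation remains the patched `doDelete()` in 1.0.6/1.1.5.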
CVE-2026-40966: VectorStoreChatMemoryAdvisor cross-tenant memory exfiltration (Medium, CVSS 5.9)
What it is
The VectorStoreChatMemoryAdvisor uses conversationId values to scope the memory retrieved for a given conversation. The conversation ID is interpolated into the underlying filter expression without adequate isolation, allowing an attacker who supplies a crafted conversationId to inject filter logic and read chat history that belongs to other conversations.
In practical terms, "memory" here can include whatever the application has stored in its conversational memory: user messages, model responses, and any data the application has pushed into chat memory. In multi-tenant AI applications, this can mean exfiltration of conversations that include sensitive content, secrets, or credentials that other users entered into their own sessions.
CVSS vector and what it means
NVD scores this CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N. The AC:H component reflects that successful exploitation requires conditions the attacker cannot fully control (the attacker has to know enough about the structure of the stored memory to craft an effective injection). Confidentiality impact is High; integrity and availability are not affected.
Who is affected
Applications that use VectorStoreChatMemoryAdvisor and accept user-supplied input as the conversationId. The conversation ID is often derived from a session, a JWT claim, or a request header in well-designed applications, but in any application where a user can influence the conversation ID directly (for example, via a query parameter or request body field), this vulnerability is in scope.
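One way to stay out of scope is to never accept the conversation ID from the client at all: compose it from server-controlled identity components and restrict it to a safe alphabet. The sketch below is a hypothetical plain-Java policy, not Spring AI API; the composition scheme (user ID plus server-generated session ID) is an assumption about how your application models conversations.

```java
import java.util.regex.Pattern;

// Hypothetical policy: the conversation ID is derived from server-side
// identity (authenticated user + server-generated session), never from a
// query parameter or request body, and is constrained to a safe alphabet
// so it cannot carry filter-expression syntax.
public class ConversationIdPolicy {
    private static final Pattern SAFE = Pattern.compile("[A-Za-z0-9:_-]{1,128}");

    public static String conversationIdFor(String userId, String sessionId) {
        String id = userId + ":" + sessionId;   // both components server-controlled
        if (!SAFE.matcher(id).matches()) {
            throw new IllegalArgumentException("unsafe conversation id");
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println(conversationIdFor("user-42", "a1b2c3"));   // user-42:a1b2c3
    }
}
```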
CVE-2026-40980: OOM via attacker-controlled PDF (Medium, CVSS 6.5)
What it is
A specially crafted PDF can cause ForkPDFLayoutTextStripper to allocate unreasonable amounts of memory during text extraction. This is a denial-of-service vulnerability: an attacker who can submit a PDF to the application can exhaust JVM heap, crashing the process or starving co-located workloads.
CVSS vector and what it means
NVD scores this CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H. The PR:L component indicates this is exploitable from a low-privilege authenticated context. Availability impact is High; confidentiality and integrity are unaffected.
Who is affected
Applications that use ForkPDFLayoutTextStripper (a DocumentReader implementation in Spring AI) and pass user-supplied PDFs through it. Document ingestion endpoints are the typical exposure: any feature that lets a user upload a PDF for indexing into a vector store, summarization, or other AI processing.
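Pending the upgrade, a size cap at the ingestion boundary narrows the DoS surface. This is a minimal plain-Java pre-flight check, not part of Spring AI; the 20 MB limit is an assumption you would tune to your real document profile, and it bounds input size rather than extraction memory, so it is a mitigation, not a fix.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical pre-flight check before handing an uploaded PDF to a text
// stripper: reject oversized files so a crafted document has less room to
// drive the extractor into unbounded allocation.
public class PdfSizeGuard {
    static final long MAX_PDF_BYTES = 20L * 1024 * 1024;   // assumed limit

    public static void requireWithinLimit(Path pdf) throws IOException {
        long size = Files.size(pdf);
        if (size > MAX_PDF_BYTES) {
            throw new IllegalArgumentException("PDF exceeds ingest limit: " + size);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("doc", ".pdf");
        Files.write(tmp, new byte[] {0x25, 0x50, 0x44, 0x46});   // "%PDF" header bytes
        requireWithinLimit(tmp);
        System.out.println("within limit");
    }
}
```

Running extraction in a separate, resource-limited worker (its own container or JVM with a small heap) is the stronger complement, since a crash then takes down the worker rather than the application.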
CVE-2026-40979: ONNX model cache in world-writable /tmp directory (Medium, CVSS 6.1)
What it is
TransformersEmbeddingModel caches downloaded ONNX models to a default location under /tmp with predictable paths and permissions that allow other users on the same machine to read or replace cached files. In a shared environment, this means another user can read the application's ONNX model (a confidentiality issue) or substitute a malicious model that the application then loads (an integrity issue).
CVSS vector and what it means
NVD scores this CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:L/I:H/A:N. The AV:L component is the key constraint: this requires local access, not network access. PR:L indicates an authenticated user on the host. This is principally a concern in shared hosting environments, multi-tenant Linux hosts, or container configurations where /tmp is shared across containers.
Who is affected
Applications that use TransformersEmbeddingModel with the default cache location enabled, running in environments where untrusted users have shell access to the same host. Most modern deployments (per-application containers with isolated filesystems) are not affected, but legacy shared-host deployments are.
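On shared hosts, the filesystem side of the mitigation is straightforward: keep the cache in an application-owned directory with 0700 permissions instead of the /tmp default. The sketch below is plain Java and assumes a POSIX filesystem; the property used to point Spring AI at the directory varies by version, so only the directory setup is shown.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

// Hypothetical hardening for shared hosts: create a private model cache
// directory (owner-only rwx, i.e. 0700) rather than relying on a
// world-writable /tmp location with predictable paths.
public class ModelCacheDir {
    public static Path createPrivateCacheDir(Path base) throws IOException {
        Path dir = base.resolve("onnx-model-cache");
        // Owner-only permissions; fails on non-POSIX filesystems (e.g. Windows).
        return Files.createDirectories(dir,
                PosixFilePermissions.asFileAttribute(
                        PosixFilePermissions.fromString("rwx------")));
    }

    public static void main(String[] args) throws IOException {
        Path dir = createPrivateCacheDir(Files.createTempDirectory("app"));
        System.out.println(Files.isDirectory(dir));
    }
}
```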
Affected versions and remediation
All five CVEs share the same affected versions and fix versions per the Spring security advisory:
- Spring AI 1.0.x: fixed in 1.0.6
- Spring AI 1.1.x: fixed in 1.1.5
If you are on Spring AI 1.0.x, upgrade to 1.0.6. If you are on 1.1.x, upgrade to 1.1.5. Both fix releases were published April 27, 2026 and patch all five CVEs in a single update.
The new angle: AI-specific vulnerability classes are arriving in batches
The previous post on CVE-2026-22729 and CVE-2026-22730 framed those two CVEs as a preview. Six weeks later, the April 27 disclosure is part of what that preview anticipated, but with a meaningful shift in the type of vulnerability landing.
CVE-2026-22729 and CVE-2026-22730 were broadly recognizable Spring vulnerability shapes. The April 27 batch reflects a different category mix:
- Vector store query injection (CVE-2026-40967, CVE-2026-40978). Filter expressions and document IDs are the AI-era equivalent of the SQL parameter and the LDAP query string. They are user-influenced inputs passed into a query language, and the injection patterns that have been mature for relational databases for two decades are showing up freshly in vector store backends.
- Cross-tenant memory exfiltration via conversation IDs (CVE-2026-40966). Chat memory is a relatively new abstraction. Treating conversationId as a filter key, with the same care given to any other tenancy-relevant identifier, is a discipline still being formalized across the Spring AI ecosystem.
- Model artifact integrity (CVE-2026-40979). Model files are executable artifacts in the sense that the application loads and runs them. Caching them in world-writable locations is a supply-chain-shaped problem rather than a traditional application bug.
- Resource exhaustion via document parsers (CVE-2026-40980). Document ingestion pipelines are a defining component of RAG applications. They also accept arbitrary attacker-controlled input by design, which makes parsers a recurring DoS surface.
This is the trajectory readers should expect through the rest of 2026. The Spring security team has been publishing more Spring AI advisories than in any prior period (see also Spring CVEs Surge in 2026: 30 Vulnerabilities in Two Months), and the volume reflects the rate at which Spring AI's surface area is being explored by researchers. Bank on more disclosures of similar shape between now and the end of the year.
How this connects back to the Spring Boot 3.5 EOL
The previous post made the math explicit: Spring Boot 3.5, Spring AI 1.0, and Spring AI 1.1 all reach end-of-life on June 30, 2026. Spring AI 2.0 (which targets Spring Boot 4) is expected in the May 2026 timeframe based on the Spring AI 2.0.0 GitHub milestone, which leaves a roughly one-month window for major-version migration before OSS patches stop arriving for the 1.x line.
The April 27 batch tightens that window in two specific ways:
First, the immediate upgrade burden is non-trivial. Teams that were previously planning to do their next 1.0.x or 1.1.x upgrade as part of the migration to 2.0 now have an interim patch step they need to take in the next few weeks, on top of the larger 2.0 migration that is still pending.
Second, the post-June scenario is no longer hypothetical. The previous post warned about what would happen when CVEs continued landing on the 1.x line after community support ended. The April 27 disclosure (and a continued cadence of CVEs in the months to follow) makes the post-June risk very concrete: each new advisory after June 30 will not have a 1.0.x or 1.1.x fix from the OSS project. Scanners will continue to flag findings, security teams will continue to ask questions, and "we are working on the upgrade" will continue to age.
In our Spring Boot 3.5 EOL migration calculator, we modeled the typical enterprise migration from Spring Boot 3.5 to Spring Boot 4 with Spring AI 2.0 at 3 to 9 months. For most teams, that exceeds the remaining OSS support window.
Mitigation guidance
The fix releases are the primary remediation. Until the upgrade lands, the exposure patterns above suggest interim mitigations:
- Construct vector store filter expressions server-side from trusted constants, and validate any request-derived filter component against a strict allowlist (CVE-2026-40967).
- Mint document IDs server-side (for example, UUIDs assigned at ingest) and reject user-supplied IDs that do not match the expected format before they reach delete calls (CVE-2026-40978).
- Derive conversationId from an authenticated session or JWT claim, never from a query parameter or request body field (CVE-2026-40966).
- Enforce size limits and resource quotas on PDF ingestion endpoints before text extraction runs (CVE-2026-40980).
- Point the TransformersEmbeddingModel cache at a private, application-owned directory rather than the /tmp default (CVE-2026-40979).
Taking action
The April 27 disclosure is a clear signal. The Spring AI 1.x line is healthy and well-supported today. The Spring security team is doing exactly what mature open source security teams do: identifying issues, publishing advisories, and shipping fixes. None of that changes the fundamental constraint that Spring Boot 3.5 (and with it, Spring AI 1.0 and 1.1) will reach OSS end-of-life on June 30, 2026, and the rate of Spring AI CVE disclosures in 2026 makes the post-June scenario more concrete than ever.
If your team is on Spring AI 1.0.x or 1.1.x, the immediate action is the 1.0.6 or 1.1.5 upgrade. The strategic question is what happens in July. If your migration to Spring AI 2.0 will not be production-ready before June 30, NES for Spring provides a supported path. The point is not to defer the 2.0 migration indefinitely. It is to give your team the room to do that migration on a timeline that fits your release process and risk posture, rather than one dictated by an upstream EOL date.
For the broader Spring EOL picture, including Spring Framework, Spring Security, and Spring Boot version timelines, see the HeroDevs Spring EOL Hub.