CVE-2026-22741
This vulnerability has been fixed in the Never-Ending Support (NES) version offered by HeroDevs.
Overview
Spring Framework is the foundational framework of the Spring ecosystem, providing comprehensive support for developing Java enterprise applications. Both of its web stacks, Spring MVC (Servlet) and Spring WebFlux (reactive), include a resource chain that can serve pre-encoded static resources (for example, gzip or brotli variants) based on the client's Accept-Encoding header, with an optional in-memory cache to avoid resolving the encoded variant on every request.
A low-severity vulnerability (CVE-2026-22741) has been identified in Spring Framework. The Accept-Encoding request header is parsed by two different routines in the same pipeline: CachingResourceResolver.getContentCodingKey uses a strict token parser to build the cache key, while EncodedResourceResolver.resolveResourceInternal uses a loose substring match to decide which pre-encoded variant to serve. Because the two routines disagree on how the header is interpreted, an attacker can craft a header that the strict parser canonicalizes to one value while the loose parser selects a different encoding. The result is a cache entry that is keyed as if for a benign client but stores a response encoded under a different scheme. Subsequent legitimate clients whose headers canonicalize to the same key receive bytes with the wrong Content-Encoding and the browser cannot decode them, breaking the frontend.
Per OWASP: "The Denial of Service (DoS) attack is focused on making a resource (site, application, server) unavailable for the purpose it was designed. There are many ways to make a service unavailable for legitimate users by manipulating network packets, programming, logical, or resources handling vulnerabilities, among others." In this case, an attacker exploits inconsistent parsing of an HTTP header to poison the server-side static resource cache and deny service to legitimate users of the frontend.
The CVSS v3.1 base score for this vulnerability is 3.1 (Low) with vector AV:N/AC:H/PR:N/UI:R/S:U/C:N/I:N/A:L. The attack is network-accessible but has high attack complexity (the cache must be empty and the attacker must win a race against the first legitimate request) and requires user interaction from a subsequent victim; its impact is confined to a low availability loss, with no confidentiality or integrity impact.
The desynchronization was introduced in Spring Framework 4.1.7 (May 2015) when CachingResourceResolver.computeKey began appending a `+encoding=gzip` suffix to the cache key using a case-sensitive encoding.contains("gzip") check, while the existing GzipResourceResolver.isGzipAccepted used a case-insensitive value.toLowerCase().contains("gzip") check. The two parsers disagreed on headers such as Accept-Encoding: GZIP, and the same class of desynchronization was carried forward when Spring 5.1.0 restructured gzip-only handling into the more general EncodedResourceResolver, this time as a tokenization-strictness mismatch between CachingResourceResolver.getContentCodingKey and EncodedResourceResolver.resolveResourceInternal. Both flavors of the bug persist in every subsequent release up to the listed fix versions. This issue affects versions 4.1.7 through 4.3.30, 5.0.0 through 5.3.47, 6.0.0 through 6.0.23, 6.1.0 through 6.1.26, 6.2.0 through 6.2.17, and 7.0.0 through 7.0.6 of Spring Framework.
Details
Module Info
- Product: Spring Framework
- Affected packages: spring-webmvc, spring-webflux
- Affected versions: >=4.1.7 <=4.3.30, >=5.0.0 <=5.3.47, >=6.0.0 <=6.0.23, >=6.1.0 <=6.1.26, >=6.2.0 <=6.2.17, >=7.0.0 <=7.0.6
- GitHub repository: https://github.com/spring-projects/spring-framework
- Published packages: https://central.sonatype.com/artifact/org.springframework/spring-webmvc
- Package manager: Maven
- Fixed in:
  - Spring Framework 6.2.18, 7.0.7 (OSS)
  - Spring Framework 4.3.x, 5.3.x, 6.1.x (NES)
Vulnerability Info
The vulnerability lives in two parallel pairs of classes that mirror each other between the Servlet and reactive stacks: CachingResourceResolver and EncodedResourceResolver in org.springframework.web.servlet.resource (Spring MVC) and in org.springframework.web.reactive.resource (Spring WebFlux). On the pre-5.1 Spring MVC line the same bug exists between CachingResourceResolver and the narrower GzipResourceResolver (the gzip-only predecessor of EncodedResourceResolver); the reactive package did not exist before Spring 5.0, so that flavor is Spring MVC only.
On Spring Framework 5.1 and later, an application becomes vulnerable when all of the following are true:
- The application is using Spring MVC or Spring WebFlux
- The application is configuring the resource chain support with caching enabled
- The application adds support for encoded resources resolution
- The resource cache must be empty when the attacker has access to the application
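For illustration, a minimal Spring MVC configuration that satisfies these preconditions might look like the following sketch (the class name and resource paths are hypothetical; `resourceChain(true)` is what enables the `CachingResourceResolver`):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import org.springframework.web.servlet.resource.EncodedResourceResolver;
import org.springframework.web.servlet.resource.PathResourceResolver;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/static/**")
                .addResourceLocations("classpath:/static/")
                // resourceChain(true) enables in-memory caching via CachingResourceResolver
                .resourceChain(true)
                // serves pre-encoded variants (e.g. *.gz, *.br) based on Accept-Encoding
                .addResolver(new EncodedResourceResolver())
                .addResolver(new PathResourceResolver());
    }
}
```

An equivalent WebFlux setup registers the same resolver chain through the reactive `ResourceHandlerRegistry`.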
On Spring Framework 4.1.7 through 5.0.x, the preconditions are similar but narrower, because only Spring MVC exists and only gzip-encoded resources are supported:
- The application is using Spring MVC
- The application is configuring the resource chain support with caching enabled (CachingResourceResolver)
- The application is serving gzip-encoded resources via GzipResourceResolver
- The resource cache must be empty when the attacker has access to the application
When those conditions are met, the CachingResourceResolver wraps the EncodedResourceResolver in the resource chain. On each request, the caching resolver first computes a cache key from the request path and the accepted content codings, and on a cache miss it delegates to the encoded resolver to pick and return a pre-encoded resource, which is then stored under that key.
Prior to the fix, the two resolvers parsed Accept-Encoding independently and inconsistently. The caching resolver used a strict, token-oriented parse:
private String getContentCodingKey(HttpServletRequest request) {
    String header = request.getHeader(HttpHeaders.ACCEPT_ENCODING);
    if (!StringUtils.hasText(header)) {
        return null;
    }
    return Arrays.stream(StringUtils.tokenizeToStringArray(header, ","))
            .map(token -> {
                int index = token.indexOf(';');
                return (index >= 0 ? token.substring(0, index) : token).trim().toLowerCase(Locale.ROOT);
            })
            .filter(this.contentCodings::contains)
            .sorted()
            .collect(Collectors.joining(","));
}
The encoded resolver used a loose substring containment check on the entire lowercased header:
String acceptEncoding = getAcceptEncoding(request);
if (acceptEncoding == null) {
    return resource;
}
for (String coding : this.contentCodings) {
    if (acceptEncoding.contains(coding)) {
        // ... serve pre-encoded resource with that coding
    }
}

private String getAcceptEncoding(HttpServletRequest request) {
    String header = request.getHeader(HttpHeaders.ACCEPT_ENCODING);
    return (header != null ? header.toLowerCase(Locale.ROOT) : null);
}
acceptEncoding.contains(coding) does not honor token boundaries or quality values; and since both routines strip ;q parameters, the exploitable disagreement is the token-boundary handling, under which a coding name can match inside an unrelated token. With the default coding order (br checked before gzip), an attacker can craft a header such as Accept-Encoding: zebra, gzip. The strict parser (cache-key side) discards the unknown token zebra and canonicalizes the header to gzip, while the loose parser (resolver side) finds the substring br inside zebra and serves the brotli-encoded resource. The resolved resource is then stored in the cache under the strict gzip-only key. A subsequent legitimate client whose Accept-Encoding header also canonicalizes to gzip looks up that cache slot and receives brotli-encoded bytes advertised as br, which a client that never advertised brotli support cannot be assumed to decode, producing a broken page.
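To make the divergence concrete, the following standalone sketch (a hypothetical class with no Spring dependency) replicates the two pre-fix parsing strategies, assuming registered codings br and gzip in Spring's default order, and evaluates them on a header whose unrelated token happens to contain a coding name as a substring:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Hypothetical standalone replica of the two pre-fix Accept-Encoding parsers,
// assuming registered content codings [br, gzip] (Spring's default order).
public class ParserDesyncDemo {

    static final List<String> CONTENT_CODINGS = List.of("br", "gzip");

    // Strict token parse, mirroring CachingResourceResolver.getContentCodingKey:
    // split on commas, drop ";q=..." parameters, keep only registered codings.
    static String strictCacheKey(String header) {
        return Arrays.stream(header.split(","))
                .map(token -> {
                    int index = token.indexOf(';');
                    return (index >= 0 ? token.substring(0, index) : token)
                            .trim().toLowerCase(Locale.ROOT);
                })
                .filter(CONTENT_CODINGS::contains)
                .sorted()
                .collect(Collectors.joining(","));
    }

    // Loose substring match, mirroring the pre-fix
    // EncodedResourceResolver.resolveResourceInternal: the first registered
    // coding found anywhere in the lowercased header wins.
    static String looseSelectedCoding(String header) {
        String lower = header.toLowerCase(Locale.ROOT);
        for (String coding : CONTENT_CODINGS) {
            if (lower.contains(coding)) {
                return coding;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String header = "zebra, gzip"; // "zebra" contains the substring "br"
        System.out.println(strictCacheKey(header));      // gzip (cache-key side)
        System.out.println(looseSelectedCoding(header)); // br   (resolver side)
    }
}
```

The two routines disagree on the same header: the cache key records gzip while the resolver serves the brotli variant.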
The "resource cache must be empty when the attacker has access" precondition reflects the race: the attacker must win the first uncached lookup for a given key, because once a legitimate response is cached the attacker cannot overwrite it.
The fix unifies the two parsers by introducing a single parseAcceptEncoding static helper on EncodedResourceResolver, which both resolvers now call:
static List<String> parseAcceptEncoding(HttpServletRequest request) {
    String header = request.getHeader("Accept-Encoding");
    if (!StringUtils.hasText(header)) {
        return Collections.emptyList();
    }
    header = header.toLowerCase(Locale.ROOT);
    return Arrays.stream(StringUtils.tokenizeToStringArray(header, ","))
            .map(token -> {
                int index = token.indexOf(';');
                return (index >= 0 ? token.substring(0, index) : token).trim();
            })
            .toList();
}
CachingResourceResolver.getContentCodingKey now builds its key from the token list returned by parseAcceptEncoding, and EncodedResourceResolver.resolveResourceInternal now iterates the same token list and checks full-token membership with contentCodings.contains(acceptedCoding) rather than a substring match. The private getAcceptEncoding helpers are removed. Because both sides now derive their decision from the same parsed tokens, the desynchronization that enabled the poisoning is eliminated. Equivalent changes are applied to the reactive variants that take a ServerWebExchange.
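The post-fix behavior can be sketched as follows (a standalone, simplified replica with no Spring dependency; registered codings assumed to be br and gzip, and resolver-side selection simplified to the first accepted token that is a registered coding). Both sides now consume the same token list, so a token-boundary trick can no longer split them:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Hypothetical replica of the unified post-fix parse: both the cache key and
// the resolver decision derive from one shared token list.
public class UnifiedParseDemo {

    static final List<String> CONTENT_CODINGS = List.of("br", "gzip");

    // Shared parse step: comma-separated tokens, parameters stripped, lowercased.
    static List<String> parseAcceptEncoding(String header) {
        if (header == null || header.isBlank()) {
            return List.of();
        }
        String lower = header.toLowerCase(Locale.ROOT);
        return Arrays.stream(lower.split(","))
                .map(token -> {
                    int index = token.indexOf(';');
                    return (index >= 0 ? token.substring(0, index) : token).trim();
                })
                .toList();
    }

    // Cache-key side: registered codings only, sorted and joined.
    static String cacheKey(String header) {
        return parseAcceptEncoding(header).stream()
                .filter(CONTENT_CODINGS::contains)
                .sorted()
                .collect(Collectors.joining(","));
    }

    // Resolver side: full-token membership instead of a substring match.
    static String selectedCoding(String header) {
        for (String token : parseAcceptEncoding(header)) {
            if (CONTENT_CODINGS.contains(token)) {
                return token;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String header = "zebra, gzip"; // substring "br" inside "zebra" no longer matches
        System.out.println(cacheKey(header));       // gzip
        System.out.println(selectedCoding(header)); // gzip
    }
}
```

On a header like `zebra, gzip`, both sides now agree on gzip, so the cached bytes always match their key.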
On the pre-5.1 Spring MVC line the bug takes a narrower form: CachingResourceResolver.computeKey performs a case-sensitive encoding.contains("gzip") check, while GzipResourceResolver.isGzipAccepted performs a case-insensitive value.toLowerCase().contains("gzip") check. A header such as Accept-Encoding: GZIP matches on the resolver side but not on the cache-key side, causing gzip-encoded bytes to be stored under a cache key that omits the +encoding=gzip suffix. A subsequent benign request whose cache key resolves to the same slot receives gzipped bytes without the Content-Encoding: gzip response header. The NES patch for these older lines aligns the two parsers by having both sides use the same case-insensitive token-based parse.
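The pre-5.1 mismatch reduces to a one-line case-sensitivity difference, illustrated by this sketch (hypothetical class, not the Spring source):

```java
import java.util.Locale;

// Illustrative reduction of the pre-5.1 desync: the cache-key side compared
// the header case-sensitively, while the resolver side lowercased it first.
public class CaseMismatchDemo {

    // Mirrors the pre-fix check in CachingResourceResolver.computeKey.
    static boolean cacheKeySeesGzip(String acceptEncoding) {
        return acceptEncoding.contains("gzip");
    }

    // Mirrors the pre-fix check in GzipResourceResolver.isGzipAccepted.
    static boolean resolverSeesGzip(String acceptEncoding) {
        return acceptEncoding.toLowerCase(Locale.ROOT).contains("gzip");
    }

    public static void main(String[] args) {
        String header = "GZIP";
        System.out.println(cacheKeySeesGzip(header)); // false: key omits +encoding=gzip
        System.out.println(resolverSeesGzip(header)); // true: gzip variant is served
    }
}
```

For Accept-Encoding: GZIP, the resolver serves gzipped bytes while the cache key is built as if no encoding were accepted, which is exactly the poisoning described above.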
Mitigation
Only recent versions of Spring Framework receive community support and updates. Older versions have no publicly available fixes for this vulnerability.
Users of the affected components should apply one of the following mitigations:
- Upgrade to a currently supported version of Spring Framework.
- Leverage a commercial support partner like HeroDevs for post-EOL security support through Never-Ending Support (NES) for Spring Framework.
Credits
- Yuki Matsuhashi (finder)