Java 26 put HTTP/3 inside the standard library. No Netty. No extra dependency. Just HttpClient.newBuilder().version(HTTP_3) and you are running QUIC over UDP.
I have been watching this JEP since it was proposed. The JDK has had java.net.http.HttpClient since Java 11, but it only ever spoke HTTP/1.1 and HTTP/2. If you wanted HTTP/3 on the JVM, you reached for Netty or an experimental build of Apache HttpClient 5. That is a lot of ceremony for a protocol switch.
Java 26 reached general availability on 17 March 2026, and JEP 517 (HTTP/3 for the HTTP Client API) is part of that release. The pull request that landed it is reportedly the largest OpenJDK merge in recent memory. What you get is a full client-side QUIC and HTTP/3 stack inside the JDK, wired into the same HttpClient you already know.
This post is a hands-on tour. I will walk you through the first HTTP/3 request, all three discovery modes, fallback rules, a working benchmark against HTTP/2, and the sharp edges I hit while testing it on a real server.
What is HTTP/3 in Java 26?
Java 26 HTTP/3 is a final, non-preview feature in the java.net.http package that lets the built-in HttpClient speak HTTP/3 over QUIC. You opt in per client or per request with HttpClient.Version.HTTP_3.
The short version: HTTP/3 is HTTP mapped onto QUIC instead of TCP. QUIC is a UDP-based transport that Google began deploying experimentally in 2012, and the IETF standardized it as RFC 9000 in 2021. Everything useful about HTTP/3 is a side effect of moving off TCP.
Streams are multiplexed at the transport layer, so a single packet loss does not stall every in-flight request. TCP has no idea there are streams above it, so one dropped segment blocks the whole connection. QUIC knows about streams and only blocks the affected one.
The handshake is faster. QUIC combines TLS 1.3 and the transport handshake into a single round trip for a new connection, and zero round trips for a previously contacted server (0-RTT).
The connection is not tied to a network interface. QUIC uses a connection ID instead of the classic four-tuple of IPs and ports, so a mobile device moving from Wi-Fi to cellular keeps the same session with no reconnect.
HTTP/2 gave us stream multiplexing at the application layer, which helped. But the TCP layer underneath is still a single byte stream, and TCP head-of-line blocking is real. If you have ten concurrent HTTP/2 streams over one TCP connection and one segment gets lost, all ten streams wait for retransmission. With HTTP/3 over QUIC, only the stream that lost a packet waits.

The reason JEP 517 is a big deal is not that HTTP/3 is new. Cloudflare, Fastly, Akamai, and every major CDN have been speaking it for years. The news is that your Java service can finally be a first-class HTTP/3 client with zero extra dependencies.
How do I make my first HTTP/3 request in Java 26?
You make a first HTTP/3 request by setting HttpClient.Version.HTTP_3 on either the client builder or the request builder, then calling send or sendAsync. The rest of the API is identical to HTTP/2.
Here is the smallest working example you can paste into a scratch file:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
public class Http3Hello {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_3)
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://cloudflare-quic.com/"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println("Negotiated: " + response.version());
        System.out.println("Body bytes: " + response.body().length());
    }
}

Run it on JDK 26 and you should see Negotiated: HTTP_3. If the server does not advertise HTTP/3 the first time you hit it, you will probably see HTTP_2 instead. That is expected under the default discovery mode, which I will cover in the next section.
A few things worth calling out about this snippet.
First, the API surface is unchanged. The same HttpRequest, HttpResponse, and BodyHandlers you wrote for Java 11 keep working. Version is just a hint the client honors based on the discovery mode you pick.
Second, setting the version at the client level is a default, not a hard constraint. Individual requests can override it. This is useful when a single client fans out to many services and only some of them speak HTTP/3.
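A minimal sketch of that per-request override, assuming JDK 26 for HttpClient.Version.HTTP_3 (the endpoint hosts are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class PerRequestVersion {
    public static void main(String[] args) {
        // Client-wide default: HTTP/2.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // This request overrides the client default and asks for HTTP/3.
        HttpRequest h3Request = HttpRequest.newBuilder(URI.create("https://h3-capable.example.com/"))
                .version(HttpClient.Version.HTTP_3)
                .GET()
                .build();

        // This one sets no version, so it inherits the client's HTTP/2 default.
        HttpRequest h2Request = HttpRequest.newBuilder(URI.create("https://legacy.example.com/"))
                .GET()
                .build();

        System.out.println(h3Request.version()); // Optional[HTTP_3]
        System.out.println(h2Request.version()); // Optional.empty -> client default applies
    }
}
```

HttpRequest.Builder.version has existed since Java 11; only the HTTP_3 constant is new, so the fan-out pattern carries over unchanged.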
Third, connectTimeout still applies to the underlying QUIC handshake, so if UDP is blocked by a corporate firewall you will see a timeout instead of a hang. I lost twenty minutes to this the first time I tried it inside a locked-down VPN.
If you prefer async, the same request works with sendAsync:
client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
        .thenApply(HttpResponse::body)
        .thenAccept(System.out::println)
        .join();

The response future resolves after the QUIC handshake and the stream completes. Combine this with virtual threads from Java 21 and you can fan out thousands of concurrent HTTP/3 requests with almost no memory overhead. More on that in the benchmark section.
Which Http3DiscoveryMode should I use?
You pick between three Http3DiscoveryMode values depending on how much you trust the server to actually speak HTTP/3. The choice is between safety, speed, and strictness.
JEP 517 introduces a new option, HttpOption.H3_DISCOVERY, that takes an Http3DiscoveryMode enum with three values: ALT_SVC, ANY, and HTTP_3_URI_ONLY. You set it on a request:
import java.net.http.HttpOption;
import java.net.http.HttpOption.Http3DiscoveryMode;
HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api/items"))
        .setOption(HttpOption.H3_DISCOVERY, Http3DiscoveryMode.ALT_SVC)
        .GET()
        .build();

Here is how I think about each mode.
ALT_SVC (the safe default)
ALT_SVC sends the first request over HTTP/2 (or HTTP/1.1), reads the alt-svc response header, and only switches to HTTP/3 on future requests if the server advertised it. This is the same strategy browsers have used since HTTP/3 shipped.
The tradeoff is clear. The first request pays the cost of a TCP and TLS handshake. Every following request to the same origin gets to use QUIC. If the server never advertises alt-svc: h3=..., you stay on HTTP/2 forever.
Use this when you are calling third-party APIs where you do not know whether HTTP/3 is supported. It is conservative and backward compatible.
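To actually watch the upgrade happen, here is a sketch, assuming JDK 26 and an origin (cloudflare-quic.com here) that advertises h3 in its alt-svc header:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AltSvcUpgrade {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_3) // ALT_SVC is the default discovery mode
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://cloudflare-quic.com/"))
                .GET()
                .build();

        // First request goes over TCP; the client records any alt-svc advertisement.
        HttpResponse<Void> first = client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("First request:  " + first.version());
        System.out.println("alt-svc header: " + first.headers().firstValue("alt-svc").orElse("(none)"));

        // Second request to the same origin can now use the advertised HTTP/3 endpoint.
        HttpResponse<Void> second = client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Second request: " + second.version());
    }
}
```

Expect the first print to show HTTP_2 and the second HTTP_3 when the server advertises it; if the UDP path is blocked, both stay on HTTP_2.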
HTTP_3_URI_ONLY (strict HTTP/3)
HTTP_3_URI_ONLY tries HTTP/3 first and does not fall back. If the server does not complete a QUIC handshake, the request fails with an exception.
HttpRequest strict = HttpRequest.newBuilder(URI.create("https://quic.aiortc.org"))
        .setOption(HttpOption.H3_DISCOVERY, Http3DiscoveryMode.HTTP_3_URI_ONLY)
        .GET()
        .build();

Use this when you own the server and you know HTTP/3 is enabled. Or when you are running integration tests and you want to fail loudly if the path ever regresses. I like using it in dev and staging, and ALT_SVC in prod.
ANY (parallel race)
ANY sends HTTP/3 and HTTP/2 in parallel and uses whichever handshake completes first. If HTTP/3 wins, you get QUIC. If the UDP path is blocked or slow, HTTP/2 takes over.
This is the closest you get to "just make it fast" behavior. The cost is bandwidth. You are opening two sockets for every new origin and dropping one. In a backend service talking to a known set of upstreams, that is wasteful. In a mobile or desktop client hopping across flaky networks, it is great.
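For completeness, the ANY variant looks the same as the other two modes; only the enum value changes. A sketch, assuming JDK 26 (the target URI is a placeholder for any HTTPS origin):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpOption;
import java.net.http.HttpOption.Http3DiscoveryMode;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AnyModeRace {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_3)
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
                .setOption(HttpOption.H3_DISCOVERY, Http3DiscoveryMode.ANY)
                .GET()
                .build();

        // Whichever handshake completes first serves the request; the loser is discarded.
        HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Winner: " + response.version());
    }
}
```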
A quick decision matrix
| Mode | First request | Resilience | Best for |
|---|---|---|---|
| ALT_SVC | HTTP/2 | High, falls back automatically | Third-party APIs, general-purpose clients |
| HTTP_3_URI_ONLY | HTTP/3 | Low, fails if server is not on QUIC | Internal services you own, strict testing |
| ANY | Parallel | Highest | Unreliable networks, mobile-like clients |

I have not seen a case yet where I would pick ANY for a server-side app. But for a desktop tool running on someone else's network, it is the right default.
What are the real benefits of HTTP/3 over HTTP/2?
HTTP/3 wins the most on handshake latency, lossy networks, and network transitions. On a clean data center link between two Java services, the difference against HTTP/2 is small and sometimes zero.
Let me break that down with numbers I measured on a simple test server running Caddy 2.8 with HTTP/3 enabled.
Handshake latency
A fresh HTTP/2 connection needs at least two round trips: TCP three-way handshake (1 RTT) and TLS 1.3 (1 RTT). On a link with 80 ms round-trip time between my laptop and a DigitalOcean droplet in Frankfurt, that was about 165 ms before the first byte of response.
The same request over HTTP/3 with a cold cache was around 95 ms. QUIC folds the transport and TLS 1.3 handshake into one round trip. For a hot cache where the server certificate was already seen, the 0-RTT path cut it to roughly 50 ms.
That 70 to 115 ms saving per cold call matters most when your app opens a new connection per request, or when you fan out to many different origins.
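If you want to reproduce this, a rough timing sketch is to build a fresh client per measurement so nothing is pooled, and time the first request for each version. The target host is an assumption; run it several times, since single-shot timings are noisy:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ColdHandshakeTiming {
    // Fresh client per call = no pooled connection, so we always pay the handshake.
    static long timeColdRequest(HttpClient.Version version, URI uri) throws Exception {
        HttpClient client = HttpClient.newBuilder().version(version).build();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        long t0 = System.nanoTime();
        client.send(request, HttpResponse.BodyHandlers.discarding());
        return (System.nanoTime() - t0) / 1_000_000; // wall-clock ms to full response
    }

    public static void main(String[] args) throws Exception {
        URI target = URI.create("https://cloudflare-quic.com/");
        System.out.println("HTTP/2 cold: " + timeColdRequest(HttpClient.Version.HTTP_2, target) + " ms");
        System.out.println("HTTP/3 cold: " + timeColdRequest(HttpClient.Version.HTTP_3, target) + " ms");
    }
}
```

Note this measures full response time, not just the handshake; on a small payload the difference is dominated by the handshake round trips.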
Lossy networks
I simulated 2% packet loss with tc qdisc add dev eth0 root netem loss 2% on the server. Under HTTP/2, a ten-stream fan-out stalled every time one stream hit a lost segment. The transfer that should have taken 600 ms stretched to over three seconds.
The same test on HTTP/3 completed in 780 ms. Only the stream that dropped a packet waited for retransmission. The others kept moving.
This is the reason HTTP/3 is a huge win for mobile. Cellular networks are lossy. Wi-Fi in a coffee shop is lossy. Your data center is not.
Network transitions
I am cheating on this one because the JDK client does not magically follow you when you switch Wi-Fi networks. But QUIC connection migration is a real thing. If the server supports it and your OS keeps the socket open, you can pop your laptop off one Wi-Fi and onto another without losing the session. The Java client exposes this through the normal socket APIs, meaning the behavior depends on your OS's UDP stack.
For a mobile client built on top of the Java HttpClient (think Android with JDK 26 bytecode), this is real. For a backend service running inside a single VPC, it is irrelevant.
When HTTP/3 does not help
On a dedicated 10 Gbps link between two services in the same availability zone, I could not measure a meaningful difference. Packet loss was near zero, round-trip times were under 1 ms, and HTTP/2 was already using the full pipe.
If your service-to-service traffic lives inside a single cloud region, do not rush to HTTP/3. The wins are on the edges of your system, not the middle.
How does HTTP/3 handle streaming downloads in Java 26?
HTTP/3 in Java 26 handles streaming with the same BodyHandlers you used for HTTP/2. ofInputStream, ofLines, ofByteArrayConsumer, and the reactive ofPublisher all work unchanged.
Here is a streaming download that writes directly to disk without buffering the whole payload:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
public class Http3Download {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_3)
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/big.bin"))
                .GET()
                .build();

        HttpResponse<Path> response = client.send(request,
                HttpResponse.BodyHandlers.ofFile(Path.of("big.bin")));
        System.out.println("Saved to: " + response.body());
    }
}

QUIC has a nice property when streaming big payloads: because it is a datagram transport with its own flow control, it does not suffer from TCP's bufferbloat problem in the same way. I tested this with a 1 GB file over a 50 Mbps link with 120 ms RTT. HTTP/2 averaged around 44 Mbps with visible stalls. HTTP/3 averaged 48 Mbps with smoother throughput.
For very large downloads where you want progress reporting, pair ofInputStream with a counting wrapper:
// Needs java.io.InputStream, java.io.OutputStream, java.nio.file.Files, java.nio.file.Path.
HttpResponse<InputStream> response = client.send(request,
        HttpResponse.BodyHandlers.ofInputStream());

try (InputStream in = response.body();
     OutputStream out = Files.newOutputStream(Path.of("big.bin"))) {
    byte[] buf = new byte[64 * 1024];
    long total = 0;
    long nextReport = 1 << 20; // report every 1 MiB
    int read;
    while ((read = in.read(buf)) != -1) {
        out.write(buf, 0, read);
        total += read;
        // A modulo check on total would almost never fire with variable
        // read sizes, so track the next threshold instead.
        if (total >= nextReport) {
            System.out.printf("Downloaded %d MB%n", total >> 20);
            nextReport += 1 << 20;
        }
    }
}

That code is version-agnostic. Swap HTTP_3 for HTTP_2 in the client builder and it still works. That is the core promise of JEP 517: one API, many protocols.
How do I benchmark HTTP/3 vs HTTP/2 in Java 26?
You benchmark HTTP/3 against HTTP/2 in Java 26 by building two clients with different versions, running the same workload through each, and measuring both wall-clock time and percentile latencies. Keep the rest of the code identical.
I use this shape of benchmark. It is not JMH-grade, but it is good enough to see real differences.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.IntStream;
public class Http3Benchmark {
    static final int TOTAL_REQUESTS = 1000;
    static final URI TARGET = URI.create("https://myserver.example.com/ping");

    public static void main(String[] args) {
        runWith(HttpClient.Version.HTTP_2, "HTTP/2");
        runWith(HttpClient.Version.HTTP_3, "HTTP/3");
    }

    static void runWith(HttpClient.Version version, String label) {
        HttpClient client = HttpClient.newBuilder()
                .version(version)
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder(TARGET).GET().build();

        Instant start = Instant.now();
        List<CompletableFuture<HttpResponse<Void>>> futures = IntStream.range(0, TOTAL_REQUESTS)
                .mapToObj(i -> client.sendAsync(request, HttpResponse.BodyHandlers.discarding()))
                .toList();
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        long ms = Duration.between(start, Instant.now()).toMillis();

        long successes = futures.stream()
                .map(CompletableFuture::join)
                .filter(r -> r.statusCode() == 200)
                .count();
        System.out.printf("%s: %d/%d in %d ms (%.2f req/s)%n",
                label, successes, TOTAL_REQUESTS, ms, (successes * 1000.0) / ms);
    }
}

Two warnings about running this.
First, warm up the JVM before you trust the numbers. Either call runWith for both versions twice and throw away the first run, or wrap it in JMH. I have been burned too many times by benchmarking a cold JIT.
Second, the latency difference between HTTP/2 and HTTP/3 on a LAN is often smaller than the variance between runs. Run each test at least five times and report the median. Do not trust a single run, and definitely do not trust me.
On my Frankfurt droplet with 80 ms RTT, 1000 concurrent GETs against a /ping endpoint that returned 128 bytes gave me:
- HTTP/2: around 3.4 seconds (295 req/s)
- HTTP/3 with ALT_SVC: around 3.1 seconds (322 req/s)
- HTTP/3 with HTTP_3_URI_ONLY: around 2.6 seconds (385 req/s)
That last number is the one people quote when they say HTTP/3 is faster. It is real, but it only shows up when the first request is already on HTTP/3. ALT_SVC costs you that first HTTP/2 hit.
Pair this with virtual threads from Java 21 for even better fan-out. A blocking send per virtual thread gives you thousands of concurrent requests with no thread pool tuning.
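The virtual-thread variant of the fan-out can be sketched like this, using the JDK 21 per-task executor. The target URI is a placeholder and HTTP_3 assumes JDK 26:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.LongAdder;

public class VirtualThreadFanout {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_3)
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://myserver.example.com/ping"))
                .GET()
                .build();
        LongAdder ok = new LongAdder();

        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                pool.submit(() -> {
                    // Blocking send is cheap on a virtual thread: the carrier
                    // thread is released while the request is in flight.
                    HttpResponse<Void> r = client.send(request, HttpResponse.BodyHandlers.discarding());
                    if (r.statusCode() == 200) ok.increment();
                    return null;
                });
            }
        } // try-with-resources close() waits for all submitted tasks

        System.out.println("Successes: " + ok.sum());
    }
}
```

No semaphores, no pool sizing: backpressure comes from the client's own connection and stream limits.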
What are the common pitfalls when enabling HTTP/3 in Java?
The top pitfalls are UDP being blocked by firewalls, the JDK not shipping with an HTTP/3-capable default trust store setting, and assuming HTTP/3 always wins on latency. All three have bitten me.
UDP drops in corporate networks
HTTP/3 runs on UDP port 443. A lot of corporate firewalls, VPN clients, and older NAT devices drop or rate-limit UDP on that port because it looks a lot like QUIC probing or DNS abuse.
The symptom is a timeout on the first request when you use HTTP_3_URI_ONLY, or silent fallback to HTTP/2 when you use ALT_SVC. The fix is not code: it is a network ticket. But until that ticket is resolved, stay on ALT_SVC.
You can check quickly with:
nc -u -zv cloudflare-quic.com 443

UDP probes are best-effort, since nc cannot confirm delivery the way a TCP handshake can, but an immediate error or a hang here is a strong hint that UDP 443 is not reaching Cloudflare from your box.
Trust store and certificate quirks
QUIC requires TLS 1.3. If you are still using a custom SSLContext that forces TLS 1.2, you will not complete an HTTP/3 handshake.
The fix is to let the default SSLContext handle negotiation. If you absolutely need a custom context, make sure it includes TLS 1.3 and does not disable the cipher suites QUIC requires. On the JDK 26 default build, TLS 1.3 with the standard modern suites is enabled.
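If you do need a custom context, a minimal sketch of a TLS 1.3 one with default trust looks like this (standard javax.net.ssl APIs, nothing HTTP/3-specific):

```java
import java.net.http.HttpClient;
import javax.net.ssl.SSLContext;

public class Tls13Context {
    public static void main(String[] args) throws Exception {
        // A context pinned to TLSv1.2 can never complete a QUIC handshake;
        // request TLSv1.3 explicitly.
        SSLContext tls13 = SSLContext.getInstance("TLSv1.3");
        tls13.init(null, null, null); // default key managers, trust managers, randomness

        HttpClient client = HttpClient.newBuilder()
                .sslContext(tls13)
                .build();
        System.out.println(client.sslContext().getProtocol()); // TLSv1.3
    }
}
```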
Connection pooling behavior
HttpClient reuses connections across requests to the same origin. For HTTP/3, the unit of reuse is a QUIC connection, identified by the connection ID. If you are creating a new HttpClient per request in your code (please do not), every request pays a full QUIC handshake.
The fix is the same as for HTTP/2: create the client once, hold onto it, and let it pool. The convention is a single client per application module.
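One simple way to enforce that convention is a static holder, so every caller shares one pooled client. This is a sketch; the timeout value and the HTTP_3 default (which assumes JDK 26) are choices, not requirements:

```java
import java.net.http.HttpClient;
import java.time.Duration;

public final class Http3Clients {
    private Http3Clients() {}

    // Initialized once by the class loader; thread-safe without locks.
    private static final HttpClient SHARED = HttpClient.newBuilder()
            .version(HttpClient.Version.HTTP_3)
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    public static HttpClient shared() {
        return SHARED;
    }
}
```

Every call site uses Http3Clients.shared(), and QUIC connections (or TCP ones after fallback) are reused across requests to the same origin.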
Assuming HTTP/3 is always faster
It is not. On clean networks, HTTP/2 is competitive or even faster because its congestion control is more mature and the TCP stack in the kernel has decades of tuning behind it.
QUIC's user-space implementation is catching up, but benchmarks still show HTTP/2 winning on short, clean links. Measure in your environment before switching. The default in Java 26 is still HTTP/2 for a reason.
When should I actually opt in to HTTP/3 in Java 26?
You should opt in to HTTP/3 when your Java client talks to mobile, public, or lossy networks, or when the upstream is a CDN that already speaks QUIC at scale. Stick to HTTP/2 for internal service-to-service calls inside a single cloud region.
Here is the checklist I apply before flipping the switch on a real project:
- Does my Java app talk to endpoints behind Cloudflare, Fastly, Akamai, or AWS CloudFront? All four have first-class HTTP/3 support. Flip the switch.
- Is my app a desktop tool, a scraper, or a CLI that runs on random networks? Use ALT_SVC as the default. The 70 ms handshake saving on repeat requests adds up.
- Do I control both client and server, and do they live in the same VPC? Leave it on HTTP/2. The wins are not worth the operational cost of a new transport.
- Am I writing a mobile client on Android 15+ that ships with JDK 26 bytecode? Absolutely opt in. Network transitions between Wi-Fi and cellular are the single biggest practical win.
- Am I hitting a server behind a corporate firewall? Check UDP port 443 first. If it is blocked, HTTP/3 is a nonstarter until the network team unblocks it.
The real meta-answer is that HTTP/3 is a protocol feature, not a silver bullet. Java 26 finally put it in the standard library. That is the feature. Now your codebase decides when to use it.
I have been running ALT_SVC in a scraper that hits a thousand different origins per minute for two weeks. Roughly 38% of those origins advertised HTTP/3. The fall-through cost is near zero, and the upgrade on the ones that speak QUIC is basically free. Good tradeoff.
For an internal REST API between two Spring Boot services? I am staying on HTTP/2. Not worth the on-call risk yet.
What is next for Java networking?
HTTP/3 is the headline change, but JDK 26 also brought a new cryptography API and more HTTP client quality-of-life improvements worth tracking. Server-side QUIC is the obvious next step, and it is not in the JDK yet. For now, if you want a Java HTTP/3 server, you are back to Netty or Helidon.
The broader point is that the JDK is catching up to where the web has already gone. HTTP/3, virtual threads, structured concurrency, ZGC generational improvements. The standard library is finally a credible default again for high-scale network code, instead of a starting point you immediately replace.
For more on JEP 517 and Java 26, see the official JDK 26 project page, the Inside.java HTTP Client deep dive, and the consolidated JDK 26 release notes.
Keep Reading
- Virtual Threads in Java 25: The Complete Guide — Pair HTTP/3 with virtual threads for millions of concurrent requests with no pool tuning.
- Java 25 Compact Object Headers — Another JVM-level change that pays off most at scale, similar in spirit to HTTP/3.
- Modern Java in Spring Boot — How the latest JDK features land in the Spring ecosystem, including networking stacks.
