
HTTP/2 & HTTP/3: The Speed Revolution
Text to Binary (HTTP/2), TCP to UDP (HTTP/3). From single-file queueing to parallel processing. Google's QUIC protocol story.


While improving website performance, I saw recommendations saying "Enable HTTP/2 for faster speed." I thought, "I already use HTTP, what's different about HTTP/2?" I almost moved on, but then I opened Chrome DevTools.
In the Network tab, I watched the waterfall chart. Twenty files were downloading sequentially, one after another. It looked like a single bank teller serving 20 customers in a queue. "Why are 20 files queuing up to download? Can't they download simultaneously?" This became my starting point.
For the first time, I visually saw how browsers actually load web pages. I wondered why this waterfall chart stretched out like an actual waterfall. Later I learned this was HTTP/1.1's fundamental limitation.
I didn't expect HTTP version upgrades to be this complex.
Initially, I thought "newer version = faster," but the more I dug, the clearer it became that each version solved a different problem. Still, it didn't feel real; just reading "theoretically faster" didn't make it click.
Then I heard this analogy and everything clicked:
"HTTP/1.1: 1-lane highway. One car (file) at a time. Slow car ahead = everyone behind slows. (Head-of-Line Blocking)
HTTP/2: Multi-lane highway (Multiplexing). One road (TCP connection), multiple lanes. HTML, CSS, JS can drive simultaneously.
HTTP/3: Tunnel (UDP) highway. Construction on one lane doesn't stop others. Lose one packet ≠ everything stops."
That was it! HTTP evolution history is about solving "how to transfer multiple files faster and more efficiently." After this analogy, I understood why HTTP/2 introduced Multiplexing and why HTTP/3 chose UDP.
HTTP/1.1's biggest problem: handling only one request at a time. Like a bank with only one teller open.
[Request Order]
1. index.html
2. style.css
3. script.js
4. image.png
[HTTP/1.1]
index.html (2s) ━━━━━━
style.css (1s) ━━
script.js (1s) ━━
image.png (3s) ━━━━━━
Total: 7s
Files download sequentially, one by one. Slow file ahead? Everyone waits. This is Head-of-Line Blocking. The request at the head of the line blocks all requests behind it.
This explained why my Chrome DevTools waterfall stretched so long.
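The arithmetic behind the waterfall is simple: sequential time is the sum of all file times, while parallel time is just the longest one. A quick sketch (the timings are the illustrative numbers from the diagram above):
// Illustrative timings from the diagram above (seconds)
const files = [
  { name: 'index.html', seconds: 2 },
  { name: 'style.css', seconds: 1 },
  { name: 'script.js', seconds: 1 },
  { name: 'image.png', seconds: 3 },
];

// HTTP/1.1 on one connection: downloads queue up, so times add
const sequential = files.reduce((sum, f) => sum + f.seconds, 0); // 7

// HTTP/2 multiplexing: files share the connection, so the slowest wins
const parallel = Math.max(...files.map((f) => f.seconds)); // 3

console.log({ sequential, parallel }); // { sequential: 7, parallel: 3 }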
Developers used all kinds of tricks to overcome this limitation. Looking back, it's funny, but these were "Best Practices" then.
Browsers typically allow 6 TCP connections per domain. So developers split resources across multiple domains.
Connection 1 (example.com): index.html
Connection 2 (static1.example.com): style.css
Connection 3 (static2.example.com): script.js
Connection 4 (cdn1.example.com): image1.png
Connection 5 (cdn2.example.com): image2.png
Connection 6 (cdn3.example.com): image3.png
With 3 domains, you could use up to 18 connections. But this was inefficient: every extra domain cost a DNS lookup, and every extra connection cost its own TCP handshake, TLS negotiation, and cold-start congestion window.
Loading 50 small icons required 50 HTTP requests. So developers combined 50 icons into one large image and used CSS background-position to slice it.
.icon-home {
  background: url('sprites.png') 0 0;
}
.icon-user {
  background: url('sprites.png') -20px 0;
}
Maintenance hell, but we had no choice.
The third trick: bundling 10 CSS files and 15 JS files into a single bundle.css and bundle.js. This is one reason bundlers like Webpack and Rollup emerged.
The problem: changing one file meant re-downloading the entire bundle. Cache efficiency was terrible.
All these workarounds became unnecessary after HTTP/2. I now understand why old development practices were so complex. Trying to work around protocol limitations at the application level exploded complexity.
HTTP/2 emerged in 2015, based on Google's SPDY protocol. The core goal: "Solve everything with one connection."
HTTP/2's biggest innovation. One TCP connection handles multiple requests and responses simultaneously.
[HTTP/2 Multiplexing]
One TCP connection:
index.html ━━━━━━
style.css ━━
script.js ━━
image.png ━━━━━━
Total: 3s (longest file)
How is this possible? By introducing Streams and Frames.
HTTP/2 splits data into small chunks (Frames). Each request/response is managed as a Stream.
[One TCP Connection]
┌─────────────────────────────────────┐
│ Frame 1 (Stream 1: index.html) │
│ Frame 2 (Stream 2: style.css) │
│ Frame 3 (Stream 1: index.html) │
│ Frame 4 (Stream 3: script.js) │
│ Frame 5 (Stream 2: style.css) │
│ Frame 6 (Stream 1: index.html) │
└─────────────────────────────────────┘
Server sends frames in alternating order. Client reassembles them using Stream IDs. Like tearing a letter into pieces where each piece is numbered for reassembly.
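To make reassembly concrete, here's a toy demultiplexer (not a real HTTP/2 implementation): frames arrive interleaved, and a map keyed by Stream ID collects each stream's chunks in arrival order.
// Toy sketch: reassemble interleaved frames by Stream ID
const frames = [
  { streamId: 1, chunk: '<html>' },
  { streamId: 2, chunk: 'body {' },
  { streamId: 1, chunk: '<body>' },
  { streamId: 3, chunk: 'console.log("hi");' },
  { streamId: 2, chunk: ' color: red; }' },
  { streamId: 1, chunk: '</body></html>' },
];

const streams = new Map();
for (const { streamId, chunk } of frames) {
  if (!streams.has(streamId)) streams.set(streamId, []);
  streams.get(streamId).push(chunk);
}

for (const [id, chunks] of streams) {
  console.log(`Stream ${id}:`, chunks.join(''));
}
// Stream 1: <html><body></body></html>
// Stream 2: body { color: red; }
// Stream 3: console.log("hi");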
Not all streams are equally important. HTML should load quickly; banner ad images can wait.
HTTP/2 allows setting stream priorities:
[Stream Priority]
Stream 1 (index.html) : Priority 256 (highest)
Stream 2 (style.css) : Priority 220
Stream 3 (script.js) : Priority 220
Stream 4 (ad-banner.jpg) : Priority 2 (lowest)
When browser sets priorities, server sends important resources first.
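Application code can't set HTTP/2 stream weights directly, but browsers derive them from resource importance, and the Fetch Priority API lets you nudge that. A sketch (the URLs are illustrative; browser support varies, Chromium-based browsers being the main implementers):
// Hint request importance to the browser, which maps the hints
// onto stream priorities when scheduling requests on the connection
const critical = await fetch('/api/critical-data', { priority: 'high' });
const banner = await fetch('/ad-banner.jpg', { priority: 'low' });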
Each stream has independent flow control. While a slow client processes one stream, other streams continue receiving data.
Stream 1: [Window Size: 65535 bytes] ✓ Ready
Stream 2: [Window Size: 0 bytes] ✗ Still processing
Stream 3: [Window Size: 32768 bytes] ✓ Half ready
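A toy model of per-stream flow control (numbers purely illustrative): each stream tracks its own window, the sender only sends while that window is positive, and a WINDOW_UPDATE replenishes it.
// Toy per-stream flow control: each stream has its own send window
class StreamWindow {
  constructor(id, size = 65535) {
    this.id = id;
    this.window = size;
  }
  send(bytes) {
    if (this.window < bytes) return false; // this stream waits...
    this.window -= bytes;                  // ...others are unaffected
    return true;
  }
  windowUpdate(bytes) {
    this.window += bytes; // receiver processed data, replenish
  }
}

const s1 = new StreamWindow(1);
const s2 = new StreamWindow(2, 0); // slow client: window exhausted

console.log(s1.send(16384)); // true  → stream 1 keeps flowing
console.log(s2.send(16384)); // false → only stream 2 stalls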
All this happens in one TCP connection. I now understand why Domain Sharding became unnecessary.
HTTP/1.1 is text-based:
GET /index.html HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)
Accept: text/html,application/xhtml+xml
Accept-Encoding: gzip, deflate, br
Human-readable, but inefficient for computers to parse. Need to find line breaks (\r\n), separate header names and values, handle case sensitivity.
HTTP/2 switched to binary:
[HTTP/2 Binary Frame]
+-----------------------------------------------+
| Length (24) |
+---------------+---------------+---------------+
| Type (8) | Flags (8) |
+-+-------------+---------------+-------------------------------+
|R| Stream Identifier (31) |
+=+=============================================================+
| Frame Payload (0...) ...
+---------------------------------------------------------------+
Initially I worried, "won't debugging be hard if humans can't read it?" But Chrome DevTools decodes the frames for you, so in practice it's not a problem.
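Since the header layout above is fixed at 9 bytes, parsing it yourself takes just a few reads. A minimal Node.js sketch of the frame header only (per RFC 7540; real libraries handle much more):
// Parse the fixed 9-byte HTTP/2 frame header (RFC 7540 §4.1)
function parseFrameHeader(buf) {
  return {
    length: buf.readUIntBE(0, 3),   // 24-bit payload length
    type: buf.readUInt8(3),         // frame type (0x0 = DATA, 0x1 = HEADERS, ...)
    flags: buf.readUInt8(4),        // 8-bit flags
    streamId: buf.readUInt32BE(5) & 0x7fffffff, // drop the reserved bit
  };
}

// Example: a HEADERS frame with a 16-byte payload on stream 1
const header = Buffer.from([0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01]);
console.log(parseFrameHeader(header));
// { length: 16, type: 1, flags: 4, streamId: 1 }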
Server Push was theoretically very attractive.
[Client]
GET /index.html
[Server]
Here's index.html
(Oh, this page needs style.css too)
I'll push style.css too! (preemptively)
The server sends resources before the client requests them. A beautiful idea for saving round trips.
Reality was different. Chrome completely removed Server Push support in 2022. Why?
Cache Problem: Server doesn't know client's cache state. Sending style.css again when it's already cached wastes bandwidth.
Priority Conflict: Resources server considers "important" might be low priority for client.
Complexity: Logic to decide which resources to push is complex. Can actually hurt performance.
103 Early Hints Works Better: Instead of pushing, server gives "you'll need these resources" hints. Browser requests them, checking cache and controlling priority.
HTTP/1.1 103 Early Hints
Link: </style.css>; rel=preload; as=style
Link: </script.js>; rel=preload; as=script
HTTP/1.1 200 OK
Content-Type: text/html
...
I accepted this: Good ideas don't always succeed in reality. Server Push was HTTP/2's failed experiment, but led to better alternatives (Early Hints).
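For what it's worth, Node.js (18.11+) can send Early Hints natively via response.writeEarlyHints. A minimal sketch (paths are the illustrative ones from above):
const http = require('http');

http.createServer((req, res) => {
  // Send 103 Early Hints so the browser can start fetching
  // these resources while we build the actual response
  res.writeEarlyHints({
    link: [
      '</style.css>; rel=preload; as=style',
      '</script.js>; rel=preload; as=script',
    ],
  });

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<html><head><link rel="stylesheet" href="/style.css"></head></html>');
}).listen(8080);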
HTTP headers repeat with every request. Especially with large cookies, headers can be several KB.
GET /page1.html
Host: example.com
User-Agent: Mozilla/5.0...
Cookie: session=abc123; user_id=456; preferences=...
Accept-Encoding: gzip, deflate, br
GET /page2.html
Host: example.com
User-Agent: Mozilla/5.0...
Cookie: session=abc123; user_id=456; preferences=...
Accept-Encoding: gzip, deflate, br
Nearly identical headers repeated.
HPACK uses Static Table + Dynamic Table for header compression.
Static Table: Predefined table of common headers
Index | Header Name | Header Value
------|-------------------|-------------
1 | :authority |
2 | :method | GET
3 | :method | POST
4 | :path | /
...
16 | accept-encoding | gzip, deflate
...
Dynamic Table: Stores headers encountered during connection
First request:
Host: example.com → Stored in Dynamic Table (Index 62)
User-Agent: Mozilla/5.0... → Stored (Index 63)
Second request:
Host: example.com → "Use Index 62" (2 bytes done!)
User-Agent: Mozilla/5.0... → "Use Index 63"
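A toy sketch of the table mechanics (not real HPACK, which also Huffman-codes literals and evicts old entries): the first time a header appears it's sent literally and indexed; afterwards only the index travels.
// Toy HPACK-style indexing (real HPACK adds Huffman coding and eviction)
class HeaderTable {
  constructor() {
    this.table = new Map(); // "name: value" → index
    this.nextIndex = 62;    // dynamic table starts after the 61 static entries
  }
  encode(name, value) {
    const key = `${name}: ${value}`;
    if (this.table.has(key)) {
      return `index ${this.table.get(key)}`; // a byte or two on the wire
    }
    this.table.set(key, this.nextIndex++);
    return `literal "${key}" (now index ${this.table.get(key)})`;
  }
}

const enc = new HeaderTable();
console.log(enc.encode('host', 'example.com'));   // literal ... (now index 62)
console.log(enc.encode('user-agent', 'Mozilla')); // literal ... (now index 63)
console.log(enc.encode('host', 'example.com'));   // index 62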
Actual compression effect:
[Before compression]
GET /api/users HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)...
Cookie: sessionId=abc123def456; userId=789; preferences=dark_mode,lang_ko
Accept: application/json
Accept-Encoding: gzip, deflate, br
→ ~250 bytes
[After HPACK]
82 86 84 41 8c f1 e3 c2 e5 f2 3a 6b a0 ab 90 f4 ff
→ ~30 bytes (88% reduction!)
I now understand why HTTP/2 is especially effective in mobile environments. Header compression saves significant bandwidth.
HTTP/2 seemed perfect, so why HTTP/3? HTTP/2 had a fundamental problem it couldn't solve: TCP itself.
HTTP/2 multiplexes at application level but still runs over TCP. TCP guarantees packet order, so one lost packet blocks all Streams.
[HTTP/2 over TCP]
Stream 1: HTML ━━━━━━
Stream 2: CSS ━━
Stream 3: JS ━━
Stream 4: Image ━━━━━━
[TCP Packet Level]
Packet 1: HTML chunk 1 ✓
Packet 2: CSS chunk 1 ✓
Packet 3: HTML chunk 2 ✗ Lost!
Packet 4: JS chunk 1 ✓ (arrived but waiting)
Packet 5: Image chunk 1 ✓ (arrived but waiting)
→ Packets 4, 5 blocked until Packet 3 retransmitted
HTTP/2 separated Streams, but TCP sees everything as one. Stream 1's lost packet blocks Streams 2, 3, 4. This is TCP's Head-of-Line Blocking.
TCP + TLS connection setup:
[TCP 3-way Handshake]
Client → Server: SYN
Server → Client: SYN-ACK
Client → Server: ACK
→ 1 RTT (the final ACK can carry the first TLS bytes)
[TLS 1.2 Handshake]
Client → Server: ClientHello
Server → Client: ServerHello, Certificate, ServerHelloDone
Client → Server: ClientKeyExchange, ChangeCipherSpec, Finished
Server → Client: ChangeCipherSpec, Finished
→ 2 RTT
Total: 3 RTT (~300ms @ 100ms latency)
Need to wait 300ms just to view one page. On mobile networks with higher latency, can exceed 500ms.
TCP identifies connections by (source IP, source port, dest IP, dest port). IP change = connection drops.
[Scenario: Watching YouTube on subway]
WiFi (192.168.1.100) → Video streaming
↓ Enter tunnel, WiFi drops
LTE (10.20.30.40) → IP changed!
→ TCP connection dropped → Reconnect → Buffering...
This is why YouTube buffers when entering subway tunnels.
Google boldly chose UDP. "But UDP is unreliable?" True. So they implemented reliability on top of UDP.
No OS kernel modification needed: TCP is implemented in OS kernel, requiring OS updates to modify. UDP can be implemented at application level.
No middlebox interference: Middleboxes like firewalls and NAT analyze and modify TCP packets. UDP is simpler with less interference.
Faster evolution: Protocol improvements only need application updates.
UDP does almost nothing. Just sends packets, no loss recovery, order guarantee, or congestion control. QUIC implemented all of this.
[QUIC Stack]
┌─────────────────────────────────┐
│ HTTP/3 (Application Layer) │
├─────────────────────────────────┤
│ QUIC (Transport Layer) │
│ - Reliability (retransmission) │
│ - Ordering (per Stream) │
│ - Congestion control │
│ - Encryption (TLS 1.3 built-in)│
├─────────────────────────────────┤
│ UDP (Simple packet delivery) │
└─────────────────────────────────┘
QUIC has built-in TLS 1.3 and supports 0-RTT connections.
[First connection - 1-RTT]
Client → Server: ClientHello (crypto negotiation)
Server → Client: ServerHello + encrypted response
→ 1 RTT
[Reconnection - 0-RTT]
Client → Server: Previous session ticket + encrypted HTTP request
Server → Client: Encrypted HTTP response
→ 0 RTT! Data in first packet
On reconnection, requests can be sent immediately. RTT completely eliminated.
But 0-RTT has a critical security issue: Replay Attacks.
[Replay Attack Scenario]
1. Alice → Server: "Transfer $100 from A to B" (0-RTT)
2. Attacker copies this packet
3. Attacker → Server: (resend same packet)
4. Server: "OK, $100 transferred" (again!)
→ $200 transferred
0-RTT packets are encrypted, but attackers can retransmit without knowing contents.
QUIC uses multiple defense mechanisms, and servers typically restrict what 0-RTT data is allowed to do. Node.js doesn't ship a stable HTTP/3 server API yet, so the snippet below is a hypothetical sketch of such a policy (the http3 module and its option names are illustrative, not a real API):
// Hypothetical API: Node.js has no stable built-in HTTP/3 server yet
const http3Server = require('http3'); // illustrative module name
http3Server.createServer({
  allowEarlyData: true,
  maxEarlyData: 16384, // cap the bytes accepted as 0-RTT data
  earlyDataCallback: (req) => {
    // Only allow idempotent methods in 0-RTT; anything with side
    // effects must wait for the full 1-RTT handshake
    return req.method === 'GET' || req.method === 'HEAD';
  },
});
Summary: 0-RTT is a performance vs security tradeoff. Send safe requests (idempotent) via 0-RTT, important requests (transfers, payments) via 1-RTT.
Unlike TCP, QUIC provides independent ordering per Stream.
[QUIC Stream Independence]
Stream 1: HTML ━━━━━━ ✓
Stream 2: CSS ━━ ✓
Stream 3: JS ✗ Packet loss! → Retransmitting
Stream 4: Image ━━━━━━ ✓
→ Only Stream 3 stalls, Streams 1, 2, 4 continue
Stream 3's packet loss doesn't affect other streams. Completely solved TCP's Head-of-Line Blocking.
QUIC identifies connections by Connection ID. Even if IP or port changes, same Connection ID = maintained connection.
[QUIC Connection Migration]
WiFi (IP: 192.168.1.100, Connection ID: 0x1a2b3c4d)
→ Stream 1: Downloading video...
[WiFi drops, switch to LTE]
LTE (IP: 10.20.30.40, Connection ID: 0x1a2b3c4d)
→ Same Connection ID
→ Stream 1: Continue download (no interruption!)
Real scenario: Watching YouTube on smartphone in subway
[Traditional HTTP/2 over TCP]
Station (WiFi) → Video streaming
Enter tunnel → WiFi drops → TCP connection drops
In tunnel (LTE) → Reconnect (3 RTT) → Buffer 3s
→ User: "Why does it keep cutting!"
[HTTP/3 over QUIC]
Station (WiFi) → Video streaming
Enter tunnel → WiFi drops → Connection ID maintained
In tunnel (LTE) → Instantly resume
→ User: "Wait, it didn't drop?"
I was genuinely amazed when I understood this. "Connection maintained during network switching" is actually possible. Perfect protocol for the mobile era.
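A toy sketch of why the Connection ID survives an IP change while TCP's 4-tuple doesn't (purely illustrative lookup logic, using the addresses from the scenario above):
// TCP finds a connection by the 4-tuple; QUIC by Connection ID
const tcpConnections = new Map();  // "srcIP:srcPort→dstIP:dstPort" → state
const quicConnections = new Map(); // connectionId → state

const tcpKey = (p) => `${p.srcIP}:${p.srcPort}→${p.dstIP}:${p.dstPort}`;

// Established on WiFi
tcpConnections.set(
  tcpKey({ srcIP: '192.168.1.100', srcPort: 51000, dstIP: '1.2.3.4', dstPort: 443 }),
  'streaming'
);
quicConnections.set('0x1a2b3c4d', 'streaming');

// Phone switches to LTE: the source IP changes
const pktAfterSwitch = { srcIP: '10.20.30.40', srcPort: 51000, dstIP: '1.2.3.4', dstPort: 443 };
console.log(tcpConnections.get(tcpKey(pktAfterSwitch))); // undefined → reconnect
console.log(quicConnections.get('0x1a2b3c4d'));          // 'streaming' → continues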
First, let's check which HTTP version your site uses. In Chrome DevTools' Network tab, right-click the column header and enable the Protocol column:
Name         Status   Type         Protocol
index.html   200      document     h2
style.css    200      stylesheet   h2
script.js    200      script       h2
image.png    200      png          h2
h2 is HTTP/2, h3 is HTTP/3.
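You can also check programmatically from the page itself: PerformanceResourceTiming exposes the negotiated protocol as nextHopProtocol. Paste this into the DevTools console on any page:
// Group the page's resources by negotiated protocol
const byProtocol = {};
for (const entry of performance.getEntriesByType('resource')) {
  const proto = entry.nextHopProtocol || 'unknown'; // 'http/1.1', 'h2', 'h3'
  (byProtocol[proto] ||= []).push(entry.name);
}
console.table(
  Object.entries(byProtocol).map(([proto, urls]) => ({ proto, count: urls.length }))
);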
# Test HTTP/2
curl -I --http2 https://example.com
# Test HTTP/3 (curl 7.72.0+)
curl -I --http3 https://example.com
# Detailed info
curl -I --http2 -v https://example.com 2>&1 | grep "ALPN"
# ALPN: server accepted h2 → HTTP/2
# ALPN: server accepted h3 → HTTP/3
My site uses Nginx. Enabling HTTP/2 takes one line.
server {
    listen 443 ssl http2;  # Enable HTTP/2
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # HTTP/2 Server Push (optional, not recommended)
    # http2_push /style.css;
    # http2_push /script.js;

    location / {
        root /var/www/html;
        index index.html;
    }
}
Note: http2_push is a failed feature as explained above. Don't use it.
After configuration, restart:
sudo nginx -t # Validate config
sudo systemctl reload nginx
Nginx 1.25.0+ experimentally supports HTTP/3. Requires --with-http_v3_module compile option.
server {
    listen 443 ssl http2;
    listen 443 quic reuseport;  # Enable HTTP/3
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Advertise HTTP/3 support (Alt-Svc header)
    add_header Alt-Svc 'h3=":443"; ma=86400';

    location / {
        root /var/www/html;
        index index.html;
    }
}
The Alt-Svc header tells the browser "this server also supports HTTP/3". The browser then tries HTTP/3 on its next request.
My site uses Cloudflare, which supports HTTP/3 automatically; in the dashboard it's the "HTTP/3 (with QUIC)" toggle under the Network tab. No origin-server configuration needed!
curl -I --http3 https://codemapo.com
# HTTP/3 200 ✓
You can also run an HTTP/2 server directly in Node.js with the built-in http2 module:
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('/path/to/key.pem'),
  cert: fs.readFileSync('/path/to/cert.pem'),
});

server.on('stream', (stream, headers) => {
  const path = headers[':path'];
  console.log(`Stream ID ${stream.id}: ${path}`);

  if (path === '/') {
    stream.respond({
      'content-type': 'text/html',
      ':status': 200,
    });
    stream.end('<h1>HTTP/2 Server</h1>');
  } else if (path === '/data') {
    stream.respond({
      'content-type': 'application/json',
      ':status': 200,
    });
    stream.end(JSON.stringify({ message: 'Multiplexing works!' }));
  }
});

server.listen(8443, () => {
  console.log('HTTP/2 server running on https://localhost:8443');
});
Opening https://localhost:8443 in the browser serves the page over HTTP/2; check for h2 in the DevTools Protocol column.
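To see multiplexing from the client side, the same http2 module can open one session and issue several requests over it. A sketch against the server above (rejectUnauthorized: false is only for the self-signed dev certificate this setup assumes):
const http2 = require('http2');

// One session = one TCP connection; every request below is a
// separate stream multiplexed over that single connection
const client = http2.connect('https://localhost:8443', {
  rejectUnauthorized: false, // dev-only: self-signed certificate
});

let pending = 2;
for (const path of ['/', '/data']) {
  const req = client.request({ ':path': path }); // new stream, same connection
  req.setEncoding('utf8');
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    console.log(`${path} →`, body);
    if (--pending === 0) client.close();
  });
  req.end();
}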
My actual measurements (Next.js site, 20 files, Cloudflare CDN).
| Protocol | Load Time | Notes |
|---|---|---|
| HTTP/1.1 | 2.8s | 20 files sequential (6 connections) |
| HTTP/2 | 1.2s | Multiplexing effect (1 connection) |
| HTTP/3 | 1.0s | 0-RTT reconnect, Stream independence |
HTTP/1.1: Staircase (sequential)
index.html ━━━━━━
style1.css        ━━
style2.css          ━━
script.js             ━━━━
image1.png                ━━━
HTTP/2: Parallel loading
index.html ━━━━━━
style1.css ━━
style2.css ━━
script.js ━━━━
image1.png ━━━
image2.png ━━━
...
HTTP/3: Parallel + fast reconnect
(Revisit from cache)
0-RTT connection (instant) → All files parallel
This difference hit home. Numbers alone didn't show it, but the waterfall chart made it crystal clear.
Summarizing HTTP evolution's core:
HTTP/1.1 → HTTP/2
Problem: Head-of-Line Blocking, one TCP connection per file
Solution: Multiplexing (all files in one connection), Binary Framing (efficient parsing), HPACK (header compression)
Effect: 2-3x performance improvement; no more Domain Sharding or Sprite Sheets needed

HTTP/2 → HTTP/3
Problem: TCP's Head-of-Line Blocking, slow connection setup, no Connection Migration
Solution: QUIC (UDP-based), Stream independence, 0-RTT connection, Connection ID
Effect: Especially fast in mobile environments; no drops during network switching
Browsers and CDNs automatically choose optimal protocol. We just use it.
But if you're a developer, you should still know: which protocol your server actually speaks (check DevTools or curl), how to enable HTTP/2 and HTTP/3 on it, and which old workarounds (domain sharding, sprite sheets, concatenate-everything bundles) no longer pay off.
I initially thought "HTTP is HTTP, right?" Now I always check HTTP/2 and HTTP/3 support when optimizing web performance. The real insight: protocol evolution wasn't just about raw speed. It changed web development practices themselves. No more forced file concatenation or domain splitting. Protocols got smarter, and development got simpler.