
Cloudflare Workers: Running Code at the Edge
Server in the US meant slow responses for Korean users. Cloudflare Workers runs code at 300+ edge locations, cutting latency dramatically.

I launched a small SaaS. The server lived in AWS Virginia. American users were happy, but when a friend in Korea tried it, they asked, "Why is this so slow?" Of course it was: a round trip from Seoul to Virginia takes over 200ms.
The traditional solution is running multiple regions. Seoul, Tokyo, Frankfurt... but you'd need to manage servers in each region, sync databases, build deployment pipelines. Overkill for a small project.
That's when I discovered Cloudflare Workers. "Running code at the edge" sounded abstract at first. After using it, I completely understood. This was it: code running closest to the user.
The best analogy for edge computing is a coffee franchise.
Traditional servers are like having only headquarters. A customer in Seoul has to call the US headquarters to order. Round trip takes forever.
Edge computing is a franchise with 300+ locations worldwide. Seoul customers order from Gangnam, Tokyo customers from Shibuya. Each location serves the same menu (code), but physically closer means faster response.
Cloudflare already operated global data centers for their CDN. Workers let that same infrastructure run dynamic code, not just static files.
AWS Lambda is serverless too. What makes Workers special? The execution environment is fundamentally different.
Lambda is container-based. A request comes in, it spins up a container, initializes the Node.js runtime, loads your code. Cold starts take hundreds of milliseconds. Maintaining warm state requires constant traffic.
Workers use V8 Isolates. Like how Chrome creates isolated JavaScript environments for each tab, Workers run each request in an isolated V8 context. Much lighter than containers. Cold starts under 5ms.
I felt this difference viscerally. An API I built on Lambda sometimes took over a second on first request. Moved to Workers, it was always under 50ms. Completely different user experience.
// Basic Workers structure
export default {
  async fetch(request, env, ctx) {
    return new Response('Hello from the edge!', {
      headers: { 'Content-Type': 'text/plain' },
    });
  },
};
That's it. Just a fetch handler. No Express, no framework. Just Web Standard APIs.
The Workers API follows web standards. fetch, Request, Response, URL—same as browser APIs. Moving away from Node.js felt limiting at first, but I quickly saw the benefits.
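For instance, routing needs nothing beyond the standard URL API. A minimal sketch; the /hello path and the name parameter are made up for illustration:
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url); // same URL API as in the browser
    if (url.pathname === '/hello') {
      const name = url.searchParams.get('name') ?? 'world';
      return new Response(`Hello, ${name}!`);
    }
    return new Response('Not Found', { status: 404 });
  },
};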
Environment variables and bindings come in through the env object.
export default {
  async fetch(request, env, ctx) {
    const apiKey = env.API_KEY; // From wrangler.toml
    const db = env.DB;          // D1 database binding
    const kv = env.CACHE;       // KV namespace binding
    return new Response(`API Key: ${apiKey}`);
  },
};
Async processing uses ctx.waitUntil(). You can run background tasks even after sending a response. Perfect for logging or analytics.
export default {
  async fetch(request, env, ctx) {
    // Return response immediately
    const response = new Response('Done!');
    // Run async task after response
    ctx.waitUntil(
      fetch('https://analytics.example.com/log', {
        method: 'POST',
        body: JSON.stringify({ path: request.url }),
      })
    );
    return response;
  },
};
Workers alone can't store state. Each request is isolated. So Cloudflare provides KV (Key-Value) storage.
KV's key characteristic is eventual consistency. A write replicates to edge locations worldwide, which can take up to 60 seconds. Not ideal for data requiring immediate consistency, but perfect for read-heavy, rarely-written scenarios.
I used it for API response caching. Instead of calling external APIs, read cached values from KV. Responses became 10x faster.
export default {
  async fetch(request, env, ctx) {
    const cache = env.CACHE; // KV namespace
    const cacheKey = new URL(request.url).pathname;
    // Check cache
    let data = await cache.get(cacheKey, { type: 'json' });
    if (!data) {
      // Cache miss: call external API
      const response = await fetch('https://api.example.com/data');
      data = await response.json();
      // Store in cache (1 hour TTL)
      ctx.waitUntil(
        cache.put(cacheKey, JSON.stringify(data), {
          expirationTtl: 3600,
        })
      );
    }
    return new Response(JSON.stringify(data), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};
KV works for simple key-value storage, but falls short for relational data. So Cloudflare built D1. SQLite running at the edge.
D1 keeps a central database and maintains read-only replicas at each edge. Writes go central, but reads happen from the nearest edge. Perfect for read-heavy applications.
export default {
  async fetch(request, env, ctx) {
    const db = env.DB; // D1 binding
    // Execute SQL query
    const { results } = await db
      .prepare('SELECT * FROM users WHERE email = ?')
      .bind('user@example.com')
      .all();
    return new Response(JSON.stringify(results), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};
In my project, I stored user preferences in D1. Fast reads from anywhere in the world.
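The write path looks the same from the Worker's side. Here's a hedged sketch with a made-up preferences table (remember that writes are routed to the central database):
export default {
  async fetch(request, env, ctx) {
    // Upsert a user's theme preference (hypothetical schema)
    await env.DB.prepare(
      'INSERT INTO preferences (user_id, theme) VALUES (?, ?) ' +
      'ON CONFLICT(user_id) DO UPDATE SET theme = excluded.theme'
    ).bind('user-123', 'dark').run();
    return new Response('Saved', { status: 201 });
  },
};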
AWS S3 is great, but egress costs are brutal. Every time you serve data, you pay per GB. High traffic means exploding costs.
Cloudflare R2 provides S3-compatible APIs while charging zero for egress. You only pay for storage. Game changer for services serving lots of images or videos.
export default {
  async fetch(request, env, ctx) {
    const bucket = env.MY_BUCKET; // R2 binding
    const url = new URL(request.url);
    const key = url.pathname.slice(1); // '/image.png' -> 'image.png'
    // Get object from R2
    const object = await bucket.get(key);
    if (!object) {
      return new Response('Not Found', { status: 404 });
    }
    return new Response(object.body, {
      headers: {
        // Fall back to a generic type if no metadata was stored
        'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream',
        'Cache-Control': 'public, max-age=31536000',
      },
    });
  },
};
To develop Workers, you need the Wrangler CLI. It handles the local dev server, deployment, and environment variable management.
# Create new project
npm create cloudflare@latest my-worker
# Run local dev server
npx wrangler dev
# Deploy
npx wrangler deploy
All configuration lives in wrangler.toml.
name = "my-worker"
main = "src/index.js"
compatibility_date = "2026-01-29"
# KV binding
[[kv_namespaces]]
binding = "CACHE"
id = "abcd1234"
# D1 binding
[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "xyz789"
# R2 binding
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-bucket"
# Environment variables
[vars]
ENVIRONMENT = "production"
The local development experience is excellent. wrangler dev behaves almost identically to the actual edge environment.
I've discovered killer use cases for Workers.
1. API proxy and caching: Wrap external APIs with Workers to reduce regional latency and cache responses. Makes slow APIs fast.
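A minimal sketch of that pattern using the Workers Cache API (caches.default); the upstream hostname and the five-minute TTL are placeholders:
export default {
  async fetch(request, env, ctx) {
    const cache = caches.default; // per-data-center edge cache
    let response = await cache.match(request);
    if (!response) {
      // Cache miss: forward the request to the slow upstream API
      const upstream = new URL(request.url);
      upstream.hostname = 'api.example.com'; // placeholder upstream
      response = await fetch(upstream.toString(), request);
      // Rebuild the response so headers are mutable, then cache a copy
      response = new Response(response.body, response);
      response.headers.set('Cache-Control', 'public, max-age=300');
      ctx.waitUntil(cache.put(request, response.clone()));
    }
    return response;
  },
};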
2. A/B testing: Split users into groups at the edge and serve different pages or API responses. No server code changes needed.
export default {
  async fetch(request, env, ctx) {
    // Reuse an existing assignment from the cookie so users stay in their group
    const cookie = request.headers.get('Cookie') ?? '';
    const existing = cookie.match(/(?:^|;\s*)variant=([AB])/);
    const variant = existing ? existing[1] : Math.random() < 0.5 ? 'A' : 'B';
    return new Response(`You're in variant ${variant}`, {
      headers: {
        'Set-Cookie': `variant=${variant}; Path=/; Max-Age=86400`,
      },
    });
  },
};
3. Authentication gateway: Validate all requests at the edge first. Invalid requests never reach your origin servers. Improves both security and performance.
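A sketch of the gateway, assuming a shared token stored as the API_TOKEN secret and a hypothetical origin hostname:
export default {
  async fetch(request, env, ctx) {
    const auth = request.headers.get('Authorization');
    if (auth !== `Bearer ${env.API_TOKEN}`) {
      // Rejected at the edge: the origin never sees this request
      return new Response('Unauthorized', { status: 401 });
    }
    // Token is valid: forward to the origin
    const url = new URL(request.url);
    url.hostname = 'origin.example.com'; // placeholder origin
    return fetch(url.toString(), request);
  },
};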
4. Image optimization: Resize, convert to WebP, and compress uploaded images in Workers. Powerful when combined with Cloudflare Images.
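If the zone has Cloudflare's image resizing enabled, a Worker can request a transformed variant through the cf.image fetch options; the origin URL and sizes here are illustrative:
export default {
  async fetch(request, env, ctx) {
    // Ask Cloudflare to resize, recompress, and convert the origin image
    return fetch('https://origin.example.com/photo.jpg', {
      cf: { image: { width: 800, quality: 80, format: 'webp' } },
    });
  },
};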
5. Redirects and URL rewriting: Handle short URL services, legacy URL redirects, and country-specific content routing at the edge.
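For example, a legacy-path redirect takes only a few lines (the /old-blog/ mapping is made up):
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    if (url.pathname.startsWith('/old-blog/')) {
      // Permanently redirect legacy URLs to the new structure
      const target = url.pathname.replace('/old-blog/', '/blog/');
      return Response.redirect(`${url.origin}${target}`, 301);
    }
    // Everything else passes through to the origin
    return fetch(request);
  },
};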
Workers aren't magic. You need to know the constraints.
CPU time limits: Free plan gets 10ms per request, paid gets 50ms. Heavy computation (image processing, encryption) is challenging.
Memory limits: 128MB max. Loading large files into memory will fail.
Node.js compatibility: Doesn't fully support Node.js APIs. No fs, child_process. Web standard APIs only.
Cold starts are fast but not zero: V8 Isolates beat containers, but still take a few milliseconds. If microseconds matter, it's insufficient.
Debugging challenges: Edge environment isn't identical to local. Production-only bugs are hard to reproduce.
These constraints make Workers ideal for lightweight tasks (routing, caching, auth). Complex business logic still belongs on traditional servers.
Workers pricing is generous.
Free plan: Enough for personal projects and side projects. My blog API has run on the free plan for six months.
Paid plan ($5/month): Pricing stays reasonable as traffic grows. Much cheaper than Lambda.
Before using Workers, I thought "edge computing" was advanced optimization. Something only large-scale services needed.
But using it changed my perspective. The edge isn't complicated—it's simpler. No worrying about regions, configuring load balancers, managing scaling. Deploy code and you're done. Users worldwide get fast responses.
Especially for small teams, the edge is a game changer. Focus on product instead of infrastructure operations. My solo-built SaaS serves users globally with consistent speed because of Workers.
Cloudflare Workers is the next stage of serverless. Lambda brought "no server management," Workers brings "no region thinking." Write code, deploy it, let the world use it. This is the future.