
S3 + CloudFront: The Gold Standard for Static File Hosting
Serving static files from your server works at small scale, but traffic spikes can bring it down. S3 + CloudFront separates static delivery from your app server for any scale.

Serving images straight from an application server seems fine at small scale. Throw some images in the /public folder of a Next.js app, reference them with <img src="/images/product.png" />, and ship it.
But when traffic spikes, that approach breaks down fast. Image requests flood in—hundreds per second. CPU hits 90%. Real API requests start timing out. The application server chokes on image delivery. When a post goes viral or an unexpected traffic spike hits, static file serving alone can take down your server—there are well-documented cases of this pattern.
That's when it clicks: static files shouldn't touch the application server. It's like a restaurant where the host who takes reservations also delivers food—inefficient and breaks under load. Separate concerns. S3 stores files. CloudFront delivers them globally. Your server does actual work.
Migrating to S3 + CloudFront solves the problem cleanly. This combo became the gold standard for good reason.
S3 is storage at scale. Dump files in, AWS guarantees 99.999999999% (11 nines) durability. That means "stop worrying about losing files." Unlimited capacity. Pay only for what you store.
CloudFront is a global delivery network. 400+ edge locations worldwide. Users in Seoul download from Seoul. Users in New York download from New York. No round-trips to your origin server halfway across the planet.
Together, they create "infinite scale, globally fast, zero server involvement" static hosting. Traffic 10x? Your server doesn't notice. CloudFront handles it.
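On the application side, the change is small: stop referencing files under /public directly and build URLs against the CDN. A minimal sketch, assuming a hypothetical NEXT_PUBLIC_CDN_URL environment variable that points at your CloudFront domain:
// lib/cdn.ts - hypothetical helper; NEXT_PUBLIC_CDN_URL is an assumed env var,
// e.g. "https://cdn.mysite.com". Falls back to same-origin paths in local dev.
const CDN_BASE = process.env.NEXT_PUBLIC_CDN_URL ?? "";

export function cdnUrl(path: string): string {
  // Ensure exactly one slash between the base URL and the asset path
  return `${CDN_BASE}/${path.replace(/^\/+/, "")}`;
}

// Usage in a component:
// <img src={cdnUrl("images/product.png")} alt="Product" />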
The biggest mistake when creating an S3 bucket is making it public. "But don't I need public access to share files?" No.
# Create bucket
aws s3 mb s3://my-static-files --region us-east-1
# Block ALL public access (CRITICAL)
aws s3api put-public-access-block \
--bucket my-static-files \
--public-access-block-configuration \
"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
Keep the bucket completely private. Only CloudFront should access it. That's what Origin Access Control (OAC) does.
AWS is pushing OAC over the old Origin Access Identity (OAI). OAC is more secure and supports SSE-KMS encryption.
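Creating the OAC itself can be done in the console or CloudFormation; here is a hedged sketch with the CloudFront SDK (the name and description are placeholders):
// Create an Origin Access Control for the S3 origin (sketch)
import {
  CloudFrontClient,
  CreateOriginAccessControlCommand,
} from "@aws-sdk/client-cloudfront";

const cloudfront = new CloudFrontClient({ region: "us-east-1" });

const { OriginAccessControl } = await cloudfront.send(
  new CreateOriginAccessControlCommand({
    OriginAccessControlConfig: {
      Name: "my-static-files-oac",            // placeholder name
      Description: "OAC for my-static-files", // optional
      OriginAccessControlOriginType: "s3",
      SigningBehavior: "always",              // sign every origin request
      SigningProtocol: "sigv4",
    },
  })
);

// Attach OriginAccessControl?.Id to the S3 origin in the distribution config,
// then add the bucket policy below.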
// S3 Bucket Policy - CloudFront OAC only
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-static-files/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/EXAMPLEDISTID"
        }
      }
    }
  ]
}
This policy blocks direct S3 URL access. Only CloudFront can fetch files. Security aside, it also saves money: CloudFront's per-GB transfer rate is lower than S3's, S3-to-CloudFront origin fetches are free, and cached responses never hit S3 at all.
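A quick way to verify the lockdown once the distribution is deployed (a sketch; the URLs are examples, and this assumes Node 18+ for the global fetch):
// Direct S3 access should be blocked; the CloudFront URL should work
const s3Direct = await fetch("https://my-static-files.s3.us-east-1.amazonaws.com/images/logo.png");
const viaCdn = await fetch("https://d111111abcdef8.cloudfront.net/images/logo.png");

console.log(s3Direct.status); // expect 403: the bucket only trusts CloudFront
console.log(viaCdn.status);   // expect 200: CloudFront's signed origin request succeeds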
When creating a CloudFront distribution, cache strategy is critical. Get it wrong and files don't update, or costs explode.
// Next.js file upload with Cache-Control headers
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3Client = new S3Client({ region: "us-east-1" });

async function uploadToS3(file: File, key: string) {
  const buffer = Buffer.from(await file.arrayBuffer());

  // Cache strategy varies by file type
  const cacheControl = key.match(/\.(jpg|png|webp|svg)$/)
    ? "public, max-age=31536000, immutable" // 1 year, images never change
    : "public, max-age=3600"; // 1 hour, HTML/JSON may update

  await s3Client.send(
    new PutObjectCommand({
      Bucket: "my-static-files",
      Key: key,
      Body: buffer,
      ContentType: file.type,
      CacheControl: cacheControl,
    })
  );
}
immutable is the magic keyword. Hash image filenames like logo-a3d5f2.png. When the file changes, the name changes. Browsers trust it's immutable and cache forever. Zero server requests.
CloudFront also needs cache behavior configuration.
# CloudFront config via CloudFormation
CacheBehavior:
  PathPattern: "images/*"
  TargetOriginId: S3Origin
  ViewerProtocolPolicy: redirect-to-https
  CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6 # CachingOptimized
  Compress: true
  AllowedMethods:
    - GET
    - HEAD
    - OPTIONS
The CachingOptimized managed policy excludes query strings, headers, and cookies from the cache key and enables Gzip and Brotli compression. Perfect for static files.
The default d111111abcdef8.cloudfront.net domain is ugly. For a custom domain like cdn.mysite.com, you need Route 53 and ACM (AWS Certificate Manager).
Critical gotcha: ACM certificates for CloudFront must be in us-east-1 (Virginia). CloudFront is global and only recognizes us-east-1 certificates. Create it in another region and it won't appear.
# Request ACM certificate (MUST be us-east-1)
aws acm request-certificate \
--domain-name cdn.mysite.com \
--validation-method DNS \
--region us-east-1
# Add CNAME record in Route 53 for DNS validation
# Add Alternate Domain Name (CNAME) in CloudFront distribution
# Point cdn.mysite.com to CloudFront distribution in Route 53
Now you have clean https://cdn.mysite.com/images/logo.png URLs. HTTPS is free. Certificate renewal is automatic.
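The Route 53 alias record can also be created from code. A sketch with the AWS SDK (the hosted zone ID and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront alias targets use):
// Point cdn.mysite.com at the CloudFront distribution via an alias A record (sketch)
import {
  Route53Client,
  ChangeResourceRecordSetsCommand,
} from "@aws-sdk/client-route-53";

const route53 = new Route53Client({ region: "us-east-1" });

await route53.send(
  new ChangeResourceRecordSetsCommand({
    HostedZoneId: "Z0123456789EXAMPLE", // placeholder: your mysite.com hosted zone
    ChangeBatch: {
      Changes: [
        {
          Action: "UPSERT",
          ResourceRecordSet: {
            Name: "cdn.mysite.com",
            Type: "A",
            AliasTarget: {
              DNSName: "d111111abcdef8.cloudfront.net", // your distribution domain
              HostedZoneId: "Z2FDTNDATAQYW2",           // fixed value for CloudFront aliases
              EvaluateTargetHealth: false,
            },
          },
        },
      ],
    },
  })
);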
Updated a file but CloudFront serves the old version? Invalidate the cache.
# CloudFront cache invalidation
aws cloudfront create-invalidation \
--distribution-id E1234567890ABC \
--paths "/images/logo.png" "/css/*"
Problem: it costs money. First 1,000 paths per month are free, then $0.005 per path. Wildcards like /images/* count as one path. But invalidating /* on every deploy drains money.
Solution: filename versioning. Change the filename when the file changes. No invalidation needed. Next.js and Vite do this automatically with build hashes.
// Manual hash for filenames (if needed)
import crypto from "crypto";

function getHashedFilename(filename: string, content: Buffer): string {
  const hash = crypto.createHash("md5").update(content).digest("hex").slice(0, 8);
  const ext = filename.split(".").pop();
  const name = filename.replace(`.${ext}`, "");
  return `${name}-${hash}.${ext}`;
}
// logo.png -> logo-a3d5f289.png
Now only invalidate HTML. Images, CSS, JS auto-update with new hashed filenames.
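If your deploy script runs in Node rather than shelling out to the CLI, the same invalidation looks roughly like this (the distribution ID is a placeholder):
// Invalidate only HTML paths from a deploy script (sketch)
import {
  CloudFrontClient,
  CreateInvalidationCommand,
} from "@aws-sdk/client-cloudfront";

const cloudfront = new CloudFrontClient({ region: "us-east-1" });

const paths = ["/index.html", "/"]; // hashed assets never need invalidation

await cloudfront.send(
  new CreateInvalidationCommand({
    DistributionId: "E1234567890ABC", // placeholder
    InvalidationBatch: {
      CallerReference: `deploy-${Date.now()}`, // must be unique per request
      Paths: { Quantity: paths.length, Items: paths },
    },
  })
);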
What's the actual cost? S3 + CloudFront gets more efficient as traffic scales.
Example: with 10GB stored and 1TB transferred per month, S3 storage runs about $0.23 while CloudFront data transfer runs roughly $85.
Almost the entire bill is data transfer; storage is basically free. That's why optimizing cache to reduce unnecessary transfers is crucial.
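As a back-of-envelope check, using approximate US-region list prices (verify against the current S3 and CloudFront pricing pages; tiers and regions vary):
// Rough monthly estimate for 10GB stored + 1TB transferred (prices are approximations)
const STORAGE_GB = 10;
const TRANSFER_GB = 1000; // treating 1TB as ~1,000GB for a rough estimate

const S3_STORAGE_PER_GB = 0.023;        // approx. S3 Standard, US East
const CLOUDFRONT_EGRESS_PER_GB = 0.085; // approx. first pricing tier, US/EU

const storageCost = STORAGE_GB * S3_STORAGE_PER_GB;          // ≈ $0.23
const transferCost = TRANSFER_GB * CLOUDFRONT_EGRESS_PER_GB; // ≈ $85.00

console.log(`storage ≈ $${storageCost.toFixed(2)}, transfer ≈ $${transferCost.toFixed(2)}`);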
If AWS feels expensive, check out Cloudflare R2. S3-compatible API with zero egress (data transfer out) costs.
# R2 setup (Wrangler CLI)
npx wrangler r2 bucket create my-static-files

// Access R2 with the S3 SDK
import { S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  region: "auto",
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});
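Because the API is S3-compatible, the upload call itself is identical to the S3 version. A minimal sketch reusing the s3Client above (the file path and key are examples):
// Upload to R2 with the same PutObjectCommand used for S3 (sketch)
import { readFile } from "node:fs/promises";
import { PutObjectCommand } from "@aws-sdk/client-s3";

const body = await readFile("./public/images/logo-a3d5f289.png"); // example file

await s3Client.send(
  new PutObjectCommand({
    Bucket: "my-static-files",
    Key: "images/logo-a3d5f289.png",
    Body: body,
    ContentType: "image/png",
    CacheControl: "public, max-age=31536000, immutable",
  })
);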
1TB transfer: AWS costs $85, R2 costs $0. Massive difference. R2 does charge for read operations ($0.36 per million), but with high cache hit rates, it's negligible.
For high-traffic services, R2 wins on cost. If you also want to avoid AWS lock-in, even better.
Manually uploading to S3 and invalidating CloudFront is tedious. Automate with GitHub Actions.
# .github/workflows/deploy-static.yml
name: Deploy Static Assets
on:
  push:
    branches: [main]
    paths:
      - 'public/**'
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Sync to S3
        run: |
          aws s3 sync ./public s3://my-static-files \
            --cache-control "public,max-age=31536000,immutable" \
            --exclude "*.html" \
            --delete
          # HTML with shorter cache
          aws s3 sync ./public s3://my-static-files \
            --cache-control "public,max-age=3600" \
            --exclude "*" \
            --include "*.html"
      - name: Invalidate CloudFront (HTML only)
        run: |
          # CloudFront invalidation wildcards must be trailing (e.g. "/blog/*"),
          # so list HTML paths explicitly instead of "/*.html"
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/" "/index.html"
Push to main → auto-sync to S3 → invalidate HTML only. Images and JS have hashed filenames, no invalidation needed.
Public bucket: Opening the S3 bucket publicly and serving directly without CloudFront. Security risk and higher transfer costs.
Full cache invalidation spam: Invalidating /* on every deploy. Cost explosion. Use filename versioning.
Wrong ACM region: Creating ACM certificate outside us-east-1. CloudFront won't see it. Always us-east-1.
Missing Cache-Control headers: Uploading to S3 without Cache-Control. CloudFront can't cache effectively.
S3 Website Endpoint as origin: Using bucket.s3-website-us-east-1.amazonaws.com breaks OAC. Must use REST API endpoint (bucket.s3.us-east-1.amazonaws.com).
Static file serving isn't your application server's job. Drop files in S3, let CloudFront distribute globally. Traffic 100x? Your server doesn't care.
Costs are reasonable. Tens of dollars monthly for near-infinite scale. For high traffic, Cloudflare R2 eliminates egress costs entirely.
The key is cache strategy. Hash filenames for immutability. Minimize invalidation costs. Do this right and the architecture is nearly perfect.
Don't wait for your server to crash. Move to S3 + CloudFront now.