
WebAssembly in Practice: Native Performance in the Browser
Why does Figma feel fast in the browser? Dig into what WebAssembly actually is, and walk through the full workflow of building a Rust WASM module and calling it from JavaScript.


When I first heard "WebAssembly delivers near-native performance in the browser" back in 2017, I was skeptical. Then I used Figma and noticed it felt faster than Adobe XD. Then Google Earth Web ran smoother than I expected. "Oh, this is actually real," I thought.
Recently, Adobe shipped Photoshop Web powered by WASM. AutoCAD Web appeared. The line between "runs on desktop" and "runs in the browser" is dissolving.
So I dug in. What WASM actually is, how to build it, when to use it, and how to run Rust code in the browser from end to end.
There's a common misconception. "Coding in WebAssembly" is the wrong framing. WASM is a compilation target — an execution format.
Rust code → Rust compiler → WebAssembly binary (.wasm)
C/C++ code → Emscripten → WebAssembly binary (.wasm)
Go code → Go compiler → WebAssembly binary (.wasm)
AssemblyScript → asc → WebAssembly binary (.wasm)
A .wasm file is a low-level binary instruction set. Not machine code the CPU runs directly, but close enough that the browser's WASM runtime executes it at near-native speed.
JavaScript execution:
1. Download JS source
2. Parse (build AST)
3. Interpret or JIT compile
4. Optimize (hot paths after enough runs)
5. Execute
WebAssembly execution:
1. Download .wasm binary
2. Decode (already binary, no parsing)
3. JIT compile (no type inference needed)
4. Execute
WASM has type information decided at compile time. No runtime type inference, no deoptimization when a function is called with unexpected types. Execution speed is consistent and predictable.
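To see how thin that pipeline really is, here's a sketch that hands the engine a complete 41-byte module, assembled by hand, exporting an `add` function. The byte layout follows the WebAssembly binary format; no toolchain is involved, just the standard `WebAssembly` JS API:

```javascript
// A complete 41-byte WASM module: (func (export "add") (param i32 i32) (result i32))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one 7-byte body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// Decode + compile + instantiate: no text parsing, no type inference
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

The types are right there in the bytes (`0x7f` = i32), which is exactly why the engine never has to guess or deoptimize.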
Where WASM actually pays off:
| Category | Specific Examples |
|---|---|
| Image/video processing | Filters, codecs, compression |
| Cryptography | SHA, AES, bcrypt, ECDSA |
| Physics simulation | Game engines, CAD, structural analysis |
| Audio processing | DSP, codecs, effects |
| Scientific computing | ML inference, statistics, numerical methods |
| Porting legacy code | Bringing C/C++ libraries to the web |
The key question: Is CPU-intensive computation taking so long it's blocking the UI? → Consider WASM. Can it be solved with JS optimization or Web Workers? → WASM not needed.
| Language | WASM Support | Memory Safety | GC | Binary Size |
|---|---|---|---|---|
| Rust | Excellent | Compile-time | None | Small |
| C/C++ | Good | Manual | None | Small |
| Go | Good | Runtime | Yes | Large |
| AssemblyScript | Good | Runtime checks | Yes | Medium |
Rust is the go-to choice here: no garbage collector means small WASM binaries, and wasm-pack makes the Rust → WASM workflow first-class.
# Install Rust if needed
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Install wasm-pack
cargo install wasm-pack
# Add WASM compilation target
rustup target add wasm32-unknown-unknown
cargo new --lib image-processor
cd image-processor
Cargo.toml:
[package]
name = "image-processor"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib", "rlib"]

[dependencies]
wasm-bindgen = "0.2"
web-sys = { version = "0.3", features = ["console"] }
js-sys = "0.3"

[profile.release]
opt-level = 3
lto = true
// src/lib.rs
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[wasm_bindgen]
pub fn fibonacci(n: u32) -> u64 {
    if n <= 1 { return n as u64; }
    let mut a: u64 = 0;
    let mut b: u64 = 1;
    for _ in 2..=n {
        let c = a + b;
        a = b;
        b = c;
    }
    b
}
// Practical: grayscale image processing
#[wasm_bindgen]
pub fn grayscale(pixels: &mut [u8]) {
    // pixels is RGBA: [R, G, B, A, R, G, B, A, ...]
    for i in (0..pixels.len()).step_by(4) {
        let r = pixels[i] as f32;
        let g = pixels[i + 1] as f32;
        let b = pixels[i + 2] as f32;
        // Human visual perception weights
        let gray = (0.299 * r + 0.587 * g + 0.114 * b) as u8;
        pixels[i] = gray;
        pixels[i + 1] = gray;
        pixels[i + 2] = gray;
        // Alpha unchanged
    }
}
# For bundlers (webpack, vite)
wasm-pack build --target bundler
# For direct web use (no bundler)
wasm-pack build --target web
# Dev build (no optimization, fast compile)
wasm-pack build --dev --target bundler
Build output:
pkg/
  image_processor_bg.wasm  ← actual WASM binary
  image_processor.js       ← JS glue code
  image_processor.d.ts     ← TypeScript definitions
  package.json
import init, { add, fibonacci, grayscale } from './pkg/image_processor.js';

async function main() {
  // Load and compile the WASM binary
  await init();
  console.log(add(2, 3)); // 5
  console.log(fibonacci(40)); // 102334155
}

main();
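Every major browser has shipped WASM since 2017, but if you still need a guard (or a pure-JS fallback path), the synchronous `WebAssembly.validate` API makes for a cheap feature check. A minimal sketch, independent of wasm-pack; the `wasmSupported` helper name is mine:

```javascript
// Feature-detect WASM before attempting to load a module
function wasmSupported() {
  try {
    if (typeof WebAssembly === 'object' && typeof WebAssembly.validate === 'function') {
      // The 8-byte header alone (magic + version) is a valid, empty module
      return WebAssembly.validate(
        new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00])
      );
    }
  } catch (e) { /* fall through */ }
  return false;
}

if (!wasmSupported()) {
  console.warn('No WASM support: falling back to the pure-JS implementation');
}
```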
import init, { grayscale } from './pkg/image_processor.js';

async function applyGrayscale(imageElement) {
  await init();
  const canvas = document.createElement('canvas');
  const ctx = canvas.getContext('2d');
  // Use the intrinsic size, not the CSS-scaled size
  canvas.width = imageElement.naturalWidth;
  canvas.height = imageElement.naturalHeight;
  ctx.drawImage(imageElement, 0, 0);
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // wasm-bindgen automatically copies Uint8ClampedArray to WASM memory
  grayscale(imageData.data);
  ctx.putImageData(imageData, 0, 0);
  return canvas.toDataURL();
}
import { useState, useCallback } from 'react';

let wasmLoaded = false;
let wasmFunctions: { grayscale: (pixels: Uint8ClampedArray) => void } | null = null;

async function loadWasm() {
  if (!wasmLoaded) {
    const module = await import('./pkg/image_processor.js');
    await module.default();
    wasmFunctions = { grayscale: module.grayscale };
    wasmLoaded = true;
  }
  return wasmFunctions!;
}

function useImageProcessor() {
  const [processing, setProcessing] = useState(false);
  const processImage = useCallback(async (file: File) => {
    setProcessing(true);
    try {
      const { grayscale } = await loadWasm();
      const bitmap = await createImageBitmap(file);
      const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
      const ctx = canvas.getContext('2d')!;
      ctx.drawImage(bitmap, 0, 0);
      const imageData = ctx.getImageData(0, 0, bitmap.width, bitmap.height);
      const start = performance.now();
      grayscale(imageData.data);
      console.log(`WASM time: ${performance.now() - start}ms`);
      ctx.putImageData(imageData, 0, 0);
      const blob = await canvas.convertToBlob();
      return URL.createObjectURL(blob);
    } finally {
      setProcessing(false);
    }
  }, []);
  return { processImage, processing };
}
WASM uses linear memory — a contiguous byte array.
const memory = new WebAssembly.Memory({
  initial: 1,  // 1 page = 64KB
  maximum: 10,
});
const buffer = new Uint8Array(memory.buffer);
buffer[0] = 42; // Direct memory write
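One gotcha worth knowing: growing linear memory detaches the old ArrayBuffer, so every typed-array view created before the grow silently goes stale. A quick sketch:

```javascript
const mem = new WebAssembly.Memory({ initial: 1, maximum: 2 });
const oldView = new Uint8Array(mem.buffer);
console.log(oldView.byteLength); // 65536 (one 64KB page)

mem.grow(1); // grow by one page

// The old buffer is now detached; views over it report zero length
console.log(oldView.byteLength);    // 0
console.log(mem.buffer.byteLength); // 131072

// Always re-create views after any operation that can grow memory
const freshView = new Uint8Array(mem.buffer);
```

This is why code that holds long-lived views into WASM memory has to refresh them after any allocation on the WASM side.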
Copying data between JS ↔ WASM boundaries is expensive. For large data:
// Expose a pointer to WASM memory, let JS write directly
#[wasm_bindgen]
pub struct ImageBuffer {
    data: Vec<u8>,
}

#[wasm_bindgen]
impl ImageBuffer {
    pub fn new(size: usize) -> ImageBuffer {
        ImageBuffer { data: vec![0u8; size] }
    }
    pub fn as_ptr(&self) -> *const u8 { self.data.as_ptr() }
    pub fn as_mut_ptr(&mut self) -> *mut u8 { self.data.as_mut_ptr() }
    pub fn len(&self) -> usize { self.data.len() }
    pub fn process(&mut self) {
        for byte in self.data.iter_mut() {
            *byte = byte.wrapping_add(1);
        }
    }
}
// JS side (bundler target): import the class and the raw memory export
import { ImageBuffer } from './pkg/image_processor.js';
const { memory } = await import('./pkg/image_processor_bg.wasm');

const buffer = ImageBuffer.new(1024 * 1024);
// Write directly into WASM memory — zero copy
// Caution: this view goes stale if WASM memory grows; recreate it after allocations
const wasmMemory = new Uint8Array(memory.buffer, buffer.as_ptr(), buffer.len());
wasmMemory.set(inputData);
buffer.process();
| Task | JS (JIT-optimized) | WASM | Ratio |
|---|---|---|---|
| Grayscale filter (1080p) | ~25ms | ~10ms | ~2.5x |
| Fibonacci(1M iterations) | ~15ms | ~3ms | ~5x |
| AES-256 encryption (1MB) | ~45ms | ~8ms | ~5.6x |
| JSON parsing (10MB) | ~150ms | Slower | — |
| 100 DOM updates | ~10ms | Not possible | — |
WASM isn't "always faster." It wins convincingly on CPU-intensive numerical computation. For I/O or DOM work, it's irrelevant or slower.
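Numbers like these also vary by hardware and engine, so measure your own workload. Here's a sketch of the JS side of such a comparison; `fibJs` and `bench` are illustrative names, and the WASM counterpart is assumed to be the `fibonacci` export built earlier:

```javascript
// Iterative JS fibonacci, the baseline for the table above
function fibJs(n) {
  if (n <= 1) return n;
  let a = 0, b = 1;
  for (let i = 2; i <= n; i++) [a, b] = [b, a + b];
  return b;
}

// Tiny benchmark helper: median of repeated runs smooths out JIT warmup
function bench(label, fn, runs = 20) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    fn();
    times.push(performance.now() - t0);
  }
  times.sort((x, y) => x - y);
  console.log(`${label}: ${times[runs >> 1].toFixed(2)}ms (median of ${runs})`);
}

bench('JS fibonacci(40) x1000', () => {
  for (let i = 0; i < 1000; i++) fibJs(40);
});
// Compare against the WASM build the same way:
// bench('WASM fibonacci(40) x1000', () => { for (let i = 0; i < 1000; i++) fibonacci(40); });
```

Taking the median rather than the first run matters: the first few iterations measure JIT warmup, not steady-state speed.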
Figma's rendering engine is written in C++ and compiled to WASM via Emscripten. Vector math, layout engine, and rendering all run in WASM. JavaScript handles only the UI controls and acts as the interface to the engine.
Google Earth Web: 3D terrain rendering, satellite image processing, and physically-based atmospheric effects run in WASM — a web port of the C++ client codebase.
Photoshop Web: a C/C++ codebase compiled to WASM via Emscripten, integrated with the Canvas API for layers, masks, and filters in the browser.
// ffmpeg.wasm: video conversion in the browser (pre-0.12 @ffmpeg/ffmpeg API)
import { createFFmpeg } from '@ffmpeg/ffmpeg';
const ffmpeg = createFFmpeg({ log: true });
await ffmpeg.load();
await ffmpeg.run('-i', 'input.mp4', 'output.gif');

// TensorFlow.js WASM backend: accelerated ML inference
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';
await tf.setBackend('wasm');

// Others: sqlite-wasm, pdfjs, sharp (Node.js)
Even WASM can block the main thread for heavy computations. Combine with Web Workers:
// worker.js
import init, { heavy_computation } from './pkg/my_wasm.js';

let initialized = false;
self.onmessage = async (e) => {
  if (!initialized) {
    await init();
    initialized = true;
  }
  const { id, data } = e.data;
  const result = heavy_computation(data);
  self.postMessage({ id, result });
};
// main.js
const worker = new Worker('./worker.js', { type: 'module' });
const pending = new Map();

// One shared listener; reassigning worker.onmessage per call (a common mistake)
// would drop in-flight requests whenever calls overlap
worker.addEventListener('message', (e) => {
  const { id, result } = e.data;
  pending.get(id)?.(result);
  pending.delete(id);
});

function runWasmInWorker(data) {
  return new Promise((resolve) => {
    const id = crypto.randomUUID();
    pending.set(id, resolve);
    worker.postMessage({ id, data });
  });
}
const result = await runWasmInWorker(largeDataset);
Is it CPU-intensive?
├── NO → Stick with JavaScript
└── YES → Is the UI blocking?
    ├── NO → Optimize your JS, it's fine
    └── YES → Does a Web Worker solve it?
        ├── YES → Web Worker + JS
        └── NO (complex number crunching, porting C/C++/Rust libs)
            → Reach for WASM
Use WASM for: Image/video/audio processing, cryptography, physics engines, ML inference, porting native libraries.
Skip WASM for: Business logic, API calls, form handling, routing, "it seems like it should be faster."
WebAssembly isn't a JavaScript killer. It complements JavaScript. JS still owns the DOM. WASM takes over the computations that push the CPU hard.
Key points: WASM is a compilation target, not a language you hand-write; it wins on CPU-bound number crunching, not DOM or I/O work; Rust plus wasm-pack is the smoothest on-ramp; and pairing WASM with a Web Worker keeps the main thread responsive.
The next time you hit a performance wall in the browser, don't give up and say "this can't be done on the web." It probably can.