
Virtual DOM: The Real Reason React is Fast
Rebuilding a real house is expensive. Smart remodeling checks the blueprints (the Virtual DOM) first.

When I first learned frontend development, a senior developer told me: "DOM manipulation is expensive, so minimize it." I didn't understand what that meant. Writing document.getElementById().innerHTML = "new text" in JavaScript seemed simple. What's expensive about changing a string? But as I used React and heard people say "Virtual DOM improves performance," I wanted to truly understand what this meant.
It came down to this: the browser's process of redrawing the screen is expensive. When you manipulate the DOM, the browser doesn't just change a value in memory—it reruns the entire rendering pipeline. Once I understood this rendering pipeline, the need for Virtual DOM clicked.
When a browser receives HTML, it goes through these steps: parsing HTML into the DOM tree, parsing CSS into the CSSOM, combining the two into the render tree, Layout (Reflow) to compute each element's size and position, Paint to fill in the pixels, and Composite to assemble the layers on screen.
Layout (Reflow) is the most expensive step. When one element's size or position changes, surrounding elements need recalculation. For example, if you add an item to the top of a list, all items below get pushed down. The browser must recalculate all their Y coordinates.
I understood this as "subway seat shifting". When someone squeezes in the middle, everyone behind shifts one seat over. Similarly, when one DOM element changes, surrounding elements are affected in a chain reaction.
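The chain reaction can be modeled with a toy layout function. This is purely illustrative (real layout engines are far more complex); `layoutList` is a made-up helper, not a browser API:

```javascript
// A toy model of why inserting at the top of a list is costly:
// every item below the insertion point needs its Y coordinate recomputed.
function layoutList(items, rowHeight) {
  // Assign each item a Y coordinate based on its index.
  return items.map((item, i) => ({ item, y: i * rowHeight }));
}

const before = layoutList(['A', 'B', 'C'], 20);
// Insert 'X' at the top: every existing item's Y changes.
const after = layoutList(['X', 'A', 'B', 'C'], 20);

const moved = after.filter(({ item, y }) => {
  const prev = before.find(p => p.item === item);
  return prev && prev.y !== y;
});
console.log(moved.length); // 3 — all three original items shifted
```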
Initially, "creating a fake DOM in memory" seemed odd. "Why not just modify the real DOM directly?" But observing how React works, I understood: Virtual DOM acts as a "buffer that batches changes for single processing".
I accepted it this way. What React does: when state changes, it first builds a new Virtual DOM tree in memory, compares (diffs) it against the previous tree, and then applies only the minimal set of changes to the Real DOM in a single batch.
It came down to this: minimizing unnecessary construction is the core. If the state changes 10 times but the final result matches the initial state, the Real DOM isn't touched at all. Everything happens in simulation on the Virtual DOM.
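The "state changes 10 times but the Real DOM is never touched" idea can be sketched in a few lines. `diffText` below is a made-up stand-in for React's diffing, not its actual algorithm:

```javascript
// Toy sketch: batch state changes in memory, then diff the final
// value against what was last committed to the Real DOM.
function diffText(prev, next) {
  // Return the list of DOM operations needed; empty if nothing changed.
  return prev === next ? [] : [{ op: 'setText', value: next }];
}

let committed = 'Count: 0';
let pending = committed;
// Ten state changes that end where they started:
for (let i = 1; i <= 5; i++) pending = `Count: ${i}`;
for (let i = 4; i >= 0; i--) pending = `Count: ${i}`;

const ops = diffText(committed, pending);
console.log(ops.length); // 0 — the Real DOM is never touched
```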
Reading React's official documentation, I properly understood Reconciliation. It's not just "comparing"—the efficient comparison algorithm is key.
When state or props change, React re-executes the component. The JSX in the return statement converts to a Virtual DOM object.
// Example: Counter component
function Counter() {
  const [count, setCount] = React.useState(0);
  return (
    <div>
      <h1>Count: {count}</h1>
      <button onClick={() => setCount(count + 1)}>Increase</button>
    </div>
  );
}

// Virtual DOM when count is 0 (simplified)
{
  type: 'div',
  props: {},
  children: [
    { type: 'h1', props: {}, children: ['Count: 0'] },
    { type: 'button', props: { onClick: [Function] }, children: ['Increase'] }
  ]
}

// When count becomes 1, new Virtual DOM
{
  type: 'div',
  props: {},
  children: [
    { type: 'h1', props: {}, children: ['Count: 1'] }, // Only this changed!
    { type: 'button', props: { onClick: [Function] }, children: ['Increase'] }
  ]
}
This process is pure JavaScript object manipulation, so it's extremely fast. No browser involvement needed.
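For intuition, here is roughly the kind of helper that JSX compiles down to. `h` is a hypothetical function shaped like `React.createElement`; the real implementation does much more (key extraction, child normalization, and so on):

```javascript
// Minimal sketch of a JSX-style element factory (hypothetical `h` helper).
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <div><h1>Count: 0</h1><button>Increase</button></div> roughly becomes:
const tree = h('div', null,
  h('h1', null, 'Count: 0'),
  h('button', { onClick: () => {} }, 'Increase')
);

console.log(tree.children[0].children[0]); // 'Count: 0'
```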
React compares the previous Virtual DOM with the new one. A general algorithm for diffing two trees runs in O(n³) time—far too slow for real UIs—so React uses a heuristic O(n) algorithm based on two assumptions.
Assumption 1: Different element types produce completely different trees.

// Previous
<div><span>Hello</span></div>
// New
<div><p>Hello</p></div>
// span changed to p, so remove span and create a new p
Assumption 2: List items can be identified by key prop.
// Without keys
<ul>
  <li>A</li>
  <li>B</li>
</ul>

// Adding item C at the beginning
<ul>
  <li>C</li> // React: "First changed from A to C? Remove A, create C."
  <li>A</li> // "Second changed from B to A?"
  <li>B</li> // "Third is new B."
</ul>
// Result: Inefficiently recreates everything

// With keys
<ul>
  <li key="a">A</li>
  <li key="b">B</li>
</ul>

// Adding C
<ul>
  <li key="c">C</li> // React: "key='c' is new. Just add this."
  <li key="a">A</li> // "key='a' unchanged. Leave it."
  <li key="b">B</li> // "key='b' unchanged too."
</ul>
I understood this as "finding students by student ID". Names (content) can change, but student IDs (keys) are unique, so you can determine "this student is the same, that student is new" by key.
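The student-ID idea can be sketched as a toy keyed diff. `diffByKey` is illustrative only—React's child reconciliation also handles reordering, updates, and unmounting:

```javascript
// Illustrative keyed diff: match children by key, so prepending an item
// creates one new node instead of rewriting the whole list.
function diffByKey(prevChildren, nextChildren) {
  const prevKeys = new Set(prevChildren.map(c => c.key));
  return {
    created: nextChildren.filter(c => !prevKeys.has(c.key)).map(c => c.key),
    kept: nextChildren.filter(c => prevKeys.has(c.key)).map(c => c.key),
  };
}

const prevList = [{ key: 'a' }, { key: 'b' }];
const nextList = [{ key: 'c' }, { key: 'a' }, { key: 'b' }];
console.log(diffByKey(prevList, nextList)); // { created: ['c'], kept: ['a', 'b'] }
```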
Only changes found by Diffing get applied to Real DOM. In the example above, only the <h1> tag's text changes from "Count: 0" to "Count: 1". The <div> and <button> remain untouched.
In this process, React uses Fiber Architecture (since React 16). Fiber breaks work into small units, allowing the browser to handle urgent tasks (like user input) first, then resume rendering. I thought I should study this separately later.
React optimizes automatically, but developers can give hints. I accepted these three approaches.
First, React.memo: if props haven't changed, don't re-render the component.
const ExpensiveComponent = React.memo(({ data }) => {
  console.log("Rendered!");
  return <div>{data}</div>;
});

function Parent() {
  const [count, setCount] = React.useState(0);
  const data = "Fixed data";
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>Count: {count}</button>
      <ExpensiveComponent data={data} />
      {/* Even if parent re-renders, ExpensiveComponent won't if data didn't change */}
    </div>
  );
}
Second, useMemo: don't recompute expensive calculations on every render—cache the result.
function FilteredList({ items, filterText }) {
  const filteredItems = React.useMemo(() => {
    console.log("Filtering...");
    return items.filter(item => item.includes(filterText));
  }, [items, filterText]); // Recalculate only when items or filterText changes

  return <ul>{filteredItems.map(item => <li key={item}>{item}</li>)}</ul>;
}
Third, useCallback: when passing functions as props, don't create a new function on every render.
function Parent() {
  const [count, setCount] = React.useState(0);

  // Without useCallback, a new function is created each render, causing Child to re-render
  const handleClick = React.useCallback(() => {
    console.log("Clicked!");
  }, []); // Empty dependency array means the function never changes

  return (
    <div>
      <button onClick={() => setCount(count + 1)}>Count: {count}</button>
      <Child onClick={handleClick} />
    </div>
  );
}

const Child = React.memo(({ onClick }) => {
  console.log("Child rendering");
  return <button onClick={onClick}>Click me</button>;
});
While studying React, I became curious how other frameworks handle this.
Vue also uses Virtual DOM, but unlike React, it has a Reactivity System. Vue tracks which data affects which components. When data A changes, it knows precisely "only component X needs re-rendering." React, on the other hand, re-renders all child components when state changes, then filters through Diffing, so Vue can theoretically be more efficient.
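Vue-style dependency tracking can be imitated in miniature with a `Proxy`. This toy `reactive` helper is my own sketch for intuition, not Vue's actual implementation (which also tracks reads per component):

```javascript
// Toy version of reactivity-system change tracking using Proxy.
function reactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key); // the framework knows exactly which key changed
      return true;
    },
  });
}

const changed = [];
const state = reactive({ count: 0, name: 'Ada' }, key => changed.push(key));
state.count = 1;
console.log(changed); // ['count'] — only views reading `count` would update
```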
Angular's Ivy renderer uses Incremental DOM. Unlike Virtual DOM, it doesn't keep two trees (old and new) in memory. Instead, it directly traverses the Real DOM and updates only necessary parts. This uses less memory. I understood this as "on-site construction"—instead of drawing two blueprints, you go on-site and directly say "change this, change that."
Svelte doesn't use Virtual DOM at all. During build time, it generates code like "when variable A changes, update DOM node B." No runtime overhead, making it extremely fast. I accepted this as "pre-assembled at factory and shipped". Instead of assembling (Diffing) on-site (browser), you create the finished product at the factory (build time) and send it.
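To get a feel for the compiled-output approach, here is the rough flavor of what such generated code does—direct, targeted writes with no diffing at runtime. This is a hand-written illustration, not actual Svelte output:

```javascript
// Hand-written illustration of "when variable A changes, update node B":
// the update path is hard-wired at build time, so no tree diff is needed.
function createCounterView(node) {
  let text = '';
  return {
    setCount(count) {
      const next = `Count: ${count}`;
      if (next !== text) {
        text = next;
        node.textContent = text; // direct, targeted DOM write
      }
    },
  };
}

const fakeNode = { textContent: '' }; // stand-in for a real DOM node
const view = createCounterView(fakeNode);
view.setCount(1);
console.log(fakeNode.textContent); // 'Count: 1'
```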
Virtual DOM isn't a cure-all. The case that resonated with me most is very large lists.
For example, if a table has 10,000 rows updating on scroll, Virtual DOM Diffing itself becomes burdensome. In such cases, use virtualization. Only render the 100 visible rows in the DOM, swapping top and bottom as you scroll (libraries like react-window, react-virtualized).
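The windowing math such libraries do internally boils down to a slice calculation. A simplified sketch (real libraries also handle overscan, variable row heights, and scroll events):

```javascript
// Compute which rows are visible in the viewport, so only those
// get rendered to the DOM.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const start = Math.floor(scrollTop / rowHeight);
  const end = Math.min(totalRows, Math.ceil((scrollTop + viewportHeight) / rowHeight));
  return { start, end }; // render rows [start, end) out of totalRows
}

// 10,000 rows, 20px each, 600px viewport, scrolled to 4000px:
console.log(visibleRange(4000, 600, 20, 10000)); // { start: 200, end: 230 }
```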
Another case is animations. Processing 60fps animations through Virtual DOM can be costly due to Diffing overhead. In these cases, using CSS animations or the Web Animations API directly is better.
Recently, React 18 introduced Concurrent Features such as useTransition and useDeferredValue, which let you mark some updates as non-urgent so urgent ones (like typing) render first. While not directly related to Virtual DOM, I understood it as an extension of rendering optimization.
This is possible because Fiber Architecture can break work into chunks and prioritize them. It came down to this: Virtual DOM exists not just "to be fast," but "to improve user experience" and continues evolving.
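The chunk-and-yield idea can be sketched as a cooperative work loop. This is a toy model in the spirit of Fiber, not React's actual scheduler:

```javascript
// Process small units of work, yielding whenever `shouldYield` says
// the browser needs the main thread back; resume later.
function workLoop(units, shouldYield) {
  const done = [];
  while (units.length > 0 && !shouldYield()) {
    done.push(units.shift()()); // perform one small unit of work
  }
  return { done, remaining: units.length }; // pick up the rest next time
}

const units = Array.from({ length: 5 }, (_, i) => () => i * i);
// Pretend the browser interrupts us after 3 units:
let calls = 0;
const result = workLoop(units, () => ++calls > 3);
console.log(result); // { done: [0, 1, 4], remaining: 2 }
```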
The core I accepted from studying Virtual DOM: DOM manipulation is expensive because it reruns the rendering pipeline (Layout above all); Virtual DOM batches changes and applies only the minimal diff; and the key prop is what makes list diffing efficient.
Ultimately, Virtual DOM isn't "the absolute fastest" option but "a trade-off that keeps code maintainable while delivering sufficiently fast performance". I understood it this way, and now when writing React code, it's clear why I should use keys and why I shouldn't overuse useMemo.