
Dynamic Programming: Don't Repeat Yourself
Scary name, simple concept. It's just 'Memoization'. Solving Fibonacci without burning the CPU.


When I first heard "Dynamic Programming", I was genuinely scared. CS majors casually say "just solve it with DP", but I had zero clue what that meant. It sounded high-level, like it required complex math formulas. But when I finally dug into it, the truth behind this intimidating name was shockingly simple.
"Write down what you calculated, use it again later"That's it. Like memorizing multiplication tables in elementary school—calculate once, memo it, reuse it. Why did they give such a grandiose name to such a simple concept? The reason is hilarious. A mathematician named Richard Bellman in the 1950s deliberately chose a fancy name to secure research funding. "Programming" didn't even mean coding—it meant "planning". It was all marketing.
Once I learned this, I finally felt comfortable with DP. No need to be scared of the name. It's just smart recursion.
To understand DP, I started with Fibonacci. It's simple. F(n) = F(n-1) + F(n-2), just add the previous two numbers. Coding it recursively matches the definition perfectly.
function fib(n) {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55
Looked perfect. But when I ran fib(40), my computer froze. Well, not quite froze: it was just calculating for several seconds. Why so slow?
I drew the recursion tree and found the answer. To calculate fib(5):
                    fib(5)
                   /      \
              fib(4)        fib(3)
             /      \      /      \
        fib(3)  fib(2)  fib(2)  fib(1)
        /    \
   fib(2)  fib(1)   ...
fib(3) gets calculated twice, fib(2) three times, fib(1) five times. Complete redundant calculation hell. This tree grows exponentially: time complexity is O(2^n). For n=40, the 2^40 bound is roughly a trillion operations, and even the actual number of calls runs into the hundreds of millions. I could hear my MacBook screaming.
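You don't have to take the tree's word for it. A quick throwaway counter (my own sketch) shows exactly how many times each subproblem gets recomputed:

// Count how many times the naive recursion computes each fib(k)
const calls = {};
function fibCounted(n) {
  calls[n] = (calls[n] || 0) + 1;
  if (n <= 1) return n;
  return fibCounted(n - 1) + fibCounted(n - 2);
}

fibCounted(5);
console.log(calls); // { '0': 3, '1': 5, '2': 3, '3': 2, '4': 1, '5': 1 }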
"Why calculate the same thing over and over?" This question was the birth of DP for me.
The answer was stupidly simple. Write down what you calculated. If you calculated fib(3) once, write it down somewhere. Next time you need fib(3), don't recalculate—just look at what you wrote. This is Memoization.
const memo = {};

function fib(n) {
  if (n <= 1) return n;
  if (n in memo) return memo[n]; // Already calculated? Return it ('in' also works if a cached value were 0)
  memo[n] = fib(n - 1) + fib(n - 2); // Calculate and store in the memo
  return memo[n];
}
console.log(fib(40)); // 102334155, calculated instantly
Running fib(40) with this code took less than 0.001 seconds. Felt like magic. It went from exponential time to linear time O(n), because each number is calculated exactly once.
After understanding this, I defined DP like this:
"Don't solve the same problem twice. Write down the answer somewhere and reuse it."It's like how banks calculate interest—they don't recalculate from scratch every month, they take last month's balance and add to it. It's obvious, but when you write recursive code, you keep forgetting this.
I thought memoization was all there was to DP, but there are actually two ways to go about it.

Top-Down (Memoization): start from the top and work down. Use recursion, but cache calculated values. The Fibonacci I wrote above is this approach.
const memo = {};

function fib(n) {
  if (n <= 1) return n;
  if (n in memo) return memo[n];
  memo[n] = fib(n - 1) + fib(n - 2);
  return memo[n];
}
Pros: Code is intuitive. You can write it exactly the way the recursive definition reads.
Cons: Recursion stack builds up, so if n is too large, you might hit stack overflow.
Bottom-Up (Tabulation): build up from the smallest pieces. Use loops.
function fib(n) {
  if (n <= 1) return n;
  const dp = [0, 1];
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];
  }
  return dp[n];
}
Pros: No recursion overhead, no stack overflow worries. Usually faster.
Cons: You have to fill from small to large, which can make some problems less intuitive to code.
At first, Top-Down felt easier to understand. As I got more comfortable, Bottom-Up felt cleaner. Choose whichever feels natural for the problem.
Not every problem can be solved with DP. Problems suited to DP have two properties.

Overlapping Subproblems: when you break a big problem into smaller ones, the same small problem appears multiple times. Fibonacci is the poster child. To solve fib(5), you need fib(3) multiple times.
If the small problems are all independent, just use Divide & Conquer. Merge sort is a good example. When you split an array in half, left and right don't overlap. That's why merge sort isn't DP.
Optimal Substructure: combining optimal solutions to small problems gives the optimal solution to the big problem. For example, in shortest path problems, if the shortest path from A to C is A → B → C, then A → B must also be a shortest path. If A → B weren't shortest, A → C wouldn't be shortest either.
If both properties are satisfied, you can use DP.
Fibonacci alone wasn't enough for me, so I tackled the coin change problem. This is a classic DP problem that comes up in real work too.
Problem: Given coin denominations and a target amount, find the minimum number of coins needed. For example, if coins are [1, 5, 10, 25] and amount is 32, the minimum is 4 coins: 25 + 5 + 1 + 1.
Define dp[i] as "minimum coins needed to make amount i". Then:
dp[i] = min(dp[i - coin] + 1) for coin in coins
To find dp[32]:
dp[31] + 1 (use a 1 coin)
dp[27] + 1 (use a 5 coin)
dp[22] + 1 (use a 10 coin)
dp[7] + 1 (use a 25 coin)
Pick the minimum.
function coinChange(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // 0 amount needs 0 coins
  for (let i = 1; i <= amount; i++) {
    for (const coin of coins) {
      if (i - coin >= 0) {
        dp[i] = Math.min(dp[i], dp[i - coin] + 1);
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}
console.log(coinChange([1, 5, 10, 25], 32)); // 4
console.log(coinChange([2, 5], 3)); // -1 (impossible)
The core here is the recurrence relation. dp[i] depends on previous states dp[i - coin]. We define this relationship clearly and build up from small values. Time complexity is O(amount × coins.length).
Solving this problem deepened my understanding of DP. DP is about defining "states" and expressing "state transitions" with recurrence relations. Once you nail that framework, complex problems unravel.
Next came the Longest Common Subsequence (LCS) problem. LCS finds the longest subsequence common to two strings. For example, the LCS of "ABCBDAB" and "BDCABA" is "BCBA" or "BDAB" (length 4).
Define dp[i][j] as "LCS length of first i characters of str1 and first j characters of str2". Recurrence relation:
if (str1[i-1] === str2[j-1]):
    dp[i][j] = dp[i-1][j-1] + 1
else:
    dp[i][j] = max(dp[i-1][j], dp[i][j-1])
If characters match, add 1 to previous state. If not, take the max of two paths.
function lcs(str1, str2) {
  const m = str1.length;
  const n = str2.length;
  const dp = Array.from({ length: m + 1 }, () => Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (str1[i - 1] === str2[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1;
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[m][n];
}
console.log(lcs("ABCBDAB", "BDCABA")); // 4
This is a classic 2D DP example. Fill the dp table to get the final answer. Time complexity is O(m × n).
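The table only gives the length, though. If you want the actual subsequence, you can backtrack through the finished table. Here's a sketch of that extra step (my addition, building on the same dp definition):

// After filling the table, walk backwards from dp[m][n] to recover one LCS
function lcsString(str1, str2) {
  const m = str1.length;
  const n = str2.length;
  const dp = Array.from({ length: m + 1 }, () => Array(n + 1).fill(0));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] = str1[i - 1] === str2[j - 1]
        ? dp[i - 1][j - 1] + 1
        : Math.max(dp[i - 1][j], dp[i][j - 1]);
    }
  }
  // Backtrack: a match belongs to the LCS; otherwise follow the larger neighbor
  let i = m, j = n;
  const out = [];
  while (i > 0 && j > 0) {
    if (str1[i - 1] === str2[j - 1]) {
      out.push(str1[i - 1]);
      i--; j--;
    } else if (dp[i - 1][j] >= dp[i][j - 1]) {
      i--;
    } else {
      j--;
    }
  }
  return out.reverse().join('');
}

console.log(lcsString("ABCBDAB", "BDCABA")); // "BCBA", one valid LCS of length 4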
Solving this made me realize DP applies to strings, arrays, graphs, and more. DP isn't just about number crunching—it's a tool for managing state relationships.
Knapsack is the crown jewel of DP. Given a weight-limited bag and valuable items, find the maximum value you can carry.
Problem: Each item has weight and value. Bag capacity is W. Each item can be taken at most once (0/1 knapsack).
Define dp[i][w] as "maximum value with first i items and weight limit w". Recurrence:
if (weight[i] <= w):
    dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
else:
    dp[i][w] = dp[i-1][w]
Either take the item or skip it, choose the max.
function knapsack(weights, values, capacity) {
  const n = weights.length;
  const dp = Array.from({ length: n + 1 }, () => Array(capacity + 1).fill(0));
  for (let i = 1; i <= n; i++) {
    for (let w = 1; w <= capacity; w++) {
      if (weights[i - 1] <= w) {
        dp[i][w] = Math.max(
          dp[i - 1][w],
          dp[i - 1][w - weights[i - 1]] + values[i - 1]
        );
      } else {
        dp[i][w] = dp[i - 1][w];
      }
    }
  }
  return dp[n][capacity];
}
const weights = [2, 3, 4, 5];
const values = [3, 4, 5, 6];
const capacity = 8;
console.log(knapsack(weights, values, capacity)); // 10
Knapsack has many real-world applications: project selection within budget, server resource allocation, ad slot assignment—all variants of knapsack.
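To make the analogy concrete, here's a hypothetical project-selection example reusing the knapsack function above (the numbers are made up):

// Hypothetical: pick projects within a $100k budget to maximize profit
const costs = [30, 50, 60, 20];    // project costs in $k (the "weights")
const profits = [40, 60, 70, 30];  // expected profits in $k (the "values")
console.log(knapsack(costs, profits, 100)); // 130 (projects costing 30 + 50 + 20)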
Large DP tables can consume too much memory. But many DP problems only need previous states. Fibonacci only needs dp[i-1] and dp[i-2]. No need to keep the entire array.
function fibOptimized(n) {
  if (n <= 1) return n;
  let prev2 = 0;
  let prev1 = 1;
  for (let i = 2; i <= n; i++) {
    const current = prev1 + prev2;
    prev2 = prev1;
    prev1 = current;
  }
  return prev1;
}
This reduces space complexity from O(n) to O(1). Knapsack can be similarly optimized to a 1D array.
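Here's a sketch of that 1D version (my adaptation of the knapsack code above). The trick is running the weight loop backwards so each item is counted at most once:

// 0/1 knapsack with a single row; dp[w] = best value at capacity w
function knapsack1D(weights, values, capacity) {
  const dp = new Array(capacity + 1).fill(0);
  for (let i = 0; i < weights.length; i++) {
    // Go from high w to low so this item isn't reused within the same pass
    for (let w = capacity; w >= weights[i]; w--) {
      dp[w] = Math.max(dp[w], dp[w - weights[i]] + values[i]);
    }
  }
  return dp[capacity];
}

console.log(knapsack1D([2, 3, 4, 5], [3, 4, 5, 6], 8)); // 10, same as before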
Seeing this technique made me think: "DP isn't just caching—it's the art of state management."
While studying DP, I got curious about similar-looking techniques.
Greedy: make the best choice at each step. In coin change, always using the largest coin first is greedy. But if coins are [1, 3, 4] and the amount is 6, greedy picks 4 + 1 + 1 = 3 coins, while the answer is 3 + 3 = 2 coins. Greedy doesn't always guarantee the optimum; DP considers all cases to find it.
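You can watch greedy lose with a quick side-by-side (a throwaway greedy sketch, reusing the coinChange function from earlier):

// Greedy: repeatedly take the largest coin that still fits
function greedyCoins(coins, amount) {
  const sorted = [...coins].sort((a, b) => b - a);
  let count = 0;
  for (const coin of sorted) {
    count += Math.floor(amount / coin);
    amount %= coin;
  }
  return amount === 0 ? count : -1;
}

console.log(greedyCoins([1, 3, 4], 6)); // 3 (4 + 1 + 1)
console.log(coinChange([1, 3, 4], 6)); // 2 (3 + 3)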
Divide & Conquer: break the problem into independent subproblems and solve each. Merge sort and quicksort are classic examples. The difference from DP is that the subproblems don't overlap. Divide & conquer solves each subproblem once; in DP the same subproblems would come up multiple times, so it optimizes with caching.
So how do you spot a DP problem at a glance? Here are some patterns I noticed.
The problem asks for an optimum. Phrases like "minimum cost", "maximum profit", "optimal path": suspect DP.
The problem counts things. Questions like "number of ways to make N" or "number of ways to split an array": likely DP.
Your recursion repeats itself. If you write recursion and see redundant calculations, it's DP.
The constraints are small (n <= 1000 or so). DP usually has O(n^2) ~ O(n^3) complexity, so small constraints hint that DP is viable.
Studying DP taught me something. DP isn't a complex algorithm—it's the "power of remembering the past". Don't repeat the same mistake, keep using what you learned before. Life is similar. When you face the same problem, you recall past experience and make better choices. DP gives computers this ability.
At first, "Dynamic Programming" intimidated me. Now, the name feels friendly. Because I discovered the simple truth behind the grandiose name.
In the end, DP was this: Don't solve the same thing twice. Memo it and reuse it. That's all.