
Greedy Algorithm: Marshmallow Test Fail
Choosing immediate profit without thinking about future. Sometimes shortsightedness is the optimal solution.

There's this famous psychology experiment. You put a marshmallow in front of a kid and say, "If you wait 15 minutes, I'll give you another one." Most kids can't resist. They eat it immediately. I probably would've done the same.
That's exactly what greedy algorithms do. They grab what looks best right now. They don't care about future consequences or whether a better choice is hiding somewhere. They just pick what seems optimal at this moment.
The surprising thing is, this shortsighted strategy sometimes gives you the optimal solution. Running a startup taught me that not every problem demands a "perfect plan." Sometimes the answer is just intuitively picking "whatever looks best right now."
I first experienced greedy during my convenience store part-time job. A customer bought something for 4,000 won and paid with 10,000 won. I needed to give 6,000 won in change, using the minimum number of bills/coins. The drawer had 5,000 won bills, 1,000 won bills, 500 won coins, and 100 won coins.
I didn't need to think. Just use the largest denomination possible first: one 5,000 won bill, then one 1,000 won bill.
Done. Total 2 pieces. That's the minimum.
This is greedy. "Use the largest denomination available right now." Greedily choose the maximum at every moment. And in this case, that's the correct answer.
Running a service taught me something. The assumption "it worked so far, so it'll keep working" is dangerous. Same with greedy.
Let's say the coin denominations are 100 won, 400 won, and 500 won, and we need to give 800 won in change.
Greedy approach: one 500 won coin, then three 100 won coins. Total 4 coins.
But wait. Two 400 won coins make 800 won. 2 coins would've been enough. Greedy grabbed the 500 won greedily and ended up with a worse solution.
This is greedy's dilemma. It gets baited by the "largest denomination" in front of it and misses the global optimum. For cases like this, you need Dynamic Programming (DP) to consider all possibilities.
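To see the difference, here's a minimal DP sketch (my own illustration, not production code) that finds the true minimum number of coins by checking every denomination at every sub-amount:

```javascript
// DP coin change: dp[i] = minimum coins needed to make amount i.
// Unlike greedy, it considers every denomination at every sub-amount.
function minCoins(amount, denominations) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // zero coins make zero won
  for (let i = 1; i <= amount; i++) {
    for (const coin of denominations) {
      if (coin <= i && dp[i - coin] + 1 < dp[i]) {
        dp[i] = dp[i - coin] + 1;
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}

console.log(minCoins(800, [100, 400, 500])); // 2 (400 + 400)
```

The price of that certainty is an O(amount × denominations) loop instead of greedy's single pass.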
My PM at work often asks, "Can we solve this with greedy?" My answer is always the same. "If two conditions are satisfied, yes."
Condition one, the Greedy Choice Property: the locally optimal choice doesn't cause problems later. If using a larger denomination coin doesn't ruin your future options, greedy works.
In the change-making problem with 5000, 1000, 500, 100, larger denominations are multiples of smaller ones. So using larger ones first is always advantageous. But with 100, 400, 500, there's no multiple relationship, so greedy fails.
Condition two, Optimal Substructure: the optimal solution to the overall problem is composed of optimal solutions to subproblems. Optimal change for 6,000 won = one 5,000 won bill + the optimal solution for the remaining 1,000 won. If you can decompose recursively like this, greedy works.
The difference from DP is that DP "looks at all subproblems" while greedy "only looks at the current best." Greedy is much faster but has stricter conditions.
Greedy change-making code that works with standard currency denominations.
```javascript
function makeChange(amount, denominations) {
  // Sort largest first
  const sorted = [...denominations].sort((a, b) => b - a);
  const result = [];
  let remaining = amount;
  for (const coin of sorted) {
    while (remaining >= coin) {
      result.push(coin);
      remaining -= coin;
    }
  }
  return remaining === 0 ? result : null; // Return null if impossible
}

console.log(makeChange(6000, [5000, 1000, 500, 100]));
// [5000, 1000] → 2 pieces
console.log(makeChange(800, [500, 400, 100]));
// [500, 100, 100, 100] → 4 pieces (optimal is 400 + 400 = 2, greedy fails)
```
Time Complexity: O(n log n) (sorting) + O(amount / min_coin) (loop)
Space Complexity: O(result size)
This code faithfully follows the greedy strategy of "use largest first." But it doesn't guarantee an optimal solution depending on the denominations.
Imagine our startup office has only one meeting room. Multiple teams submitted time slots, but overlapping meetings can't happen simultaneously. How do we assign the maximum number of meetings?
| Meeting | Start | End |
|---|---|---|
| A | 9:00 | 10:30 |
| B | 9:30 | 11:00 |
| C | 10:00 | 11:30 |
| D | 10:30 | 12:00 |
| E | 11:00 | 12:30 |
Intuition: "Assign meetings that end earliest first." The earlier a meeting ends, the more room you have for subsequent meetings.
```javascript
function activitySelection(activities) {
  // Sort by end time, ascending
  const sorted = [...activities].sort((a, b) => a.end - b.end);
  const selected = [];
  let lastEndTime = 0;
  for (const activity of sorted) {
    // Only select meetings that start after the previous one ends
    if (activity.start >= lastEndTime) {
      selected.push(activity);
      lastEndTime = activity.end;
    }
  }
  return selected;
}

const meetings = [
  { name: 'A', start: 9, end: 10.5 },
  { name: 'B', start: 9.5, end: 11 },
  { name: 'C', start: 10, end: 11.5 },
  { name: 'D', start: 10.5, end: 12 },
  { name: 'E', start: 11, end: 12.5 }
];
console.log(activitySelection(meetings));
// [A, D] → 2 meetings assigned (B, C, and E each overlap a selected meeting)
```
Why is this optimal? Earlier end times leave more options for later choices. Even if you greedily choose "whichever ends earliest," you don't miss the global optimum. This is greedy's magic.
Time Complexity: O(n log n) (sorting)
Space Complexity: O(n) (result array)
You're packing items into a knapsack with a weight limit. How do you maximize value?
Fractional Knapsack: You can split items (e.g., gold dust).
0/1 Knapsack: You can only take items whole (e.g., a laptop).
Fractional works with greedy. Just pack items with highest value per weight (value/weight) first. It's like buying stocks - "buy highest return first."
0/1 fails with greedy. You need DP, because taking a "high value item now" might leave you with bad weight combinations later.
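Here's a minimal sketch of the fractional version; the item names and numbers are made up for illustration:

```javascript
// Fractional knapsack: sort by value density (value/weight),
// take items greedily, and split the last one if it doesn't fit whole.
function fractionalKnapsack(items, capacity) {
  const sorted = [...items].sort(
    (a, b) => b.value / b.weight - a.value / a.weight
  );
  let remaining = capacity;
  let totalValue = 0;
  for (const item of sorted) {
    if (remaining === 0) break;
    const take = Math.min(item.weight, remaining); // whole item, or just the fraction that fits
    totalValue += (item.value / item.weight) * take;
    remaining -= take;
  }
  return totalValue;
}

const dusts = [
  { name: 'gold dust', weight: 10, value: 60 },    // density 6
  { name: 'silver dust', weight: 20, value: 100 }, // density 5
  { name: 'copper dust', weight: 30, value: 120 }, // density 4
];
console.log(fractionalKnapsack(dusts, 50)); // 240 (all gold, all silver, 20/30 of copper)
```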
Huffman coding, used in file compression: frequent characters get short bit codes, rare characters get long bit codes. The greedy strategy of "merge the two nodes with the lowest frequency" builds an optimal compression tree.
It's like packing a delivery box - frequently used items go in easy-to-reach spots, rarely used ones go deep inside.
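For the curious, here's a rough sketch of the tree-building step. It re-sorts an array instead of using a proper min-heap to stay short, and the character frequencies are made up:

```javascript
// Huffman sketch: repeatedly merge the two lowest-frequency nodes.
// Re-sorting stands in for a min-heap - inefficient, but fine for a demo.
function buildHuffmanTree(freqs) {
  let nodes = Object.entries(freqs).map(([char, freq]) => ({ char, freq }));
  while (nodes.length > 1) {
    nodes.sort((a, b) => a.freq - b.freq);
    const [left, right] = nodes.splice(0, 2); // the two rarest nodes
    nodes.push({ freq: left.freq + right.freq, left, right });
  }
  return nodes[0];
}

// Walk the tree: left edge = '0', right edge = '1'.
// Frequent characters end up near the root, so their codes are short.
function assignCodes(node, prefix = '', codes = {}) {
  if (node.char !== undefined) {
    codes[node.char] = prefix || '0'; // single-character edge case
  } else {
    assignCodes(node.left, prefix + '0', codes);
    assignCodes(node.right, prefix + '1', codes);
  }
  return codes;
}

console.log(assignCodes(buildHuffmanTree({ a: 45, b: 13, c: 12, d: 5 })));
// 'a' (most frequent) gets a 1-bit code; rare 'c' and 'd' get 3-bit codes
```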
Minimum Spanning Tree (MST), via Kruskal's algorithm: connect all vertices in a graph while minimizing total edge weight. Greedy strategy: add edges from smallest weight up, but skip any edge that would create a cycle.
Like connecting company branches with cables at minimum cost. Start with cheapest connections, but don't connect places that are already connected.
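A minimal sketch of that edge-picking loop, with a tiny union-find to detect cycles; the branch graph and cable costs are made up:

```javascript
// Kruskal sketch: sort edges by weight, add each edge unless it would
// connect two vertices that are already in the same component (a cycle).
function kruskalMST(numVertices, edges) {
  const parent = Array.from({ length: numVertices }, (_, i) => i);
  const find = (x) => (parent[x] === x ? x : (parent[x] = find(parent[x])));

  const sorted = [...edges].sort((a, b) => a.weight - b.weight);
  const mst = [];
  for (const edge of sorted) {
    const rootU = find(edge.u);
    const rootV = find(edge.v);
    if (rootU !== rootV) { // not yet connected, so no cycle: safe to add
      parent[rootU] = rootV;
      mst.push(edge);
    }
  }
  return mst;
}

// Branches 0-3 with cable costs as weights
const cables = [
  { u: 0, v: 1, weight: 4 },
  { u: 0, v: 2, weight: 1 },
  { u: 1, v: 2, weight: 2 },
  { u: 1, v: 3, weight: 5 },
  { u: 2, v: 3, weight: 3 },
];
console.log(kruskalMST(4, cables).map((e) => `${e.u}-${e.v}`));
// [ '0-2', '1-2', '2-3' ] → total cost 1 + 2 + 3 = 6
```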
Dijkstra's algorithm: find the shortest distance from a starting point to every vertex. Greedy strategy: visit the unvisited vertex with the shortest known distance from the start.
Like navigation exploring "nearest intersection so far" first. Confirm closest points first and expand outward.
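A minimal sketch of Dijkstra, using a linear scan instead of a priority queue (fine for small graphs); the intersections and distances are made up:

```javascript
// Dijkstra sketch: repeatedly settle the unvisited vertex with the
// smallest known distance from the start, then relax its neighbors.
function dijkstra(graph, start) {
  const vertices = Object.keys(graph);
  const dist = {};
  const visited = new Set();
  for (const v of vertices) dist[v] = Infinity;
  dist[start] = 0;

  while (visited.size < vertices.length) {
    // Greedy step: pick the closest unvisited vertex
    let current = null;
    for (const v of vertices) {
      if (!visited.has(v) && (current === null || dist[v] < dist[current])) {
        current = v;
      }
    }
    if (dist[current] === Infinity) break; // the rest are unreachable
    visited.add(current);
    for (const [next, weight] of graph[current]) {
      if (dist[current] + weight < dist[next]) {
        dist[next] = dist[current] + weight;
      }
    }
  }
  return dist;
}

// Intersections A-D with road lengths
const roads = {
  A: [['B', 1], ['C', 4]],
  B: [['C', 2], ['D', 6]],
  C: [['D', 3]],
  D: [],
};
console.log(dijkstra(roads, 'A')); // { A: 0, B: 1, C: 3, D: 6 }
```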
When a server needs to process multiple tasks, how do you minimize total waiting time? Process the shortest tasks first (SJF, Shortest Job First).
At a cafe, if "iced americano" takes 10 seconds but "hand drip" takes 5 minutes, processing 10 americano orders first reduces total waiting time.
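A quick sketch that shows the effect; the job lengths are made up to match the cafe example:

```javascript
// Total waiting time for a given processing order:
// each job waits for the sum of all jobs in front of it.
function totalWaitingTime(jobs) {
  let wait = 0;
  let elapsed = 0;
  for (const job of jobs) {
    wait += elapsed; // this job waited for everything before it
    elapsed += job;
  }
  return wait;
}

const orders = [300, 10, 10, 10]; // one hand drip (300s), three americanos (10s each)
console.log(totalWaitingTime(orders)); // 930s when the hand drip goes first
console.log(totalWaitingTime([...orders].sort((a, b) => a - b))); // 60s with SJF
```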
A technique to prove a greedy algorithm actually gives an optimal solution: assume the optimal solution differs from greedy's → show that swapping the optimal solution's choice for greedy's doesn't make it worse → so greedy must be optimal too.
Meeting room example: suppose some optimal schedule's first meeting is not the one that ends earliest. Swap it for the earliest-ending meeting. The replacement ends no later, so every other meeting in the schedule still fits, and the total count doesn't drop.
This way of showing "exchanging doesn't make it worse" is the Exchange Argument.
Comparing the three:
| Strategy | Characteristic | Examples |
|---|---|---|
| Greedy | Choose current best only, no looking back | Change-making, Activity selection, MST |
| DP | Solve all subproblems and combine | 0/1 Knapsack, LCS |
| Backtracking | Explore all cases with pruning | N-Queens, Sudoku |
Speed: Greedy > DP > Backtracking
Optimal guarantee: Conditional (Greedy) < Guaranteed (DP, Backtracking)
Greedy is fastest but the problem must satisfy greedy conditions. DP is slower but certain. Backtracking is exhaustive search, even slower, but useful for complex constraints.
I experienced this running a cloud service. Server bandwidth was limited, but multiple API requests were coming in. Each request had different "processing time" and "importance."
Initially, I used a greedy strategy of "highest importance first." But when important requests took long, short requests piled up behind them and waiting time exploded.
Eventually, I mixed SJF (Shortest Job First) + Priority Queue. Short ones first, but anything above a certain importance gets priority. This is also a kind of "greedy heuristic." Not a perfect optimal solution, but when you need to decide in real-time, greedy strategies were useful.
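Roughly, the dispatch rule looked like the sketch below. This is a simplified reconstruction with made-up fields and a made-up threshold, not the actual production code:

```javascript
// Hybrid dispatch sketch: requests above an importance threshold jump
// the queue; among equals, the shortest estimated job goes first.
const URGENT = 8; // hypothetical importance threshold

function nextRequest(queue) {
  return [...queue].sort((a, b) => {
    const aUrgent = a.importance >= URGENT;
    const bUrgent = b.importance >= URGENT;
    if (aUrgent !== bUrgent) return aUrgent ? -1 : 1; // urgent first
    return a.estimatedMs - b.estimatedMs;             // then shortest job first
  })[0];
}

const queue = [
  { id: 'report', estimatedMs: 5000, importance: 3 },
  { id: 'health-check', estimatedMs: 20, importance: 5 },
  { id: 'payment', estimatedMs: 800, importance: 9 },
];
console.log(nextRequest(queue).id); // 'payment' - importance beats shorter jobs
```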
Greedy looks like "failing the marshmallow test." It only looks at what's in front, doesn't think about the future. But if the problem structure allows greedy, it's the fastest and most elegant solution.
Running a company is similar. You can't perfectly plan every decision. Sometimes the answer is choosing "what's best right now" and moving fast. However, you need the eye to distinguish whether it's a problem where greedy works, or one where you need to carefully consider all cases.
Greedy algorithms teach you "when you can afford to be greedy." If the Greedy Choice Property and Optimal Substructure are satisfied, don't hesitate - grab the biggest first, whether it's change, meeting rooms, or network paths.
Next time you think "can't I just pick whatever looks best right now?", check if it's a problem where greedy works. Are the coin denominations multiples of each other? Does finishing early really help? If yes, just grab it.