
Flip-Flop: How Computers Remember
Current flows = 1, stops = 0. But how does it remember info? The birth of 1-bit memory.

When I studied logic gates (AND, OR, NOT), something felt incomplete. These gates only react while inputs are present, and the output vanishes the moment the inputs disappear. They were like goldfish: circuits that lose their memory every 3 seconds.
Yet computers store variables, remember program counters, and cache data. I write `let count = 0` every day, but I never thought about how that 0 actually "stays" inside the CPU. Electricity just flows, so how do you "trap" information?
That question led me to flip-flops.
The most confusing part at first was that latches and flip-flops are different. Both are "1-bit memory circuits," so why two names?
I searched and found: "Latches are level-triggered, flip-flops are edge-triggered." Okay, but what does that mean? I kept digging: "Latches are transparent, flip-flops are opaque." What?
After wandering for a while, I finally understood it this way: a latch is level-triggered, meaning that while its enable signal is high, the output follows the input the whole time (that's the "transparent" part). A flip-flop is edge-triggered, meaning it samples the input only at the instant the clock transitions from 0 to 1 and ignores it the rest of the time ("opaque").
Latches are too sensitive for precise timing control. That's why real CPUs mostly use flip-flops that move in sync with a clock signal. This clock is the CPU clock we already know (3 GHz = 3 billion cycles per second).
The moment I understood that the core of flip-flops is the feedback circuit, everything clicked.
Normal circuits flow one way: input → logic gate → output. But flip-flops feed output back into input. Like a snake biting its own tail.
When I first saw this, I thought, "Oh, this is an infinite loop." In programming terms:
```python
# Pseudocode
state = 0                  # Initial state
while True:                # Feedback loop
    if set_signal:
        state = 1
    elif reset_signal:
        state = 0
    # If no signal, state stays unchanged
    output = state         # Output
```
Once state = 1, it stays 1 unless you send a reset_signal. This is memory. Even when external input disappears, the value keeps circulating inside the loop.
After accepting this principle, I understood: "Ah, memory is about trapping electricity inside a loop so it can't escape."
To understand flip-flops, you first need to see the SR Latch (Set-Reset Latch). It's the most primitive form of memory circuit.
Two NOR gates with outputs connected to each other's inputs. In ASCII:
```
R ----[NOR 1]----> Q
          ^
          +------- Q' (feedback from NOR 2)

S ----[NOR 2]----> Q'
          ^
          +------- Q  (feedback from NOR 1)
```
Truth table:
| S | R | Q (next state) | Explanation |
|---|---|---|---|
| 0 | 0 | Q (hold) | Nothing happens. Holds existing value. |
| 1 | 0 | 1 | Set. Forces Q to 1. |
| 0 | 1 | 0 | Reset. Clears Q to 0. |
| 1 | 1 | X (forbidden) | Both outputs are forced to 0; the state after S and R are released is unpredictable. |
The key is that when S=0 and R=0, Q stays unchanged. This is memory. Even without external signals, the internal feedback loop keeps spinning and preserves the value.
I understood this through the light switch analogy. When you turn on a light switch (Set), the light stays on even when you release your hand. Until you turn it off (Reset). The switch's "position" is the 1-bit memory.
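To convince myself that "S=0, R=0 holds the value" really is memory, I find it helps to sketch the truth table's behavior in code (a behavioral simulation, not the gate-level circuit; `SRLatch` is just an illustrative name I chose):

```python
# A minimal behavioral sketch of the SR latch's truth table
class SRLatch:
    def __init__(self):
        self.Q = 0

    def update(self, S, R):
        if S == 1 and R == 1:
            raise ValueError("S=1, R=1 is the forbidden state")
        if S == 1:
            self.Q = 1    # Set
        elif R == 1:
            self.Q = 0    # Reset
        # S=0, R=0: hold; the feedback loop keeps the old value
        return self.Q

latch = SRLatch()
latch.update(1, 0)          # Set → Q = 1
print(latch.update(0, 0))   # 1: the input is gone, but the value stays (memory!)
latch.update(0, 1)          # Reset → Q = 0
print(latch.update(0, 0))   # 0
```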
The SR latch has a problem when S=1, R=1. So in practice, the D Flip-Flop (Data/Delay Flip-Flop) is more common.
The D flip-flop has only one input: D (Data). And it stores the D value to Q only on the rising edge of the clock signal (0→1 transition).
```
D   ----[D FF]---- Q
            ^
CLK --------+
```
| CLK edge | D | Q (next state) |
|---|---|---|
| ↑ (0→1) | 0 | 0 |
| ↑ (0→1) | 1 | 1 |
| - (other) | X | Q (hold) |
It accepts values only when the clock ticks. Like a delivery person knocking only at scheduled times. The rest of the time, even if someone rings the doorbell, the door won't open.
After understanding this, I grasped why CPUs move in sync with clock signals. All operations proceed in "tick, tick, tick" rhythm with the clock. A 3GHz CPU sends the "now!" signal 3 billion times per second.
In code analogy:
```javascript
class DFlipFlop {
  constructor() {
    this.Q = 0; // Stored value
  }

  // Called only on the clock's rising edge
  onClockRisingEdge(D) {
    this.Q = D; // Store the D value
  }

  getOutput() {
    return this.Q; // Return the stored value
  }
}

// Usage example
const ff = new DFlipFlop();
ff.onClockRisingEdge(1);     // Clock ↑, D=1 → Q=1
console.log(ff.getOutput()); // 1

// While the clock isn't ticking, Q is maintained
console.log(ff.getOutput()); // Still 1

ff.onClockRisingEdge(0);     // Clock ↑, D=0 → Q=0
console.log(ff.getOutput()); // 0
```
There are several flip-flop variants. At first I was annoyed: "Why so many types?" But I eventually realized: they changed the input method based on use case.
The JK flip-flop is a version that solves the SR latch's "S=1, R=1 is forbidden" problem.
| J | K | Q (next state) |
|---|---|---|
| 0 | 0 | Q (hold) |
| 1 | 0 | 1 |
| 0 | 1 | 0 |
| 1 | 1 | Q' (toggle) |
When J=1 and K=1, Q toggles; that's the difference from the SR latch. If it was 0 it becomes 1, and if it was 1 it becomes 0.
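The JK truth table translates to just a few lines of code (a behavioral sketch; `JKFlipFlop` is my own illustrative name):

```python
# A minimal behavioral sketch of a JK flip-flop
class JKFlipFlop:
    def __init__(self):
        self.Q = 0

    def on_clock_rising_edge(self, J, K):
        if J == 1 and K == 1:
            self.Q = 1 - self.Q   # Toggle
        elif J == 1:
            self.Q = 1            # Set
        elif K == 1:
            self.Q = 0            # Reset
        # J=0, K=0: hold the current value
        return self.Q

ff = JKFlipFlop()
print(ff.on_clock_rising_edge(1, 0))  # 1 (set)
print(ff.on_clock_rising_edge(0, 0))  # 1 (hold)
print(ff.on_clock_rising_edge(1, 1))  # 0 (toggle)
```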
The T flip-flop is a special case of JK: J and K are tied together into a single input, T.
| T | Q (next state) |
|---|---|
| 0 | Q (hold) |
| 1 | Q' (toggle) |
When T=1, value flips. Like a light switch toggling on/off each press.
Looking at the T flip-flop, I understood: "Ah, this is the foundation of counter circuits." Since it repeats 0→1→0→1 with each clock tick, it can count clocks.
```python
# Building a 0~3 counter with T flip-flops
class TFlipFlop:
    def __init__(self):
        self.Q = 0

    def toggle(self):
        self.Q = 1 - self.Q   # 0→1, 1→0
        return self.Q

# 2-bit counter (repeats 0, 1, 2, 3)
bit0 = TFlipFlop()  # LSB
bit1 = TFlipFlop()  # MSB

for clock in range(8):
    b0 = bit0.toggle()        # bit0 toggles on every clock
    if b0 == 0:               # When bit0 transitions 1→0...
        b1 = bit1.toggle()    # ...bit1 also toggles (the carry)
    else:
        b1 = bit1.Q
    count = b1 * 2 + b0
    print(f"Clock {clock}: {count} (binary: {b1}{b0})")
```
Output:
```
Clock 0: 1 (binary: 01)
Clock 1: 2 (binary: 10)
Clock 2: 3 (binary: 11)
Clock 3: 0 (binary: 00)
...
```
This is the principle of the CPU's Program Counter (PC). As it executes instructions one by one, PC increments by 1—internally, T flip-flops cascade their toggles to count up.
The most crucial concept I absorbed while understanding flip-flops was edge-triggered.
Why do we need edge triggering? With a level-triggered latch, if input D keeps changing while the clock is 1, output Q keeps fluctuating too. The next circuit gets confused: "Is the value I just read correct?"
Edge-triggered captures the value at one exact moment (0→1 transition) and ignores input changes during the rest of the time. This is the core of synchronization.
I understood this through the camera shutter analogy. Level-triggered is like holding the shutter down continuously (blurry from shake), edge-triggered is capturing once at the instant you press (sharp).
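The difference shows up clearly in a small behavioral sketch (both `DLatch` and this `DFlipFlop` variant are illustrative models I wrote, with the clock passed in explicitly):

```python
# Level-triggered: output follows D the whole time the clock is high
class DLatch:
    def __init__(self):
        self.Q = 0

    def update(self, clk, D):
        if clk == 1:          # "Transparent" while the clock is high
            self.Q = D
        return self.Q

# Edge-triggered: output changes only at the 0→1 transition
class DFlipFlop:
    def __init__(self):
        self.Q = 0
        self.prev_clk = 0

    def update(self, clk, D):
        if self.prev_clk == 0 and clk == 1:   # Rising edge only
            self.Q = D
        self.prev_clk = clk
        return self.Q

latch, ff = DLatch(), DFlipFlop()
# The clock stays high while D wiggles: 1 → 0 → 1
for d in (1, 0, 1):
    print(latch.update(1, d), ff.update(1, d))
# Latch prints 1, 0, 1 (keeps fluctuating); flip-flop prints 1, 1, 1 (captured once)
```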
One flip-flop stores 1 bit. To store 8 bits? Line up 8 flip-flops side by side.
```
 D7   D6   D5   D4   D3   D2   D1   D0    ← Input (8 bits)
  |    |    |    |    |    |    |    |
[FF] [FF] [FF] [FF] [FF] [FF] [FF] [FF]   ← 8 D flip-flops
  |    |    |    |    |    |    |    |
 Q7   Q6   Q5   Q4   Q3   Q2   Q1   Q0    ← Output (8 bits)
                   ↑
        Shared CLK (one clock signal to all)
```
This is an 8-bit register. Registers like EAX, EBX inside CPUs are built this way.
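In code, the same idea is just eight 1-bit storage cells updated together on one clock tick (a behavioral sketch; `Register8` is a name I made up):

```python
# A behavioral sketch: an 8-bit register is 8 D flip-flops sharing one clock
class Register8:
    def __init__(self):
        self.bits = [0] * 8           # Eight 1-bit flip-flops

    def on_clock_rising_edge(self, value):
        for i in range(8):            # Every flip-flop latches its own D input
            self.bits[i] = (value >> i) & 1

    def read(self):
        return sum(bit << i for i, bit in enumerate(self.bits))

reg = Register8()
reg.on_clock_rising_edge(0b10110101)  # One clock tick stores all 8 bits at once
print(bin(reg.read()))                # 0b10110101
```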
Gather thousands or tens of thousands of these flip-flop storage cells and you get SRAM (Static RAM). CPU cache memory is made from SRAM.
| Aspect | SRAM | DRAM |
|---|---|---|
| Structure | Flip-flops (6 transistors) | Capacitor + transistor (1 each) |
| Speed | Fast | Slow |
| Power | High consumption | Low consumption |
| Price | Expensive | Cheap |
| Refresh | Not needed | Needed (capacitor leaks) |
| Use | CPU cache (L1, L2, L3) | Main memory (RAM) |
DRAM stores charge in capacitors, so the structure is simple and cheap. But capacitors leak charge over time, requiring refresh every few milliseconds. This is cumbersome and slow.
SRAM, built with flip-flops, maintains values permanently as long as power is supplied. No refresh needed, so it's fast. But it uses more transistors, making it expensive and power-hungry.
I accepted this through the notebook (SRAM) vs sticky note (DRAM) analogy. Notebooks last long but are heavy and expensive. Sticky notes are light and cheap but can fall off, requiring constant checking.
Seeing how flip-flops are used in CPUs makes it more tangible.
The Program Counter (PC) stores the address of the currently executing instruction. A 16-bit PC consists of 16 flip-flops.
Instruction execution flow:
1. Read address from PC (e.g., 0x0100)
2. Fetch instruction from memory[0x0100]
3. Execute instruction
4. Increment PC by 1 (0x0101)
5. Loop back to step 1
Incrementing PC here is done by the T flip-flop counter we saw earlier.
The Instruction Register (IR) temporarily stores the instruction fetched from memory. For an 8-bit instruction, that's 8 D flip-flops.
The Accumulator (ACC) stores computation results. When an ADD instruction executes, the result goes into the ACC.
All these registers are synchronized by the same clock signal. One operation happens per clock cycle.
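To make the roles concrete, here is a toy fetch-execute loop (the three-instruction "ISA" is completely made up for illustration; in hardware, PC, IR, and ACC would each be a bank of flip-flops updated once per clock):

```python
# Toy CPU: PC, IR, ACC act as registers; memory maps addresses to instructions
memory = {0x0100: ("LOAD", 5), 0x0101: ("ADD", 3), 0x0102: ("HALT", 0)}

PC, IR, ACC = 0x0100, None, 0

while True:
    IR = memory[PC]            # 1-2. Fetch the instruction at the PC's address
    op, arg = IR
    if op == "LOAD":           # 3. Execute
        ACC = arg
    elif op == "ADD":
        ACC += arg
    elif op == "HALT":
        break
    PC += 1                    # 4. Increment the PC (the T flip-flop counter's job)

print(hex(PC), ACC)            # 0x102 8
```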
After organizing this, I understood: "Ah, a CPU is a massive army of flip-flops dancing to the clock rhythm."
Flip-flop feedback circuits only work while electricity is continuously supplied. The moment power cuts, the feedback loop breaks, trapped electricity escapes, and stored information vanishes.
This is the essence of volatile memory.
Then why use volatile memory at all? Because of speed.
Non-volatile memory (flash memory) traps electrons inside insulators for storage. This process is slow. SRAM simply spins electrical signals around a loop—super fast.
| Storage Device | Access Speed | Volatile? |
|---|---|---|
| L1 Cache (SRAM) | ~1ns | Volatile |
| RAM (DRAM) | ~100ns | Volatile |
| SSD | ~100μs | Non-volatile |
| HDD | ~10ms | Non-volatile |
L1 cache is 100,000 times faster than SSD. We can't sacrifice this speed, so we tolerate the volatility drawback.
I understood this through the notepad vs stone tablet analogy. Writing on a notepad is fast but erases when wet (volatile). Carving on a stone tablet is permanent but time-consuming (non-volatile). Use each for its purpose.
After understanding flip-flops, things I took for granted looked different.
Redis stores data in memory (DRAM). By flip-flop principles, data vanishes when power cuts. That's why Redis periodically saves snapshots to disk (RDB) or uses AOF (Append-Only File) for backup.
Without this? The moment the server reboots, all session info disappears. All users get logged out.
When you declare `let count = 0`, the CPU allocates several flip-flops in a register to store the 0. Declare more variables than there are registers and the extras spill over to the stack in main memory (DRAM). The moment a value moves from a register to DRAM, access gets roughly 100x slower.
That's why optimizing compilers try to keep variables in registers (register allocation optimization).
Faster CPU clock means flip-flops change state more times per second. Each state change consumes power. That's why high-performance CPUs generate heat and need coolers.
Mobile CPUs (ARM) lower the clock speed to save power and compensate by adding more cores.
Flip-flops started with the simple idea of feeding output back into input. This loop structure trapped electricity and created 1-bit memory.
These gather to form registers, cache, and RAM. Every variable we use, every piece of data, is ultimately trapped as electrical signals in flip-flops somewhere.
Cut power, the loop breaks, electricity escapes, memory vanishes. That's why we save to disk, use databases, and make backups.
After understanding this principle, I accepted that hardware and software aren't separate worlds—they're one system connected by electricity. Every line of code that executes changes the state of flip-flops somewhere. That's how computers "compute and remember."