
The Origins of the Computer: Alan Turing & the Turing Machine
How did a simple calculator start thinking logically? I explore the essence of computers through the Turing Machine and connect it to state management in my code.


When I first started learning to code and building services, I thought of a computer as just a "super-fast calculator." I saw it as a diligent worker that processed complex Excel formulas in milliseconds and performed repetitive tasks without complaint.
But as I began operating my own service and writing more complex business logic, I started to sense that something didn't fit that picture. The code I was writing wasn't just adding and subtracting numbers (calculation).
"When a user clicks the 'Pay' button, if the stock is 0, show a 'Sold Out' alert; if stock exists, open the payment gateway; if payment succeeds, change status to 'Order Complete' and decrease stock by 1."
This process is not a simple operation. It involves judgment, conditions, and flow control. It's exactly like a human thinking, "Hmm, in this situation, I should do this."
How can this chunk of metal and silicon, through which only electrical signals (0s and 1s) flow, perform such 'logical thinking'? It's just a collection of switches that turn electricity on and off—so how does it assess situations and make decisions?
To find the answer to this fundamental question, I dug deep and met a genius who is considered the father of computer science and the ancestor of all developers: Alan Turing.
When I first heard the name "Turing Machine," I imagined a massive iron machine like a steam engine or the Enigma machine. Something with gears and levers displayed in a museum.
But the Turing Machine was never a machine that physically existed. It was an imaginary mathematical model devised by Alan Turing in 1936 to answer the conundrum, "What does it mean for something to be computable?" It was a virtual machine that existed only in his mind.
What gives me goosebumps is that every modern computer we use today—smartphones, laptops, supercomputers, and even cloud servers—operates on the exact principles of this imaginary machine. It is incredible that a concept created in the 1930s is still valid nearly 100 years later.
What shocked me most when I understood the Turing Machine was its simplicity. I wondered, "Is the ancestor of the computers running AI really this simple?" A Turing Machine consists of exactly three elements:
A Tape: an infinitely long strip divided into cells, each holding a symbol such as 0 or 1.
A Head: a device that reads and writes one cell at a time and moves left or right along the tape.
A Rule Book: a table that says, for each combination of current state and symbol read, what to do next.
The operating principle is simple to the point of being anticlimactic. The head reads one cell of the tape. Then it looks at the rule book.
"If my current state is 'A' and the number I just read is '0' → Rewrite the number to '1', move the head one space to the right, and change my state to 'B'."
That's it. That is really all there is to it.
It repeats these four actions infinitely. At first, I thought, "What can you possibly do with this?" It’s just changing a 0 to a 1 and moving sideways.
But when you gather tens of thousands, hundreds of millions of these simple rules, the story changes.
You can create a rule that adds 0 and 1. (Addition)
Repeating addition becomes multiplication.
Changing states based on conditions becomes if-else statements.
Returning to a specific state becomes for/while loops.
I realized that the complex web services, flashy games, and massive AI models we write are fundamentally just a massive repetition of a Turing Machine moving back and forth on a tape, flipping 0s and 1s.
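To convince myself, I wrote a toy simulator of this exact loop. This is only a sketch under my own conventions (the runTuringMachine function, the rule-book shape, and the state name 'A' are my inventions, not standard notation):

// A minimal Turing Machine: a tape, a head, a rule book, one loop.
// A rule maps (current state, symbol read) → what to write, where to move, which state next.
function runTuringMachine(rules, input) {
  const tape = [...input]; // copy the tape so the input isn't mutated
  let state = 'A';         // current state
  let head = 0;            // head position on the tape
  while (true) {
    const rule = rules[state]?.[tape[head]]; // look it up in the rule book
    if (!rule) return tape;  // no matching rule → the machine halts
    tape[head] = rule.write; // 1. rewrite the cell
    head += rule.move;       // 2. move the head (+1 right, -1 left)
    state = rule.next;       // 3. change state
    // (a careless rule book can keep this loop running forever — more on that below)
  }
}

// Rule book: walk right, flipping 0 ↔ 1, halt at the first empty cell.
const flipBits = {
  A: {
    0: { write: 1, move: +1, next: 'A' },
    1: { write: 0, move: +1, next: 'A' }
  }
};

console.log(runTuringMachine(flipBits, [1, 0, 1, 1])); // → [ 0, 1, 0, 0 ]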
After studying this theory, I looked at my code again, and things I hadn't seen before started to appear. In particular, the pattern of bugs caught my eye.
When I was developing the early version of my shopping mall service, I wrote the order processing logic very... intuitively(?).
// Pseudo-code of the messy code I actually wrote
// fired when the user clicks the 'Pay' button
async function onOrderButtonClick() {
  if (stock > 0) {
    const paymentSuccess = await callPaymentAPI();
    if (paymentSuccess) {
      await createOrderInDB();
      showSuccessMessage();
    } else {
      showErrorMessage();
    }
  } else {
    showSoldOutMessage();
  }
}
At first glance, it looks fine. But what if the user presses the back button during payment? What if payment succeeds but the network disconnects while saving to the DB? What if stock is decreased but payment fails?
As variables (isPaymentSuccess, hasStock, etc.) got tangled, I eventually faced the worst-case scenario: "Payment was made, but there is no order record." I pulled my hair out all night searching through logs, wondering, "Where on earth did the variable values get twisted?"
The core of the Turing Machine lies in 'State'. The machine is in exactly one state at any given moment.
I decided to redesign my code not as a 'combination of variable values' but as a 'transition of states'. This is the Finite State Machine (FSM).
And I created a Transition Table.
Idle state → go to Ordering state.
Ordering state → go to PaymentWait state.
PaymentWait state → go to Paid state.
Another click while already in the Ordering state? → Ignore it. (Because we are already ordering!)

After defining states like this, the code became dramatically simpler. The tangle of if statements disappeared, and it became clear what the next action should be just by looking at the current state.
// If using a library like XState, it would look like this
import { createMachine } from 'xstate';

const orderMachine = createMachine({
  initial: 'idle',
  states: {
    idle: { on: { CLICK: 'ordering' } },
    ordering: {
      on: {
        STOCK_OK: 'paymentWait',
        NO_STOCK: 'soldOut'
      }
    },
    paymentWait: {
      on: {
        PAY_SUCCESS: 'paid',
        PAY_FAIL: 'error'
      }
    },
    // ...
  }
});
If I hadn't studied the Turing Machine, I would likely still be creating 10 boolean variables and writing terrible conditions like if (isLoading && !isError && isStockCheck). The Turing Machine taught me "how to break down complex logic into 'State' and 'Transitions'."
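Even without a library, the same idea is nothing more than an object lookup. Here is a dependency-free sketch of the transition table (the state and event names mirror the XState example above; the transition helper is my own):

const transitions = {
  idle:        { CLICK: 'ordering' },
  ordering:    { STOCK_OK: 'paymentWait', NO_STOCK: 'soldOut' },
  paymentWait: { PAY_SUCCESS: 'paid', PAY_FAIL: 'error' }
};

// Events a state doesn't list are simply ignored —
// the "already ordering? ignore the click" rule comes for free.
function transition(state, event) {
  return transitions[state]?.[event] ?? state;
}

let state = 'idle';
state = transition(state, 'CLICK');    // 'ordering'
state = transition(state, 'CLICK');    // still 'ordering' (ignored)
state = transition(state, 'STOCK_OK'); // 'paymentWait'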
While proposing the Turing Machine, Turing also proved a deeply discouraging theorem: the Halting Problem.
There is a tool that every developer dreams of. "A super program that, if I just feed my code into it, perfectly determines whether there are bugs or if it will fall into an infinite loop."
If such a thing existed, we wouldn't need to debug all night. We could just run this super program before deployment, and if it says "OK", we could go home with peace of mind.
But Turing proved it mathematically. "Such a program cannot exist."
It is logically impossible to perfectly determine in advance whether a specific program will run forever (infinite loop) or eventually stop (normal termination).
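The proof is a short self-reference trick, and you can sketch it in code. Suppose the dream tool existed as a function halts(program, input) — purely hypothetical, since the whole point is that no one can write it:

// Hypothetical oracle: halts(program, input) returns true
// if program(input) eventually stops. (It cannot actually exist.)

function paradox(program) {
  if (halts(program, program)) {
    while (true) {} // the oracle said "it stops" → loop forever
  }
  // the oracle said "it loops forever" → stop immediately
}

// Now ask the oracle about paradox(paradox):
// - If halts(paradox, paradox) returns true, paradox loops forever. Wrong answer.
// - If it returns false, paradox halts immediately. Wrong again.
// Either way the oracle contradicts itself, so halts() cannot exist.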
When I learned this fact, I felt a strange sense of comfort. As a junior developer, I used to blame myself, thinking, "I can't catch bugs because I lack skill. Experts must write perfect code without bugs, right?"
But according to Turing, perfect, flawless code is the domain of God. Our programs are destined to fall into unpredictable states.
So, modern software engineering has evolved not towards writing 'bug-free code', but towards creating 'systems that quickly detect and recover when bugs occur'. We admit that servers can stop and set up Auto Scaling, and we admit that code can die and implement try-catch blocks and Circuit Breakers.
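As one concrete shape of "detect and recover", here is a toy circuit breaker. It is a sketch of the pattern only; the class name, thresholds, and cooldown are arbitrary choices of mine, not any standard library:

// After too many consecutive failures, "open" the circuit and fail fast
// for a cooldown period instead of hammering a dying dependency.
class CircuitBreaker {
  constructor(maxFailures = 3, cooldownMs = 10000) {
    this.maxFailures = maxFailures;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null; // timestamp when the circuit opened
  }

  async call(fn) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error('circuit open: failing fast');
      }
      this.openedAt = null; // cooldown over → allow one trial call
    }
    try {
      const result = await fn();
      this.failures = 0; // a success resets the counter
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err; // still surface the error to the caller
    }
  }
}

// Usage: wrap any flaky call, e.g. a payment API
// const breaker = new CircuitBreaker();
// await breaker.call(() => callPaymentAPI());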
I learned humility: all we can do is create the 'best possible' defense, not 'perfection'.
Another of Turing's great achievements is the concept of the 'Universal Turing Machine'.
Previous machines had fixed purposes. Calculators only calculated, typewriters only typed. You couldn't type with a calculator. To change the function, you had to tear down and rebuild the machine (change the Hardware).
But Turing thought: "If we just change the rule book (Software) written on the tape, couldn't one machine become a calculator, then a typewriter, then a chess player?"
This is the beginning of the 'Stored-Program Concept' and the birth of Software.
When we use an iPhone, we don't need to tear apart the machine; just downloading an 'App' turns the phone into a game console, a bank terminal, or a TV. This is possible because the iPhone is a Universal Turing Machine. The hardware stays the same, and we only change the rules (code) written on the tape (memory).
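The toy simulator from earlier makes this concrete: the loop (runTuringMachine, the "hardware") never changes, and swapping in a different rule book (the "software") produces a completely different machine:

// Same hardware, new program: erase the tape instead of flipping it.
const eraseAll = {
  A: {
    0: { write: 0, move: +1, next: 'A' },
    1: { write: 0, move: +1, next: 'A' }
  }
};

console.log(runTuringMachine(eraseAll, [1, 0, 1])); // → [ 0, 0, 0 ]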
The reason I can have a job as a developer is ultimately because Turing "liberated logic from hardware." Instead of holding a soldering iron and connecting circuits, we are recreating the machine by typing text on a keyboard.
When I approached the history of computers as a simple subject for memorization, it wasn't fun. 1936, Alan Turing... memorizing dates was meaningless.
But when I thought, "All of this is the root of the code I'm writing right now," I felt a thrill.
The if statements I use came from the Turing Machine's rule table.
The variables I use came from the Turing Machine's tape.
The infinite loops I encounter are connected to Turing's Halting Problem.

We are dancing on top of the machine that Alan Turing imagined nearly 100 years ago. No matter how flashy the frameworks or AI tools we use, the essence hasn't changed.
"Receive Input, Change State, Produce Output."When my code gets tangled and gives me a headache, I take my eyes off the monitor and think about this essence for a moment. "What is the State of my program right now?" "If I receive this input, what state should it transition to?"
If you can answer these questions, any complex problem can be solved. Just like Turing did.