
What Is an OS: The Mediator Between Hardware and User
Without an OS, we'd have to calculate which sector of the hard drive to write 0s and 1s to. That's what Linux and Windows do for us.


When I first learned to code, I thought "OS is just Windows or Mac." Turn on the computer, see the desktop, click folders to see files, run programs and they work. That was all.
Then I tried deploying my first server and SSH'd into an AWS EC2 instance. My brain melted. A black screen with white blinking text—Linux terminal. No mouse, no folder icons, no Start button. "How is this an operating system? It's nothing like Windows!"
That's when I realized: What I knew was just the GUI (the graphical shell). The real essence of the OS was working invisibly, deep inside. From that moment, I committed to truly understanding what an OS is.
The concept that really clicked for me was imagining "a computer without an OS." This is called Bare Metal. Just CPU, memory, and hard drive sitting there alone.
How would you print "Hello World" on the screen here?
What takes one line in Python (print("Hello")) would take hundreds or thousands of lines of assembly code without an OS. That's when I understood: "Ah, so the OS does all this messy hardware control for us."
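You can see a hint of this even from Python. The sketch below (Unix assumed) peels one layer off print: at bottom it is a request to the kernel, "write these bytes to file descriptor 1 (standard output)."

```python
import os

# print("Hello") eventually becomes a write() system call. os.write is a
# thin wrapper around that call: file descriptor 1 is standard output.
os.write(1, b"Hello World\n")
```

Run this under strace and you'll see exactly one `write(1, "Hello World\n", 12)` line, with none of Python's buffering in between.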
If I had to define OS in one sentence: "A government that manages limited hardware resources so multiple programs can share them efficiently." This analogy really resonated because governments also distribute limited resources (land, money, workforce) among citizens and businesses. The OS does exactly the same thing.
My MacBook has 8 CPU cores, but over 300 processes are running simultaneously. How can 8 cores handle 300 tasks?
The answer is Time Sharing. The OS allocates each process a very short time slice (usually milliseconds) on the CPU. It's so fast that we feel everything runs "simultaneously," but in reality, they're rapidly taking turns.
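You can check the core-to-process imbalance yourself. This is a Linux-only sketch (it reads process IDs out of /proc, which doesn't exist on macOS or Windows):

```python
import os

# How many cores do I have, and how many processes are they juggling?
# On Linux, every numeric directory under /proc is a running process.
cores = os.cpu_count()
pids = [entry for entry in os.listdir("/proc") if entry.isdigit()]
print(f"{cores} cores are time-sharing among {len(pids)} processes")
```

The process count is almost always far larger than the core count, which is exactly why the scheduler and its time slices exist.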
The first time I ran htop in the terminal, I was shocked.
$ htop
Watching CPU usage dance in real-time, processes appearing and disappearing—I realized "the OS is this busy every single moment." When Chrome eats 80% CPU, other programs inevitably slow down. Now I understood why.
Memory (RAM) is finite. My laptop has 16GB, but opening just 20 Chrome tabs triggers memory warnings. Why can't each program use memory freely?
The OS gives each process a Virtual Address Space. From the program's perspective, "I have memory starting from 0x00000000 all to myself," but in reality, the OS has mapped it to different locations in physical memory.
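A small Unix-only experiment makes this concrete. After fork(), parent and child see the same object at the same virtual address, yet writes in one are invisible to the other, because the OS maps that virtual address to different physical pages per process:

```python
import os

# fork() duplicates the virtual address space. Both processes see buf at
# the same virtual address, but the child's write lands on its own
# physical page (copy-on-write); the parent's copy is untouched.
buf = bytearray(b"hello")
print(f"pid={os.getpid()} sees buf at {hex(id(buf))}")

pid = os.fork()
if pid == 0:                 # child
    buf[0] = ord("H")
    print(f"pid={os.getpid()} sees buf at {hex(id(buf))} -> {bytes(buf)}")
    os._exit(0)
else:                        # parent
    os.waitpid(pid, 0)
    print(f"parent still sees {bytes(buf)}")  # unchanged: b'hello'
```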
$ free -m
              total        used        free      shared  buff/cache   available
Mem:          16384       12000        1200         800        3184        3000
Swap:          8192        2048        6144
First time I saw this command, I wondered "What's Swap?" Turns out, when RAM runs out, the OS moves less-used data to the hard drive (Swap Out). It's slow but better than programs crashing. This is called Virtual Memory.
What happens if a program tries to access memory it wasn't allocated? The OS immediately shows "Segmentation Fault" and forcefully terminates the program. Just like a government cracking down on illegal occupation—that's how I understood it.
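You can stage this crackdown deliberately. In the Linux-assumed sketch below, a child process dereferences address 0; the OS kills it with SIGSEGV, which shows up as a negative return code:

```python
import signal
import subprocess
import sys

# The child reads memory at address 0 via ctypes. It never gets to exit
# normally: the kernel delivers SIGSEGV and the process dies.
child = subprocess.run(
    [sys.executable, "-c", "import ctypes; ctypes.string_at(0)"],
    capture_output=True,
)
print(child.returncode)  # -11 on Linux: terminated by signal 11 (SIGSEGV)
```

A negative returncode in subprocess means "killed by that signal number", which is the same mechanism behind a Segmentation Fault message in the shell.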
Physically, a hard drive is just a giant magnetic platter. Data is stored in mechanical units called tracks and sectors. But we locate files using paths like "My Documents/Projects/report.docx."
This magic is performed by the File System. The OS adds an abstraction layer of files and folders on top of raw disk.
$ ls -lh /var/log/
total 1.2G
-rw-r----- 1 root adm 52M Feb 6 10:23 syslog
-rw-r----- 1 root adm 120M Feb 5 23:59 syslog.1
drwxr-xr-x 2 root root 4.0K Jan 15 06:25 apt/
With the ls command, the OS digs through the disk's inode table and neatly presents file size, permissions, and modification time. Doing this manually would require studying file system formats (ext4, NTFS, APFS) and parsing binary structures. The thought alone is terrifying.
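Everything ls -l prints comes from the stat() system call; the file system translates the inode's binary metadata into these fields for us. A self-contained sketch (it creates its own temp file):

```python
import os
import stat
import tempfile

# Write a small file, then read back the same metadata `ls -l` shows.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"Hello World\n")
    path = f.name

info = os.stat(path)
print(stat.filemode(info.st_mode))   # permission string, e.g. -rw-------
print(info.st_size, "bytes")         # 12 bytes
os.unlink(path)
```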
An OS consists of two main parts.
The kernel is core code that directly controls hardware. It writes values to CPU registers, manipulates memory controllers, and handles disk I/O. For security, regular user programs cannot access the kernel space.
Checking the Linux kernel version shows this:
$ uname -a
Linux my-server 5.15.0-89-generic #99-Ubuntu SMP x86_64 GNU/Linux
That 5.15.0-89-generic is the kernel version. It was fascinating to learn that code Linus Torvalds first wrote in 1991 continues to evolve today.
The shell is an interface that lets users communicate with the kernel. When we type commands like ls, cd, rm in the terminal, the shell interprets them and asks the kernel "Get me the file list," "Change directory," "Delete file."
There are many shell types: Bash, Zsh, Fish, PowerShell... Each has slightly different syntax and features, but ultimately they all do the same job: translate what the user types into requests the kernel understands.
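One concrete thing the shell adds is glob expansion. The sketch below (Unix assumed, using the echo binary) runs the same command twice, once without a shell and once through /bin/sh:

```python
import subprocess

# Without a shell, "*" is passed to echo literally; with a shell, it is
# expanded into file names before the kernel's execve() ever runs echo.
no_shell = subprocess.run(["echo", "*"], capture_output=True, text=True)
with_shell = subprocess.run("echo *", shell=True, capture_output=True, text=True)

print(no_shell.stdout, end="")    # literally: *
print(with_shell.stdout, end="")  # file names in the current directory
```

The kernel never sees the "*": expanding it is purely the shell's interpretation step.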
OS wasn't always this smart. In the early 1950s, computers used Batch Processing. You'd write programs on punch cards, submit them in batches, and they'd execute sequentially. Even if a program took hours, you just had to wait.
In the 1960s, Multitasking emerged. Multiple programs loaded into memory sharing CPU time. Unix was born during this era, and most concepts we use today (processes, file systems, shells) were established then.
The 1980s-90s saw personal computers spread, bringing GUI-based OSes like Windows and macOS. Post-2000s, OSes diversified: Linux for servers, Android/iOS for mobile, RTOS for embedded systems.
Looking at this history, it clicked for me: the OS continuously evolves alongside the hardware.
Windows and macOS target general users: a polished GUI and strong multimedia support. From a developer's perspective, what matters more is the Unix heritage. I develop on a MacBook but use Ubuntu Linux for servers. It was confusing at first, but now I get it: macOS and Linux are both Unix-family, so the commands are similar.
Ubuntu, CentOS, Debian, Alpine... All use the Linux kernel but differ in package managers and default configurations. I chose Ubuntu for AWS EC2 because "it's well-documented and has a large community."
Cars, robots, and medical devices use an RTOS (Real-Time OS): FreeRTOS, VxWorks, and so on. The core is time guarantees: "must respond within 10ms." A regular OS says "it will get processed eventually," but an RTOS promises "it will be processed by this deadline, every time."
Android uses the Linux kernel. But unlike regular Linux, it's specialized for touchscreens, sensors, and battery management. iOS uses the same Darwin kernel as macOS but optimized for mobile.
When programs request something from the OS, they use System Calls. Opening files, allocating memory, creating processes... All system calls.
For example, opening a file in Python:
f = open("test.txt", "r")
data = f.read()
f.close()
Internally, this happens:
open() requests the kernel's open() (openat) system call
read() system calls read the data from the disk
close() releases the file descriptor and its resources

The first time I saw this with the strace tool, I was amazed: "Wow, so much happens behind a single line of code."
$ strace python3 -c "open('test.txt').read()"
...
openat(AT_FDCWD, "test.txt", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1024, ...}) = 0
read(3, "Hello World\n", 4096) = 12
close(3) = 0
...
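The os module maps almost one-to-one onto the system calls strace showed. This sketch creates test.txt itself so it is self-contained:

```python
import os

# Prepare the file, then replay the openat / read / close sequence
# from the strace output using the low-level os calls.
with open("test.txt", "w") as f:
    f.write("Hello World\n")

fd = os.open("test.txt", os.O_RDONLY)  # -> openat(AT_FDCWD, "test.txt", O_RDONLY)
data = os.read(fd, 4096)               # -> read(fd, ..., 4096)
os.close(fd)                           # -> close(fd)
os.remove("test.txt")

print(data)  # b'Hello World\n'
```

Python's high-level open() adds buffering and text decoding on top, but underneath it is exactly these three calls.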
A running program in the OS is called a Process. Each process gets a unique ID (PID) and independent memory space.
$ ps aux | grep python
user 12345 2.3 1.5 /usr/bin/python3 app.py
Here, 12345 is the process ID. When a program won't stop, you kill -9 12345 to forcefully terminate it—asking the OS "please kill this process."
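The same request can be made from Python. A Unix-only sketch of kill -9: the OS delivers SIGKILL, which the target process can neither catch nor ignore:

```python
import os
import signal
import subprocess
import sys

# Start a child that would sleep for a minute, then ask the OS to kill it.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
os.kill(child.pid, signal.SIGKILL)     # the Python equivalent of `kill -9 <pid>`
child.wait()
print(child.returncode)  # -9 on Linux: killed by signal 9
```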
Processes have a lifecycle: New → Ready → Running → Waiting → Terminated.
When first learning this, I wondered "Why have a Ready state instead of just Running?" It made sense once I realized there are far more processes than CPU cores, so a waiting queue is necessary.
Physical memory is 16GB, but if you add up all the virtual address spaces programs use, it exceeds 100GB. How is this possible?
Thanks to a technique called Paging. The OS divides memory into 4KB pages, and only loads actually-used pages into physical memory. Unused pages are stored on disk (Swap area) and loaded when needed.
Why is this good? A program can use more memory than physically exists, only the pages it actually touches occupy RAM, and each process stays isolated in its own address space.
Of course, excessive Swap use causes severe slowdown due to disk I/O (Thrashing). I later learned that when tuning servers, you adjust the vm.swappiness value.
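You can ask the OS about these knobs directly. A small Unix-oriented sketch (the swappiness file only exists on Linux, so it is checked before reading):

```python
import os

# Paging works in fixed-size chunks; the OS reports its page size.
page_size = os.sysconf("SC_PAGE_SIZE")
print("page size:", page_size, "bytes")  # commonly 4096

# On Linux the vm.swappiness value lives in /proc; skip it elsewhere.
path = "/proc/sys/vm/swappiness"
if os.path.exists(path):
    with open(path) as f:
        print("vm.swappiness =", f.read().strip())
```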
What happens when you press the power button? Roughly this sequence: the firmware (BIOS/UEFI) checks the hardware, the bootloader (e.g., GRUB) loads the kernel into memory, the kernel initializes drivers and mounts the root file system, and finally the init process (systemd) starts user-space services up to the login screen.
Looking at Linux boot logs:
$ dmesg | head -20
[ 0.000000] Linux version 5.15.0-89-generic
[ 0.000000] Command line: BOOT_IMAGE=/vmlinuz root=/dev/sda1
[ 0.000000] Kernel memory protection enabled
[ 0.012345] CPU: Intel Core i7-9750H
...
It was amazing to see how much happens in those few seconds when the computer starts.
Why should we know OS if we just write code? Reasons I've experienced in actual development:
"Why did my Python script suddenly slow down?" → Checked with htop—another process was monopolizing CPU.
"Why is the server frozen?" → Checked with free -m—memory was full and Swap was being thrashed.
Deciding between multithreading or multiprocessing requires understanding OS scheduling. Knowing that Python's GIL (Global Interpreter Lock) makes multithreading useless for CPU-bound tasks also requires OS knowledge.
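A quick experiment makes the GIL's effect visible. In this sketch, the same CPU-bound function runs in a thread pool and a process pool; on a multi-core machine the process pool usually finishes noticeably faster, because each process has its own interpreter and its own GIL:

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python, CPU-bound loop: the GIL lets only one thread run it at a time.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(pool_cls):
    start = time.perf_counter()
    with pool_cls(max_workers=4) as pool:
        list(pool.map(burn, [1_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"threads:   {timed(ThreadPoolExecutor):.2f}s")
    print(f"processes: {timed(ProcessPoolExecutor):.2f}s")
```

For I/O-bound work (waiting on sockets or disks) the GIL is released, so threads are fine there; the penalty only bites on CPU-bound Python code like this loop.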
To understand what Docker containers are, you need to know Linux kernel features like "process isolation," "namespaces," and "cgroups."
When first learning Docker, I only knew "containers are lightweight VMs." But the essence is different.
VMs (Virtual Machines) clone the entire OS. The whole guest OS loads into memory, making them heavy.
Containers share the host OS's kernel and only isolate processes. That's why they're fast and lightweight.
$ docker run -it ubuntu bash
root@abc123:/# uname -a
Linux abc123 5.15.0-89-generic #99-Ubuntu x86_64 GNU/Linux
Checking the kernel version inside an Ubuntu container shows it's the same as the host machine. "Ah, containers are ultimately isolated processes running on the same kernel"—that's how I understood it.
This matters because you can't directly run Windows containers on a MacBook. Different kernels. (Docker Desktop works around this by using VMs internally.)
I use Ubuntu for servers because I want to keep development and production environments as similar as possible. This reduces the situations where it works locally but fails on the server.
I develop on MacBook because installing packages with Homebrew and using iTerm2 is convenient. Plus, iOS app development only works on Mac.
Nowadays, thanks to WSL (Windows Subsystem for Linux), you can use a Linux environment on Windows. The idea of "running a Linux kernel inside Windows" was fascinating.
The biggest realization from studying OS was "the power of abstraction."
As developers, we just call functions like malloc(), open(), fork(). Behind those, thousands of lines of kernel code manipulate page tables, send commands to disk controllers, and modify CPU registers.
Thanks to this abstraction, we can focus on business logic. We only think "when user clicks button, save to database," not "write bytes to which sector of the hard drive."
Ultimately, OS is a system that "hides complexity and provides simplicity." And understanding this system lets you see how your code actually works, why it's slow, and how to optimize it.
Right now inside my MacBook, hundreds of processes are sharing CPU in milliseconds, exchanging memory, reading and writing to disk. What makes all that chaos orderly is the OS.
After accepting this fact, computers started feeling a bit more friendly to me.