
Daemon Process: The Invisible Worker
Not a Demon. It's the 'Guardian Spirit' from Greek mythology. Heroes of the server working silently in the background.


I'm a non-CS founder, and when I tried deploying my first project—a Discord bot written in Python—on AWS EC2, I thought I had succeeded. I SSH'd into the server, ran python bot.py, and it worked perfectly. I felt accomplished.
Then I closed my SSH connection. The bot died instantly. No error message, no crash log. Just... silence.
At first, I thought it was a code issue. I checked the logs—nothing. The process simply "quietly exited" the moment I disconnected my terminal. I didn't understand why. I knew servers were supposed to run 24/7, but I didn't know that closing an SSH session would kill my program. That's when I learned about process lifecycle, terminal sessions, and something called a Daemon.
That moment changed everything. I realized I'd been trying to run a foreground process in the background, which fundamentally doesn't work. Daemons aren't just "background programs." They're terminal-independent, permanent processes that survive even when you log out.
When I first encountered daemons, several things threw me off.
First, the name was scary. I saw processes like systemd, sshd, httpd, mysqld—all ending with d. I assumed it meant "Demon," like malware. Turns out, it comes from Greek mythology's Daemon—a guardian spirit that helps people invisibly. MIT developers named it this way. The metaphor felt perfect once I understood it.
Second, the difference between background processes and daemons. I thought running python bot.py & would make it run in the background permanently. I tried it. The moment I closed SSH, it died. Why? Because & only detaches the process from the terminal's input/output, but it's still bound to the terminal session. When the terminal closes, the kernel sends a SIGHUP signal, and the process terminates.
Third, tools like nohup and screen. I discovered nohup python bot.py &—and it worked! My bot survived SSH disconnection. I also found screen and tmux, which create "fake terminal sessions" that persist after SSH closes. But are these daemons? No. They're workarounds. A true daemon is registered at the system level, managed by systemd or init, and starts automatically at boot.
It took me days to untangle these concepts. The core insight I finally accepted: Daemons are immortal processes with no parent and no terminal, managed directly by the system.
To understand daemons, you first need to understand "how do processes die?"
When you run a program in a terminal, it belongs to a session. A session groups multiple process groups together. When the terminal closes, the kernel sends a SIGHUP (Hangup) signal to all processes in that session, meaning "the terminal is gone, you should exit too."
Daemons must completely escape this session. That's the only way they survive terminal closure. This process is called "detaching from the terminal."
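To convince myself of this, I printed the IDs involved. A process launched from a shell shares the shell's session ID, which is exactly why it gets dragged down with the terminal. A minimal sketch, nothing daemon-specific, just standard os calls:
# session_info.py - where does this process live in the session hierarchy?
import os

print("PID :", os.getpid())     # this process
print("PPID:", os.getppid())    # the parent (usually the shell)
print("PGID:", os.getpgrp())    # process group
print("SID :", os.getsid(0))    # session ID; matches the shell's when run from a terminal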
The metaphor that clicked for me: A normal process is like a child who must leave home when the parent does. A daemon is an adult who's legally emancipated—it can stay in the house even after the parent leaves. The terminal (parent) disappears, but the daemon (child) continues living its own life.
Let me clarify these three:
- Foreground process: attached to the terminal and tied to its input/output. Dies when the terminal closes.
- Background process (&): doesn't receive terminal input, but still belongs to the session. Dies when the terminal closes.
- Daemon: belongs to no terminal and no login session at all. Survives when the terminal closes.
Once I understood this distinction, I realized why nohup and screen aren't "complete solutions." They're hacks to make processes behave like daemons. A true daemon is registered with the system, starts at boot, and is managed by systemd or init.
When I researched how to manually create a daemon, I stumbled upon the "Double Fork Technique." At first, I was baffled. Why fork (duplicate the process) twice?
But as I dissected the logic step by step, I realized it was brilliant. This is a "legal escape" that precisely exploits operating system process management rules.
// daemonize.c - the classic double-fork technique
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(void) {
    // Step 1: First fork
    pid_t pid = fork();
    if (pid < 0) exit(1);   // fork failed
    if (pid > 0) exit(0);   // parent exits immediately
    // Only the child process remains, now an orphan adopted by init

    // Step 2: Create a new session
    setsid();               // become session leader, detach from the controlling terminal

    // Step 3: Second fork
    pid = fork();
    if (pid < 0) exit(1);
    if (pid > 0) exit(0);   // session leader exits
    // Only a "non-session-leader" process remains

    // Step 4: Change working directory
    chdir("/");

    // Step 5: Close the standard file descriptors
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);

    // Now a true daemon
    while (1) {
        // Infinite loop providing the service
        sleep(60);
    }
}
Why the first fork? The parent exits, making the child an orphan. Orphan processes are automatically adopted by init (PID 1), so init, not the shell that launched it, is now the parent, and closing the terminal no longer affects it. It also guarantees the child is not a process group leader, which is required for setsid() to succeed in the next step.
Why call setsid()?
setsid() creates a new session and makes the calling process the session leader. It also disconnects from the controlling terminal. The process now has no terminal.
Why the second fork? This was the most fascinating part. Even after the first fork and session detachment, why fork again? The reason: "to ensure the process can never reacquire a terminal later." In Unix, only a session leader can acquire a controlling terminal. The second fork kills the session leader (the first child), leaving only its child (the second child), which is not a session leader and thus can never acquire a controlling terminal. Perfect insurance.
This moment made me think, "Now this is real engineering." They knew one fork wasn't enough, so they did two to eliminate all possibilities.
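Out of curiosity, I later translated the same dance into Python. This is only a minimal sketch built on os.fork and os.setsid; run_bot() is a hypothetical stand-in for the bot's real main loop:
# daemonize.py - double-fork daemonization in Python (sketch)
import os
import sys
import time

def run_bot():
    # hypothetical stand-in for the real work loop
    while True:
        time.sleep(60)

def daemonize():
    if os.fork() > 0:          # first fork: parent exits, child is adopted by init/systemd
        sys.exit(0)
    os.setsid()                # new session, no controlling terminal
    if os.fork() > 0:          # second fork: session leader exits
        sys.exit(0)
    os.chdir("/")              # don't keep any mount point busy
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):       # detach stdin/stdout/stderr from the old terminal
        os.dup2(devnull, fd)

if __name__ == "__main__":
    daemonize()
    run_bot()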
Let me also clarify these related concepts:
- Zombie process: a child that has exited but whose parent hasn't yet called wait(), leaving the corpse (exit status) in the process table. It doesn't consume resources, but it occupies a slot.
- Orphan process: a child whose parent exited first. The kernel hands it over to init (PID 1) for adoption.
When creating a daemon, killing the parent in the first fork is deliberate: it's to make the child an orphan. Zombies are bugs; orphans are intentional design.
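If you want to see a zombie with your own eyes, a tiny sketch like this leaves one in the process table for half a minute; run it and look for <defunct> in ps while it sleeps:
# zombie_demo.py - deliberately create a short-lived zombie (sketch)
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)        # child exits immediately
else:
    time.sleep(30)     # parent does NOT call wait() yet, so the child lingers as a zombie
    os.wait()          # reap it; the <defunct> entry disappears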
Manually coding the double fork is how daemons were written for decades. Today, systemd handles everything automatically.
If I want to turn my Python bot into a daemon, I don't need to change a single line of code. I just write a systemd unit file.
# /etc/systemd/system/discord-bot.service
[Unit]
Description=My Discord Bot Service
After=network.target
# network.target = start after network is ready
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/bot
ExecStart=/usr/bin/python3 /home/ubuntu/bot/main.py
Restart=always
RestartSec=10
# Auto-restart 10 seconds after crash
# Environment variables
Environment="DISCORD_TOKEN=your_token_here"
# Log management
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
# Auto-start at boot
Place this file in /etc/systemd/system/, then run these commands:
sudo systemctl daemon-reload # Tell systemd to read the new file
sudo systemctl enable discord-bot # Enable auto-start at boot
sudo systemctl start discord-bot # Start now
sudo systemctl status discord-bot # Check status
Now, even if I disconnect SSH or reboot the server, the bot comes back up on its own. Systemd daemonizes it, manages it, and resurrects it if it crashes.
Systemd internally handles things like double fork automatically. Developers just need to specify "what program to run."
What I loved most:
- Restart=always is true life insurance.
- journalctl -u discord-bot -f shows real-time logs, and output is automatically forwarded to syslog.
- After=network.target ensures the service starts only after the network is ready.
After learning this, I started managing all server programs with systemd. Everything I used to run with cron or nohup, I converted to systemd unit files.
On a Linux system, running ps aux | grep 'd$' shows dozens of daemons. Each has a unique mission.
- systemd (PID 1): The ancestor of all processes. The first process the kernel starts during boot. It used to be init, but systemd is now the standard. It acts as the adoptive parent for all orphan processes.
- sshd: SSH connection daemon. Listens on port 22, and when an SSH client connects, it forks a child process to handle authentication. Without this daemon, remote access is impossible.
- httpd / nginx: Web server daemons. They listen on port 80 (HTTP) or 443 (HTTPS) and respond to HTTP requests. httpd is Apache; nginx is Nginx's master process.
- mysqld: Database server daemon. Receives and processes SQL queries. It runs forever, waiting for client connections.
- crond: Scheduled task daemon. Wakes up every minute, checks /etc/crontab and user crontabs, and executes scheduled jobs. Used for automating backups and the like.
- journald: Systemd's log collection daemon. Collects all system logs in one place. Queryable with journalctl.
All these daemons are in infinite loops, waiting. Their basic structure looks like this:
# Simple daemon pattern (pseudocode)
while True:
    request = wait_for_request()   # Blocking I/O
    if request:
        pid = fork()
        if pid == 0:
            handle_request(request)
            exit(0)
        else:
            # Parent continues waiting
            wait_for_child(pid)    # Prevent zombies
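To make the pattern concrete, here's a runnable sketch of the same loop as a toy TCP echo service. The port number is arbitrary, and a real daemon would reap children asynchronously (e.g. via SIGCHLD) instead of blocking on waitpid:
# forking_echo_daemon.py - "fork per request" pattern as a toy echo server (sketch)
import os
import socket

def handle(conn):
    data = conn.recv(1024)
    conn.sendall(data)                     # echo the request back
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))                # arbitrary port
srv.listen(16)

while True:
    conn, _addr = srv.accept()             # blocking wait for a request
    pid = os.fork()
    if pid == 0:                           # child: handle one request, then exit
        srv.close()
        handle(conn)
        os._exit(0)
    else:                                  # parent: drop its copy and reap the child
        conn.close()
        os.waitpid(pid, 0)                 # prevents zombies (blocks; real servers reap asynchronously)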
Since daemons have no terminal I/O, users need a way to control them. That's where signals come in.
- SIGHUP: re-read the configuration. Sent with kill -HUP <PID>.
- SIGTERM: shut down gracefully. systemctl stop sends this.
- SIGKILL: forced termination that cannot be caught. Sent with kill -9 <PID>.
When building a daemon, you should write signal handlers:
import signal
import sys

def handle_sighup(signum, frame):
    print("Reloading configuration...")
    reload_config()    # application-defined: re-read config files

def handle_sigterm(signum, frame):
    print("Graceful shutdown initiated...")
    cleanup()          # application-defined: close connections, flush state
    sys.exit(0)

signal.signal(signal.SIGHUP, handle_sighup)
signal.signal(signal.SIGTERM, handle_sigterm)

while True:
    # Main loop
    do_work()          # application-defined
Systemd sends SIGTERM when stopping a service, and if the process doesn't die within a timeout (default 90 seconds), it sends SIGKILL.
Daemons have no terminal, so print() statements go nowhere. How do you debug? Log files.
In the past, services wrote logs to /var/log/. Examples: /var/log/apache2/error.log, /var/log/mysql/error.log.
To use syslog in Python:
import syslog

syslog.openlog("discord-bot")    # identifier that shows up in the system log
syslog.syslog(syslog.LOG_INFO, "Daemon started")
syslog.syslog(syslog.LOG_ERR, "Error occurred!")
In the systemd era, journald centrally manages all logs. Setting StandardOutput=journal in a systemd unit file automatically routes the program's stdout/stderr to the journal.
Viewing logs:
journalctl -u discord-bot # Logs for specific service
journalctl -u discord-bot -f # Real-time tail
journalctl -u discord-bot --since today # Today's logs only
journalctl -p err # Error level and above
I found journald incredibly convenient. No need to remember log file paths, easy time-based filtering, and automatic log rotation.
If you don't need a formal daemon and just want "a process that survives SSH disconnection," simpler methods exist.
nohup stands for "No Hang Up." It makes the process ignore SIGHUP signals.
nohup python bot.py &
# Output saved to nohup.out
Simple but limited: output just piles up in nohup.out, nothing restarts the process if it crashes, and it doesn't survive a server reboot.
screen and tmux create virtual terminals that persist after SSH disconnects.
screen -S bot # Create a screen session named "bot"
python bot.py # Run here
# Ctrl+A, D to detach
screen -r bot # Reattach later
Useful for temporary work or development. But insufficient for production. Server reboots kill screen sessions.
I use tmux during development, but production services always get registered with systemd.
When I first deployed my Discord bot, I went through these trials:
- python bot.py - died when SSH closed
- python bot.py & - also died when SSH closed
- nohup python bot.py & - worked, but required a manual restart after every server reboot
- @reboot in crontab - worked, but log access was inconvenient
After switching to systemd, the bot auto-restarted on crash, auto-started after reboot, and logs were easily accessible via journalctl. I wished I'd done this from the start.
After understanding daemon processes, I realized how much precise engineering goes into "servers running 24/7." Nginx, MySQL, and SSH servers always respond because these daemons are running infinite loops, waiting for requests.
Daemons aren't "immortal processes." They're "processes the system takes responsibility for managing." Systemd monitors them, resurrects them when they die, and records their logs. Developers just need to write code for "what work to do."
As a non-CS founder, this concept was difficult when I first encountered it. But now I frame it like this: Daemons are the server's heartbeat. Never stopping, invisible, but the foundation of every service. And creating that heartbeat—whether through the clever trick of double fork or the modern manager systemd—ultimately comes down to the art of making a process that can "live independently."
Now, whenever I deploy a new service to a server, the first thing I do is write a systemd unit file. I've accepted that this is the right way to treat daemons—and the right way to treat servers.