
IPC (Inter-Process Communication)
Processes are isolated. But how do Chrome tabs exchange data? From Pipes to Sockets.


I opened Chrome and fired up the task manager. Each tab ran as a separate process. YouTube had its own, Gmail had another. But when I copied text from YouTube and pasted it into Gmail, it just worked. I thought processes were completely isolated from each other. How was this even possible?
My first naive thought was "just share memory addresses, right?" Process A writes to 0x1234, Process B reads from it. Simple. Except it's completely wrong. Each process has its own virtual address space. Process A's 0x1234 and Process B's 0x1234 point to different physical memory locations. They're walled off from each other.
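You can watch this isolation happen. Here's a minimal C sketch: after fork(), parent and child print the same virtual address for a global variable, yet the child's write never reaches the parent.
// Same virtual address, different physical memory after fork()
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int value = 42; // each process gets its own copy of this variable

int main(void) {
    if (fork() == 0) {
        value = 100; // child modifies only its own copy
        printf("child:  &value=%p value=%d\n", (void *)&value, value);
    } else {
        wait(NULL); // let the child print first
        printf("parent: &value=%p value=%d\n", (void *)&value, value);
        // Same address in both lines, but the parent still sees 42
    }
    return 0;
}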
When I first tried to build a multi-process architecture, I hit this wall hard. I was building a Node.js web server and wanted to utilize all CPU cores by spawning multiple worker processes. The main process would receive requests and distribute work to workers.
The problem was: "How do I pass data?" I thought I could just call worker.process(data) like a regular function. Nope. Memory is separate. Can't share global variables. Can't pass pointers. I was stuck.
That's when it clicked. Process isolation is a design choice for security and stability, but it also creates a barrier for collaboration. Processes are like people in soundproof rooms. To communicate, you need special tools. Enter IPC (Inter-Process Communication).
A metaphor hit me. Processes are people locked in perfectly soundproof rooms. Shouting won't help. You need phones, mailboxes, shared bulletin boards. And the building manager who provides these tools is the operating system.
The OS offers multiple ways for processes to communicate safely. Pipes are like tubes between rooms, flowing data one way. Message queues are mailboxes where you drop messages for later pickup. Shared memory is a common space everyone can access. Sockets are like phone lines for real-time conversation.
Each method has clear trade-offs. Shared memory is fastest but requires synchronization to avoid data corruption. Message queues are safe but slower due to data copying. Sockets are versatile but complex to set up. Choosing the right tool for the situation is key.
IPC isn't just "passing data around." It's the core OS mechanism that enables process collaboration. Modern software architectures are mostly multi-process: Chrome's multi-process design, Docker container communication, microservice data exchange—all are IPC in various forms.
Pipes are the most basic IPC form. The | symbol you use daily in the terminal is a pipe.
# Connect ls process output to grep process input
ls -la | grep ".txt"
# Chain multiple processes
cat access.log | grep "ERROR" | wc -l
# Pipes are unidirectional: data flows one way
ps aux | sort -k 3 -r | head -10
There are two types: anonymous pipes (usable only between related processes such as parent and child) and named pipes (FIFOs, which unrelated processes can open via a filesystem path). Both are shown below.
// Creating a pipe in C
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    pipe(pipefd); // pipefd[0]: read end, pipefd[1]: write end

    if (fork() == 0) {
        // Child: write to pipe
        close(pipefd[0]); // close unused read end
        const char *msg = "Hello from child";
        write(pipefd[1], msg, strlen(msg) + 1); // +1 sends the '\0' too
        close(pipefd[1]);
    } else {
        // Parent: read from pipe
        close(pipefd[1]); // close unused write end
        char buf[100];
        read(pipefd[0], buf, sizeof(buf));
        printf("Received: %s\n", buf);
        close(pipefd[0]);
    }
    return 0;
}
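Named pipes look almost the same in code, except the endpoint is a filesystem path created with mkfifo(), so any process that knows the path can connect. A minimal sketch of the writer side (/tmp/my_fifo is an arbitrary example path):
// Named pipe (FIFO): a writer any process can connect to
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    mkfifo("/tmp/my_fifo", 0666);            // create the FIFO in the filesystem
    int fd = open("/tmp/my_fifo", O_WRONLY); // blocks until a reader opens it
    const char *msg = "Hello through FIFO";
    write(fd, msg, strlen(msg) + 1);
    close(fd);
    unlink("/tmp/my_fifo"); // remove the filesystem entry when done
    return 0;
}
On the other side, even a plain cat /tmp/my_fifo works as the reader.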
The biggest limitation is unidirectional flow. For bidirectional communication, you need two pipes or a different IPC method.
Message queues are like mailboxes. The sender drops a message and moves on. The receiver picks it up when ready. No synchronization needed. Speed differences between sender and receiver don't matter.
There are two flavors: POSIX message queues and System V message queues. POSIX is more modern and easier to use.
// POSIX message queue (on Linux, link with -lrt)
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

// Process A: send a message
struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 256 };
mqd_t mq = mq_open("/my_queue", O_WRONLY | O_CREAT, 0644, &attr);
char msg[] = "Task data for worker";
mq_send(mq, msg, strlen(msg) + 1, 0);
mq_close(mq);

// Process B: receive a message
mqd_t mq = mq_open("/my_queue", O_RDONLY);
char buffer[256]; // must be at least mq_msgsize bytes
mq_receive(mq, buffer, sizeof(buffer), NULL);
printf("Received: %s\n", buffer);
mq_close(mq);
The advantage is decoupling. The sender doesn't care if the receiver is running or when it processes the message. Modern architectures extend this concept to the network with RabbitMQ, Kafka, AWS SQS.
Shared memory lets multiple processes access the same physical memory region. No data copying means maximum speed. It's like tearing down the wall between two rooms to create a common space.
// POSIX shared memory (on older glibc, link with -lrt)
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

// Process A: create and write
int shm_fd = shm_open("/my_shm", O_CREAT | O_RDWR, 0666);
ftruncate(shm_fd, 4096); // size the region to 4KB
void *ptr = mmap(0, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
sprintf(ptr, "Shared data from process A");

// Process B: read
int shm_fd = shm_open("/my_shm", O_RDONLY, 0666);
void *ptr = mmap(0, 4096, PROT_READ, MAP_SHARED, shm_fd, 0);
printf("Data: %s\n", (char *)ptr);
// When finished: munmap(ptr, 4096) and shm_unlink("/my_shm")
The trap is synchronization. If Process A writes while Process B reads, you get corrupted data. You need semaphores or mutexes to control access.
Semaphores control access to shared resources using a counter. They manage "how many processes can access this resource right now." A mutex is a special semaphore with values 0 or 1 (locked/unlocked).
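As a sketch, a POSIX named semaphore can guard the shared memory region above; the name /my_sem is an arbitrary example, and ptr is assumed to be the mapped pointer from the previous snippet.
// Guarding shared memory with a POSIX named semaphore
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

// Both processes open the same named semaphore; initial value 1 makes it a mutex
sem_t *sem = sem_open("/my_sem", O_CREAT, 0666, 1);

sem_wait(sem);                     // counter 1 -> 0: enter critical section
sprintf(ptr, "Consistent update"); // ptr: the mmap'd region from above (assumed)
sem_post(sem);                     // counter 0 -> 1: leave critical section

sem_close(sem);
// One process eventually calls sem_unlink("/my_sem") to remove it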
Signals are lightweight notification mechanisms. They tell a process "an event happened."
# Using signals from terminal
# SIGTERM: request graceful shutdown (process can clean up)
kill -TERM 1234
# SIGKILL: force kill (immediate, no cleanup)
kill -9 1234
# SIGHUP: reload configuration (many daemons use this)
kill -HUP 1234
Signals are inefficient for data transfer but excellent for event notifications. When you run nginx -s reload, it internally sends a SIGHUP signal to reload configuration without restarting.
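Catching a signal takes a single handler registration. Here's a minimal sketch of the daemon side: the handler only sets a flag, and the main loop does the actual work (the reload itself is just a placeholder printf here).
// Reloading configuration on SIGHUP
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t reload_requested = 0;

void on_sighup(int signo) {
    (void)signo;
    reload_requested = 1; // just set a flag; do real work outside the handler
}

int main(void) {
    struct sigaction sa = { .sa_handler = on_sighup };
    sigaction(SIGHUP, &sa, NULL);

    for (;;) {
        pause(); // sleep until any signal arrives
        if (reload_requested) {
            reload_requested = 0;
            printf("Reloading configuration...\n"); // real reload would go here
        }
    }
}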
Sockets are the standard interface for network communication. TCP/IP sockets connect different computers. Unix Domain Sockets connect processes on the same machine.
# Unix Domain Socket example (Python)
import socket
import os

# Server process
if os.path.exists('/tmp/my_socket'):
    os.unlink('/tmp/my_socket')  # remove a stale socket file from a previous run
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind('/tmp/my_socket')
sock.listen(1)
connection, client_address = sock.accept()
data = connection.recv(1024)
print(f"Received: {data.decode()}")

# Client process
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect('/tmp/my_socket')
client.sendall(b'Hello from client')
client.close()
Unix Domain Sockets are faster than TCP/IP sockets because they bypass the network stack and transfer data directly within the kernel. Docker daemon communicates with the CLI via /var/run/docker.sock.
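You can talk to that socket yourself. As a sketch, this C client connects to /var/run/docker.sock and issues the Docker Engine API's GET /version request as plain HTTP over the socket (it assumes a running daemon and permission to read the socket):
// Querying the Docker daemon over its Unix domain socket
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strcpy(addr.sun_path, "/var/run/docker.sock");
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    // Plain HTTP over the socket; no TCP, no network stack involved
    const char *req = "GET /version HTTP/1.0\r\nHost: docker\r\n\r\n";
    write(fd, req, strlen(req));
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf); // HTTP response with a JSON version payload
    }
    close(fd);
    return 0;
}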
Memory-mapped files treat files like memory. Map a file into a process's address space, and you can read/write it using memory operations instead of file I/O functions. Multiple processes mapping the same file creates an IPC channel.
// File mapping with mmap
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>

int fd = open("data.bin", O_RDWR);
void *addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
// Now use addr like a regular pointer; the kernel writes changes back to the file
memcpy(addr, "New data", 8);
munmap(addr, 4096);
Databases use this for performance optimization. SQLite maps database files into memory to minimize disk I/O.
Chrome runs a separate renderer process per tab for security and stability. If one tab crashes, others survive. But isolation creates a problem. How does copying text from one tab and pasting into another work?
Chrome uses a Browser Process as the mediator. All renderer processes communicate with it through Chrome's IPC layer (Mojo), which is built on the same primitives covered above: message pipes (Unix domain sockets on Linux, named pipes on Windows) for control messages, and shared memory for large payloads such as images.
This architecture gives Chrome security (renderer processes sandboxed) and stability (crash isolation) while enabling collaboration (clipboard sharing, history sync).
Traditional IPC was for processes on the same machine. Modern architectures require IPC across networks.
gRPC is Google's RPC framework. Define interfaces with Protocol Buffers, then call remote server functions like local functions.
// user.proto
syntax = "proto3";

service UserService {
  rpc GetUser(UserRequest) returns (UserResponse);
}

message UserRequest {
  int32 user_id = 1;
}

message UserResponse {
  string name = 1;
  string email = 2;
}
Client code looks like userService.GetUser({user_id: 123}). HTTP/2-based for speed, supports bidirectional streaming. It's the standard for microservice communication.
D-Bus is the standard for Linux desktop app and system service communication. For example, NetworkManager broadcasts a "Wi-Fi connected" signal, and all subscribed apps get notified.
# Send a desktop notification via D-Bus
# (gdbus is used here because dbus-send cannot encode the a{sv}
#  "hints" dictionary that the Notify method's signature requires)
gdbus call --session \
  --dest org.freedesktop.Notifications \
  --object-path /org/freedesktop/Notifications \
  --method org.freedesktop.Notifications.Notify \
  "MyApp" 0 "icon" "Hello" "This is a test" "[]" "{}" 5000
D-Bus uses a message bus concept. All processes connect to the bus and publish or subscribe to messages. It's a local version of the Pub/Sub pattern.
Redis isn't just a cache. Its Pub/Sub feature enables real-time messaging between processes or servers.
# Redis Pub/Sub example
import redis

# Publisher (process A)
r = redis.Redis()
r.publish('notifications', 'New order received')

# Subscriber (processes B, C, D...) -- must already be listening when the publish happens
r = redis.Redis()
pubsub = r.pubsub()
pubsub.subscribe('notifications')
for message in pubsub.listen():
    if message['type'] == 'message':
        print(f"Received: {message['data']}")
I used Redis Pub/Sub for a real-time chat system. Multiple web server instances were running; when one of them received a chat message, it published it to a Redis channel. The other servers were subscribed and received it. Users got messages regardless of which server they connected to.
RabbitMQ is an AMQP-based message broker. It's the network version of message queues. Exchange, Queue, and Binding concepts enable complex routing.
Kafka is a distributed messaging system for high-volume data streaming. Built by LinkedIn for log collection, event sourcing, and real-time data pipelines. Messages persist to disk for later replay.
| Method | Speed | Complexity | Best For | Constraints |
|---|---|---|---|---|
| Pipe | Fast | Easy | Simple parent-child, CLI | Unidirectional, same machine |
| Named Pipe | Fast | Easy | Unrelated processes, one-way | Unidirectional, same machine |
| Message Queue | Medium | Medium | Async work queue, task distribution | Message size limits |
| Shared Memory | Very Fast | Hard | Large data, high performance | Sync required, complex |
| Semaphore | N/A | Medium | Shared resource control, sync | No data transfer |
| Signal | Fast | Easy | Simple event notification | Limited data transfer |
| Unix Socket | Fast | Medium | Bidirectional, complex protocols | Same machine |
| TCP Socket | Slow | Medium | Network comms, remote servers | Network latency |
| gRPC | Fast | Medium | Microservices, type safety | HTTP/2 required |
| Redis Pub/Sub | Fast | Easy | Real-time event broadcast | Message loss possible |
| RabbitMQ | Medium | Hard | Complex routing, reliability | Infrastructure needed |
| Kafka | Very Fast | Hard | High-volume streams, log collection | Complex setup |
Same machine, simple communication: Pipe or Unix Domain Socket. Docker CLI uses sockets to talk to the Docker daemon.
Same machine, high performance: Shared memory. For large data transfers like databases or video processing.
Async work processing: Message queue. Distributing background jobs (email sending, image resizing) to worker processes.
Microservice communication: gRPC or REST API. gRPC when type safety and performance matter, REST for simplicity and compatibility.
Real-time event broadcast: Redis Pub/Sub. Chat, notifications, real-time dashboard updates.
High-volume log/event processing: Kafka. When handling millions of events per second.
Studying IPC taught me it's not just "data transfer methods" but a system design philosophy. Isolate processes for stability, enable collaboration via IPC. This balance is the core of modern operating systems.
I used to think "why make it so complicated when a function call works?" But after designing multi-process architectures and seeing systems survive process crashes, I understood. Solving isolation and communication simultaneously is the essence of IPC.
Open Chrome and check the task manager. Each tab runs a separate process, yet they collaborate smoothly. Bookmark sync, clipboard sharing, extension communication. All thanks to IPC. Processes reaching across walls to shake hands—that's IPC.