L4: Inter-Process Communication (IPC)
Lecture Goal
Understand the four main IPC mechanisms - Shared Memory, Message Passing, Unix Pipes, and Signals - and know when to use each based on efficiency, portability, and synchronization requirements.
The Big Picture
Processes are isolated by default: each has its own memory space. IPC is the bridge that lets them cooperate. The key trade-off: Shared Memory = fast but hard to sync, Message Passing = slower but safer and portable.
1. Why IPC?
The Core Problem
Each process in an OS has its own independent memory space. By default, Process A cannot read or write Process B's memory - they are completely isolated.
But real-world applications need processes to cooperate and share data (e.g., a web server handling requests, a compiler pipeline, a database).
flowchart LR subgraph Problem ["The Problem"] P1["Process A<br/>Memory Space"] P2["Process B<br/>Memory Space"] P1 -.-x|"No access"| P2 end Problem --> Solution["IPC Mechanisms"] style Problem fill:#ff9999 style Solution fill:#99ff99
IPC (Inter-Process Communication) is a set of mechanisms provided by the OS that allows processes to communicate and synchronize.
Four Main IPC Mechanisms
| Mechanism | Type | Key Characteristic |
|---|---|---|
| Shared Memory | General | Fastest - direct memory access |
| Message Passing | General | Portable - works across networks |
| Unix Pipe | Unix-specific | Unidirectional byte stream |
| Unix Signal | Unix-specific | Asynchronous notification |
⚠️ Common Pitfalls
Pitfall: Threads vs. Processes
Threads within the same process do share memory - IPC is needed only across separate processes. Don't confuse the two!
2. Shared Memory
Core Concept
A special memory region M is carved out and made accessible to multiple processes simultaneously. Any write by one process is immediately visible to all others - like a shared whiteboard.
flowchart LR subgraph SharedMemory ["Shared Memory Region M"] M["Memory"] end P1["Process A"] <-->|"Read/Write"| M P2["Process B"] <-->|"Read/Write"| M style SharedMemory fill:#ffcc99 style M fill:#99ff99
Lifecycle: Create → Attach → Read/Write → Detach → Destroy
Key Insight
The OS is only involved in Create and Attach. After that, read/write happens at memory speed - no OS involvement! This makes it very efficient.
POSIX System Calls
| Step | System Call | Purpose |
|---|---|---|
| Create | shmget(IPC_PRIVATE, size, flags) | Creates region, returns ID |
| Attach | shmat(shmid, NULL, 0) | Attaches to process address space |
| Detach | shmdt(ptr) | Detaches the region |
| Destroy | shmctl(shmid, IPC_RMID, 0) | Deletes the shared memory |
✅ Advantages vs. ❌ Disadvantages
| ✅ Advantages | ❌ Disadvantages |
|---|---|
| Efficient - OS only involved in setup | Synchronization required - race conditions |
| Easy to use - behaves like normal memory | Harder to implement correctly |
| Supports arbitrary data types and sizes | Only works on the same machine |
Code Example: Master/Slave Pattern
sequenceDiagram participant Master participant SHM as Shared Memory participant Slave Master->>SHM: shmget() - Create Master->>SHM: shmat() - Attach Master->>SHM: shm[0] = 0 (not ready) Slave->>SHM: shmat() - Attach (using ID) Slave->>SHM: Write data to shm[1..3] Slave->>SHM: shm[0] = 1 (signal ready) Master->>SHM: Poll until shm[0] == 1 Master->>SHM: Read shm[1..3] Master->>SHM: shmdt() + shmctl() - Cleanup
Master (Waiter): Creates memory, polls shm[0] until slave signals ready.
Slave (Writer): Attaches using the ID, writes data, sets shm[0] = 1 to signal.
⚠️ Common Pitfalls
Pitfall 1: Forgetting to detach before destroy
You cannot call `shmctl(IPC_RMID)` if the region is still attached - it will fail or be deferred.
Pitfall 2: Race conditions
If both processes write simultaneously, data corruption can occur. The example avoids this by having one writer, one reader.
Pitfall 3: Memory leaks
If a process crashes before destroying shared memory, the region persists. Always clean up!
Mock Exam Questions
Q1: OS Involvement
At which steps does the OS need to be involved in shared memory IPC?
Answer
Answer: Create and Attach only
After attaching, the shared memory region behaves like normal memory - reads and writes happen directly without OS intervention.
Q2: Efficiency
Why is shared memory more efficient than message passing for high-throughput IPC?
Answer
Answer: No system calls per operation
Shared memory requires OS involvement only during setup. Message passing requires a system call for every send/receive.
3. Message Passing
Core Concept
Processes communicate by explicitly sending and receiving messages through the OS kernel - like sending emails.
flowchart LR P1["Process A"] -->|"send(Msg)"| OS["OS Kernel"] OS -->|"receive(Msg)"| P2["Process B"] style OS fill:#ffcc99
Every send/receive goes through the OS - this is the key difference from shared memory.
Two Design Decisions
1. Naming - How to address the recipient?
| Scheme | Description | Example |
|---|---|---|
| Direct | Explicitly name the other process | send(P2, Msg) |
| Indirect | Use a shared mailbox/port | send(MB, Msg) |
Indirect is more flexible
Many processes can share one mailbox - neither sender nor receiver needs to know the other's PID.
2. Synchronization - When to block?
| Behavior | Send | Receive |
|---|---|---|
| Blocking (Sync) | Waits until message received | Waits until message arrives |
| Non-Blocking (Async) | Returns immediately | Returns a message or "not ready" |
✅ Advantages vs. ❌ Disadvantages
| ✅ Advantages | ❌ Disadvantages |
|---|---|
| Portable - works across networks | Inefficient - every operation needs the OS |
| Easier synchronization - blocking auto-syncs | Limited message size/format |
⚠️ Common Pitfalls
Pitfall: Blocking receive ≠ busy-waiting
Blocking means the OS suspends the process until a message arrives - it does NOT spin in a loop wasting CPU!
Mock Exam Questions
Q3: Direct vs. Indirect
What is the key difference between direct and indirect communication?
Answer
Answer: Indirect uses a mailbox as intermediary - neither process needs to know the other's PID. Direct requires both parties to know each other's identity.
4. Unix Pipes
Core Concept
A Unix Pipe creates a unidirectional byte channel with two ends:
flowchart LR P["Process (Writer)"] -->|"write(fd[1])"| Buffer["Pipe Buffer<br/>FIFO"] Buffer -->|"read(fd[0])"| Q["Process (Reader)"] style Buffer fill:#ffcc99
Think of it like a literal water pipe - data flows in one direction only.
System Call

```c
#include <unistd.h>

int pipe(int fd[]); // fd[0] = read end, fd[1] = write end
```

| File Descriptor | Purpose | Analogy |
|---|---|---|
| `fd[0]` | Read end | Like stdin (fd=0) |
| `fd[1]` | Write end | Like stdout (fd=1) |
Pipe Semantics
| Property | Behavior |
|---|---|
| Order | FIFO - First In, First Out |
| Writer blocks | When buffer is full |
| Reader blocks | When buffer is empty |
| Direction | Unidirectional (half-duplex) |
Code Example

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char buffer[7];
    int pipeFd[2];
    pipe(pipeFd);

    if (fork() > 0) {                  // Parent = WRITER
        close(pipeFd[0]);              // Close unused read end!
        write(pipeFd[1], "Hello!", 7); // 6 chars + terminating '\0'
        close(pipeFd[1]);              // Signal EOF
        wait(NULL);
    } else {                           // Child = READER
        close(pipeFd[1]);              // Close unused write end!
        read(pipeFd[0], buffer, 7);
        printf("child read: %s\n", buffer);
        close(pipeFd[0]);
    }
    return 0;
}
```

flowchart TD subgraph Fork ["After fork()"] P["Parent: has fd[0], fd[1]"] C["Child: has fd[0], fd[1]"] end P -->|"close(fd[0])"| PW["Parent: Writer only"] C -->|"close(fd[1])"| CR["Child: Reader only"] PW -->|"write"| Pipe["Pipe"] Pipe -->|"read"| CR style PW fill:#99ff99 style CR fill:#99ff99
⚠️ Common Pitfalls
Pitfall 1: Not closing unused ends
If the writer doesn't close its read end, the reader may block forever waiting for EOF. Always close the end you're not using!
Pitfall 2: Pipes are FIFO only
You cannot seek inside a pipe - data must be consumed in order.
Pitfall 3: Related processes only
Basic `pipe()` works only between parent and children (via `fork()`). For unrelated processes, use named pipes (FIFOs).
Mock Exam Questions
Q4: Pipe Ends
In `pipe(fd)`, which is the read end?
Answer
Answer: fd[0]
Memory aid: 0 → input (like stdin=0), 1 → output (like stdout=1).
Q5: Bidirectional Communication
How can you achieve bidirectional communication between parent and child?
Answer
Answer: Use two pipes
- Pipe 1: parent → child
- Pipe 2: child → parent
A single pipe is unidirectional only.
5. Unix Signals
Core Concept
A Unix Signal is an asynchronous notification sent to a process - like a phone call interrupting your work.
flowchart LR OS["OS/Kernel"] -->|"Signal (e.g., SIGSEGV)"| P["Process"] P -->|"Handler Function"| Action["Handle Signal"] style OS fill:#ffcc99 style Action fill:#99ff99
Key properties:
- Asynchronous - can arrive at any time
- Handler function - the process must handle it
- Default handlers - some signals have built-in behavior
- Custom handlers - can override defaults (for most signals)
Common Unix Signals
| Signal | Meaning | Default Action | Can Override? |
|---|---|---|---|
| `SIGKILL` | Kill process | Terminate | ❌ NO |
| `SIGTERM` | Polite termination | Terminate | ✅ Yes |
| `SIGSTOP` | Pause process | Stop | ❌ NO |
| `SIGCONT` | Continue stopped process | Continue | ✅ Yes |
| `SIGSEGV` | Segmentation fault | Terminate + core | ✅ Yes |
| `SIGFPE` | Arithmetic error | Terminate | ✅ Yes |
Registering a Custom Handler

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

void myHandler(int signo) {
    if (signo == SIGSEGV) {
        // printf() is not async-signal-safe - fine for a demo,
        // risky in real code (see Pitfall 2 below)
        printf("Memory access error!\n");
        exit(1);
    }
}

int main(void) {
    signal(SIGSEGV, myHandler); // Register custom handler
    volatile int *p = NULL;     // volatile keeps the compiler from
    *p = 123;                   // optimizing away the faulting store
    return 0;
}
```

⚠️ Common Pitfalls
Pitfall 1: SIGKILL and SIGSTOP cannot be caught
Never try to handle these - the OS won't let you. They always use default behavior.
Pitfall 2: Handlers must be async-signal-safe
Using `malloc()`, `printf()`, or other complex functions inside handlers is risky (technically unsafe).
Pitfall 3: Signals convey no data
Only the signal number is passed. Signals are for notifications, not full data transfer.
Mock Exam Questions
Q6: Uncatchable Signals
Which signals cannot be caught or overridden by a custom handler?
Answer
Answer: SIGKILL and SIGSTOP
These always use their default behavior regardless of any `signal()` call.
6. Summary
IPC Mechanisms Comparison
| Mechanism | Speed | Portability | Synchronization | Use Case |
|---|---|---|---|---|
| Shared Memory | Fastest | Same machine only | Manual (mutex) | High-throughput local IPC |
| Message Passing | Slower | Works across network | Built-in blocking | Distributed systems |
| Unix Pipe | Fast | Unix only | Blocking I/O | Parent-child communication |
| Unix Signal | Fast | Unix only | Async handlers | Event notifications |
Decision Guide
flowchart TD Start["Need IPC?"] Start --> Q1{"Same machine?"} Q1 -- "No" --> MP["Message Passing"] Q1 -- "Yes" --> Q2{"High throughput?"} Q2 -- "Yes" --> SM["Shared Memory"] Q2 -- "No" --> Q3{"Parent-child?"} Q3 -- "Yes" --> Pipe["Unix Pipe"] Q3 -- "No" --> Q4{"Just notification?"} Q4 -- "Yes" --> Sig["Signal"] Q4 -- "No" --> MP style SM fill:#99ff99 style MP fill:#99ff99 style Pipe fill:#99ff99 style Sig fill:#99ff99
Connections
| Lecture | Connection |
|---|---|
| L1 | OS services - IPC is a core OS service for communication |
| L2 | Process isolation - IPC bridges isolated memory spaces |
| L3 | Scheduling - blocked processes waiting for IPC go to the Blocked state |
| L5 | Threads - share memory within the same process (no IPC needed!) |
| L6 | Synchronization - semaphores protect shared memory; pipes use blocking I/O |
| L10 | File descriptors - pipes use the file descriptor interface |
| L11 | File system - pipes are implemented via file system mechanisms |
| L12 | File system calls - read()/write() used in pipe communication |
The Key Insight
The key to IPC is knowing when the OS is involved, and always thinking about synchronization. Shared Memory is fast but you manage races; Message Passing is slower but safer!