📘 L4: Inter-Process Communication (IPC)

Lecture Goal

Understand the four main IPC mechanisms — Shared Memory, Message Passing, Unix Pipes, and Signals — and know when to use each based on efficiency, portability, and synchronization requirements.

The Big Picture

Processes are isolated by default — each has its own memory space. IPC is the bridge that lets them cooperate. The key trade-off: Shared Memory = fast but hard to sync, Message Passing = slower but safer and portable.


1. Why IPC?

🧠 The Core Problem

Each process in an OS has its own independent memory space. By default, Process A cannot read or write Process B's memory — they are completely isolated.

But real-world applications need processes to cooperate and share data (e.g., web server handling requests, compiler pipeline, database).

flowchart LR
    subgraph Problem ["The Problem"]
        P1["Process A<br/>Memory Space"]
        P2["Process B<br/>Memory Space"]
        P1 -.-x|"❌ No access"| P2
    end

    Problem --> Solution["IPC Mechanisms"]

    style Problem fill:#ff9999
    style Solution fill:#99ff99

IPC (Inter-Process Communication) is a set of mechanisms provided by the OS that allows processes to communicate and synchronize.


📊 Four Main IPC Mechanisms

| Mechanism | Type | Key Characteristic |
| --- | --- | --- |
| Shared Memory | General | Fastest — direct memory access |
| Message Passing | General | Portable — works across networks |
| Unix Pipe | Unix-specific | Unidirectional byte stream |
| Unix Signal | Unix-specific | Asynchronous notification |

⚠️ Common Pitfalls

Pitfall: Threads vs. Processes

Threads within the same process do share memory — IPC is needed only across separate processes. Don't confuse the two!


2. Shared Memory

🧠 Core Concept

A special memory region M is carved out and made accessible to multiple processes simultaneously. Any write by one process is immediately visible to all others — like a shared whiteboard 🗒️.

flowchart LR
    subgraph SharedMemory ["Shared Memory Region M"]
        M["📝 Memory"]
    end

    P1["Process A"] <-->|"Read/Write"| M
    P2["Process B"] <-->|"Read/Write"| M

    style SharedMemory fill:#ffcc99
    style M fill:#99ff99

Lifecycle:

  • Create → Attach → Read/Write → Detach → Destroy

Key Insight

The OS is only involved in Create and Attach. After that, read/write happens at memory speed — no OS involvement! This makes it very efficient.


πŸ› οΈ POSIX System Calls

StepSystem CallPurpose
Createshmget(IPC_PRIVATE, size, flags)Creates region, returns ID
Attachshmat(shmid, NULL, 0)Attaches to process address space
Detachshmdt(ptr)Detaches the region
Destroyshmctl(shmid, IPC_RMID, 0)Deletes the shared memory

✅ Advantages vs. ❌ Disadvantages

| ✅ Advantages | ❌ Disadvantages |
| --- | --- |
| Efficient — OS only involved in setup | Synchronization required — race conditions |
| Easy to use — behaves like normal memory | Harder to implement correctly |
| Supports arbitrary data types and sizes | Only works on the same machine |

📖 Code Example: Master/Slave Pattern

sequenceDiagram
    participant Master
    participant SHM as Shared Memory
    participant Slave

    Master->>SHM: shmget() - Create
    Master->>SHM: shmat() - Attach
    Master->>SHM: shm[0] = 0 (not ready)
    Slave->>SHM: shmat() - Attach (using ID)
    Slave->>SHM: Write data to shm[1..3]
    Slave->>SHM: shm[0] = 1 (signal ready)
    Master->>SHM: Poll until shm[0] == 1
    Master->>SHM: Read shm[1..3]
    Master->>SHM: shmdt() + shmctl() - Cleanup

Master (Waiter): Creates memory, polls shm[0] until slave signals ready.

Slave (Writer): Attaches using the ID, writes data, sets shm[0] = 1 to signal.


⚠️ Common Pitfalls

Pitfall 1: Forgetting to detach before destroy

You cannot call shmctl(IPC_RMID) if the region is still attached — it will fail or be deferred.

Pitfall 2: Race conditions

If both processes write simultaneously, data corruption can occur. The example avoids this by having one writer, one reader.

Pitfall 3: Memory leaks

If a process crashes before destroying shared memory, the region persists. Always clean up!


❓ Mock Exam Questions

Q1: OS Involvement

At which steps does the OS need to be involved in shared memory IPC?

Answer

Answer: Create and Attach only

After attaching, the shared memory region behaves like normal memory — reads and writes happen directly without OS intervention.

Q2: Efficiency

Why is shared memory more efficient than message passing for high-throughput IPC?

Answer

Answer: No system calls per operation

Shared memory requires OS involvement only during setup. Message passing requires a system call for every send/receive.


3. Message Passing

🧠 Core Concept

Processes communicate by explicitly sending and receiving messages through the OS kernel — like sending emails 📧.

flowchart LR
    P1["Process A"] -->|"send(Msg)"| OS["OS Kernel"]
    OS -->|"receive(Msg)"| P2["Process B"]

    style OS fill:#ffcc99

Every send/receive goes through the OS — this is the key difference from shared memory.


📛 Two Design Decisions

1. Naming — How to address the recipient?

| Scheme | Description | Example |
| --- | --- | --- |
| Direct | Explicitly name the other process | send(P2, Msg) |
| Indirect | Use a shared mailbox/port | send(MB, Msg) |

Indirect is more flexible

Many processes can share one mailbox — neither sender nor receiver needs to know the other's PID.


2. Synchronization — When to block?

| Behavior | Send | Receive |
| --- | --- | --- |
| Blocking (Sync) | Waits until message is received | Waits until a message arrives |
| Non-Blocking (Async) | Returns immediately | Returns a message or "not ready" |

✅ Advantages vs. ❌ Disadvantages

| ✅ Advantages | ❌ Disadvantages |
| --- | --- |
| Portable — works across networks | Inefficient — every operation needs the OS |
| Easier synchronization — blocking auto-syncs | Limited message size/format |

⚠️ Common Pitfalls

Pitfall: Blocking receive ≠ busy-waiting

Blocking means the OS suspends the process until a message arrives — it does NOT spin in a loop wasting CPU!


❓ Mock Exam Questions

Q3: Direct vs. Indirect

What is the key difference between direct and indirect communication?

Answer

Answer: Indirect uses a mailbox as intermediary — neither process needs to know the other's PID. Direct requires both parties to know each other's identity.


4. Unix Pipes

🧠 Core Concept

A Unix Pipe creates a unidirectional byte channel with two ends:

flowchart LR
    P["Process (Writer)"] -->|"write(fd[1])"| Buffer["Pipe Buffer<br/>FIFO"]
    Buffer -->|"read(fd[0])"| Q["Process (Reader)"]

    style Buffer fill:#ffcc99

Think of it like a literal water pipe 🚰 — data flows in one direction only.


πŸ› οΈ System Call

#include <unistd.h>
int pipe(int fd[2]);   // fd[0] = read end, fd[1] = write end

| File Descriptor | Purpose | Analogy |
| --- | --- | --- |
| fd[0] | Read end | Like stdin (fd=0) |
| fd[1] | Write end | Like stdout (fd=1) |

📊 Pipe Semantics

| Property | Behavior |
| --- | --- |
| Order | FIFO — First In, First Out |
| Writer blocks | When buffer is full |
| Reader blocks | When buffer is empty |
| Direction | Unidirectional (half-duplex) |

📖 Code Example

#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buffer[7];
    int pipeFd[2];
    pipe(pipeFd);

    if (fork() > 0) {   // Parent = WRITER
        close(pipeFd[0]);               // Close unused read end!
        write(pipeFd[1], "Hello!", 7);  // 7 bytes = "Hello!" plus '\0'
        close(pipeFd[1]);               // Signal EOF
    } else {            // Child = READER
        close(pipeFd[1]);               // Close unused write end!
        read(pipeFd[0], buffer, 7);
        close(pipeFd[0]);
    }
    return 0;
}
flowchart TD
    subgraph Fork ["After fork()"]
        P["Parent: has fd[0], fd[1]"]
        C["Child: has fd[0], fd[1]"]
    end

    P -->|"close(fd[0])"| PW["Parent: Writer only"]
    C -->|"close(fd[1])"| CR["Child: Reader only"]

    PW -->|"write"| Pipe["Pipe"]
    Pipe -->|"read"| CR

    style PW fill:#99ff99
    style CR fill:#99ff99

⚠️ Common Pitfalls

Pitfall 1: Not closing unused ends

EOF is delivered only once every write end is closed — if the reader doesn't close its own unused write end, it may block forever waiting for EOF. Always close the end you're not using!

Pitfall 2: Pipes are FIFO only

You cannot seek inside a pipe — data must be consumed in order.

Pitfall 3: Related processes only

Basic pipe() works only between parent and children (via fork()). For unrelated processes, use named pipes (FIFOs).


❓ Mock Exam Questions

Q4: Pipe Ends

In pipe(fd), which is the read end?

Answer

Answer: fd[0]

Memory aid: 0 → input (like stdin=0), 1 → output (like stdout=1).

Q5: Bidirectional Communication

How can you achieve bidirectional communication between parent and child?

Answer

Answer: Use two pipes

  • Pipe 1: parent β†’ child
  • Pipe 2: child β†’ parent

A single pipe is unidirectional only.


5. Unix Signals

🧠 Core Concept

A Unix Signal is an asynchronous notification sent to a process — like a phone call interrupting your work 📱.

flowchart LR
    OS["OS/Kernel"] -->|"Signal (e.g., SIGSEGV)"| P["Process"]
    P -->|"Handler Function"| Action["Handle Signal"]

    style OS fill:#ffcc99
    style Action fill:#99ff99

Key properties:

  • Asynchronous β€” can arrive at any time
  • Handler function β€” process must handle it
  • Default handlers β€” some signals have built-in behavior
  • Custom handlers β€” can override defaults (for most signals)

📊 Common Unix Signals

| Signal | Meaning | Default Action | Can Override? |
| --- | --- | --- | --- |
| SIGKILL | Kill process | Terminate | ❌ NO |
| SIGTERM | Polite termination | Terminate | ✅ Yes |
| SIGSTOP | Pause process | Stop | ❌ NO |
| SIGCONT | Continue stopped process | Continue | ✅ Yes |
| SIGSEGV | Segmentation fault | Terminate + core | ✅ Yes |
| SIGFPE | Arithmetic error | Terminate | ✅ Yes |

πŸ› οΈ Registering a Custom Handler

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

void myHandler(int signo) {
    if (signo == SIGSEGV) {
        printf("Memory access error!\n");  // printf in a handler is technically unsafe (see pitfalls)
        exit(1);
    }
}

int main() {
    signal(SIGSEGV, myHandler);  // Register custom handler
    int *p = NULL;
    *p = 123;  // Triggers SIGSEGV, so myHandler runs
    return 0;
}

⚠️ Common Pitfalls

Pitfall 1: SIGKILL and SIGSTOP cannot be caught

Never try to handle these — the OS won't let you. They always use default behavior.

Pitfall 2: Handlers must be async-signal-safe

Using malloc(), printf(), or other complex functions inside handlers is risky (technically unsafe).

Pitfall 3: Signals convey no data

Only the signal number is passed. Signals are for notifications, not full data transfer.


❓ Mock Exam Questions

Q6: Uncatchable Signals

Which signals cannot be caught or overridden by a custom handler?

Answer

Answer: SIGKILL and SIGSTOP

These always use their default behavior regardless of any signal() call.


6. Summary

📊 IPC Mechanisms Comparison

| Mechanism | Speed | Portability | Synchronization | Use Case |
| --- | --- | --- | --- | --- |
| Shared Memory | 🚀 Fastest | Same machine only | Manual (mutex) | High-throughput local IPC |
| Message Passing | 🐢 Slower | Works across networks | Built-in blocking | Distributed systems |
| Unix Pipe | ⚡ Fast | Unix only | Blocking I/O | Parent-child communication |
| Unix Signal | ⚡ Fast | Unix only | Async handlers | Event notifications |

🎯 Decision Guide

flowchart TD
    Start["Need IPC?"]
    Start --> Q1{"Same machine?"}
    Q1 -- "No" --> MP["Message Passing"]
    Q1 -- "Yes" --> Q2{"High throughput?"}
    Q2 -- "Yes" --> SM["Shared Memory"]
    Q2 -- "No" --> Q3{"Parent-child?"}
    Q3 -- "Yes" --> Pipe["Unix Pipe"]
    Q3 -- "No" --> Q4{"Just notification?"}
    Q4 -- "Yes" --> Sig["Signal"]
    Q4 -- "No" --> MP

    style SM fill:#99ff99
    style MP fill:#99ff99
    style Pipe fill:#99ff99
    style Sig fill:#99ff99

🔗 Connections

| 📚 Lecture | 🔗 Connection |
| --- | --- |
| L1 | OS services — IPC is a core OS service for communication |
| L2 | Process isolation — IPC bridges isolated memory spaces |
| L3 | Scheduling — blocked processes waiting for IPC go to the Blocked state |
| L5 | Threads — share memory within the same process (no IPC needed!) |
| L6 | Synchronization — semaphores protect shared memory; pipes use blocking I/O |
| L10 | File descriptors — pipes use the file descriptor interface |
| L11 | File system — pipes are implemented via file system mechanisms |
| L12 | File system calls — read()/write() are used in pipe communication |

The Key Insight

The key to IPC is knowing when the OS is involved, and always thinking about synchronization. Shared Memory is fast but you manage races; Message Passing is slower but safer!