📘 L3: Process Scheduling

Lecture Goal

Understand how the OS decides which process runs next, master the major scheduling algorithms (FCFS, SJF, SRT, RR, Priority, MLFQ, Lottery), and know the trade-offs between fairness, throughput, and responsiveness.

The Big Picture

Scheduling is the heart of multiprogramming. The core tension: throughput vs. responsiveness — you can’t optimize both simultaneously. Every algorithm is a different compromise.


1. Concurrent Execution

🧠 Core Concept

Concurrent processes appear to run at the same time. Two forms:

| Form | Description | Example |
| --- | --- | --- |
| Virtual Parallelism | Illusion — 1 CPU, processes take turns | Single-core laptop |
| Physical Parallelism | Truly simultaneous on multiple CPUs | Quad-core CPU |

```mermaid
flowchart LR
    subgraph Virtual ["Virtual Parallelism (1 CPU)"]
        V1["A"] --> V2["B"] --> V3["A"] --> V4["B"]
    end

    subgraph Physical ["Physical Parallelism (2 CPUs)"]
        P1["A on Core 1"]
        P2["B on Core 2"]
    end

    style Virtual fill:#ffcc99
    style Physical fill:#99ff99
```

Every context switch incurs overhead — wasted time saving/restoring process state.


⚠️ Common Pitfalls

Pitfall: Concurrent ≠ Parallel

Two processes can be concurrent on 1 CPU (they take turns) without ever truly running at the same instant.


❓ Mock Exam Questions

Q1: Virtual vs Physical

What is the difference between virtual and physical parallelism?

Answer

Answer: Virtual parallelism uses timeslicing on a single CPU to create the illusion of simultaneity. Physical parallelism runs processes truly simultaneously on multiple CPUs.

Q2: Context Switch Cost

Why does context switching introduce overhead?

Answer

Answer: The OS must save the current process’s state (registers, PC, SP) and restore the next process’s state. This takes CPU time that could otherwise be used for useful work.


2. Process Scheduling — Core Definitions

🧠 Core Concept

When more processes are ready to run than there are CPUs, the OS must decide which process runs next.

| Term | Definition |
| --- | --- |
| Scheduler | The part of the OS that makes scheduling decisions |
| Scheduling Algorithm | The logic/rules the scheduler uses to pick the next process |

```mermaid
flowchart LR
    RQ["Ready Queue<br/>P1, P2, P3, ..."] --> Scheduler["Scheduler"]
    Scheduler -->|"Pick next"| CPU["CPU"]

    style Scheduler fill:#ffcc99
    style CPU fill:#99ff99
```

⚠️ Common Pitfalls

Pitfall: Scheduler ≠ Algorithm

The scheduler is part of the OS; the algorithm is the logic it uses. They are two different things.


❓ Mock Exam Questions

Q3: When Scheduling Is Not Needed

Can there ever be a scenario where scheduling is not needed?

Answer

Answer: Yes — when there is only one process ready to run (or fewer processes than CPUs). No decision needed.


3. Process Behavior & Processing Environments

🧠 Process Behavior

Every process alternates between two types of activity:

```mermaid
flowchart LR
    CPU["CPU Activity<br/>(Computation)"] --> IO["I/O Activity<br/>(Waiting)"]
    IO --> CPU

    style CPU fill:#99ff99
    style IO fill:#ffcc99
```

| Type | Description | Process Label |
| --- | --- | --- |
| CPU Activity | Computation, number crunching | Compute-Bound |
| I/O Activity | Waiting for disk, keyboard, screen | I/O-Bound |

🧠 Processing Environments

| Environment | Who Uses It | Key Requirement |
| --- | --- | --- |
| Batch Processing | No user (overnight jobs) | Maximize throughput |
| Interactive | Active users (desktop OS) | Fast, consistent response |
| Real-Time | Embedded/critical systems | Meet deadlines |

⚠️ Common Pitfalls

Pitfall: Real-time ≠ Interactive

Real-time systems have hard deadlines (missing one = failure). Interactive systems just need to feel snappy.


❓ Mock Exam Questions

Q4: Environment Classification

A video encoder running overnight with no user interaction — which environment?

Answer

Answer: Batch processing, compute-bound

No user interaction needed, maximize throughput.


4. Criteria for Good Scheduling

🧠 Universal Criteria

| Criterion | Meaning |
| --- | --- |
| Fairness | Each process gets a fair share; no starvation |
| Balance | Keep all parts of the system busy (CPU, I/O devices) |

🧠 Batch-Specific Criteria

| Criterion | Formula / Meaning |
| --- | --- |
| Turnaround Time | Time from arrival to completion: `Turnaround = Finish − Arrival` |
| Waiting Time | Time spent in ready queue, not executing |
| Throughput | Tasks completed per unit time |
| CPU Utilization | Fraction of time CPU does useful work |

🧠 Interactive-Specific Criteria

| Criterion | Meaning |
| --- | --- |
| Response Time | Time from user request to first response |
| Predictability | Low variance in response time |

```mermaid
flowchart TD
    Criteria["Scheduling Criteria"]
    Criteria --> Universal["Universal<br/>Fairness, Balance"]
    Criteria --> Batch["Batch<br/>Turnaround, Throughput"]
    Criteria --> Interactive["Interactive<br/>Response Time"]

    style Universal fill:#99ff99
    style Batch fill:#ffcc99
    style Interactive fill:#ffcc99
```

Turnaround = Waiting + Execution


⚠️ Common Pitfalls

Pitfall: Fairness ≠ Equal CPU time

A user running 10 processes shouldn’t hog more CPU than a user running 1. Fairness can be per-user, not per-process.

Pitfall: Criteria conflict

Optimizing throughput might hurt response time and vice versa. No single algorithm optimizes everything.


❓ Mock Exam Questions

Q5: Turnaround vs. Waiting

What is the difference between turnaround time and waiting time?

Answer

Answer: Turnaround time = total time from arrival to finish. Waiting time = only the time spent in the ready queue. Turnaround = waiting + execution.


5. When to Schedule: Non-Preemptive vs Preemptive

🧠 Core Concept

| Policy | How It Works | Analogy |
| --- | --- | --- |
| Non-Preemptive | Process runs until it voluntarily gives up CPU | Polite conversation |
| Preemptive | OS forcibly stops process after time quota | Talk show host cuts you off ⏱️ |

```mermaid
flowchart TD
    Policy["Scheduling Policy"]
    Policy --> NP["Non-Preemptive<br/>Process yields voluntarily"]
    Policy --> P["Preemptive<br/>OS forces context switch"]

    NP --> NP1["Blocks on I/O"]
    NP --> NP2["Calls yield()"]
    NP --> NP3["Terminates"]

    P --> P1["Timer interrupt fires"]
    P --> P2["Higher priority arrives"]

    style NP fill:#ffcc99
    style P fill:#99ff99
```

Scheduling procedure:

  1. Scheduler is triggered (OS takes control)
  2. Save context of current process → place on queue
  3. Pick a suitable process using the algorithm
  4. Restore context for chosen process
  5. Let it run

⚠️ Common Pitfalls

Pitfall: Non-preemptive ≠ forever

Non-preemptive means the process runs until it chooses to stop (blocks on I/O, yields, or terminates) — not that it runs infinitely.

Pitfall: Preemptive requires hardware

Preemptive scheduling requires a hardware timer to fire interrupts. Without it, the OS cannot take back control!


❓ Mock Exam Questions

Q6: Non-Preemptive Problem

Why is non-preemptive scheduling problematic for interactive systems?

Answer

Answer: If a process decides to run for a long time, no other process can run until it voluntarily yields. This makes the system unresponsive.

Q7: Early Yield in Preemptive

In what scenario might a preemptive process give up the CPU before its time quantum expires?

Answer

Answer: When it blocks on I/O (e.g., waiting for disk/keyboard input) or when a higher-priority process arrives and preempts it.


6. Batch Processing Algorithms

6.1 First-Come First-Served (FCFS)

🧠 Core Concept

  • Tasks queued in FIFO order by arrival time
  • First task runs until it finishes or blocks
  • Blocked tasks rejoin the back of the queue
  • Guaranteed: No starvation
```mermaid
flowchart LR
    A["A arrives first"] --> B["B arrives second"] --> C["C arrives third"]
    A -->|"runs first"| CPU["CPU"]
    B -->|"waits"| CPU
    C -->|"waits"| CPU

    style A fill:#99ff99
    style B fill:#ffcc99
    style C fill:#ff9999
```

Example: A=8, B=5, C=3 (all arrive at t=0)

Timeline: [AAAAAAAA][BBBBB][CCC]
           0        8     13  16

| Task | Arrival | Finish | Waiting |
| --- | --- | --- | --- |
| A | 0 | 8 | 0 |
| B | 0 | 13 | 8 |
| C | 0 | 16 | 13 |

Average waiting time: (0 + 8 + 13) / 3 = 7 TUs.
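This timeline can be replayed with a short sketch (the `fcfs` helper is illustrative, not from the lecture; it assumes all jobs arrive at t=0):

```python
def fcfs(bursts):
    """bursts: CPU times of jobs that all arrive at t=0, in arrival order.
    Returns (finish_times, waiting_times), one entry per job."""
    t, finish, waiting = 0, [], []
    for burst in bursts:
        waiting.append(t)        # the job waited until the CPU became free
        t += burst
        finish.append(t)
    return finish, waiting

finish, waiting = fcfs([8, 5, 3])    # A=8, B=5, C=3
print(finish)                        # [8, 13, 16]
print(waiting)                       # [0, 8, 13]
print(sum(waiting) / len(waiting))   # average waiting time: 7.0
```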


⚠️ Common Pitfalls

Pitfall: Convoy Effect

If a long CPU-bound job is at the front, all short I/O-bound jobs pile up behind it. Both CPU and I/O devices end up idle at different times — very wasteful!


❓ Mock Exam Questions

Q8: FCFS Starvation

Why does FCFS guarantee no starvation?

Answer

Answer: Every task eventually reaches the front of the FIFO queue. No task can be skipped indefinitely.


6.2 Shortest Job First (SJF)

🧠 Core Concept

  • Select the task with the smallest total CPU time
  • Non-preemptive — once started, runs to completion
  • Optimal for minimizing average waiting time (fixed job set)
  • Starvation possible — long jobs may never run
```mermaid
flowchart LR
    C["C (3 TUs)<br/>shortest"] --> B["B (5 TUs)"] --> A["A (8 TUs)<br/>longest"]

    style C fill:#99ff99
    style A fill:#ff9999
```

Example: Same tasks, SJF reorders: C → B → A

Timeline: [CCC][BBBBB][AAAAAAAA]
           0    3     8        16

| Task | Waiting |
| --- | --- |
| C | 0 |
| B | 3 |
| A | 8 |

Average waiting time: (0 + 3 + 8) / 3 ≈ 3.67 TUs — compare to FCFS's 7 TUs. SJF is much better!
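The reordering above can be sketched the same way (the `sjf_waiting` helper is illustrative; it assumes all jobs are ready at t=0): sort by burst length, then serve FCFS-style.

```python
def sjf_waiting(bursts):
    """bursts: CPU times of jobs all ready at t=0.
    Returns the average waiting time when run shortest-first."""
    t, total_wait = 0, 0
    for burst in sorted(bursts):   # shortest job first
        total_wait += t            # this job waited t time units
        t += burst
    return total_wait / len(bursts)

print(sjf_waiting([8, 5, 3]))   # (0 + 3 + 8) / 3 ≈ 3.67
```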


Predicting CPU Time (Exponential Average)

Since we often don’t know CPU time in advance, estimate the next burst from measured history:

Predicted(n+1) = α · Actual(n) + (1 − α) · Predicted(n)

| Value | Behavior |
| --- | --- |
| α = 1 | Trust only the most recent burst |
| α = 0 | Ignore the most recent burst entirely (prediction never changes) |
| α = 0.5 | Balance recent and historical |
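One update step of the exponential average — a sketch assuming the standard formula P(n+1) = α·A(n) + (1 − α)·P(n); the `predict_next` name is illustrative:

```python
def predict_next(actual, predicted, alpha):
    """One step of the exponential average: combine the last measured
    burst (actual) with the previous prediction, weighted by alpha."""
    return alpha * actual + (1 - alpha) * predicted

print(predict_next(10, 6, 1.0))   # 10.0 — trusts only the latest burst
print(predict_next(10, 6, 0.0))   # 6.0  — ignores the latest burst
print(predict_next(10, 6, 0.5))   # 8.0  — balances both
```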

⚠️ Common Pitfalls

Pitfall: SJF requires foreknowledge

SJF requires knowing the total CPU time upfront. In practice, this is estimated via exponential averaging — not exact.

Pitfall: SJF starvation

If short jobs keep arriving, a long job may never run.


❓ Mock Exam Questions

Q9: SJF Optimality

Why does SJF minimize average waiting time for a fixed job set?

Answer

Answer: Short jobs don’t add much to the waiting time of others. Running them first minimizes the total “waiting contributed” by each job. A long job running first adds its full duration to everyone else’s wait.

Q10: Alpha Value

What value of α gives full weight to the most recent CPU burst?

Answer

Answer: α = 1

The formula becomes: Predicted(n+1) = Actual(n) — all history is ignored.


6.3 Shortest Remaining Time (SRT)

🧠 Core Concept

SRT is the preemptive version of SJF:

  • Use remaining CPU time instead of total
  • When a new job arrives with shorter remaining time → preempt immediately
```mermaid
flowchart TD
    A0["t=0: A starts (rem=8)"]
    C1["t=1: C arrives (rem=3) < A (rem=7)"]
    C1 -->|"Preempt A!"| Crun["C runs"]
    B2["t=2: B arrives (rem=5) > C (rem=2)"]
    B2 -->|"C continues"| Cdone["t=4: C finishes"]
    Cdone -->|"B(5) < A(7)"| Brun["B runs"]
    Brun --> Arun["t=9: A resumes"]

    style C1 fill:#ff9999
    style Crun fill:#99ff99
```

Example: A(8, t=0), C(3, t=1), B(5, t=2)

Timeline: [A][CCC][BBBBB][AAAAAAA]
           0 1    4      9       16
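The preemption decisions above can be reproduced with a unit-time simulation (a sketch; the `srt` helper is illustrative, and ties are broken by job index):

```python
import heapq

def srt(jobs):
    """jobs: list of (arrival, burst). Simulates SRT one time unit at a
    time; the ready job with the least remaining time always runs.
    Returns the finish time of each job, in input order."""
    events = sorted((arr, i, burst) for i, (arr, burst) in enumerate(jobs))
    ready, finish = [], {}          # ready: heap of (remaining, index)
    t, k = 0, 0
    while k < len(events) or ready:
        while k < len(events) and events[k][0] <= t:
            arr, i, burst = events[k]     # job arrives: join the ready heap
            heapq.heappush(ready, (burst, i))
            k += 1
        if not ready:                     # CPU idle until next arrival
            t = events[k][0]
            continue
        rem, i = heapq.heappop(ready)     # shortest remaining time wins
        t += 1                            # run it for one time unit
        if rem == 1:
            finish[i] = t
        else:
            heapq.heappush(ready, (rem - 1, i))
    return [finish[i] for i in range(len(jobs))]

print(srt([(0, 8), (1, 3), (2, 5)]))   # A, C, B from the example: [16, 4, 9]
```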

⚠️ Common Pitfalls

Pitfall: SRT overhead

More context switches than SJF — preemption has overhead. But it handles late-arriving short jobs well.


❓ Mock Exam Questions

Q11: SRT vs. SJF

How does SRT differ from SJF?

Answer

Answer: SRT is preemptive — when a new job arrives with shorter remaining time, the current job is preempted. SJF is non-preemptive — once a job starts, it runs to completion.


7. Interactive System Algorithms

Key Difference

Interactive systems use preemptive scheduling. The OS regains control periodically via timer interrupts.

| Term | Meaning | Typical Value |
| --- | --- | --- |
| Interval of Timer Interrupt (ITI) | How often the hardware clock fires | 1ms – 10ms |
| Time Quantum | Execution slot per process | 5ms – 100ms |

Quantum is always a multiple of ITI

If ITI = 10ms and quantum = 20ms, the scheduler fires every 10ms but only switches every 2nd trigger.


7.1 Round Robin (RR)

🧠 Core Concept

  • Processes stored in a FIFO queue
  • Each process runs for at most one time quantum (q)
  • When quantum expires → process goes to back of queue
  • Response time guarantee: with n processes and quantum q, a process waits at most (n − 1) · q before its next turn
```mermaid
flowchart LR
    subgraph Queue ["Ready Queue (FIFO)"]
        A["A"] --> B["B"] --> C["C"]
    end

    A -->|"q expires"| Back["Back of queue"]
    CPU["CPU"] -->|"next"| B

    style Queue fill:#ffcc99
```

Time Quantum Trade-Off

| Quantum Size | Pro | Con |
| --- | --- | --- |
| Large | Better CPU utilization | Longer wait for others |
| Small | Shorter wait time | More context switches → overhead |
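The queue mechanics can be sketched as follows (the `round_robin` helper is illustrative; it assumes all jobs are ready at t=0 and ignores context-switch overhead):

```python
from collections import deque

def round_robin(bursts, q):
    """bursts: CPU times of jobs all ready at t=0; q: time quantum.
    Returns the finish time of each job, in input order."""
    queue = deque(enumerate(bursts))     # FIFO ready queue
    t, finish = 0, [0] * len(bursts)
    while queue:
        i, rem = queue.popleft()
        run = min(q, rem)                # run at most one quantum
        t += run
        if rem == run:
            finish[i] = t                # job done
        else:
            queue.append((i, rem - run)) # quantum expired: back of queue
    return finish

print(round_robin([8, 5, 3], q=2))   # [16, 14, 11]
```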

⚠️ Common Pitfalls

Pitfall: RR = Preemptive FCFS

Same queue ordering as FCFS, but with a time limit. Very small quantum → CPU spends more time context switching than doing work!


❓ Mock Exam Questions

Q12: RR Wait Time

Derive the maximum waiting time formula for Round Robin.

Answer

Answer: With n processes and quantum q, the worst case is that a process must wait for all other n − 1 processes to each use their full quantum before it runs again: max wait = (n − 1) · q.


7.2 Priority Scheduling

🧠 Core Concept

  • Assign each process a priority value
  • Scheduler always picks the highest priority ready process
  • Preemptive: arriving high-priority immediately displaces current
  • Non-preemptive: waits for next scheduling decision

Problem: Priority Inversion 😱

A lower-priority task effectively delays a higher-priority task:

```mermaid
flowchart TD
    C["C (low) locks resource"] -->|"C can't release"| B["B (mid) preempts C"]
    B -->|"B runs instead of A"| A["A (high) blocked waiting for lock"]

    A -.->|"Needs C's lock"| C

    style C fill:#ff9999
    style B fill:#ffcc99
    style A fill:#ff9999
```

Scenario: C locks a resource → B preempts C → A needs C’s lock → A blocked → B runs instead of A!

Real-World Impact

Priority inversion caused repeated system resets on NASA’s Mars Pathfinder in 1997! 🚀

Fixes: Priority inheritance (boost priority of lock holder) or aging (gradually increase priority of waiting processes).
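The priority-inheritance fix can be sketched in one line (the `effective_priority` helper and its arguments are illustrative, not from the lecture): the lock holder temporarily runs at the highest priority among itself and its waiters.

```python
def effective_priority(holder_priority, waiter_priorities):
    """Priority inheritance: a lock holder runs at the maximum of its own
    priority and the priorities of all tasks waiting on its lock."""
    return max([holder_priority, *waiter_priorities])

# C (priority 1) holds the lock; A (priority 10) is waiting on it.
# C is boosted to 10, so mid-priority B (priority 5) can no longer preempt it.
print(effective_priority(1, [10]))   # 10
```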


⚠️ Common Pitfalls

Pitfall: Starvation risk

Low-priority processes may never run if high-priority ones keep arriving.


❓ Mock Exam Questions

Q13: Priority Inversion

What is priority inversion? Describe the conditions needed.

Answer

Answer: A high-priority process is blocked waiting for a resource held by a low-priority process, while a mid-priority process runs instead. Conditions: (1) low-priority holds a shared resource, (2) mid-priority preempts low, (3) high-priority needs the same resource.


7.3 Multi-Level Feedback Queue (MLFQ)

🧠 Core Concept

MLFQ solves the hardest problem: scheduling well without knowing process behavior in advance. It learns from behavior and adjusts dynamically.

```mermaid
flowchart TD
    Q2["Q2 (Highest Priority)<br/>quantum = small"]
    Q1["Q1 (Medium Priority)<br/>quantum = medium"]
    Q0["Q0 (Lowest Priority)<br/>quantum = large"]

    Q2 -->|"uses full quantum<br/>→ demote"| Q1
    Q1 -->|"uses full quantum<br/>→ demote"| Q0
    Q0 -->|"stays here"| Q0

    style Q2 fill:#99ff99
    style Q1 fill:#ffcc99
    style Q0 fill:#ff9999
```

Rules

| Rule | Description |
| --- | --- |
| Rule 1 | Higher priority always runs first |
| Rule 2 | Same priority → Round Robin |
| Rule 3 | New job → placed in highest priority queue |
| Rule 4 | Uses full quantum → priority decreases (CPU-bound) |
| Rule 5 | Gives up / blocks early → priority stays (I/O-bound) |

How MLFQ Learns

| Process Type | Behavior | MLFQ Response |
| --- | --- | --- |
| CPU-bound | Uses full quantum every time | Sinks to Q0 (lowest) |
| I/O-bound | Blocks before quantum expires | Stays at Q2 (highest) |
| Short job | Finishes quickly at Q2/Q1 | Gets fast response |
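Rules 1–4 can be sketched in a toy simulation (the `mlfq` helper is illustrative; it assumes every job is pure CPU-bound and always uses its full quantum, so Rule 5 never triggers):

```python
from collections import deque

def mlfq(jobs, quanta):
    """jobs: list of (name, burst); quanta: quantum per level, highest
    priority first. Returns the order in which jobs finish."""
    levels = [deque() for _ in quanta]
    for job in jobs:
        levels[0].append(job)              # Rule 3: new jobs start on top
    finished = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)   # Rule 1
        name, rem = levels[lvl].popleft()                  # Rule 2: RR
        rem -= min(quanta[lvl], rem)       # run one quantum at this level
        if rem == 0:
            finished.append(name)
        else:                              # Rule 4: full quantum → demote
            dest = min(lvl + 1, len(levels) - 1)
            levels[dest].append((name, rem))
    return finished

# B (short) finishes in one top-level quantum; A (CPU-bound) sinks.
print(mlfq([("A", 8), ("B", 2)], quanta=[2, 4, 8]))   # ['B', 'A']
```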

⚠️ Common Pitfalls

Pitfall: Gaming MLFQ

A clever process could yield just before its quantum expires every turn — staying at high priority while behaving like a CPU hog!

Fix: Track total CPU usage across all quanta, not just per-slice.

Pitfall: Starvation of CPU-bound jobs

If I/O-bound or short jobs keep arriving, CPU-bound jobs stuck at Q0 may starve.

Fix: Periodically boost all jobs back to the highest queue (priority reset).


❓ Mock Exam Questions

Q14: MLFQ New Jobs

Why does MLFQ give new jobs the highest priority?

Answer

Answer: MLFQ assumes a new job might be short (I/O-bound or interactive). By starting it at the highest priority, it gets fast response. If it turns out to be CPU-bound, it will naturally sink to lower queues.

Q15: MLFQ Abuse

Describe how a malicious process could abuse the basic MLFQ rules.

Answer

Answer: A process could voluntarily yield just before its quantum expires on every turn. Since it never uses its full quantum, MLFQ thinks it’s I/O-bound and keeps it at high priority — even though it’s actually consuming lots of CPU time.


7.4 Lottery Scheduling

🧠 Core Concept

  • Give each process lottery tickets proportional to desired CPU share
  • Each decision: draw a random ticket → winner runs
  • Over time: X% of tickets ≈ X% of CPU time

```mermaid
flowchart LR
    A["Process A<br/>75 tickets"] --> Lottery["🎲 Lottery<br/>(1–100)"]
    B["Process B<br/>25 tickets"] --> Lottery
    Lottery -->|"1–75"| WinA["A runs"]
    Lottery -->|"76–100"| WinB["B runs"]

    style Lottery fill:#ffcc99
```

| Property | Explanation |
| --- | --- |
| Responsive | New processes join immediately |
| Controllable | Give important processes more tickets |
| Simple | Easy to implement |
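One draw can be sketched as follows (the `lottery_pick` helper is illustrative); repeating it many times shows the proportional-share property:

```python
import random

def lottery_pick(tickets, rng=random):
    """tickets: {process: ticket_count}. Draws one winner, with
    probability proportional to each process's ticket count."""
    draw = rng.randrange(sum(tickets.values()))   # 0 .. total-1
    for name, count in tickets.items():
        if draw < count:
            return name
        draw -= count

random.seed(0)                     # deterministic for the demo
wins = {"A": 0, "B": 0}
for _ in range(10_000):
    wins[lottery_pick({"A": 75, "B": 25})] += 1
print(wins["A"] / 10_000)          # ≈ 0.75 over many draws
```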

⚠️ Common Pitfalls

Pitfall: Short-term unfairness

B might “get lucky” and win many times in a row. Fairness is only probabilistic in the long run.

Pitfall: Not for real-time

Lottery scheduling is not deterministic. Real-time systems need guaranteed deadlines.


❓ Mock Exam Questions

Q16: Lottery vs. Priority

What is a key advantage of lottery scheduling over priority scheduling?

Answer

Answer: Lottery scheduling provides proportional CPU sharing over time, and even a process with 1 ticket will eventually win, so starvation is (probabilistically) impossible. Priority scheduling can starve low-priority processes indefinitely.


8. Summary

📊 Algorithm Comparison

| Algorithm | Type | Environment | Starvation? | Key Property |
| --- | --- | --- | --- | --- |
| FCFS | Non-preemptive | Batch | ❌ No | Simple, Convoy Effect |
| SJF | Non-preemptive | Batch | ✅ Yes | Optimal avg wait (fixed set) |
| SRT | Preemptive | Batch | ✅ Yes | Good for late short jobs |
| Round Robin | Preemptive | Interactive | ❌ No | Fair, bounded response |
| Priority | Both | Interactive | ✅ Yes | Flexible, Inversion risk |
| MLFQ | Preemptive | Interactive | ✅ (CPU-bound) | Adaptive, learns behavior |
| Lottery | Preemptive | Interactive | ✅ (unlikely) | Proportional, probabilistic |

🎯 Decision Guide

```mermaid
flowchart TD
    Start["Which algorithm?"]
    Start --> Q1{"Environment?"}
    Q1 -- "Batch" --> Q2{"Know job lengths?"}
    Q2 -- "Yes" --> SJF["SJF"]
    Q2 -- "No" --> FCFS["FCFS"]
    Q1 -- "Interactive" --> Q3{"Need proportional sharing?"}
    Q3 -- "Yes" --> Lottery["Lottery"]
    Q3 -- "No" --> Q4{"Adaptive behavior?"}
    Q4 -- "Yes" --> MLFQ["MLFQ"]
    Q4 -- "No" --> RR["Round Robin"]

    style SJF fill:#99ff99
    style FCFS fill:#99ff99
    style Lottery fill:#99ff99
    style MLFQ fill:#99ff99
    style RR fill:#99ff99
```

🔗 Connections

| 📚 Lecture | 🔗 Connection |
| --- | --- |
| L2 | Process creation — scheduler picks from ready queue |
| L4 | IPC — scheduling affects when processes can communicate |
| L5 | Threads — same scheduling concepts apply to threads |

The Key Insight

There is no single “best” scheduling algorithm. The right choice depends on the environment, workload, and which criteria you prioritize. Understanding the trade-offs is what matters!