Introduction

Page replacement: critical component of virtual memory management. Objective: decide which memory pages to replace on page faults. Balances limited physical memory with large virtual address space. Efficient replacement improves system throughput and responsiveness.

"Effective page replacement reduces page faults and system thrashing, optimizing memory use." -- Abraham Silberschatz

Virtual Memory Overview

Concept

Virtual memory: abstraction allowing processes to use address space larger than physical memory. Pages mapped between virtual and physical memory.

Page and Frame

Page: fixed-size block of virtual memory. Frame: fixed-size block of physical memory. Mapping via page tables.

Address Translation

Virtual addresses translated to physical addresses using page tables and TLBs (Translation Lookaside Buffers) for speed.
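The split-and-map step can be sketched in Python as a toy model. `PAGE_SIZE`, the dict-based page table, and the function name are illustrative; a real MMU performs this in hardware, with the TLB caching recent translations:

```python
PAGE_SIZE = 4096  # assume 4 KiB pages

def translate(vaddr, page_table):
    """Split a virtual address into (page number, offset),
    then map the page number to a frame via the page table."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

# Virtual page 1 mapped to physical frame 5:
# vaddr 4100 = page 1, offset 4  ->  5 * 4096 + 4
print(translate(4100, {1: 5}))  # -> 20484
```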

Memory Hierarchy

Memory hierarchy: cache, RAM, disk. Virtual memory allows slower disk to supplement faster RAM transparently.

Page Faults

Definition

Page fault: event triggered when a referenced page not in physical memory. Causes interrupt and kernel action.

Types

Minor page fault: page in memory but not mapped. Major page fault: page must be fetched from disk.

Handling

OS locates free frame or selects victim page. Loads required page from disk. Updates page tables.

Impact

Page faults incur latency. Excessive faults degrade performance, may cause thrashing.

Need for Page Replacement

Limited Physical Memory

Physical memory smaller than virtual address space. OS must manage limited frames efficiently.

Page Frame Allocation

Frames allocated to processes. When full, replacement decides which page to evict.

Goal

Minimize page faults by selecting victim pages unlikely to be used soon.

Challenges

Predicting future references is non-trivial. Algorithms approximate optimal behavior.

Page Replacement Algorithms

Classification

Algorithms: demand-paging based. Categories: optimal, approximation, heuristic.

Criteria

Factors: simplicity, overhead, fault rate, implementation complexity.

Types

Common algorithms: FIFO, LRU, Optimal, Clock, NRU.

Algorithm Selection

Depends on system constraints, workload characteristics, hardware support.

FIFO Algorithm

Principle

First-In-First-Out: evicts oldest loaded page. Simple queue structure.

Advantages

Easy to implement. Minimal overhead.

Disadvantages

Poor performance: may evict frequently used pages. Susceptible to Belady's anomaly (adding frames can increase the fault count).

Use Cases

Educational, simple systems with minimal memory management needs.

Page reference string: 7, 0, 1, 2, 0, 3, 0, 4 (3 frames)

  7 → fault, frames: 7
  0 → fault, frames: 7, 0
  1 → fault, frames: 7, 0, 1
  2 → fault, replaces 7, frames: 0, 1, 2
  0 → hit
  3 → fault, replaces 0, frames: 1, 2, 3
  0 → fault, replaces 1, frames: 2, 3, 0
  4 → fault, replaces 2, frames: 3, 0, 4

  Total: 7 page faults.
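The trace above can be reproduced with a short simulation (a Python sketch; `fifo_faults` is an illustrative name). The second example uses the classic reference string that exhibits Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, capacity):
    """Simulate FIFO replacement; return the number of page faults."""
    frames = deque()   # oldest page at the left
    resident = set()
    faults = 0
    for page in refs:
        if page in resident:
            continue   # hit: FIFO order is unchanged
        faults += 1
        if len(frames) == capacity:
            resident.discard(frames.popleft())  # evict the oldest page
        frames.append(page)
        resident.add(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # -> 7

# Belady's anomaly: more frames, yet more faults
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))   # -> 9
print(fifo_faults(belady, 4))   # -> 10
```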

LRU Algorithm

Principle

Least Recently Used: evicts page unused longest. Exploits temporal locality.

Implementation

Counter or stack-based. Hardware support improves efficiency.

Advantages

Better fault rate than FIFO. Approximates optimal.

Disadvantages

High overhead for tracking usage on every reference. Costly to implement without hardware support.

On page access: update the usage timestamp, or move the page to the top of the stack.
On replacement: evict the page with the oldest timestamp, or the one at the bottom of the stack.
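An `OrderedDict` gives a compact sketch of the stack variant (Python; `lru_faults` is an illustrative name):

```python
from collections import OrderedDict

def lru_faults(refs, capacity):
    """Simulate LRU replacement; return the number of page faults."""
    frames = OrderedDict()   # least recently used first, most recent last
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)   # hit: refresh recency
        else:
            faults += 1
            if len(frames) == capacity:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # -> 6
```

On the reference string 7, 0, 1, 2, 0, 3, 0, 4 with 3 frames, LRU incurs 6 faults, one fewer than FIFO on the same string, because the second reference to 0 keeps it resident.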

Optimal Algorithm

Principle

Evicts the page whose next use lies farthest in the future. Provably minimizes page faults.

Practicality

Requires future knowledge. Used as benchmark, not implementable in real systems.

Significance

Baseline for evaluating other algorithms.

Example

For each page in memory: calculate the distance to its next use.
Replace the page with the maximum distance, or one with no future use.
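Since the simulator sees the whole reference string, the look-ahead is straightforward (a Python sketch; `optimal_faults` is an illustrative name):

```python
def optimal_faults(refs, capacity):
    """Simulate Belady's optimal (OPT) replacement; return the fault count."""
    frames = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == capacity:
            def next_use(p):
                """Index of p's next reference, or infinity if never used again."""
                for j in range(i + 1, len(refs)):
                    if refs[j] == p:
                        return j
                return float("inf")
            # Evict the resident page whose next use is farthest away (or never)
            frames.remove(max(frames, key=next_use))
        frames.add(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # -> 6
```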

Clock Algorithm

Principle

Approximation of LRU using circular queue and reference bits.

Mechanism

Pointer sweeps pages: if reference bit 0, evict; if 1, clear bit and advance pointer.

Advantages

Low overhead, simple hardware implementation.

Disadvantages

Less accurate than pure LRU, but practical tradeoff.

  Page   Reference Bit
  5      1
  2      0
  9      1
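The sweep can be sketched in Python (a minimal model; real implementations keep the reference bit in the page-table entry, and whether a newly loaded page starts with its bit set is a design choice, assumed set here):

```python
def clock_faults(refs, capacity):
    """Simulate Clock (second-chance) replacement; return the fault count."""
    frames = [None] * capacity   # circular buffer of resident pages
    ref_bit = [0] * capacity
    hand = 0
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: set reference bit
            continue
        faults += 1
        # Sweep: pages with bit 1 get a second chance (bit cleared)
        while ref_bit[hand] == 1:
            ref_bit[hand] = 0
            hand = (hand + 1) % capacity
        frames[hand] = page        # evict victim, or fill an empty frame
        ref_bit[hand] = 1          # newly loaded page starts referenced
        hand = (hand + 1) % capacity
    return faults

print(clock_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # -> 6
```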

Performance Metrics

Page Fault Rate

Ratio of page faults to total memory references. Lower is better.

Effective Access Time (EAT)

EAT = (1 - page fault rate) × memory access time + page fault rate × page fault service time.

Throughput

Number of instructions executed per unit time. Affected by page fault overhead.

Overhead

CPU and memory overhead for maintaining replacement data structures.

EAT = (1 - p) × ma + p × pf

where:
  p  = page fault rate
  ma = memory access time
  pf = page fault service time
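Plugging in assumed figures (100 ns memory access, 8 ms fault service) shows how sensitive EAT is to the fault rate:

```python
def effective_access_time(p, ma, pf):
    """EAT = (1 - p) * ma + p * pf (all times in the same unit)."""
    return (1 - p) * ma + p * pf

# Assumed figures: 100 ns memory access, 8 ms (8,000,000 ns) fault service
print(effective_access_time(0.0, 100, 8_000_000))    # -> 100.0 ns
print(effective_access_time(0.001, 100, 8_000_000))  # ~8099.9 ns
```

Even one fault per thousand references stretches the effective access time from 100 ns to roughly 8.1 µs, an ~80× slowdown, which is why fault rates must be kept very low.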

Thrashing and Solutions

Definition

Thrashing: excessive paging causing severe performance degradation.

Causes

Insufficient frames, poor replacement policy, high multiprogramming level.

Detection

High page fault rate, low CPU utilization.

Solutions

Reduce multiprogramming, increase frames, better replacement algorithms, working set model.
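The working-set model (Denning) keeps resident exactly the pages referenced in the last Δ references. A minimal sketch of the window computation (the function name, sample reference string, and `delta` value are illustrative):

```python
def working_set(refs, t, delta):
    """W(t, delta): pages referenced in the last `delta` references up to time t."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 4, 3, 3]
print(working_set(refs, 6, 4))  # pages referenced at times 3..6 -> {3, 4}
```

If the sum of working-set sizes across processes exceeds the available frames, the OS suspends a process rather than let all of them thrash.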

Case Studies and Applications

Unix Systems

Early Unix used second-chance (Clock) algorithm. Balanced overhead and fault rate.

Windows OS

Uses variants of LRU with aging and working set algorithms for dynamic frame allocation.

Embedded Systems

Often avoid paging due to real-time constraints. Use static memory allocation.

Modern Virtualization

Hypervisors implement nested paging and complex replacement policies to optimize guest OS performance.

References

  • Silberschatz, A., Galvin, P. B., & Gagne, G., Operating System Concepts, 10th ed., Wiley, 2018, pp. 215-265.
  • Tanenbaum, A. S., Modern Operating Systems, 4th ed., Pearson, 2014, pp. 280-330.
  • Denning, P. J., "The Working Set Model for Program Behavior," Communications of the ACM, vol. 11, no. 5, 1968, pp. 323-333.
  • Smith, A. J., "Page Replacement Algorithms," ACM Computing Surveys, vol. 14, no. 1, 1982, pp. 34-53.
  • Hennessy, J. L., & Patterson, D. A., Computer Architecture: A Quantitative Approach, 6th ed., Morgan Kaufmann, 2017, pp. 450-475.