Introduction
Paging: memory management method dividing virtual memory into fixed-size blocks called pages. Physical memory split into frames of equal size. Eliminates external fragmentation. Facilitates virtual memory use and process isolation. Enables efficient address translation and swapping.
"Paging revolutionized memory management by enabling flexible, non-contiguous allocation of physical memory." -- Abraham Silberschatz
Basic Concepts
Virtual Memory
Abstraction of memory allowing processes to use more memory than physically available. Pages represent fixed-size logical blocks of virtual memory.
Pages and Frames
Page: fixed-size block of virtual memory (commonly 4KB). Frame: physical memory block of same size. The page table maps each resident page to exactly one frame; pages need not be contiguous in physical memory.
Logical vs Physical Address
Logical address: generated by CPU, consists of page number and offset. Physical address: actual RAM location, frame number plus offset.
Page Number and Offset
Logical address split into page number (identifies page) and offset (location within page). Offset size = log2(page size) bits.
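The split can be sketched with bit operations; a minimal example assuming a 4KB page size (the constant names are illustrative):

```python
# Splitting a logical address into page number and offset.
# Assumes 4 KB pages, so offset width = log2(4096) = 12 bits.
PAGE_SIZE = 4096
OFFSET_BITS = PAGE_SIZE.bit_length() - 1   # log2(page size)

def split_address(logical_addr):
    page_number = logical_addr >> OFFSET_BITS      # high bits select the page
    offset = logical_addr & (PAGE_SIZE - 1)        # low bits locate within it
    return page_number, offset
```

For example, `split_address(0x12345)` yields page `0x12`, offset `0x345`.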
Fixed Page Size
Uniform page size simplifies hardware and software design. Common sizes: 4KB, 8KB, 16KB.
Address Translation
Translation Lookaside Buffer (TLB)
Hardware cache storing recent page-to-frame mappings. Reduces memory access latency. TLB hit avoids page table lookup.
Page Table Walk
Process of translating page number to frame number by consulting page table in memory. Slower without TLB.
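The hit/miss path can be sketched with plain dictionaries standing in for the hardware structures (all names and contents here are illustrative, not a real MMU interface):

```python
# Toy translation path: consult the TLB first, fall back to the in-memory
# page table on a miss, then cache the mapping for subsequent accesses.
page_table = {0: 5, 1: 9}    # page number -> frame number
tlb = {}                     # recently used translations

def lookup_frame(page):
    if page in tlb:              # TLB hit: fast path, no table walk
        return tlb[page]
    frame = page_table[page]     # TLB miss: page table walk in memory
    tlb[page] = frame            # fill the TLB for next time
    return frame
```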
Physical Address Calculation
Physical address = Frame number × page size + offset. Frame number obtained from page table entry.
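The full calculation, assuming a 4KB page size and the frame numbers from the example page table below (values illustrative):

```python
# Physical address = frame number x page size + offset.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 9}    # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)   # split the address
    frame = page_table[page]                         # page table lookup
    return frame * PAGE_SIZE + offset
```

So logical address 100 (page 0, offset 100) maps to physical address 5 × 4096 + 100 = 20580.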
Multi-level Paging
Hierarchical page tables reduce memory overhead. Page tables split into levels to handle large address spaces efficiently.
Inverted Page Table
Single global page table with one entry per frame, stores process and page info. Reduces memory but slower lookups.
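A minimal sketch of the inverted layout, with a list indexed by frame number holding (pid, page) pairs; a linear scan stands in for the hash lookup real implementations use (all values illustrative):

```python
# Inverted page table: one entry per physical frame, storing which
# process/page currently occupies it. Lookup searches for a match.
inverted = [(1, 0), (2, 3), (1, 7)]   # index = frame number

def frame_of(pid, page):
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):
            return frame
    return None   # not resident -> page fault
```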
Page Tables
Structure and Entries
Page table: array indexed by page number. Each entry contains frame number, valid bit, protection bits, dirty bit, reference bit.
Valid/Invalid Bit
Indicates if page is in physical memory (valid) or not (invalid). Invalid triggers page fault.
Protection Bits
Control access rights: read, write, execute permissions per page.
Dirty and Reference Bits
Dirty bit: indicates if page modified. Reference bit: tracks if page accessed recently, used in replacement algorithms.
Example Page Table
| Page Number | Frame Number | Valid Bit | Protection | Dirty Bit | Reference Bit |
|---|---|---|---|---|---|
| 0 | 5 | 1 | R/W | 0 | 1 |
| 1 | 9 | 1 | R | 0 | 0 |
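The entries above can be modeled directly; a sketch using a namedtuple for the fields (the `check_write` helper is hypothetical, showing how protection bits gate access):

```python
from collections import namedtuple

# Page table entry mirroring the example table's columns.
PTE = namedtuple("PTE", "frame valid prot dirty ref")
page_table = [
    PTE(frame=5, valid=1, prot="R/W", dirty=0, ref=1),   # page 0
    PTE(frame=9, valid=1, prot="R",   dirty=0, ref=0),   # page 1
]

def check_write(page):
    """A write is legal only if the page is resident and writable."""
    pte = page_table[page]
    return pte.valid == 1 and "W" in pte.prot
```

Page 0 (R/W) accepts writes; page 1 (read-only) would trigger a protection fault.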
Types of Paging
Simple Paging
Single-level page table, direct mapping of pages to frames. Suitable for small address spaces.
Hierarchical Paging
Multi-level page tables split address space into multiple segments for memory efficiency.
Inverted Paging
One page table for entire system indexed by frame number, reduces memory but increases lookup time.
Demand Paging
Pages loaded only when needed (page fault), reduces memory usage and startup time.
Pre-paging
Loads multiple pages in anticipation of future use, reduces page faults but may waste memory.
Page Replacement Algorithms
First-In-First-Out (FIFO)
Replaces oldest loaded page. Simple but can suffer from Belady's anomaly.
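Belady's anomaly can be demonstrated with the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5; a small simulation:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames, faults = deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest loaded page
            frames.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

Here `fifo_faults(refs, 3)` gives 9 faults but `fifo_faults(refs, 4)` gives 10: adding a frame increases the fault count, which is Belady's anomaly.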
Least Recently Used (LRU)
Replaces page not used for longest time. Approximates optimal but costly to implement.
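A compact simulation using an ordered dict as the recency stack (one way to model LRU, not how hardware does it):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults for LRU replacement with nframes frames."""
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)              # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)     # evict least recently used
            frames[p] = True
    return faults
```

On the reference string 1,2,3,4,1,2,5,1,2,3,4,5 LRU gives 10 faults with 3 frames and 8 with 4: as a stack algorithm, LRU never exhibits Belady's anomaly.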
Optimal (MIN)
Replaces page not used for longest time in future. Ideal but impossible to implement in practice.
Clock Algorithm
Efficient approximation of LRU using circular list and reference bits.
Additional Algorithms
NRU, Second Chance, Aging, and Working Set algorithms balance efficiency and accuracy.
Clock algorithm:

```python
# Clock (second-chance) replacement: scan frames circularly; a frame whose
# reference bit is 0 is the victim, otherwise clear the bit and advance.
def clock_replace(reference_bit, pointer):
    while True:
        if reference_bit[pointer] == 0:
            victim = pointer
            return victim, (pointer + 1) % len(reference_bit)
        reference_bit[pointer] = 0          # give the page a second chance
        pointer = (pointer + 1) % len(reference_bit)
```

Demand Paging
Definition
Pages loaded from secondary storage only when referenced. Saves memory and I/O.
Page Fault Handling
On page fault: OS suspends process, loads page from disk, updates page table, resumes process.
Thrashing
Excessive page faults causing severe performance degradation. Occurs when working set exceeds physical memory.
Working Set Model
Tracks set of pages used recently to reduce thrashing and optimize page allocation.
Performance Impact
Demand paging reduces memory footprint but increases latency on page faults.
Advantages and Limitations
Advantages
Eliminates external fragmentation. Enables virtual memory and process isolation. Simplifies memory allocation. Allows efficient swapping.
Limitations
Internal fragmentation due to fixed page size. Overhead of page table management. Increased memory access time without TLB.
Hardware Support
Requires MMU for address translation and TLB caching. Complexity increases with multi-level tables.
Software Overhead
Page fault handling and replacement algorithms add OS complexity and processing time.
Security Considerations
Protection bits enforce access rights. Page isolation prevents process interference.
Fragmentation
Internal Fragmentation
Unused memory within a page due to fixed page size. On average, about half a page is wasted in each process's final, partially filled page.
External Fragmentation
Absent in paging due to non-contiguous allocation. Contrasts with segmentation and contiguous allocation.
Fragmentation Trade-offs
Smaller pages reduce internal fragmentation but increase page table size and overhead.
Fragmentation Mitigation
Choosing optimal page size balances internal fragmentation and overhead.
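The trade-off above can be quantified for a single process; a sketch (sizes illustrative):

```python
# Internal fragmentation: bytes wasted in the last, partially used page.
def wasted_bytes(process_size, page_size):
    remainder = process_size % page_size
    return 0 if remainder == 0 else page_size - remainder
```

A 10,000-byte process wastes 2288 bytes with 4KB pages but only 240 bytes with 512-byte pages, at the cost of a page table eight times larger.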
Example Calculation
Internal fragmentation (%) = (average wasted space) / page size × 100

Average wasted space ≈ page size / 2

Segmentation vs Paging
Segmentation
Variable-size memory chunks reflecting logical program structure. Prone to external fragmentation.
Paging
Fixed-size blocks abstracted from process structure. Eliminates external fragmentation.
Combination (Segmentation with Paging)
Segments further divided into pages, combining benefits of both. Used in modern OSes.
Address Translation Differences
Segmentation uses segment number and offset; paging uses page number and offset.
Protection and Sharing
Segmentation allows logical sharing and protection; paging isolates pages but can share frames.
Performance Considerations
TLB Hit Ratio
High hit ratio critical for performance; reduces costly page table lookups.
Page Fault Rate
Lower rates improve throughput; excessive faults cause thrashing and degrade performance.
Page Size Impact
Large pages reduce page table size but increase internal fragmentation and page fault latency.
Memory Access Time
Effective access time (EAT) = (TLB hit time × hit ratio) + (Page table lookup time × miss ratio) + (Page fault service time × fault ratio).
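Plugging illustrative numbers into the EAT model above (the timings chosen here are assumptions for demonstration, not measured values):

```python
# Effective access time: weighted sum of TLB hit, table walk, and fault costs.
def eat(h, t_tlb, t_pt, p, t_pf):
    return h * t_tlb + (1 - h) * t_pt + p * t_pf

# 98% TLB hit ratio, 20 ns TLB access, 100 ns table walk,
# fault rate 1e-6, 8 ms (8,000,000 ns) fault service time.
example_ns = eat(0.98, 20, 100, 1e-6, 8_000_000)   # about 29.6 ns
```

Even a tiny fault rate dominates quickly: raising p to 1e-4 adds 800 ns, swamping the translation cost.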
EAT = (h × t_TLB) + ((1 − h) × t_PT) + (p × t_PF)

where:
- h = TLB hit ratio
- t_TLB = TLB access time
- t_PT = page table access time
- p = page fault rate
- t_PF = page fault service time

Implementation Details
Hardware Components
Memory Management Unit (MMU) performs address translation. TLB caches recent translations. Page table base register stores page table location.
Operating System Role
Maintains page tables, handles page faults, manages page replacement, and allocates frames.
Page Table Storage
Stored in main memory, accessed on TLB miss. Multi-level page tables reduce memory usage.
Context Switching
Page table base register updated per process. TLB flushed or entries invalidated to maintain memory protection.
Example: x86 Paging
Uses 4-level page tables with 9-bit indexes per level, 4KB pages. Supports 48-bit virtual addressing.
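The 48-bit decomposition can be sketched directly: four 9-bit indexes (PML4, PDPT, PD, PT) plus a 12-bit offset:

```python
# Decompose a 48-bit x86-64 virtual address into the four 9-bit
# table indexes and the 12-bit page offset.
def x86_indexes(vaddr):
    offset = vaddr & 0xFFF                                   # low 12 bits
    idx = [(vaddr >> (12 + 9 * level)) & 0x1FF               # 9 bits each
           for level in range(4)]                            # PT..PML4
    return idx[::-1] + [offset]   # [PML4, PDPT, PD, PT, offset]
```

Note 9 + 9 + 9 + 9 + 12 = 48 bits, matching the supported virtual address width.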
References
- Silberschatz, A., Galvin, P. B., & Gagne, G. Operating System Concepts. Wiley, 10th ed., 2018, pp. 215-260.
- Tanenbaum, A. S., & Bos, H. Modern Operating Systems. Pearson, 4th ed., 2015, pp. 245-290.
- Stallings, W. Operating Systems: Internals and Design Principles. Pearson, 9th ed., 2018, pp. 320-370.
- Hennessy, J. L., & Patterson, D. A. Computer Architecture: A Quantitative Approach. Morgan Kaufmann, 6th ed., 2017, pp. 567-590.
- Arpaci-Dusseau, R. H., & Arpaci-Dusseau, A. C. Operating Systems: Three Easy Pieces. Arpaci-Dusseau Books, 2018, Chapters 9-10.