Introduction
Memory management: cornerstone of operating systems. Task: allocate, track, protect, and optimize memory usage. Goal: maximize utilization, minimize waste, ensure process isolation. Challenges: limited physical memory, concurrent processes, varying program sizes.
"Memory management is the foundation of all system performance and security." -- Abraham Silberschatz
Memory Hierarchy
Levels of Memory
Registers: fastest, smallest, CPU-local storage. Cache: levels L1, L2, L3; low latency, moderate size. Main memory (RAM): large, slower than cache. Secondary storage: disk, slowest, persistent.
Speed and Cost Trade-offs
Speed inversely related to capacity: faster memories cost more per byte and offer less capacity. Hierarchy balances speed, cost, and capacity.
Role in Memory Management
OS manages main memory; hardware manages caches. Memory management ensures data availability from slower memory levels transparently.
Memory Allocation Techniques
Static Allocation
Fixed size, compile time. Simple but inflexible. Used in embedded systems.
Dynamic Allocation
Allocates at runtime based on need. Supports varying program sizes and lifetimes.
Contiguous Allocation
Single continuous block per process. Simple addressing, prone to fragmentation.
Non-contiguous Allocation
Multiple blocks per process, enables more flexible use, supports paging and segmentation.
Allocation Algorithms
First-fit: allocates the first sufficiently large block (fast). Best-fit: smallest fitting block, minimizes leftover space per allocation. Worst-fit: largest block, so the leftover hole stays large enough to reuse.
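The three policies above can be sketched over a simple free list; the `(start, size)` hole list and function names are illustrative, not from any particular allocator:

```python
def first_fit(holes, request):
    """Return index of the first hole large enough, or None."""
    for i, (_, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return index of the smallest hole that still fits, or None."""
    best = None
    for i, (_, size) in enumerate(holes):
        if size >= request and (best is None or size < holes[best][1]):
            best = i
    return best

def worst_fit(holes, request):
    """Return index of the largest hole, if it fits, or None."""
    worst = max(range(len(holes)), key=lambda i: holes[i][1], default=None)
    if worst is not None and holes[worst][1] >= request:
        return worst
    return None

holes = [(0, 100), (200, 40), (300, 250), (600, 60)]
print(first_fit(holes, 50))  # -> 0 (first hole >= 50 has size 100)
print(best_fit(holes, 50))   # -> 3 (smallest fitting hole, size 60)
print(worst_fit(holes, 50))  # -> 2 (largest hole, size 250)
```

Note how the same request picks three different holes, which is exactly why the policies differ in the fragment sizes they leave behind.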
Paging
Concept
Divides logical memory into fixed-size pages and physical memory into frames of the same size (4 KB typical). Allows non-contiguous physical allocation.
Page Table
Maps logical pages to physical frames. Contains frame number, status bits.
Address Translation
Logical address: page number + offset. Physical address: frame number + offset.
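The split-and-map step can be sketched directly; the 4 KB page size is the typical value from above, and the page table contents are made up for illustration:

```python
PAGE_SIZE = 4096  # 4 KB pages (typical; assumed here)

# Illustrative page table: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE   # high bits select the page
    offset = logical_addr % PAGE_SIZE  # low bits pass through unchanged
    frame = page_table[page]           # raises KeyError for an unmapped page
    return frame * PAGE_SIZE + offset

# Page 1, offset 100 maps to frame 2, same offset.
print(translate(1 * PAGE_SIZE + 100))  # -> 2 * 4096 + 100 = 8292
```

The offset is never changed by translation; only the page number is replaced by a frame number.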
Advantages
Eliminates external fragmentation, simplifies allocation.
Disadvantages
Internal fragmentation possible, overhead of page tables.
| Page Size | Typical Value |
|---|---|
| Small | 4 KB |
| Large | 2 MB, 4 MB (huge pages) |
Segmentation
Definition
Memory divided into variable-sized segments. Reflects logical divisions: code, data, stack.
Segment Table
Maps segment number to base address and limit (length).
Address Translation
Logical address: segment number + offset. Checks offset < segment limit.
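A minimal sketch of the base-plus-offset translation and the limit check, with made-up segment table values:

```python
# Illustrative segment table: segment number -> (base, limit).
segment_table = {0: (1000, 400), 1: (6300, 1200)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Hardware would raise a protection (segmentation) fault here.
        raise MemoryError("segment limit violation")
    return base + offset

print(translate(1, 53))  # -> 6300 + 53 = 6353
```

Unlike paging, both the base and the limit vary per segment, which is what makes allocation variable-sized.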
Advantages
Supports modular programming, easier protection and sharing.
Disadvantages
External fragmentation, complex allocation.
Virtual Memory
Concept
Extends physical memory using disk space. Programs use addresses larger than physical memory.
Demand Paging
Pages loaded only when accessed. Reduces memory use, increases responsiveness.
Page Fault
Occurs when page not in memory. Triggers disk fetch and page replacement.
Page Replacement Algorithms
FIFO: replaces the oldest resident page. LRU: replaces the least recently used page. Optimal (Belady's): replaces the page whose next use is farthest in the future; unimplementable in practice, used as a theoretical benchmark.
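FIFO's fault-counting behavior can be simulated directly; the reference string below is the classic sequence exhibiting Belady's anomaly, where adding a frame *increases* the fault count:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    resident = deque()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()   # evict the page loaded longest ago
            resident.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # -> 9 faults
print(fifo_faults(refs, 4))  # -> 10 faults (Belady's anomaly)
```

LRU does not exhibit this anomaly, which is one reason LRU-approximating policies dominate in practice.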
Thrashing
Excessive page faults due to insufficient memory. Degrades performance severely.
```
// Pseudocode: Basic LRU Algorithm
initialize empty page frame list
for each page reference:
    if page in frame list:
        move page to front (most recently used)
    else:
        if frame list full:
            remove last page (least recently used)
        add new page to front
```
Swapping
Definition
Temporarily moves entire process memory to disk to free RAM.
Purpose
Enables multiprogramming, increases memory availability.
Swap Space
Dedicated disk area for storing swapped-out processes.
Swapping Overhead
High latency due to disk I/O. Used sparingly to minimize delays.
Interaction with Virtual Memory
Swapping complements paging by moving whole processes; paging moves pages.
Fragmentation
External Fragmentation
Free memory scattered in small blocks, insufficient for new allocations.
Internal Fragmentation
Allocated memory slightly larger than requested. Wasted space inside blocks.
Compaction
Rearranges memory to combine free spaces. Expensive, used in contiguous allocation.
Mitigation Techniques
Paging eliminates external fragmentation entirely. Segmentation remains susceptible; compaction or combining segmentation with paging mitigates it.
Measurement Metrics
Fragmentation ratio: wasted memory / total memory. Used to evaluate allocation efficiency.
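A worked example of the ratio, using internal fragmentation from 4 KB pages (all values illustrative):

```python
def fragmentation_ratio(wasted, total):
    """Fragmentation ratio = wasted memory / total memory, as defined above."""
    return wasted / total

# A process requesting 10 KB with 4 KB pages gets 3 whole pages (12 KB);
# the unused tail of the last page is internal fragmentation.
page_size = 4096
requested = 10 * 1024
pages = -(-requested // page_size)   # ceiling division -> 3 pages
allocated = pages * page_size        # 12288 bytes
wasted = allocated - requested       # 2048 bytes
print(fragmentation_ratio(wasted, allocated))  # -> 2048/12288 ~= 0.167
```

Larger pages would push this ratio up, which is the internal-fragmentation cost of huge pages noted in the paging section.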
Memory Protection
Purpose
Prevent unauthorized memory access, ensure process isolation.
Hardware Support
Base and limit registers, MMU (Memory Management Unit), page tables with protection bits.
Protection Mechanisms
Read, write, execute permissions per page or segment. Access violations cause exceptions.
Shared Memory
Allows controlled sharing between processes via protected memory regions.
Security Implications
Memory protection prevents buffer overflows, code injection, and privilege escalation.
Garbage Collection
Definition
Automatic reclaiming of memory no longer in use by programs.
Types
Reference counting: tracks references, frees when zero. Mark-and-sweep: marks reachable objects, sweeps unmarked.
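A minimal reference-counting sketch (the class and method names are illustrative, not a real runtime's API); note that pure reference counting cannot reclaim cyclic garbage, which is why mark-and-sweep is often used alongside it:

```python
class RefCounted:
    """Toy object whose lifetime is governed by an explicit reference count."""
    def __init__(self, name):
        self.name = name
        self.count = 0
        self.freed = False

    def incref(self):
        self.count += 1

    def decref(self):
        self.count -= 1
        if self.count == 0:
            self.freed = True   # a real collector reclaims the memory here

obj = RefCounted("buffer")
obj.incref()   # two owners each take a reference
obj.incref()
obj.decref()   # first owner releases: count 1, object still live
obj.decref()   # last owner releases: count 0, object freed
print(obj.freed)  # -> True
```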
Generational GC
Divides heap by object age, collects young generation frequently, old occasionally.
Performance Impact
Pauses execution during collection, optimized by incremental and concurrent GC.
Applications
Common in managed languages: Java, C#, functional languages.
```
// Mark-and-Sweep Algorithm Outline
for each object in heap:
    mark object as unvisited
for each root reference:
    recursively mark reachable objects
for each object in heap:
    if not marked:
        free object
```
Performance Considerations
Allocation Speed
Fast allocation algorithms critical for system throughput.
Memory Access Latency
Address translation overhead impacts speed; TLBs (Translation Lookaside Buffers) reduce latency.
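A software model of a small, fully associative TLB with LRU replacement illustrates the hit/miss accounting; the 4-entry capacity and the page table contents are assumptions for illustration:

```python
from collections import OrderedDict

PAGE_SIZE = 4096
page_table = {n: n + 100 for n in range(32)}   # page -> frame (made up)

class TLB:
    """Toy fully associative TLB with LRU eviction."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()            # page -> frame, in LRU order
        self.hits = self.misses = 0

    def lookup(self, page):
        if page in self.entries:
            self.hits += 1
            self.entries.move_to_end(page)      # refresh LRU position
        else:
            self.misses += 1                    # would trigger a page-table walk
            if len(self.entries) == self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[page] = page_table[page]
        return self.entries[page]

tlb = TLB()
for addr in [0, 8, 4096, 0, 4100]:              # pages 0, 0, 1, 0, 1
    tlb.lookup(addr // PAGE_SIZE)
print(tlb.hits, tlb.misses)  # -> 3 2
```

Locality of reference is what makes even a tiny TLB effective: repeated addresses fall on a handful of pages.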
Overhead of Paging and Segmentation
Page table size and management complexity affect system resources.
Fragmentation Impact
Fragmentation reduces usable memory, increases allocation failures.
Trade-offs in Virtual Memory
Balancing page size, replacement algorithms, and working set size optimizes performance.
| Factor | Effect |
|---|---|
| High TLB Hit Rate | Lower average address translation time |
| Page Size | Large pages reduce overhead, increase internal fragmentation |
| Fragmentation | Decreases usable memory, increases allocation failures |
Case Studies and Implementations
Unix/Linux Memory Management
Uses demand paging, swapping, slab allocator for kernel objects, buddy system for page allocation.
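The buddy system's size rounding can be sketched as follows (a simplified model, not the kernel's actual code): each request is rounded up to the next power-of-two multiple of the minimum block, trading internal fragmentation for cheap splitting and coalescing of "buddy" blocks.

```python
def buddy_block_size(request, min_block=4096):
    """Round a request up to the next power-of-two block size."""
    size = min_block
    while size < request:
        size *= 2
    return size

print(buddy_block_size(5000))   # -> 8192
print(buddy_block_size(4096))   # -> 4096
print(buddy_block_size(70000))  # -> 131072
```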
Windows Memory Manager
Combines paging with segmentation features, uses working set model and sophisticated page replacement.
Embedded Systems
Often use static or simple dynamic allocation, limited virtual memory due to hardware constraints.
Java Virtual Machine (JVM)
Implements garbage collection, heap segmentation, generational GC for performance.
Real-Time Systems
Memory management prioritizes predictability, often avoids paging and garbage collection pauses.
References
- Silberschatz, A., Galvin, P. B., & Gagne, G. Operating System Concepts. 9th ed., Wiley, 2013, pp. 201-265.
- Tanenbaum, A. S., & Bos, H. Modern Operating Systems. 4th ed., Pearson, 2015, pp. 123-190.
- Stallings, W. Operating Systems: Internals and Design Principles. 9th ed., Pearson, 2018, pp. 271-330.
- Hennessy, J. L., & Patterson, D. A. Computer Architecture: A Quantitative Approach. 6th ed., Morgan Kaufmann, 2017, pp. 350-400.
- Jones, R., & Lins, R. Garbage Collection: Algorithms for Automatic Dynamic Memory Management. Wiley, 1996, pp. 45-90.