Introduction

Working set model: technique to manage memory by tracking a process's active pages. Objective: reduce page faults and thrashing. Basis: locality of reference principle. Context: virtual memory systems in modern OS. Importance: optimizes memory utilization, improves system throughput.

"The working set model provides a foundation for understanding the dynamic memory needs of processes." -- Peter J. Denning

Background and Motivation

Memory Management Challenges

Problem: limited physical memory versus large virtual address space. Consequence: page faults, delays, overhead. Need: dynamic control of memory allocation per process.

Paging and Page Faults

Paging: divides memory into fixed-size pages. Page fault: occurs if page not in memory. Excessive faults: thrashing, system slowdown. Solution: control set of pages resident per process.

Early Models

Static allocation: fixed frames per process, inefficient. Demand paging: loads pages on fault, reactive. Working set model: proactive, based on recent usage.

Definition of Working Set

Conceptual Definition

Working set (WS): set of pages a process actively uses during a time interval τ (window). Reflects current locality. Variable size, changes over time.

Mathematical Representation

WS(t, ๐œ) = {pages referenced by process in interval (t - ๐œ, t)}. Size denoted WSS(t, ๐œ). ๐œ controls recency window.

Significance

WS approximates minimal set of pages to avoid faults. Helps allocate minimal memory without performance loss.

Locality of Reference

Temporal Locality

Definition: recently accessed pages likely accessed again soon. Basis for τ window in WS.

Spatial Locality

Definition: pages near recently accessed ones likely to be used. Influences page replacement and prefetching.

Locality and Working Set

Working set tracks temporal locality explicitly. Spatial locality assists by clustering pages in WS.

Working Set Algorithm

Algorithm Overview

Periodically calculates WS of each process. Allocates frames equal to WS size. Page replacement based on pages outside WS.

Steps

1. Track references over the τ interval.
2. Identify WS(t, τ).
3. Assign memory frames accordingly.
4. Evict pages not in WS when needed.
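The steps above can be sketched as a single allocation pass. This is a hedged illustration, not an OS implementation: the per-process reference records and the dict layout are assumptions for the example.

```python
def working_set_allocation(processes, t, tau):
    """One pass of the four steps above.

    processes: dict mapping pid -> list of (time, page) reference records.
    Returns per-process frame counts and the pages to keep resident.
    """
    allocation = {}
    for pid, refs in processes.items():
        # Steps 1-2: references within (t - tau, t] form WS(t, tau)
        ws = {page for (time, page) in refs if t - tau < time <= t}
        # Step 3: assign one frame per working-set page
        allocation[pid] = {"frames": len(ws), "resident": ws}
        # Step 4 (evicting pages outside WS) happens on memory pressure
    return allocation

procs = {1: [(1, "A"), (3, "B"), (8, "A")], 2: [(2, "C"), (9, "D")]}
print(working_set_allocation(procs, t=10, tau=5))
```

In a real kernel the reference records come from hardware reference bits rather than an explicit log; the structure of the pass is the same.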

Advantages

Reduces thrashing by maintaining active pages. Dynamic adjustment to process needs.

Parameters and Window Size

Choice of ๐œ (Window Size)

Small ๐œ: WS small, frequent page faults. Large ๐œ: WS large, excessive memory use. Balance critical for performance.

Impact on System Behavior

๐œ affects responsiveness to locality shifts. Adaptive ๐œ improves accuracy.

Measurement Techniques

Reference bits, time stamps, counters used to approximate WS.
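One common approximation combines reference bits with a shifted history word, sampled on a periodic scan. This is a sketch under assumed structures (the page dicts and field names are hypothetical), not a description of any particular kernel:

```python
def sample_reference_bits(pages, history_bits=4):
    """One sampling tick: shift each page's reference bit into a history
    word, then clear the bit (as an OS would on a periodic scan)."""
    for page in pages:
        page["history"] = ((page["history"] >> 1)
                           | (page["referenced"] << (history_bits - 1)))
        page["referenced"] = 0  # hardware sets this again on the next access

def in_working_set(page):
    # Referenced in any of the last few sampling intervals -> treat as in WS
    return page["history"] != 0

pages = [{"referenced": 1, "history": 0}, {"referenced": 0, "history": 0}]
sample_reference_bits(pages)
print([in_working_set(p) for p in pages])
```

The sampling interval plays the role of τ: a page drops out of the approximate working set once its bit has been clear for history_bits consecutive scans.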

Thrashing and Its Prevention

Definition of Thrashing

Excessive paging causing low CPU utilization. Occurs when total WS sizes exceed available memory.

Working Set Role

Maintains minimal resident pages. Prevents thrashing by controlling allocation.

Detection and Reaction

Detect thrashing via high page fault rate. OS can reduce multiprogramming level or adjust τ.
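A minimal sketch of that reaction, with illustrative thresholds (the fault-rate numbers and the suspend-the-heaviest-faulter policy are assumptions, not a standard):

```python
def react_to_thrashing(fault_rates, high_water=50):
    """If the aggregate page-fault rate is too high, pick a process to
    suspend, lowering the multiprogramming level.

    fault_rates: dict pid -> recent fault rate (faults/sec, illustrative).
    """
    total = sum(fault_rates.values())
    if total <= high_water:
        return None  # no thrashing detected
    # Suspend the heaviest faulter; its frames are freed for the others
    return max(fault_rates, key=fault_rates.get)

print(react_to_thrashing({1: 10, 2: 45, 3: 20}))  # suspends process 2
```

Suspending one process shrinks the sum of resident working sets below physical memory, which is exactly the condition the WS model says must hold to avoid thrashing.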

Memory Allocation Strategies

Fixed Allocation

Static number of frames per process. Ignores dynamic WS size, inefficient.

Proportional Allocation

Frames distributed based on process size. Partial solution, no WS tracking.

Working Set Based Allocation

Frames allocated to match WS size. Dynamic, efficient memory use.

Strategy                 | Description               | Advantages                   | Disadvantages
Fixed Allocation         | Static frames per process | Simple implementation        | Inefficient memory use
Proportional Allocation  | Based on process size     | Better than fixed            | Ignores dynamic locality
Working Set Based        | Allocates per WS size     | Efficient, reduces thrashing | Complex implementation

Performance Implications

Page Fault Rate Reduction

WS model minimizes faults by maintaining active pages. Directly improves CPU utilization.

Throughput and Response Time

Reduced thrashing improves throughput. Faster response due to fewer delays.

Overhead Considerations

Tracking WS requires extra memory and CPU cycles. Trade-off between accuracy and overhead.

Comparison to Other Models

LRU (Least Recently Used)

LRU approximates WS by evicting least recently used pages. WS more precise but costlier.

Page Fault Frequency (PFF)

PFF monitors the page fault rate to adjust allocation. Reactive, in contrast to the proactive WS approach.
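The reactive character of PFF fits in a few lines. The upper/lower thresholds and step size below are illustrative assumptions:

```python
def pff_adjust(frames, fault_rate, low=5, high=25, step=1):
    """Page Fault Frequency: grow the allocation when faults are frequent,
    shrink it when they are rare. Thresholds are illustrative."""
    if fault_rate > high:
        return frames + step          # process needs more memory
    if fault_rate < low:
        return max(1, frames - step)  # reclaim an underused frame
    return frames                     # rate acceptable, leave it alone

print(pff_adjust(frames=8, fault_rate=30))  # grows to 9
print(pff_adjust(frames=8, fault_rate=2))   # shrinks to 7
```

Note the contrast: PFF only reacts after faults have occurred, whereas the WS model sizes the allocation from observed references before eviction pressure forces faults.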

Working Set vs Clock Algorithm

Clock algorithm approximates LRU with less overhead. WS offers better locality tracking.
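For comparison, the clock (second-chance) eviction step is cheap enough to sketch directly. The frame-list layout and field names here are hypothetical:

```python
def clock_evict(frames, hand=0):
    """Second-chance (clock) eviction: sweep past referenced pages,
    clearing their bits; evict the first unreferenced page found.

    frames: list of dicts with 'page' and 'referenced' keys.
    Returns (evicted page, new hand position).
    """
    while True:
        frame = frames[hand]
        if frame["referenced"]:
            frame["referenced"] = 0          # give a second chance
            hand = (hand + 1) % len(frames)
        else:
            victim = frame["page"]
            hand = (hand + 1) % len(frames)
            return victim, hand

frames = [{"page": "A", "referenced": 1},
          {"page": "B", "referenced": 0},
          {"page": "C", "referenced": 1}]
print(clock_evict(frames))  # evicts 'B'
```

The clock needs only one bit per frame and no timestamps, which is why it is cheaper than true WS tracking; the price is that it cannot distinguish a page referenced once from one in the core working set.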

Implementation Considerations

Hardware Support

Reference bits and time stamps assist WS tracking. Lack of hardware makes WS costly.

Software Algorithms

Uses sampling, counters, and periodic scans. Approximations balance overhead and accuracy.

Adaptive Techniques

Dynamic adjustment of ๐œ and allocation based on system state improves efficiency.

Algorithm WorkingSetPageReplacement:
    Input: Process P, current time t, window size τ
    For each page p in P:
        If p referenced in (t - τ, t):
            Mark p as in working set
    Allocate frames to P equal to size of working set
    Replace pages not in working set if memory is needed

Case Studies and Applications

UNIX Operating Systems

Early WS implementation in UNIX kernel reduced thrashing. Influenced later systems.

Windows Memory Management

Windows uses similar locality concepts and adaptive WS-like mechanisms to optimize memory.

Embedded Systems

WS model adapted for constrained memory environments to balance performance and resources.

System   | WS Implementation                  | Outcome
UNIX     | Kernel-level WS tracking           | Reduced thrashing, improved throughput
Windows  | Adaptive locality-based allocation | Balanced memory usage, responsiveness
Embedded | Simplified WS heuristics           | Optimized limited memory

References

  • P. J. Denning, "The Working Set Model for Program Behavior," Communications of the ACM, vol. 11, no. 5, 1968, pp. 323-333.
  • A. Silberschatz, P. B. Galvin, G. Gagne, Operating System Concepts, 9th ed., Wiley, 2013, pp. 110-125.
  • M. J. Bach, The Design of the UNIX Operating System, Prentice Hall, 1986, pp. 200-215.