Virtual Memory: Page Replacement
Realizing Virtual Memory
• Hardware support
  • Memory Management Unit (MMU): address translation, bits, interrupts
• Operating system support
  • Page replacement policy
  • Resident set management
  • Load control
    • degree of multiprogramming
Page Replacement Policy
• Resident set maintenance
  • Fixed or variable allocation
  • Per-process or global replacement
• Page replacement problem
  • A fixed number of frames, M, is used to map the pages of the process' virtual memory
  • Which page should be replaced when a page fault occurs and all M frames are occupied?
Requirements and Metrics
• Workload: a sequence of virtual memory references (page numbers)
• Page fault rate = #page faults / #memory references
• Minimize the page fault rate for workloads obeying the principle of locality
• Keep hardware/software overhead as small as possible
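To make the metric concrete, here is a minimal simulator sketch (the names simulate and random_victim are illustrative, not from the slides); the policies on the following slides can be plugged in as the choose_victim callback.

```python
import random

def simulate(trace, num_frames, choose_victim):
    """Replay a reference trace against num_frames frames and return the
    page fault rate (#page faults / #memory references).

    choose_victim(frames, page) is the pluggable replacement policy: given the
    resident pages and the faulting page, it returns the frame index to replace.
    """
    frames, faults = [], 0
    for page in trace:
        if page in frames:
            continue                     # hit: the page is already resident
        faults += 1                      # miss: page fault
        if len(frames) < num_frames:
            frames.append(page)          # a free frame is still available
        else:
            frames[choose_victim(frames, page)] = page
    return faults / len(trace)

def random_victim(frames, page):
    """Placeholder policy: evict a randomly chosen resident page."""
    return random.randrange(len(frames))

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(simulate(trace, 3, random_victim))
```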
Algorithms
• Optimal (OPT)
• Least Recently Used (LRU)
• First-In-First-Out (FIFO)
• Clock
Optimal Policy (OPT)
• Replace the page whose next reference lies furthest in the future
• Impossible to implement
  • Why? (it requires knowing the future reference string)
• Serves as a baseline for other algorithms
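A sketch of how OPT can be evaluated offline, when the complete reference string is available in advance, which is precisely the knowledge a running system does not have (opt_faults is an illustrative name):

```python
def opt_faults(trace, num_frames):
    """Count page faults under the optimal (OPT) policy.

    On a fault with all frames full, evict the resident page whose next
    reference lies furthest in the future (or that is never referenced again).
    """
    frames, faults = [], 0
    for i, page in enumerate(trace):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue

        def next_use(p):
            """Index of the next reference to p, or infinity if never used again."""
            try:
                return trace.index(p, i + 1)
            except ValueError:
                return float('inf')

        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(opt_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7 faults
```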
Least Recently Used (LRU)
• Replace the page that has not been referenced for the longest time
• The best approximation of OPT for locality-constrained workloads
• Possible to implement, but infeasible in practice as the overhead is high
  • Why? (exact LRU must record the recency of every memory reference)
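A sketch of an exact LRU count using an ordered dictionary (lru_faults is an illustrative name). The reordering performed on every hit is exactly the per-reference bookkeeping that makes exact LRU too expensive to maintain in hardware:

```python
from collections import OrderedDict

def lru_faults(trace, num_frames):
    """Count page faults under LRU.

    The OrderedDict keeps resident pages ordered from least to most recently
    used: every reference moves the page to the 'recent' end, and on a fault
    with all frames occupied the page at the 'old' end is evicted.
    """
    resident = OrderedDict()
    faults = 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)        # hit: mark as most recently used
            continue
        faults += 1
        if len(resident) >= num_frames:
            resident.popitem(last=False)      # evict the least recently used page
        resident[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```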
First-In-First-Out (FIFO)
• Page frames are organized in a circular buffer with a roving pointer
• Pages are replaced in round-robin style
• When a page fault occurs, replace the page the pointer points to
• Simple to implement, low overhead
• High page fault rate, prone to anomalous behavior
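A minimal FIFO simulation sketch using a circular buffer and a roving pointer, as described above (fifo_faults is an illustrative name):

```python
def fifo_faults(trace, num_frames):
    """Count page faults under FIFO.

    The frames form a circular buffer; the roving pointer marks the oldest
    page, which is replaced on every fault once all frames are occupied.
    """
    frames = [None] * num_frames
    pointer = 0                    # next frame to replace (round-robin)
    faults = 0
    for page in trace:
        if page in frames:
            continue               # hit: FIFO ignores usage, nothing to update
        faults += 1
        frames[pointer] = page     # replace the page the pointer points to
        pointer = (pointer + 1) % num_frames
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```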
Clock (second chance)
• Similar to FIFO but takes page usage into account
• Circular buffer + page use bit
• When a page is referenced: set use_bit = 1
• When a page fault occurs, scan from the pointer:
  • if use_bit == 1: give the page a second chance (set use_bit = 0 and continue the scan)
  • if use_bit == 0: replace the page
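The pseudocode above, turned into a small self-contained simulation sketch (clock_faults is an illustrative name; setting the new page's use bit to 1 on load is an assumption, one common convention):

```python
def clock_faults(trace, num_frames):
    """Count page faults under the clock (second chance) policy.

    Each frame carries a use bit, set on every reference.  On a fault the
    pointer sweeps the circular buffer: frames with use_bit == 1 get a second
    chance (bit cleared, pointer advances); the first frame with use_bit == 0
    is replaced, and the pointer moves past it.
    """
    pages = [None] * num_frames
    use = [0] * num_frames
    pointer, faults = 0, 0
    for page in trace:
        if page in pages:
            use[pages.index(page)] = 1     # reference: set the use bit
            continue
        faults += 1
        while use[pointer] == 1:           # second chance: clear bit, move on
            use[pointer] = 0
            pointer = (pointer + 1) % num_frames
        pages[pointer] = page              # replace the page with use_bit == 0
        use[pointer] = 1                   # the newly loaded page counts as used
        pointer = (pointer + 1) % num_frames
    return faults

print(clock_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```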
Example (figure): page 727 is needed and all frames are occupied. The next-frame pointer is at the frame holding page 45 (use = 1). The scan gives second chances to pages 45 and 191, clearing their use bits, then finds page 556 with use = 0; page 556 is replaced by page 727 and the pointer advances past the replaced frame.
LRU and non-local workloads
• Workload: 1 2 3 4 5 1 2 3 4 5 …
• Typical for array-based applications
• What is the page fault rate for M = 1, …, 5? (for M < 5, LRU faults on every reference; see the sketch below)
• A possible alternative is a Most Recently Used (MRU) replacement policy
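A small experiment sketch on the cyclic workload above (cyclic_fault_rate is an illustrative helper). For M < 5, LRU faults on every reference once the frames are full, while MRU keeps part of the cycle resident and so faults less often:

```python
from collections import OrderedDict

def cyclic_fault_rate(num_frames, policy, cycles=1000):
    """Fault rate of LRU or MRU on the cyclic workload 1 2 3 4 5 1 2 3 4 5 ..."""
    trace = [1, 2, 3, 4, 5] * cycles
    resident = OrderedDict()               # ordered: oldest -> most recent use
    faults = 0
    for page in trace:
        if page in resident:
            resident.move_to_end(page)     # hit: becomes most recently used
            continue
        faults += 1
        if len(resident) >= num_frames:
            # LRU evicts the least recently used page, MRU the most recent one.
            resident.popitem(last=(policy == 'MRU'))
        resident[page] = True
    return faults / len(trace)

for m in range(1, 6):
    print(m, cyclic_fault_rate(m, 'LRU'), cyclic_fault_rate(m, 'MRU'))
```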
Belady’s Anomaly
• It is reasonable to expect that, regardless of the workload, the number of page faults should not increase when more frames are added
• This is not true for the FIFO policy
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
  • With 3 frames, FIFO incurs 9 page faults; with 4 frames, it incurs 10
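The two fault counts can be reproduced with a few lines of simulation; this sketch redefines the same illustrative fifo_faults helper so it stands alone:

```python
def fifo_faults(trace, num_frames):
    """FIFO page fault count with a circular buffer and roving pointer."""
    frames, pointer, faults = [None] * num_frames, 0, 0
    for page in trace:
        if page not in frames:
            faults += 1
            frames[pointer] = page
            pointer = (pointer + 1) % num_frames
    return faults

w = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(w, 3))   # 9 page faults
print(fifo_faults(w, 4))   # 10 page faults: more frames, more faults
```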
Clock algorithm with 2 bits
• Use the “modified” bit to evict unmodified (clean) pages in preference to modified (dirty) pages
• Four classes:
  • u=0, m=0: not recently used, clean
  • u=0, m=1: not recently used, dirty
  • u=1, m=0: recently used, clean
  • u=1, m=1: recently used, dirty
Clock algorithm with 2 bits
• First scan: look for a (0,0) frame; do not change the use bits
  • If a (0,0) frame is found, replace it
• Second scan: look for a (0,1) frame, setting the use bit to 0 in each frame bypassed
  • If a (0,1) frame is found, replace it
• If both scans fail, repeat the above procedure
  • This time we will certainly find something
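A sketch of the two-pass victim selection just described; the frame representation (dicts with 'use' and 'modified' keys) and the class name TwoBitClock are illustrative, not from any real kernel:

```python
class TwoBitClock:
    """Victim selection for the clock policy with use and modified bits.

    Pass 1 looks for a (u=0, m=0) frame without touching use bits; pass 2 looks
    for a (u=0, m=1) frame and clears the use bit of every frame it bypasses;
    if both fail, the scan repeats and is then guaranteed to find a victim.
    """
    def __init__(self, frames):
        self.frames = frames
        self.ptr = 0

    def select_victim(self):
        n = len(self.frames)
        while True:
            # Pass 1: clean and not recently used; use bits left unchanged.
            for i in range(n):
                j = (self.ptr + i) % n
                f = self.frames[j]
                if f['use'] == 0 and f['modified'] == 0:
                    self.ptr = (j + 1) % n
                    return j
            # Pass 2: dirty and not recently used; clear use bits as we pass.
            for i in range(n):
                j = (self.ptr + i) % n
                f = self.frames[j]
                if f['use'] == 0 and f['modified'] == 1:
                    self.ptr = (j + 1) % n
                    return j
                f['use'] = 0
            # All frames had use == 1; their bits are now 0, so retry.

frames = [{'use': 1, 'modified': 1}, {'use': 1, 'modified': 0},
          {'use': 0, 'modified': 1}]
print(TwoBitClock(frames).select_victim())   # frame 2: a (0,1) frame, found in pass 2
```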
Page buffering
• Evicted pages are kept on two lists:
  • free and modified page lists
• Pages are read into the frames on the free page list
• Pages are written to disk in large chunks from the modified page list
• If an evicted page is referenced, and it is still on one of the lists, it is made valid at a very low cost
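A rough sketch of the two lists, assuming an illustrative PageBuffer class that tracks page numbers only; a real kernel would keep frame descriptors and their contents, not just numbers:

```python
from collections import deque

class PageBuffer:
    """Page buffering with a free page list and a modified page list."""
    def __init__(self):
        self.free_list = deque()       # evicted clean pages (frames reusable)
        self.modified_list = deque()   # evicted dirty pages awaiting write-out

    def evict(self, page, dirty):
        """Evicted pages keep their frame contents and join one of the lists."""
        (self.modified_list if dirty else self.free_list).append(page)

    def reference(self, page):
        """Return True if the page was reclaimed from a list at low cost."""
        for lst in (self.free_list, self.modified_list):
            if page in lst:
                lst.remove(page)       # made valid again, no disk read needed
                return True
        return False

    def flush_modified(self):
        """Write all dirty pages out in one large chunk, then free their frames."""
        while self.modified_list:
            self.free_list.append(self.modified_list.popleft())
```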
Page Buffering (figure): normal frames (N) and buffered frames (B). On a page fault, page 55 is needed and page 22 is evicted from the normal frames to the buffer, from which it can be reclaimed cheaply if referenced again.
Resident set management
• With multiprogramming, a fixed number of memory frames is shared among multiple processes
• How should the frames be partitioned among the active processes?
• The resident set is the set of process pages currently allocated to memory frames
Global page replacement
• All memory frames are candidates for page eviction
• A faulting process may evict a page of another process
• Automatically adjusts resident set sizes to the processes’ current needs
• Problem: can steal frames from the “wrong” processes
  • Leads to thrashing
Local page replacement
• Only the memory frames of a faulting process are candidates for replacement
• Dynamically adjust the process allocation:
  • Working set model
  • Page-Fault Frequency (PFF) algorithm
The working set model [Denning’68]
• The working set W(t, Δ) is the set of pages referenced in the most recent Δ page references
• The working set is an approximation of the program locality
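The definition written out as an offline sketch (working_set is an illustrative name); a real system can only approximate it, for example by sampling use bits over an interval:

```python
def working_set(trace, t, delta):
    """W(t, delta): the set of distinct pages referenced in the last delta
    references up to and including time t (times are 0-based indices)."""
    start = max(0, t - delta + 1)
    return set(trace[start:t + 1])

trace = [1, 2, 1, 3, 1, 2, 4, 4, 4, 4, 1]
print(working_set(trace, t=6, delta=4))   # references at times 3..6 -> {1, 2, 3, 4}
```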
The working set strategy
• Monitor the working set for each currently active process
• Adjust the number of pages assigned to each process according to its working set size
• Monitoring the working set exactly is impractical
• The optimal value of Δ is unknown and would vary over time
Page-Fault Frequency (PFF)
• Approximate the working set by tracking the page-fault frequency:
  • Count all memory references for each active process
  • When a page fault occurs, compare the current counter value with the counter value recorded at the previous page fault of the faulting process
  • If the difference is less than a threshold F, expand the WS; otherwise, shrink the WS by discarding pages with use_bit == 0
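A sketch of the decision taken at each page fault; the function and parameter names are illustrative:

```python
def pff_on_fault(faulting_page, ref_counter, last_fault_counter,
                 resident, use_bits, F):
    """One Page-Fault Frequency decision, made when a page fault occurs.

    ref_counter        -- memory references made by the process so far
    last_fault_counter -- counter value recorded at the previous fault
    resident           -- set of resident pages of the process
    use_bits           -- dict page -> use bit (0/1) for resident pages
    F                  -- threshold on the number of references between faults
    """
    if ref_counter - last_fault_counter < F:
        # Faults are frequent: the resident set is too small, so just expand.
        resident.add(faulting_page)
        use_bits[faulting_page] = 1
    else:
        # Faults are rare: shrink by discarding pages with use_bit == 0, then
        # reset the remaining use bits to measure usage over the next interval.
        for page in [p for p in resident if use_bits[p] == 0]:
            resident.discard(page)
            del use_bits[page]
        for page in use_bits:
            use_bits[page] = 0
        resident.add(faulting_page)
        use_bits[faulting_page] = 1
    return ref_counter        # caller stores this as the new last_fault_counter

# Illustrative call: 100 references since the last fault with F = 50 -> shrink.
resident, use_bits = {1, 2, 3}, {1: 1, 2: 0, 3: 1}
pff_on_fault(4, ref_counter=120, last_fault_counter=20,
             resident=resident, use_bits=use_bits, F=50)
print(resident)               # page 2 (use bit 0) dropped, page 4 added
```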
Swapping
• If a faulting process cannot expand its working set (all frames are occupied), some process should be swapped out
• The decision to swap processes in/out is the responsibility of the long/medium term scheduler
• Another reason: not enough memory to run a new process
Long (medium) term scheduling
• Controls multiprogramming level
• Decision of which processes to swap out/in is based on
  • CPU usage (I/O bound vs. CPU bound)
  • Page fault rate
  • Priority
  • Size
  • Blocked vs. running
UNIX process states (figure)
• States: created, ready (user/kernel), running (user/kernel), blocked (waiting for an event), ready swapped, blocked swapped, zombie (terminated)
• Transitions: schedule, preempt, system call / interrupt, interrupt return, wait for event, event done, swap out / swap in