Chapter 4: Memory Management • Basic memory management • Swapping • Virtual memory • Page replacement algorithms
Memory Management • Ideally programmers want memory that is • large • fast • non-volatile • Memory hierarchy • small amount of fast, expensive memory – cache • some medium-speed, medium-priced main memory • gigabytes of slow, cheap disk storage • The memory manager handles the memory hierarchy
Basic Memory Management: Monoprogramming without Swapping or Paging Three simple ways of organizing memory for an operating system with one user process
Multiprogramming with Fixed Partitions • Fixed memory partitions • separate input queues for each partition • single input queue • Disadvantages • with separate queues: the queue for a small partition may be full while the queue for a large partition sits empty (wasted space) • with a single queue: if the largest process that fits is always selected, small processes are deprived
Relocation and Protection • Relocation • Cannot be sure where a program will be loaded in memory • suppose the 1st instruction of a program jumps to absolute address 200 within the executable, and the program is loaded into partition 1 (previous slide) starting at 100K. The jump must actually go to 100K + 200. • This problem is called relocation • Relocation solution • As the program is loaded into memory, modify its addresses accordingly • Address locations are added to the base address of the partition to map to the physical address • Protection • Must keep a program out of other processes' partitions • Relocation at load time alone does not solve the protection problem • Protection solution • Any address larger than the limit value causes an error (trap)
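A minimal sketch of how base and limit registers implement both relocation and protection, assuming the 100K partition from the slide; the partition size (64K) and the check-then-add logic here are illustrative, not taken from any particular hardware:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-process base and limit registers. */
typedef struct {
    uint32_t base;   /* physical start of the partition, e.g. 100K */
    uint32_t limit;  /* size of the partition in bytes             */
} partition_t;

/* Translate a program-relative address to a physical address,
 * trapping (here: exiting) if it falls outside the partition. */
uint32_t translate(const partition_t *p, uint32_t virtual_addr) {
    if (virtual_addr >= p->limit) {          /* protection check */
        fprintf(stderr, "protection fault at %u\n", (unsigned)virtual_addr);
        exit(1);
    }
    return p->base + virtual_addr;           /* relocation */
}

int main(void) {
    partition_t part = { 100 * 1024, 64 * 1024 };   /* partition at 100K, assumed 64K long */
    printf("jump 200 -> physical %u\n", (unsigned)translate(&part, 200));
    return 0;
}
```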
Swapping and Virtual Memory • Sometimes there is not enough main memory to hold all the currently active processes • So excess processes must be kept on disk and brought in to run dynamically • Two approaches • Swapping: • bring in each process entirely, run it for a while, then put it back on the disk • Virtual memory: • bring in each process partially, not entirely
Swapping (1) • Memory allocation changes as • processes come into memory and leave memory • No fixed partitioning of memory as before: the number, location and size of partitions vary dynamically • Higher utilization of memory, but allocation and de-allocation of memory become more complicated
Swapping (2) • Allocating space for growing data segment • Allocating space for growing stack & data segment
Memory Management for Dynamic Allocation (3) • When memory is assigned dynamically, the OS must keep track of it. Two ways: • with bitmaps • with linked lists
Memory Management with Bitmaps and Linked Lists (4) • Bitmap • 1 = allocated block • 0 = free block • Linked list • P = process allocation • H = hole
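A sketch of the linked-list representation with a first-fit search, assuming memory is tracked in fixed allocation units; the node layout and the decision not to split leftover holes are simplifications made for illustration:

```c
#include <stddef.h>
#include <stdbool.h>

/* One segment of memory: either a process allocation (P) or a hole (H). */
typedef struct segment {
    bool is_hole;          /* true = H, false = P          */
    size_t start;          /* starting allocation unit     */
    size_t length;         /* length in allocation units   */
    struct segment *next;  /* list kept in address order   */
} segment_t;

/* First fit: walk the list and take the first hole that is big enough;
 * return NULL if no hole fits the request. */
segment_t *first_fit(segment_t *list, size_t request) {
    for (segment_t *s = list; s != NULL; s = s->next) {
        if (s->is_hole && s->length >= request) {
            s->is_hole = false;
            /* A real allocator would split the leftover part of the hole
             * into a new node; omitted to keep the sketch short. */
            return s;
        }
    }
    return NULL;
}
```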
Virtual Memory (1) • The basic idea behind virtual memory is that the combined size of a program, its data, and its stack may exceed the amount of physical memory available for it • The OS keeps the parts of the program currently in use in main memory, and the rest on the disk • For example • a 16 MB program can run on a machine with only 4 MB of memory • Virtual memory also allows many programs to be in memory at once • Most virtual memory systems use a technique called paging
Paging (2) The relation between virtual addresses and physical memory addresses is given by the page table
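A toy translation routine showing how a virtual address is split into a page number and an offset and mapped through the page table; the 4 KB page size, the 16-page address space, and the particular page-to-frame mapping below are assumed example values, not from the slides:

```c
#include <stdint.h>

#define PAGE_SIZE  4096u           /* assumed 4 KB pages          */
#define NUM_PAGES  16u             /* tiny address space for demo */

/* page_table[p] holds the frame number for virtual page p,
 * or -1 if the page is not present in memory (page fault). */
static int page_table[NUM_PAGES] = { 2, 1, 6, 0, 4, 3, -1, -1,
                                     -1, 5, -1, 7, -1, -1, -1, -1 };

/* Split the virtual address into page number and offset, then
 * substitute the frame number taken from the page table. */
int32_t virt_to_phys(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                            /* page fault */
    return (int32_t)((uint32_t)page_table[page] * PAGE_SIZE + offset);
}
```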
TLBs – Translation Lookaside Buffers (3) A TLB to speed up paging
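A rough sketch of what the TLB buys us: a small associative table is searched before the page table, so most references avoid the page-table walk. The 8-entry size and the linear search standing in for parallel hardware lookup are simplifications for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 8   /* small, fully associative TLB for illustration */

typedef struct {
    bool     valid;
    uint32_t page;      /* virtual page number   */
    uint32_t frame;     /* physical frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Return true and fill *frame on a TLB hit; on a miss the caller falls
 * back to the page table and refills one TLB slot with the result. */
bool tlb_lookup(uint32_t page, uint32_t *frame) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return true;            /* hit: no page-table walk needed */
        }
    }
    return false;                   /* miss */
}
```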
Page Replacement Algorithms • A page fault forces a choice • which page must be removed • to make room for the incoming page • A modified page must first be saved to disk • an unmodified one is just overwritten • Better not to choose an often-used page • it will probably need to be brought back in soon
Optimal Page Replacement Algorithm • Replace the page that will be needed at the farthest point in the future • Optimal but unrealizable (requires knowing future references) • Can be estimated by logging page use on previous runs of the process, although this is impractical
Not Recently Used Page Replacement Algorithm • Each page has a Referenced (R) bit and a Modified (M) bit • bits are set when the page is referenced or modified; R is cleared periodically by the clock interrupt • Pages are classified • 0: not referenced, not modified • 1: not referenced, modified (R was cleared after the page was written) • 2: referenced, not modified • 3: referenced, modified • NRU removes a page at random • from the lowest-numbered non-empty class
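A small sketch of NRU's victim selection under the classification above; to keep it deterministic, this version takes the first page found in the lowest non-empty class rather than a random one, which is a simplification of the algorithm as stated:

```c
#include <stdbool.h>

typedef struct {
    bool referenced;   /* R bit, cleared periodically by the clock interrupt */
    bool modified;     /* M bit, set when the page is written                */
} page_t;

/* Class 0: R=0,M=0   Class 1: R=0,M=1   Class 2: R=1,M=0   Class 3: R=1,M=1 */
int nru_class(const page_t *p) {
    return (p->referenced ? 2 : 0) + (p->modified ? 1 : 0);
}

/* Pick a victim from the lowest-numbered non-empty class
 * (first match instead of a random one, for simplicity). */
int nru_victim(const page_t *pages, int n) {
    for (int cls = 0; cls <= 3; cls++)
        for (int i = 0; i < n; i++)
            if (nru_class(&pages[i]) == cls)
                return i;
    return -1;   /* no pages at all */
}
```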
FIFO Page Replacement Algorithm • Maintain a linked list of all pages • in the order they came into memory • The page at the head of the list (the oldest) is replaced • Disadvantage • a heavily used page may be evicted simply because it is old
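A minimal FIFO sketch using a circular index over a fixed set of frames instead of a linked list; the 4-frame size is an assumed example value:

```c
#define NUM_FRAMES 4        /* assumed number of physical frames */

static int frames[NUM_FRAMES];  /* frames[i] = page currently in frame i */
static int oldest = 0;          /* index of the frame loaded longest ago */

/* On a page fault, evict the page that has been in memory longest,
 * regardless of how heavily it is used (FIFO's weakness). */
int fifo_choose_victim(void) {
    int victim = oldest;
    oldest = (oldest + 1) % NUM_FRAMES;   /* next-oldest becomes the head */
    return victim;
}
```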
Second Chance Page Replacement Algorithm • Operation of second chance • pages are kept in FIFO order (numbers above pages are loading times) • if a fault occurs at time 20 and page A, the oldest, has its R bit set, A is not evicted: its R bit is cleared and it is moved to the end of the list as if just loaded; the search then continues with the next-oldest page
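A sketch of second chance on a singly linked FIFO list; the node layout and the head/tail handling are illustrative assumptions, and the caller is assumed to replace the returned victim with the incoming page:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct page_node {
    int  page;
    bool r_bit;                 /* set by hardware on every reference */
    struct page_node *next;     /* list kept in FIFO order, oldest first */
} page_node_t;

/* Inspect the oldest page. If its R bit is set, clear it and move the page
 * to the tail (as if just loaded); otherwise evict it. Degenerates to FIFO
 * when every page has its R bit set. */
page_node_t *second_chance_victim(page_node_t **head, page_node_t **tail) {
    for (;;) {
        page_node_t *old = *head;
        if (old == *tail)               /* only one page left: evict it   */
            return old;
        if (!old->r_bit)
            return old;                 /* old and unreferenced: evict    */
        old->r_bit = false;             /* give it a second chance        */
        *head = old->next;              /* move it to the tail of the list */
        old->next = NULL;
        (*tail)->next = old;
        *tail = old;
    }
}
```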
Least Recently Used (LRU) • Assume pages used recently will be used again soon • throw out the page that has been unused for the longest time • Must keep a linked list of pages • most recently used at the front, least recently used at the rear • the list must be updated on every memory reference (expensive!) • Alternatively, keep a counter in each page table entry • choose the page with the lowest counter value • periodically zero the counters
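A sketch of the counter-based approximation from the slide (a software counter per page, periodically zeroed); the 8-page size is an assumed example, and in real hardware the "touch on every reference" step would be done by the MMU or approximated with the R bit:

```c
#define N_PAGES 8   /* tiny example address space */

static unsigned counters[N_PAGES];   /* one software counter per page */

/* Called on every reference to a page (approximated in practice). */
void touch(int page) { counters[page]++; }

/* Periodically clear all counters so that old references stop counting. */
void reset_counters(void) {
    for (int i = 0; i < N_PAGES; i++) counters[i] = 0;
}

/* Victim = page with the lowest counter, i.e. the least recently/heavily
 * used page since the last reset. */
int lru_victim(void) {
    int victim = 0;
    for (int i = 1; i < N_PAGES; i++)
        if (counters[i] < counters[victim]) victim = i;
    return victim;
}
```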
Page Size (1) Small page size • Advantages • less internal fragmentation • less unused program in memory • Disadvantages • programs need many pages, larger page tables
Page Size (2) • Overhead due to page table space and internal fragmentation: overhead = s·e/p + p/2 • where • s = average process size in bytes • p = page size in bytes • e = size of a page table entry in bytes • Overhead is minimized when p = √(2se)
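A short derivation of the optimum, followed by a worked example with assumed numbers (s = 1 MB, e = 8 bytes):

```latex
\text{overhead} = \underbrace{\frac{s\,e}{p}}_{\text{page table space}}
                + \underbrace{\frac{p}{2}}_{\text{internal fragmentation}},
\qquad
\frac{d}{dp}\,\text{overhead} = -\frac{s\,e}{p^{2}} + \frac{1}{2} = 0
\;\Rightarrow\; p = \sqrt{2\,s\,e}
```

With s = 1 MB = 2^20 bytes and e = 8 bytes, p = √(2 · 2^20 · 8) = √(2^24) = 4096 bytes, i.e. a 4 KB page is optimal for these assumed values.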
Page Fault Handling (1) 1. Hardware traps to the kernel 2. OS determines which virtual page is needed 3. Registers are saved 4. OS checks the validity of the address and seeks a free page frame 5. If the selected frame is dirty, it is written to disk first
Page Fault Handling (2) 6. OS brings the new page into memory from disk 7. Page tables are updated 8. Faulting instruction is backed up to the state it had when it began 9. Faulting process is scheduled 10. Registers are restored 11. Program continues