
Memory Management


Presentation Transcript


  1. Memory Management Basic memory management Swapping Virtual memory Page replacement algorithms Chapter 4

  2. Memory Management • Ideally programmers want memory that is • large • fast • non-volatile • Memory hierarchy • a small amount of fast, expensive memory – cache • some medium-speed, medium-priced main memory • gigabytes of slow, cheap disk storage • The memory manager handles the memory hierarchy

  3. Basic Memory Management: Monoprogramming without Swapping or Paging • Three simple ways of organizing memory with an operating system and one user process

  4. Multiprogramming with Fixed Partitions • Fixed memory partitions • separate input queues for each partition • a single input queue • Disadvantage of separate queues • the queue for a small partition may be full while the queue for a large partition sits empty • Disadvantage of a single queue • wastage of space: if the largest process is selected, small processes are deprived

  5. Relocation and Protection • Relocation • Cannot be sure where the program will be loaded in memory • suppose the 1st instruction of a program jumps to absolute address 200 within the exe file, and the program is loaded into partition 1 (previous slide), which starts at 100K. The jump must actually go to 100K + 200. • This problem is called relocation • Relocation solution • As the program is loaded into memory, modify its instructions accordingly. • Address locations are added to the base address of the partition to map to the physical address. • Protection • Must keep a program out of other processes' partitions • Relocation during loading may still cause a protection problem • Protection solution • Address locations larger than the limit value are an error
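
A minimal C sketch of the base-and-limit check described on this slide. The partition values and the `partition_t`/`translate` names are invented for illustration: every address the program issues is compared against the limit register and then added to the base register of its partition.

```c
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of base-and-limit relocation/protection. The partition
   values and helper names are invented for illustration. */
typedef struct {
    unsigned base;   /* start of the partition in physical memory */
    unsigned limit;  /* size of the partition in bytes            */
} partition_t;

unsigned translate(const partition_t *p, unsigned virt_addr)
{
    if (virt_addr >= p->limit) {            /* protection: beyond the limit */
        fprintf(stderr, "protection fault at address %u\n", virt_addr);
        exit(EXIT_FAILURE);
    }
    return p->base + virt_addr;             /* relocation: add the base */
}

int main(void)
{
    partition_t part1 = { 100 * 1024, 64 * 1024 };   /* partition 1 at 100K */
    /* The jump to absolute address 200 from the slide lands at 100K + 200. */
    printf("virtual 200 -> physical %u\n", translate(&part1, 200));
    return 0;
}
```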

  6. Swapping and Virtual Memory • Sometimes there is not enough main memory to hold all the currently active processes. • The excess processes must be kept on disk and brought in to run dynamically • Two approaches • Swapping: • bring in each process entirely, run it for a while, then put it back on the disk • Virtual memory: • bring in each process partially, not entirely

  7. Swapping (1) • Memory allocation changes as processes come into memory and leave memory • There is no fixed partitioning of memory as before; the number, location and size of partitions vary dynamically • This gives high utilization of memory, but it complicates allocation and de-allocation of memory

  8. Swapping (2) • Allocating space for growing data segment • Allocating space for growing stack & data segment

  9. Memory Management for Dynamic Allocation (3) • When memory is assigned dynamically, the OS must manage it. Two ways: • With bitmaps • With linked lists

  10. Memory Management with Bitmaps and Linked Lists (4) • Bitmap • 1 = allocated block • 0 = free block • Linked list • P = process allocation • H = hole
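
A small C sketch of the bitmap scheme on this slide. The unit count, the `alloc_units` helper and the first-fit policy are assumptions made for the example: memory is divided into fixed-size allocation units and bit i records whether unit i is allocated (1) or free (0).

```c
#include <stdio.h>
#include <string.h>

/* Bitmap over 64 allocation units; bit i = 1 means unit i is allocated. */
#define UNITS 64
static unsigned char bitmap[UNITS / 8];

static int get_bit(int i) { return (bitmap[i / 8] >> (i % 8)) & 1; }
static void set_bit(int i, int v)
{
    if (v) bitmap[i / 8] |=  (unsigned char)(1u << (i % 8));
    else   bitmap[i / 8] &= (unsigned char)~(1u << (i % 8));
}

/* First-fit search for a run of n free units; returns start unit or -1. */
int alloc_units(int n)
{
    for (int start = 0; start + n <= UNITS; start++) {
        int run = 0;
        while (run < n && !get_bit(start + run)) run++;
        if (run == n) {
            for (int i = 0; i < n; i++) set_bit(start + i, 1);
            return start;
        }
        start += run;                  /* skip past the allocated unit */
    }
    return -1;                         /* no hole large enough */
}

int main(void)
{
    memset(bitmap, 0, sizeof bitmap);
    printf("process A at unit %d\n", alloc_units(5));   /* unit 0 */
    printf("process B at unit %d\n", alloc_units(3));   /* unit 5 */
    return 0;
}
```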

  11. Virtual Memory (1) • The basic idea behind virtual memory is that the combined size of a program, its data and its stack may exceed the amount of physical memory available for it. • The OS keeps the parts of the program currently in use in main memory, and the rest on the disk. • For example • a 16 MB program can run on a machine with only 4 MB of memory • Virtual memory also allows many programs to be in memory at once. • Most virtual memory systems use a technique called paging

  12. Paging (2) The relation between virtual addresses and physical memory addresses is given by the page table
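
An illustrative C sketch of this translation. The 4 KB page size, the table size and the frame numbers are made-up example values: the virtual address is split into a page number and an offset, and the page table maps the page number to a physical frame.

```c
#include <stdio.h>

/* Toy page table: -1 means the page is not present in memory. */
#define PAGE_SIZE 4096u
#define NUM_PAGES 16

static int page_table[NUM_PAGES] = {
    2, 5, -1, -1, -1, -1, -1, -1,    /* page 0 -> frame 2, page 1 -> frame 5 */
    -1, -1, -1, -1, -1, -1, -1, -1
};

long translate(unsigned virt)
{
    unsigned page   = virt / PAGE_SIZE;      /* upper bits: virtual page */
    unsigned offset = virt % PAGE_SIZE;      /* lower bits: offset       */
    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                           /* page fault               */
    return (long)page_table[page] * PAGE_SIZE + offset;
}

int main(void)
{
    printf("virtual 0x1234 -> physical 0x%lx\n",
           (unsigned long)translate(0x1234u));          /* page 1 -> frame 5 */
    printf("virtual 0x9000 -> %ld (page fault)\n", translate(0x9000u));
    return 0;
}
```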

  13. TLBs – Translation Lookaside Buffers (3) A TLB to speed up paging
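
A rough C sketch of a tiny, fully associative TLB. The size, the round-robin eviction policy and the `tlb_lookup`/`tlb_insert` helpers are assumptions for illustration: the TLB is consulted first, and only on a miss does the hardware walk the page table.

```c
#include <stdio.h>

#define TLB_SIZE 4

typedef struct { int valid; unsigned page; unsigned frame; } tlb_entry;
static tlb_entry tlb[TLB_SIZE];

/* Returns the frame on a TLB hit, -1 on a miss (caller walks the table). */
int tlb_lookup(unsigned page)
{
    for (int i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return (int)tlb[i].frame;
    return -1;
}

void tlb_insert(unsigned page, unsigned frame)
{
    static int next = 0;                    /* simple round-robin eviction */
    tlb[next] = (tlb_entry){ 1, page, frame };
    next = (next + 1) % TLB_SIZE;
}

int main(void)
{
    tlb_insert(1, 5);
    printf("page 1 -> frame %d (hit)\n", tlb_lookup(1));
    printf("page 7 -> %d (miss)\n",      tlb_lookup(7));
    return 0;
}
```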

  14. Page Replacement Algorithms • Page fault forces choice • which page must be removed • make room for incoming page • Modified page must first be saved • unmodified just overwritten • Better not to choose an often used page • will probably need to be brought back in soon

  15. Optimal Page Replacement Algorithm • Replace the page that will not be needed until the farthest point in the future • Optimal but unrealizable, since the OS cannot know future references • Estimate by • logging page use on previous runs of the process • although this is impractical
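
A C sketch of the optimal choice over a known reference string (the frame contents, the reference string and the `pick_victim` helper are invented for the example). As the slide notes, this only works in a simulator, because a real OS cannot see future references.

```c
#include <stdio.h>

/* Evict the resident page whose next use lies farthest in the future. */
int pick_victim(const int frames[], int nframes,
                const int refs[], int nrefs, int now)
{
    int victim = 0, farthest = -1;
    for (int f = 0; f < nframes; f++) {
        int next = nrefs;                    /* "never referenced again" */
        for (int t = now + 1; t < nrefs; t++)
            if (refs[t] == frames[f]) { next = t; break; }
        if (next > farthest) { farthest = next; victim = f; }
    }
    return victim;                           /* index of frame to replace */
}

int main(void)
{
    int frames[] = { 2, 3, 1 };
    int refs[]   = { 2, 3, 1, 4, 2, 3 };     /* fault at time 3 (page 4)  */
    /* Page 1 is never used again, so its frame (index 2) is the victim.  */
    printf("evict frame %d\n", pick_victim(frames, 3, refs, 6, 3));
    return 0;
}
```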

  16. Not Recently Used (NRU) Page Replacement Algorithm • Each page has a Referenced (R) bit and a Modified (M) bit • the bits are set when the page is referenced or modified • Pages are classified into four classes • 0: not referenced, not modified • 1: not referenced, modified • 2: referenced, not modified • 3: referenced, modified • NRU removes a page at random from the lowest-numbered non-empty class
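
A short C sketch of NRU victim selection (the `page_t` struct and the example R/M bit settings are illustrative): pages are classified by their R and M bits into classes 0-3 and a random page from the lowest-numbered non-empty class is evicted.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

typedef struct { int referenced; int modified; } page_t;

/* Class = 2*R + M, giving the four classes 0..3 from the slide. */
static int nru_class(page_t p) { return (p.referenced << 1) | p.modified; }

int nru_victim(const page_t pages[], int n)
{
    for (int cls = 0; cls <= 3; cls++) {
        int candidates[64], count = 0;
        for (int i = 0; i < n && count < 64; i++)
            if (nru_class(pages[i]) == cls)
                candidates[count++] = i;
        if (count > 0)
            return candidates[rand() % count];  /* random within class */
    }
    return -1;                                  /* no pages at all */
}

int main(void)
{
    srand((unsigned)time(NULL));
    page_t pages[] = { {1,1}, {0,1}, {1,0}, {0,1} };
    /* Lowest non-empty class is 1, so page 1 or page 3 is evicted. */
    printf("evict page %d (class 1)\n", nru_victim(pages, 4));
    return 0;
}
```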

  17. FIFO Page Replacement Algorithm • Maintain a linked list of all pages • in the order they came into memory • The page at the beginning of the list (the oldest) is replaced • Disadvantage • a heavily used page may still be chosen as the victim, as the sketch below shows
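
A small C simulation of FIFO replacement. The three frames and the reference string are chosen to show the disadvantage above: page 1 is heavily used yet still gets evicted because it is the oldest.

```c
#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int frames[NFRAMES] = { -1, -1, -1 };      /* -1 means an empty frame */
    int oldest = 0, faults = 0;
    int refs[] = { 1, 2, 3, 1, 4, 1 };

    for (int t = 0; t < 6; t++) {
        int hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == refs[t]) hit = 1;
        if (!hit) {                            /* page fault              */
            frames[oldest] = refs[t];          /* replace the oldest page */
            oldest = (oldest + 1) % NFRAMES;
            faults++;
        }
    }
    /* Page 1 is evicted at time 4 and faults again at time 5: 5 faults. */
    printf("faults: %d\n", faults);
    return 0;
}
```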

  18. Second Chance Page Replacement Algorithm • Operation of second chance • pages are kept sorted in FIFO order • the figure shows the page list when a fault occurs at time 20 and page A has its R bit set (the numbers above the pages are loading times)

  19. The Clock Page Replacement Algorithm
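
A C sketch of the clock variant of second chance (the frame contents and the `clock_victim` helper are invented for illustration): the frames form a circular list with a "hand" that clears R bits as it advances and evicts the first page it finds with R = 0.

```c
#include <stdio.h>

#define NFRAMES 4

typedef struct { int page; int referenced; } frame_t;

int clock_victim(frame_t frames[], int *hand)
{
    for (;;) {
        if (frames[*hand].referenced == 0) {
            int victim = *hand;                 /* R == 0: evict this page */
            *hand = (*hand + 1) % NFRAMES;
            return victim;
        }
        frames[*hand].referenced = 0;           /* give it a second chance */
        *hand = (*hand + 1) % NFRAMES;          /* advance the hand        */
    }
}

int main(void)
{
    frame_t frames[NFRAMES] = { {10,1}, {11,0}, {12,1}, {13,1} };
    int hand = 0;
    int v = clock_victim(frames, &hand);        /* clears R of page 10,     */
    printf("evict frame %d holding page %d\n",  /* then evicts page 11      */
           v, frames[v].page);
    return 0;
}
```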

  20. Least Recently Used (LRU) • Assume pages used recently will be used again soon • throw out the page that has been unused for the longest time • Must keep a linked list of pages • most recently used at the front, least recently used at the rear • the list must be updated on every memory reference! • Alternatively, keep a counter in each page table entry • choose the page with the lowest counter value • periodically zero the counters
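
A C sketch of one common counter-based approximation (the page count and the `reference`/`lru_victim` helpers are illustrative): a global counter is copied into a page's entry on every reference, and the page with the smallest stored value is the least recently used.

```c
#include <stdio.h>

#define NPAGES 4

static unsigned long last_used[NPAGES];   /* per-page "time of last use" */
static unsigned long now = 0;             /* global reference counter    */

void reference(int page) { last_used[page] = ++now; }

int lru_victim(void)
{
    int victim = 0;
    for (int p = 1; p < NPAGES; p++)
        if (last_used[p] < last_used[victim])
            victim = p;
    return victim;            /* oldest timestamp = least recently used */
}

int main(void)
{
    reference(0); reference(1); reference(2); reference(3);
    reference(0); reference(2);               /* page 1 is now the coldest */
    printf("evict page %d\n", lru_victim());  /* prints 1                  */
    return 0;
}
```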

  21. Page Size (1) Small page size • Advantages • less internal fragmentation • less unused program in memory • Disadvantages • programs need many pages, larger page tables

  22. Page Size (2) • Overhead due to the page table and internal fragmentation: overhead = s·e/p + p/2, where the first term is page table space and the second is internal fragmentation • Overhead is optimized (minimized) when p = √(2·s·e) • where • s = average process size in bytes • p = page size in bytes • e = size of a page table entry in bytes
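
A worked example in C of the formula above. The values s = 1 MB and e = 8 bytes are assumed for illustration, not taken from the slide; with them the optimum comes out to 4 KB, a common real page size.

```c
#include <stdio.h>
#include <math.h>   /* link with -lm */

int main(void)
{
    double s = 1024.0 * 1024.0;    /* average process size in bytes  */
    double e = 8.0;                /* bytes per page table entry     */
    double p = sqrt(2.0 * s * e);  /* page size minimizing overhead  */

    printf("optimal page size   ~ %.0f bytes\n", p);                   /* 4096 */
    printf("overhead at optimum ~ %.0f bytes\n", s * e / p + p / 2.0); /* 4096 */
    return 0;
}
```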

  23. Page Fault Handling (1) 1. Hardware traps to kernel 2. OS determines which virtual page needed 3. Save registers 4. OS checks validity of address, seeks page frame 5. If selected frame is dirty, write it to disk

  24. Page Fault Handling (2) 6. OS brings the new page into memory from disk 7. Page tables updated 8. Faulting instruction backed up to the state it had when it began 9. Faulting process scheduled 10. Registers restored 11. Program continues
