
Chapter 12 Memory Management



  1. Chapter 12 Memory Management

  2. Objectives Discuss the following topics: • Memory Management • The Sequential-Fit Methods • The Nonsequential-Fit Methods • Garbage Collection • Case Study: An In-Place Garbage Collector

  3. Memory Management • The heap is the region of main memory from which portions of memory are dynamically allocated upon request of a program • The memory manager is responsible for: • The maintenance of free memory blocks • Assigning specific memory blocks to user programs • Cleaning memory of unneeded blocks to return them to the memory pool

  4. Memory Management (continued) • The memory manager is responsible for: • Scheduling access to shared data, • Moving code and data between main and secondary memory • Keeping one process away from another • External fragmentation amounts to the presence of wasted space between allocated segments of memory

  5. Memory Management (continued) • Internal fragmentation amounts to the presence of unused memory inside the segments

  6. The Sequential-Fit Methods • In the sequential-fit methods, all available memory blocks are linked, and the list is searched to find a block whose size is at least as large as the requested size • The first-fit algorithm allocates the first block of memory large enough to meet the request • The best-fit algorithm allocates a block that is closest in size to the request

  7. The Sequential-Fit Methods (continued) • The worst-fit method finds the largest block on the list so that the remaining part is large enough to be used in later requests • The next-fit method allocates the next available block that is sufficiently large, resuming the search where the previous search left off • The way the blocks are organized on the list determines how fast the search for an available block succeeds or fails
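The four policies above can be sketched as searches over a simple free list. This is an illustrative Python sketch, not the book's code; the free list is a list of (start, size) pairs and all names are assumptions:

```python
# Sketch of the four sequential-fit policies over a free list of
# (start, size) pairs. Each function returns the index of the chosen
# block, or None if no block can satisfy the request.

def first_fit(free_list, req):
    """Allocate the first block at least req units long."""
    for i, (start, size) in enumerate(free_list):
        if size >= req:
            return i
    return None

def best_fit(free_list, req):
    """Allocate the smallest block that still satisfies the request."""
    candidates = [(size, i) for i, (start, size) in enumerate(free_list)
                  if size >= req]
    return min(candidates)[1] if candidates else None

def worst_fit(free_list, req):
    """Allocate the largest block, so the remainder stays usable."""
    candidates = [(size, i) for i, (start, size) in enumerate(free_list)
                  if size >= req]
    return max(candidates)[1] if candidates else None

def next_fit(free_list, req, last):
    """Resume the search at the position after the previous allocation."""
    n = len(free_list)
    for k in range(n):
        i = (last + k) % n
        if free_list[i][1] >= req:
            return i
    return None
```

For example, on a free list [(0, 30), (50, 10), (80, 60)] with a request of 20, first-fit and best-fit pick the 30-unit block, worst-fit picks the 60-unit block, and next-fit starting after index 0 skips the 10-unit block.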

  8. The Sequential-Fit Methods (continued) Figure 12-1 Memory allocation using sequential-fit methods

  9. The Nonsequential-Fit Methods • An adaptive exact-fit technique dynamically creates and adjusts storage block lists that fit the requests exactly • In adaptive exact-fit, a size-list is maintained of block lists, each holding blocks of one particular size that were returned to the memory pool during the last T allocations • The exact-fit method disposes of an entire block list if no request comes for a block from this list in the last T allocations

  10. The Nonsequential-Fit Methods (continued)

  t = 0;
  allocate(reqSize)
      t++;
      if a block list b1 with reqSize blocks is on sizeList
          lastref(b1) = t;
          b = head of blocks(b1);
          if b was the only block accessible from b1
              detach b1 from sizeList;
      else b = search-memory-for-a-block-of(reqSize);
      dispose of all block lists b1 on sizeList for which t - lastref(b1) >= T;
      return b;
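The allocation routine above can be mimicked with a dictionary keyed by block size. This is a minimal sketch under assumed names (ExactFit, size_list, heap_search), not the book's implementation:

```python
# Illustrative sketch of adaptive exact-fit: keep, for each block size
# requested recently, a list of free blocks of exactly that size, and
# drop any size class not referenced within the last T allocations.

T = 8  # retention window, in allocations (an assumed value)

class ExactFit:
    def __init__(self):
        self.t = 0
        self.size_list = {}   # size -> {'blocks': [...], 'lastref': int}

    def allocate(self, req_size, heap_search):
        self.t += 1
        entry = self.size_list.get(req_size)
        if entry and entry['blocks']:
            entry['lastref'] = self.t
            b = entry['blocks'].pop()
            if not entry['blocks']:          # last block: detach the list
                del self.size_list[req_size]
        else:
            b = heap_search(req_size)        # fall back to a general search
        # dispose of size classes untouched for the last T allocations
        for size in [s for s, e in self.size_list.items()
                     if self.t - e['lastref'] >= T]:
            del self.size_list[size]
        return b

    def free(self, block, size):
        e = self.size_list.setdefault(size, {'blocks': [],
                                             'lastref': self.t})
        e['blocks'].append(block)
        e['lastref'] = self.t
```

A freed block goes back onto the size class for its exact size, so a repeated request of the same size is served without searching the heap at all.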

  11. The Nonsequential-Fit Methods (continued) Figure 12-2 An example configuration of a size-list and heap created by the adaptive exact-fit method

  12. Buddy Systems • In the nonsequential memory management methods known as buddy systems, blocks of memory are split into two buddies that are merged back together whenever possible • In a buddy system, two buddies are never both free: if both become free, they are immediately coalesced • A block can have either a buddy used by the program or no buddy at all

  13. Buddy Systems (continued) • In the binary buddy system each block of memory (except the entire memory) is coupled with a buddy of the same size that participates with the block in reserving and returning chunks of memory
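The splitting and coalescing described above can be sketched for the binary case: a block of size 2^k at address a has its buddy at address a XOR 2^k. This is an illustrative sketch with assumed names, not the book's code:

```python
# Minimal sketch of a binary buddy allocator over a 2**MAX_ORDER-unit heap.
# free[k] holds addresses of free blocks of size 2**k; freeing a block
# coalesces it with its buddy repeatedly while the buddy is also free.

MAX_ORDER = 5  # a 32-unit heap, for illustration

class BuddyAllocator:
    def __init__(self):
        self.free = {k: set() for k in range(MAX_ORDER + 1)}
        self.free[MAX_ORDER].add(0)          # one big free block at start

    def allocate(self, size):
        k = 0
        while (1 << k) < size:               # round up to a power of two
            k += 1
        j = k                                # smallest nonempty order >= k
        while j <= MAX_ORDER and not self.free[j]:
            j += 1
        if j > MAX_ORDER:
            return None                      # out of memory
        addr = self.free[j].pop()
        while j > k:                         # split, freeing the upper buddy
            j -= 1
            self.free[j].add(addr + (1 << j))
        return addr, k

    def release(self, addr, k):
        while k < MAX_ORDER:
            buddy = addr ^ (1 << k)          # buddy address via XOR
            if buddy not in self.free[k]:
                break                        # buddy in use: stop coalescing
            self.free[k].discard(buddy)      # merge with the free buddy
            addr = min(addr, buddy)
            k += 1
        self.free[k].add(addr)
```

Note the invariant from the slide: release never leaves two free buddies side by side, because the loop merges them on the spot.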

  14. Buddy Systems (continued) Figure 12-3 Block structure in the binary buddy system

  15. Buddy Systems (continued) Figure 12-4 Reserving three blocks of memory using the binary buddy system

  16. Buddy Systems (continued) Figure 12-4 Reserving three blocks of memory using the binary buddy system (continued)

  17. Buddy Systems (continued) Figure 12-4 Reserving three blocks of memory using the binary buddy system (continued)

  18. Buddy Systems (continued) Figure 12-5 (a) Returning a block to the pool of blocks, (b) resulting in coalescing one block with its buddy

  19. Buddy Systems (continued) Figure 12-5 (c) Returning another block leads to two coalescings (continued)

  20. Buddy Systems (continued)

  avail[i] = -1 for i = 0, . . . , m-1;
  avail[m] = first address in memory;
  reserveFib(reqSize)
      availSize = the position of the first Fibonacci number greater than
                  reqSize for which avail[availSize] > -1;
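The lookup step of reserveFib can be sketched directly: block sizes are Fibonacci numbers, and a request is served from the first Fibonacci size greater than the request that has a free block. Names here (fib_sizes, find_avail_size) are illustrative assumptions:

```python
# Sketch of the availSize lookup in a Fibonacci buddy system.

def fib_sizes(m):
    """Block sizes 1, 2, 3, 5, 8, 13, ... (m Fibonacci numbers)."""
    sizes = [1, 2]
    while len(sizes) < m:
        sizes.append(sizes[-1] + sizes[-2])
    return sizes

def find_avail_size(req_size, avail, sizes):
    """Position of the first Fibonacci size > req_size that has a free
    block (avail[k] > -1), mirroring the slide's reserveFib lookup."""
    for k, s in enumerate(sizes):
        if s > req_size and avail[k] > -1:
            return k
    return None
```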

  21. Buddy Systems (continued) Figure 12-6 (a) Splitting a block of size Fib(k) into two buddies using the buddy-bit and the memory-bit

  22. Buddy Systems (continued) Figure 12-6 (b) Coalescing two buddies utilizing information stored in buddy- and memory-bits

  23. Buddy Systems (continued) • A weighted buddy system decreases the amount of internal fragmentation by allowing more block sizes than the binary system • A buddy system that takes a middle course between the binary system and the weighted system is the dual buddy system

  24. Garbage Collection • A garbage collector is automatically invoked to collect unused memory cells when the program is idle or when memory resources are exhausted • References to all linked structures currently utilized by the program are stored in a root set, which contains all root pointers

  25. Garbage Collection (continued) • There are two phases of garbage collection: • The marking phase — to identify all currently used cells • The reclamation phase — when all unmarked cells are returned to the memory pool; this phase can also include heap compaction

  26. Mark-and-Sweep • Memory cells currently in use are marked by traversing each linked structure • Then the memory is swept to glean unused (garbage) cells and put them together in a memory pool

  marking(node)
      if node is not marked
          mark node;
          if node is not an atom
              marking(head(node));
              marking(tail(node));

  27. Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells

  28. Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

  29. Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

  30. Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

  31. Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

  32. Mark-and-Sweep (continued) Figure 12-7 An example of execution of the Schorr and Waite algorithm for marking used memory cells (continued)

  33. Space Reclamation

  sweep()
      for each location from the last to the first
          if mark(location) is 0
              insert location in front of availList;
          else set mark(location) to 0;
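The marking() and sweep() routines shown on the slides can be put together in a toy Python model, with cells represented as dictionaries and the heap as a list swept from the last location to the first. This is an illustrative sketch only; all names are assumptions:

```python
# Toy mark-and-sweep pass over cons-like cells, following the slides'
# marking() and sweep() pseudocode.

def make_cell(head=None, tail=None, atom=None):
    return {'head': head, 'tail': tail, 'atom': atom, 'mark': 0}

def marking(node):
    """Recursively mark every cell reachable from node."""
    if node is not None and node['mark'] == 0:
        node['mark'] = 1
        if node['atom'] is None:          # atoms carry no references
            marking(node['head'])
            marking(node['tail'])

def sweep(heap, avail_list):
    """Return unmarked cells to avail_list; clear marks on survivors."""
    for cell in reversed(heap):           # from the last location to the first
        if cell['mark'] == 0:
            avail_list.insert(0, cell)    # insert in front of availList
        else:
            cell['mark'] = 0
```

After one full cycle, every reachable cell has its mark cleared again (ready for the next collection) and every unreachable cell sits on the avail list.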

  34. Compaction Figure 12-8 An example of heap compaction

  35. Copying Methods • The stop-and-copy algorithm divides the heap into two semispaces, only one of which is used for allocating memory at any time • Lists can be copied using a breadth-first traversal, which allows combining two tasks: copying lists and updating references • This algorithm requires no marking phase and no stack
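The breadth-first copy can be sketched compactly in Cheney's style: live cells are appended to the destination semispace and then scanned in order, so the scan pointer plays the role of the queue and no stack is needed. For simplicity this sketch records forwarding addresses in a side table rather than in the old cells; cells are two-element lists, and all names are assumptions:

```python
# Compact sketch of a stop-and-copy (Cheney-style) collection: copy the
# roots, then scan the copied cells breadth-first, copying and rewriting
# each reference exactly once.

def stop_and_copy(roots, is_cell):
    """Return (new roots, tospace), where references into tospace are
    plain indices into the tospace list."""
    tospace, forward = [], {}            # forward: id(old cell) -> new index

    def copy(ref):
        if not is_cell(ref):
            return ref                   # atoms are copied by value
        if id(ref) not in forward:
            forward[id(ref)] = len(tospace)
            tospace.append(list(ref))    # shallow copy; fields fixed below
        return forward[id(ref)]

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(tospace):           # breadth-first scan of copied cells
        cell = tospace[scan]
        cell[0] = copy(cell[0])          # head field
        cell[1] = copy(cell[1])          # tail field
        scan += 1
    return new_roots, tospace
```

Shared substructure is copied only once (the forward table returns the same index for a cell seen twice), and garbage is simply never copied.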

  36. Copying Methods (continued) Figure 12-9 (a) A situation in the heap before copying the contents of cells in use from semispace1 to semispace2

  37. Copying Methods (continued) Figure 12-9 (b) the situation right after copying; all used cells are packed contiguously (continued)

  38. Incremental Garbage Collection • Incremental garbage collectors, whose execution is interleaved with the execution of the program, are desirable for a fast response to a program • After the collector partially processes some lists, the program, called the mutator, can change or mutate those lists • The Baker algorithm uses two semispaces, called fromspace and tospace, which are both active to ensure proper cooperation between the mutator and the collector

  39. Incremental Garbage Collection (continued) Figure 12-10 A situation in memory (a) before and (b) after allocating a cell with head and tail references referring to cells P and Q in tospace according to the Baker algorithm

  40. Incremental Garbage Collection (continued) Figure 12-10 A situation in memory (a) before and (b) after allocating a cell with head and tail references referring to cells P and Q in tospace according to the Baker algorithm (continued)

  41. Incremental Garbage Collection (continued) • The mutator is preceded by a read barrier, which precludes utilizing references to cells in fromspace • The generational garbage collection technique divides all allocated cells into at least two generations and focuses its attention on the youngest generation, which generates most of the garbage
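The read barrier can be sketched schematically: every time the mutator follows a reference, a cell still in fromspace is evacuated to tospace first, so the mutator only ever holds tospace addresses. This is an illustrative sketch under assumed names (fromspace as a dictionary, tospace as a list, a forwarding table), not Baker's actual implementation:

```python
# Schematic read barrier in the spirit of Baker's algorithm.

def read_barrier(ref, fromspace, tospace, forwarding):
    """Return a tospace reference for ref, evacuating the cell if needed."""
    if ref in fromspace:
        if ref not in forwarding:            # not yet evacuated
            new_ref = len(tospace)
            tospace.append(fromspace[ref])   # copy the cell to tospace
            forwarding[ref] = new_ref        # leave a forwarding address
        return forwarding[ref]
    return ref                               # already a tospace reference
```

Because every mutator access goes through this check, collection can proceed a few cells at a time while the program keeps running.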

  42. Incremental Garbage Collection (continued) Figure 12-11 Changes performed by the Baker algorithm when addresses P and Q refer to cells in fromspace, P to an already copied cell, Q to a cell still in fromspace

  43. Incremental Garbage Collection (continued) Figure 12-11 Changes performed by the Baker algorithm when addresses P and Q refer to cells in fromspace, P to an already copied cell, Q to a cell still in fromspace (continued)

  44. Incremental Garbage Collection (continued) Figure 12-12 A situation in three regions (a) before and (b) after copying reachable cells from region ri to region r'i in the Lieberman-Hewitt technique of generational garbage collection

  45. Incremental Garbage Collection (continued) Figure 12-12 A situation in three regions (a) before and (b) after copying reachable cells from region ri to region r'i in the Lieberman-Hewitt technique of generational garbage collection (continued)

  46. Noncopying Methods

  createRootPtr(p, q, r)   // Lisp's cons
      if collector is in the marking phase
          mark up to k1 cells;
      else if collector is in the sweeping phase
          sweep up to k2 cells;
      else if the number of cells on availList is low
          push all root pointers onto collector's stack st;
      p = first cell on availList;
      head(p) = q;
      tail(p) = r;
      mark p if it is in the unswept portion of heap;

  47. Noncopying Methods (continued) Figure 12-13 An inconsistency that results if, in Yuasa’s noncopying incremental garbage collector, a stack is not used to record cells possibly unprocessed during the marking phase

  48. Noncopying Methods (continued) Figure 12-13 An inconsistency that results if, in Yuasa’s noncopying incremental garbage collector, a stack is not used to record cells possibly unprocessed during the marking phase (continued)

  49. Noncopying Methods (continued) Figure 12-14 Memory changes during the sweeping phase using Yuasa’s method

  50. Noncopying Methods (continued) Figure 12-14 Memory changes during the sweeping phase using Yuasa’s method (continued)
