
Cache Memory




  1. Cache Memory By JIA HUANG

  2. "Computer Science has only three ideas: cache, hash, trash." - Greg Ganger, CMU

  3. The idea of caching • Keep recently and frequently used data where it can be accessed in very little time • cache: fast but expensive • disks: cheap but slow

  4. Types of cache • CPU cache • Disk cache • Other caches • Proxy web cache

  5. Usage of caching • Caching is used widely in: • Storage systems • Databases • Web servers • Middleware • Processors • Operating systems • RAID controllers • Many other applications

  6. Cache Algorithms • Famous algorithms • LRU (Least Recently Used) • LFU (Least Frequently Used) • Not so famous algorithms • LRU-K • 2Q • FIFO • others

  7. LRU (Least Recently Used) • Typically implemented with a linked list ordered by recency of use • Discards the least recently used items first
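The linked-list idea above can be sketched in Python, where `OrderedDict` stands in for the recency-ordered list (the class and method names here are illustrative, not from the slides):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: evicts the least recently used key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # keys ordered oldest -> newest

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # discard the LRU item

lru = LRUCache(2)
lru.put("a", 1)
lru.put("b", 2)
lru.get("a")       # touch "a", so "b" is now least recently used
lru.put("c", 3)    # capacity exceeded: evicts "b"
```

A production LRU would use a hash map plus an intrusive doubly linked list for O(1) updates; `OrderedDict` bundles exactly that.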

  8. LFU (Least Frequently Used) • Counts how often each item is used; items used least often are discarded first.
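A minimal LFU sketch along the same lines, keeping a use counter per key (again, the names are illustrative and ties are broken arbitrarily):

```python
from collections import Counter

class LFUCache:
    """Minimal LFU cache sketch: evicts the least frequently used key."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.counts = Counter()  # use count per key

    def get(self, key):
        if key not in self.data:
            return None
        self.counts[key] += 1
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # discard the key with the smallest use count
            victim = min(self.data, key=lambda k: self.counts[k])
            del self.data[victim]
            del self.counts[victim]
        self.data[key] = value
        self.counts[key] += 1

lfu = LFUCache(2)
lfu.put("a", 1)
lfu.put("b", 2)
lfu.get("a")      # "a" used twice, "b" once
lfu.put("c", 3)   # evicts "b", the least frequently used
```

The linear scan in `put` keeps the sketch short; a real LFU uses frequency buckets or a heap to evict in O(1) or O(log n).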

  9. LRU vs. LFU • The fundamental locality principle says that if a process visits a memory location, it will probably revisit that location and its neighborhood soon • The advanced locality principle says that the probability of revisiting grows with the number of previous visits

  10. Disadvantages • LRU – a process that scans a huge database once flushes the useful working set out of the cache • LFU – in the same scan scenario it can perform even worse, because its frequency counts adapt slowly to a change in workload

  11. Can there be a better algorithm? Yes.

  12. New algorithm • ARC (Adaptive Replacement Cache) - it combines the virtues of LRU and LFU while avoiding the vices of both. The basic idea behind ARC is to adaptively, dynamically and relentlessly balance between "recency" and "frequency" to achieve a high hit ratio. - Invented by IBM in 2003 (Almaden Research Center, San Jose)

  13. How does it work?

  14. L1: pages seen once recently ("recency") • L2: pages seen at least twice recently ("frequency") • If L1 contains exactly c pages, replace the LRU page in L1; else replace the LRU page in L2 • Lemma: the c most recent pages are in the union of L1 and L2 (diagram: L1 and L2, each drawn from LRU to MRU)
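The two-list replacement rule above can be sketched directly (this policy is called DBL(2c) in IBM's ARC paper; the helper name is mine, and each list is assumed to be kept in LRU-to-MRU order):

```python
from collections import deque

def dbl_replace(l1, l2, c):
    """Two-list replacement rule: if L1 holds exactly c pages, evict
    its LRU page; otherwise evict the LRU page of L2.
    l1 and l2 are deques ordered LRU -> MRU; returns the evicted page."""
    if len(l1) == c:
        return l1.popleft()  # replace the LRU page in L1
    return l2.popleft()      # replace the LRU page in L2

l1 = deque(["p1", "p2"])   # pages seen once recently
l2 = deque(["q1"])         # pages seen at least twice recently
victim = dbl_replace(l1, l2, c=2)  # L1 is full, so its LRU page "p1" goes
```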

  15. ARC • Divide L1 into T1 (top) and B1 (bottom) • Divide L2 into T2 (top) and B2 (bottom) • T1 and T2 together contain c pages: in cache and in directory • B1 and B2 together contain c pages: in directory, but not in cache • If T1 contains more than p pages, replace the LRU page in T1; else replace the LRU page in T2 (diagram: L1 and L2, each drawn from LRU to MRU)

  16. ARC • Adapt the target size of T1 to the observed workload • A self-tuning algorithm: • hit in T1 or T2: do nothing • hit in B1: increase the target size of T1 • hit in B2: decrease the target size of T1 (diagram: L1 "recency" | midpoint | L2 "frequency")
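The self-tuning step can be sketched as follows. The step sizes follow the spirit of the adaptation rule in the published ARC paper (larger steps when the opposite ghost list is longer, using integer ratios here for simplicity); the slide itself only gives the increase/decrease directions, and the function name is mine:

```python
def adapt(p, c, hit_list, b1_len, b2_len):
    """ARC's self-tuning step: adjust p, the target size of T1.
    A hit in ghost list B1 means "recency" deserves more cache space;
    a hit in B2 means "frequency" does.  c is the cache size."""
    if hit_list == "B1":
        # increase target of T1, stepping faster when B2 is longer
        delta = max(1, b2_len // b1_len) if b1_len else 1
        return min(c, p + delta)
    if hit_list == "B2":
        # decrease target of T1, stepping faster when B1 is longer
        delta = max(1, b1_len // b2_len) if b2_len else 1
        return max(0, p - delta)
    return p  # hit in T1 or T2: no adaptation needed

p = adapt(5, c=10, hit_list="B1", b1_len=2, b2_len=4)  # p grows from 5 to 7
```

Because the adaptation runs on every ghost-list hit, p continually tracks whichever of recency or frequency the current workload rewards, which is what makes ARC self-tuning.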

  17. ARC • ARC has low space complexity. A realistic implementation had a total space overhead of less than 0.75%. • ARC has low time complexity; virtually identical to LRU. • ARC is self-tuning and adapts to different workloads and cache sizes. In particular, it gives very little cache space to sequential workloads, thus avoiding a key limitation of LRU. • ARC outperforms LRU for a wide range of workloads.

  18. Example • For a huge, real-life workload generated by a large commercial search engine with a 4GB cache, ARC's hit ratio was dramatically better than that of LRU (40.44 percent vs. 27.62 percent). -IBM (Almaden Research Center)

  19. ARC vs. LRU

  20. ARC vs. LRU

  21. ARC • Currently, ARC is a research prototype and will be available to customers via many of IBM's existing and future products.

  22. References • http://en.wikipedia.org/wiki/Caching#Other_caches • http://www.cs.biu.ac.il/~wiseman/2os/2os/os2.pdf • http://www.almaden.ibm.com/StorageSystems/autonomic_storage/ARC/index.shtml • http://www.almaden.ibm.com/cs/people/dmodha/arc-fast.pdf
