
Trading Flash Translation Layer For Performance and Lifetime



  1. 王江涛: Trading Flash Translation Layer For Performance and Lifetime

  2. Outline • Introduction • Flash Translation Layer • Address Mapping • Wear Leveling • Conclusion

  3. Introduction of Flash Memory • Pros • Small size and light weight • Low power consumption and non-volatility • Shock resistance and less noise • Fast access performance

  4. Flash Memory Chip: DATA + OOB • Each page consists of a data area and an OOB (out-of-band) area • The OOB holds the ECC (Hamming code), the logical page number, and the page state: erased/valid/invalid
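As a minimal sketch of that layout, the per-page metadata can be modeled as follows (the names `FlashPage` and `PageState` and the 4 KiB page size are illustrative assumptions, not from the slides):

```python
from dataclasses import dataclass
from enum import Enum

class PageState(Enum):       # state recorded in the OOB area
    ERASED = 0
    VALID = 1
    INVALID = 2

@dataclass
class FlashPage:
    # data area (a 4 KiB page size is assumed for illustration)
    data: bytes = b"\xff" * 4096
    # --- OOB (out-of-band) area ---
    ecc: bytes = b""                     # e.g., a Hamming code over `data`
    logical_page_number: int = -1        # LPN, used to rebuild mappings
    state: PageState = PageState.ERASED  # erased / valid / invalid
```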

  5. Introduction of Flash Memory • Cons • Write granularity is a page • Erase-before-write (erase granularity is a block) • Writes within a block must be sequential • Limited erase/write cycles • Out-of-place update • When a page is to be overwritten, a new free (erased) page is allocated • A software layer called the FTL tracks the resulting change in the page's physical location
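The out-of-place update rule is easy to see in code. Below is a hedged sketch, not any particular FTL: overwriting a logical page allocates a fresh erased page, marks the old copy invalid, and records the new location in the mapping (all names are illustrative):

```python
class SimpleFTL:
    """Toy page-level FTL illustrating out-of-place update."""

    def __init__(self, num_pages):
        self.l2p = {}                       # logical page -> physical page
        self.free = list(range(num_pages))  # pool of erased pages
        self.state = ["erased"] * num_pages

    def write(self, lpn, data):
        new_ppn = self.free.pop(0)          # allocate a free (erased) page
        # on real flash, program(new_ppn, data) would happen here
        old_ppn = self.l2p.get(lpn)
        if old_ppn is not None:
            self.state[old_ppn] = "invalid"  # old copy becomes invalid
        self.state[new_ppn] = "valid"
        self.l2p[lpn] = new_ppn              # FTL records the relocation
```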

  6. Outline • Introduction • Flash Translation Layer • Address Mapping • Wear Leveling • Conclusion

  7. Flash Translation Layer • Functions • Address mapping • Garbage collection (block reclamation) • Wear leveling
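Garbage collection (block reclamation) can be sketched in a few lines: pick a victim block, copy its still-valid pages elsewhere, then erase the whole block. The greedy fewest-valid-pages policy below is one common choice, shown only as an illustration under assumed data structures:

```python
def garbage_collect(blocks, free_pages):
    """blocks: list of {"pages": [state, ...], "erase_count": int}.
    Greedy policy: reclaim the block with the fewest valid pages."""
    victim = min(blocks, key=lambda b: b["pages"].count("valid"))
    for state in victim["pages"]:
        if state == "valid":
            free_pages.pop()  # model copying valid data to a free page
    victim["pages"] = ["erased"] * len(victim["pages"])  # block erase
    victim["erase_count"] += 1
    return victim
```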

  8. Outline • Introduction • Flash Translation Layer • Address Mapping • Wear Leveling • Conclusion

  9. Address Mapping • Mapping granularity • Page-level scheme • Block-level scheme • Hybrid scheme • Block-level FTL scheme • A logical page keeps the same page offset within the logical and the physical block • The mapping table resides in RAM (its size is small) • Erase operations are frequent and space utilization is low
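A sketch of block-level translation as described above: because the page offset is preserved, only a small logical-block-to-physical-block table is needed (the geometry of 64 pages per block is an assumption for illustration):

```python
PAGES_PER_BLOCK = 64

def block_level_translate(lpn, block_map):
    """block_map: logical block number -> physical block number.
    The page offset inside the block is identical on both sides."""
    lbn, offset = divmod(lpn, PAGES_PER_BLOCK)
    return block_map[lbn] * PAGES_PER_BLOCK + offset

# Example: with logical block 3 mapped to physical block 10,
# logical page 3*64 + 5 resolves to physical page 10*64 + 5.
assert block_level_translate(3 * 64 + 5, {3: 10}) == 10 * 64 + 5
```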

  10. Address Mapping • Hybrid FTL scheme • DBA (data block area): stores user data (block-level mapping) • LBA (log block area): stores overwriting data (page-level mapping)

  11. Address Mapping • Page-level FTL scheme • A logical page number can be mapped to any physical page • Mapping table stored entirely in SRAM [1995] • Mapping table stored in flash and cached in SRAM [2009]
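Back-of-the-envelope arithmetic shows why the full page-level table outgrew SRAM, motivating the flash-resident, cached design. The device geometry below (32 GiB, 4 KiB pages, 64 pages per block, 4-byte entries) is an assumed example, not from the slides:

```python
DEVICE = 32 * 2**30   # 32 GiB of flash (assumed)
PAGE = 4 * 2**10      # 4 KiB pages (assumed)
ENTRY = 4             # bytes per mapping entry (assumed)

pages = DEVICE // PAGE           # 8,388,608 pages
blocks = pages // 64             # 131,072 blocks
print(pages * ENTRY // 2**20)    # page-level table: 32 MiB -> too big for SRAM
print(blocks * ENTRY // 2**10)   # block-level table: 512 KiB -> fits easily
```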

  12. Page-level FTL Scheme • Related work • DFTL: A Flash Translation Layer Employing Demand-based Selective Caching of Page-level Address Mappings (ASPLOS 2009) • A Workload-Aware Adaptive Hybrid Flash Translation Layer with an Efficient Caching Strategy (CFTL) (MASCOTS 2011) • LazyFTL: A Page-level Flash Translation Layer Optimized for NAND Flash Memory (SIGMOD 2011)

  13. DFTL • Divides flash memory into an MBA and a DBA • MBA (mapping block area): stores the full mapping table on flash • DBA (data block area): stores user data • Uses page-level mapping • Dynamically swaps page-level mapping entries in and out of SRAM

  14. DFTL architecture (figure) • SRAM holds the CMT (Cached Mapping Table) and the GMD (Global Mapping Directory) • LPN: logical page number • PPN: physical page number
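A simplified sketch of the demand-based caching idea: a lookup hits the CMT first; on a miss, the entry is fetched from the mapping pages on flash (located via the GMD) and an LRU entry is evicted. The real scheme also tracks dirty entries and writes them back to flash; that part is omitted here:

```python
from collections import OrderedDict

class CachedMappingTable:
    def __init__(self, capacity, load_entry):
        self.cmt = OrderedDict()      # LPN -> PPN, kept in LRU order
        self.capacity = capacity
        self.load_entry = load_entry  # reads a mapping page via the GMD

    def lookup(self, lpn):
        if lpn in self.cmt:
            self.cmt.move_to_end(lpn)         # hit: refresh LRU position
            return self.cmt[lpn]
        ppn = self.load_entry(lpn)            # miss: extra flash read
        if len(self.cmt) >= self.capacity:
            self.cmt.popitem(last=False)      # evict the LRU entry
        self.cmt[lpn] = ppn
        return ppn
```

The miss path is the source of the high read cost noted on the next slide: a mapping-page read may have to precede a data read.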

  15. DFTL • Pros • Realizes sequential programming within a block • Avoids full merges • Cons • Frequently updates the mapping pages during garbage collection • Poor reliability of mapping information • High read cost

  16. Workload-Aware Adaptive FTL (CFTL) • Divides flash memory into an MBA and a DBA • MBA: stores the full mapping table on flash • DBA: stores user data • Uses both page-level and block-level mapping • Dynamically swaps page-level mapping entries in and out of SRAM • Blocks convert between the two mappings based on data access patterns • Read-intensive: block-level mapping • Write-intensive: page-level mapping
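The conversion rule can be illustrated with per-block access counters; the threshold and counter mechanics below are assumptions for the sketch, not CFTL's actual policy:

```python
def choose_mapping(reads, writes, read_ratio=0.8):
    """Classify a block by its observed access pattern.
    Read-intensive -> block-level mapping (one small entry, fast lookup);
    write-intensive -> page-level mapping (friendly to out-of-place updates)."""
    total = reads + writes
    if total and reads / total >= read_ratio:
        return "block-level"
    return "page-level"
```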

  17. Workload-Aware Adaptive FTL (CFTL) • Pros • Realizes sequential programming within a block • Avoids full merges • Exploits the temporal and spatial locality of workloads • Improves read performance • Cons • Frequently updates the mapping pages during garbage collection • Poor reliability of mapping information • Expensive reads/writes to build the block-level mapping table

  18. LazyFTL • DBA (data block area): stores user data • MBA (mapping block area): stores mapping pages (page-level scheme) • CBA (cold block area): stores valid user data moved during garbage collection • UBA (update block area): serves write requests

  19. LazyFTL • Write • Complete the write request in the UBA • Store the new mapping information in RAM (UMT: update mapping table) • Garbage Collection • Reclaim a victim block in the DBA or the MBA • Move valid data pages to the CBA and store the new mapping information in RAM (UMT) • Convert • Apply the buffered updates to the mapping pages in one batch • Convert CBA/UBA blocks into DBA blocks
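The flow above fits in a compact sketch: writes land in the UBA, garbage collection copies live data into the CBA, and in both cases only the in-RAM UMT changes; Convert later flushes those entries in a single batch. Area and table names follow the slide, but the internals are heavily simplified:

```python
class LazyFTLSketch:
    def __init__(self):
        self.umt = {}   # update mapping table, kept in RAM
        self.uba = []   # update block area: serves incoming writes
        self.cba = []   # cold block area: holds data moved by GC

    def write(self, lpn, data):
        self.uba.append(data)
        self.umt[lpn] = ("UBA", len(self.uba) - 1)  # RAM-only update

    def gc_move(self, lpn, data):
        self.cba.append(data)
        self.umt[lpn] = ("CBA", len(self.cba) - 1)  # RAM-only update

    def convert(self, flush_mapping_pages):
        flush_mapping_pages(self.umt)  # one batched mapping-page update
        self.umt.clear()               # UBA/CBA blocks now count as DBA
```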

  20. LazyFTL (architecture figure)

  21. Convert Algorithm

  22. Performance Evaluation

  23. LazyFTL • Pros • Adopts an update buffer to reduce frequent updates to the mapping pages • Achieves consistency and reliability • Improves write performance by reducing erase operations • Cons • Increases the cost of read operations • Slows down garbage collection • Does not consider hot/cold data for wear leveling

  24. Outline • Introduction • Flash Translation Layer • Address Mapping • Wear Leveling • Conclusion

  25. Wear Leveling • Introduction • Any one part of flash memory can withstand only a limited number of erase/write cycles • Locality in data accesses inevitably degrades wear evenness in flash • Some definitions • Hot data block vs. cold data block (by access frequency) • Old block vs. young block (by erase count) • Basic principle • Prevent old blocks from being erased (cool them down) • Actively start erasing young blocks (heat them up)
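One way to make the cool-down/heat-up principle concrete is in the garbage collector's victim choice: skip blocks far above the average erase count and prefer young ones. The slack threshold below is an illustrative assumption:

```python
def pick_victim(blocks, slack=20):
    """blocks: list of (block_id, erase_count, valid_page_count).
    Cool down: exclude blocks well above the average erase count.
    Heat up: among the rest, prefer the youngest (then the emptiest)."""
    avg = sum(erases for _, erases, _ in blocks) / len(blocks)
    young = [b for b in blocks if b[1] <= avg + slack]
    return min(young or blocks, key=lambda b: (b[1], b[2]))
```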

  26. Wear Leveling • Cold data migration • Move cold data from young blocks to old blocks • Select young blocks when executing garbage collection • Related work (hybrid FTL scheme) • A Low-Cost Wear-Leveling Algorithm for Block-Mapping Solid-State Disks (the lazy scheme) (LCTES 2011)

  27. Lazy Scheme • Overview • Cold data migration • Considers recency (recent wear history) and frequency • Recency • Update recency (of a logical block): the time elapsed since the latest update to that logical block • Erase recency (of a physical block): the time elapsed since the latest erase operation on that physical block • Frequency • Elder block: erase count larger than the average • Junior block: erase count smaller than the average

  28. Lazy Scheme • The goal of wear leveling is that every block keeps its erase count close to the average • Among junior blocks we are interested in blocks e and f; among elder blocks, in blocks a and b • Move the valid data in blocks e and f to blocks a and b • Re-map e and f to a and b, and select blocks e and f as the victim blocks
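A sketch of that re-mapping step: classify blocks as elder or junior against the average erase count, then pair cold junior blocks with elder blocks so the cold data migrates to worn blocks and the juniors become erasable victims (the data layout and the `cold` flag are illustrative assumptions):

```python
def remap_cold_data(blocks):
    """blocks: dict block_id -> {"erase_count": int, "cold": bool}.
    Returns (junior, elder) pairs: cold data moves junior -> elder,
    after which each junior block is selected as a GC victim."""
    avg = sum(b["erase_count"] for b in blocks.values()) / len(blocks)
    elders = [i for i, b in blocks.items() if b["erase_count"] > avg]
    juniors = [i for i, b in blocks.items()
               if b["erase_count"] <= avg and b["cold"]]
    return list(zip(juniors, elders))  # e.g., [("e", "a"), ("f", "b")]
```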

  29. Algorithm

  30. Lazy Scheme • Pros • Does not store wear information in RAM; leaves all of it in flash instead • Utilizes the address-mapping information already available, so no extra data structures are needed for wear leveling • Cons • It must visit every logical block uniformly when selecting a logical block for re-mapping

  31. Outline • Introduction • Flash Translation Layer • Address Mapping • Wear Leveling • Conclusion

  32. Conclusion • The page-level mapping scheme shows the best performance because it can reduce erase operations • It is necessary to design an efficient wear-leveling scheme for page-mapping solid-state disks • Some FTL operations can be executed without interrupting ongoing flash accesses by exploiting the internal parallelism of flash memory • Write caching (e.g., an auxiliary PCM buffer) can reduce erase operations when SSDs serve as primary storage

  33. Thank you
