Dynamic Hot Data Stream Prefetching for General-Purpose Programs, by Chilimbi and Hirzel. John-Paul Fryckman, CSE 231: Paper Presentation, 23 May 2002
Why Prefetching? • Increasing memory latencies • Not enough single-thread ILP to hide memory latencies • Minimizing stall cycles due to cache misses increases IPC (and thus performance) • Fetch data before it is needed!
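The basic idea can be sketched as a software prefetch in a simple array walk (a hand-written illustration using the GCC/Clang `__builtin_prefetch` builtin; the 16-element lookahead distance is an arbitrary tuning assumption, not a figure from the paper):

```c
#include <stddef.h>

/* Illustrative only: prefetch a fixed distance ahead so the data for
   iteration i+PREFETCH_DISTANCE is (hopefully) in cache by the time the
   loop reaches it.  The distance 16 is an assumed value for this sketch. */
#define PREFETCH_DISTANCE 16

long sum_with_prefetch(const long *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)
            /* read access (0), low temporal locality (1) */
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], 0, 1);
        sum += a[i];
    }
    return sum;
}
```

Prefetching only changes timing, not results, so the function computes the same sum with or without the hint.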
Target Hot Data • Highly repetitious memory-reference sequences • Hot sequences are 15-20 objects long • Hot data accounts for 90% of program references and 80% of cache misses
Why Dynamic? • Dynamic prefetching yields a general-purpose solution • Too many unknowns at compile time: • Pointer-chasing code • Irregular strides
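A minimal pointer-chasing loop (invented for illustration, not taken from the paper) shows why a static compiler struggles here: each node's address is known only after the previous load completes, so there is no fixed stride to prefetch along.

```c
#include <stddef.h>

/* Classic pointer chasing: the address of p->next depends on the load
   of p itself, so a compiler cannot compute prefetch addresses ahead of
   time the way it can for a strided array. */
struct node { long value; struct node *next; };

long sum_list(const struct node *p) {
    long sum = 0;
    while (p) {
        sum += p->value;   /* must complete before the next address is known */
        p = p->next;
    }
    return sum;
}
```

A runtime profiler, by contrast, sees the concrete addresses such a loop actually touches, which is what makes the dynamic approach general-purpose.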
Dynamic Hot Data Stream Prefetching • Profile memory references • Detect hot data streams • Create and insert triggers for these streams • And, repeat!
Profiling and Detection • Need to minimize profiling overhead • Use sampling • Switch into instrumented code • Collect traces • Find hot data streams • Generate context-free grammars for hot sequences
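The detection step can be approximated with a toy sketch over a recorded trace of reference IDs. The paper compresses the trace into context-free grammars; this naive O(n²) scan merely ranks fixed-length windows by repetition count, and the window length K is an assumption for illustration:

```c
#include <stddef.h>

/* Simplified stand-in for grammar-based hot-stream detection: find the
   start of the most frequently repeated window of K references in the
   trace.  Real detection is far cheaper and grammar-based. */
#define K 3

size_t hottest_window(const unsigned long *trace, size_t n) {
    size_t best = 0, best_count = 0;
    for (size_t i = 0; i + K <= n; i++) {
        size_t count = 0;
        for (size_t j = 0; j + K <= n; j++) {
            size_t k = 0;
            while (k < K && trace[i + k] == trace[j + k])
                k++;
            if (k == K)
                count++;        /* window at j matches window at i */
        }
        if (count > best_count) {
            best_count = count;
            best = i;
        }
    }
    return best;
}
```

On a trace where the subsequence {1, 2, 3} recurs, the function reports the first position of that hot window.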
DFSM Prefetching Engine • Merge the hot streams' grammars into one large DFSM • The DFSM detects the prefix of a hot stream, then issues prefetches for the rest of its data • Prefetching code is injected into the program • Presumably, states are removed once their streams are no longer hot
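The match-a-prefix-then-prefetch-the-rest idea might look like the following sketch, assuming a single hot stream and a two-element prefix. The stream contents, lengths, and names are invented for illustration; the real engine merges many streams' prefixes into one DFSM.

```c
#include <stddef.h>

static long data[1024];        /* the program's heap, stand-in */

/* One hot stream, expressed as indices into data[] (values assumed). */
#define STREAM_LEN 5
#define PREFIX_LEN 2
static const size_t hot_stream[STREAM_LEN] = { 3, 17, 91, 200, 512 };

static int state = 0;          /* elements of the prefix matched so far */
static int prefetches_issued = 0;

/* Injected at profiled references: advance the matcher; once the prefix
   is recognized, prefetch the remainder of the stream and rearm. */
void on_reference(size_t idx) {
    if (idx == hot_stream[state]) {
        if (++state == PREFIX_LEN) {
            for (int i = PREFIX_LEN; i < STREAM_LEN; i++) {
                __builtin_prefetch(&data[hot_stream[i]], 0, 1);
                prefetches_issued++;
            }
            state = 0;         /* rearm for the next occurrence */
        }
    } else {
        /* mismatch: restart, allowing the current reference to begin
           a new prefix match */
        state = (idx == hot_stream[0]) ? 1 : 0;
    }
}
```

Seeing references 3 then 17 triggers prefetches for the remaining three stream elements; unrelated references simply reset the matcher.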
Good and the Not So Good • Good: • 5-19% speedups, overhead included • Open questions: • How does it affect easy-to-predict code? • Worst-case DFSM state count is O(2^n) • They did not study whether this arises in practice • Do prefetches always arrive in time? • What about phase changes and cache pollution?