
A Code Layout Framework for Embedded Processors with Configurable Memory Hierarchy


Presentation Transcript


  1. A Code Layout Framework for Embedded Processors with Configurable Memory Hierarchy
  Kaushal Sanghai, David Kaeli
  ECE Department, Northeastern University, Boston, MA

  2. Outline
  • Motivation and goals
  • Blackfin 53x memory architecture
  • L1 code memory configurations
  • Code layout algorithm
  • PGO linker tool
  • Methodology
  • Results
  • Conclusions and future work
  • References

  3. Motivation
  • Blackfin processor cores provide highly configurable memory subsystems to better match application-specific workload characteristics
  • Spatial and temporal locality present in applications should be exploited to produce efficient layouts
  • Code and data layout can be optimized by profile guidance

  4. Motivation
  • Most developers rely on hand-tuning the layout, which not only increases the time-to-market of embedded products but also results in an inefficient memory mapping
  • Program optimization techniques that automatically optimize the memory layout for such memory subsystems are therefore needed

  5. Goals
  • Develop a complete code-mapping framework that provides automatic code layout for the range of L1 memory configurations available on Blackfin
  • Create tools that enable fast and easy design space exploration across the range of L1 memory configurations
  • Utilize execution profiles to tune code layout
  • Evaluate the performance of the code mapping algorithms on the available L1 memory configurations for embedded multimedia applications

  6. Memory Architecture [Memory hierarchy diagram] The core accesses L1 instruction memory in a single cycle; L1 can be configured as SRAM, cache, or SRAM/cache. An optional L2 SRAM sits at 10-12 system clock cycles, backed by external SDRAM (4 x 16-128 MB).

  7-9. Memory Architecture [The same diagram, highlighting in turn the L1 SRAM, L1 Cache, and L1 SRAM/Cache configurations]

  10. Tradeoffs Involved
  • L1 SRAM: most cache misses are avoided by mapping the most frequently executed (hot) or critical code sections in SRAM, but performance can suffer if all of the hot code cannot be mapped to L1 SRAM
  • L1 Cache: exploits temporal locality in code, but may increase external memory bandwidth requirements; performance can suffer if the application has poor temporal locality
  • L1 SRAM/Cache: mapping hot sections in L1 SRAM reduces external memory bandwidth requirements, while the cache provides low-latency access to infrequently executed code

  11. Code Layout Algorithms

  12-15. L1 SRAM Layout: Why Knapsack? [Figure sequence omitted; these slides motivate formulating L1 SRAM layout as a knapsack problem]

  16. Efficient L1 SRAM Layout
  Choose a subset of code sections to place in L1 SRAM:
  maximize $\sum_i e_i x_i$ subject to $\sum_i s_i x_i \le S$, $x_i \in \{0, 1\}$
  where $e_i$ is the execution percentage of code section $i$ relative to the entire execution, $s_i$ is the size of code section $i$, and $S$ is the L1 SRAM size.
  This is an NP-complete problem (0/1 knapsack)!
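  As a concrete illustration, here is a minimal dynamic-programming sketch of this 0/1 knapsack selection; the function name, the byte-granularity capacity, and the toy inputs are all illustrative, not part of the original framework.

```python
# Minimal 0/1 knapsack sketch: choose code sections to place in L1 SRAM so
# that total size fits the SRAM budget and the mapped value is maximized.
# The value is execution percentage here, but any per-section score works.

def knapsack_layout(sections, sram_bytes):
    """sections: list of (name, value, size_bytes). Returns chosen names."""
    n = len(sections)
    best = [0.0] * (sram_bytes + 1)      # best[c] = max value at capacity c
    keep = [[False] * (sram_bytes + 1) for _ in range(n)]
    for i, (_, value, size) in enumerate(sections):
        # Descending capacity sweep keeps each section 0/1 (used at most once).
        for c in range(sram_bytes, size - 1, -1):
            if best[c - size] + value > best[c]:
                best[c] = best[c - size] + value
                keep[i][c] = True
    # Walk back through the decision table to recover the chosen sections.
    chosen, c = [], sram_bytes
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(sections[i][0])
            c -= sections[i][2]
    return chosen

# Toy example (hypothetical functions): picks fdct + init, since
# fdct + huff would overflow an 8K SRAM.
print(knapsack_layout([("fdct", 40.0, 3000), ("huff", 35.0, 6000),
                       ("init", 1.0, 500)], sram_bytes=8 * 1024))
```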

  17-18. Efficient Cache Layout [Call-graph figures: nodes are functions and edge weights are calling frequencies; each color represents a cache line, and functions mapped to the same color conflict. The second figure shows an improved mapping of functions to cache lines. Hashemi & Kaeli '98]
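  To make the coloring idea concrete, below is a minimal sketch of how two code regions conflict when their addresses map to the same cache sets. The cache parameters are assumptions for illustration (a direct-mapped cache is modeled; the Blackfin L1 instruction cache is in fact 4-way set-associative with 32-byte lines).

```python
# Sketch: detect cache-set conflicts between two code regions, assuming a
# direct-mapped instruction cache. Parameters below are illustrative.

LINE_SIZE = 32          # bytes per cache line (assumption)
NUM_SETS = 128          # so one "way" of the cache spans 4 KB

def sets_touched(start, size):
    """Cache-set indices occupied by a code region at [start, start+size)."""
    first = start // LINE_SIZE
    last = (start + size - 1) // LINE_SIZE
    return {line % NUM_SETS for line in range(first, last + 1)}

def conflicts(f1, f2):
    """f = (start_address, size_bytes). True if the regions share a set."""
    return bool(sets_touched(*f1) & sets_touched(*f2))

# Two hot caller/callee functions placed one cache-way apart will thrash:
print(conflicts((0x0000, 512), (0x1000, 512)))   # True: same sets
# Placing them contiguously avoids the conflict:
print(conflicts((0x0000, 512), (0x0200, 512)))   # False: disjoint sets
```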

  19. Efficient L1 SRAM/Cache Layout
  • Partition code into sections to be placed in L1 SRAM and L1 Cache
  • L1 SRAM mapping: maximize the amount of execution from L1 SRAM; map functions with low temporal locality in L1 SRAM; solve the knapsack problem for all functions based on execution percentage, size, and temporal reuse distance
  • L1 Cache mapping: of the remaining functions, merge frequently executed caller/callee function pairs and map them into contiguous memory locations

  20. Algorithm
  Inputs:
  • Execution percentage and size
  • Weighted call graph
  • Temporal reuse distance (RUD) for every function

  21. Algorithm: L1 SRAM mapping
  Step 1: Filter out functions with less than 1% of the execution percentage
  Step 2: Compute (Execution % / Size) / RUD for the remaining functions
  Step 3: Solve the knapsack problem and map the solution to the L1 SRAM space
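  A minimal sketch of these three steps follows, assuming the knapsack_layout routine from the earlier sketch and an illustrative profile record format; the score is the slide's (Execution % / Size) / RUD formula, passed to the knapsack as the per-function value.

```python
# Sketch of Steps 1-3: filter cold functions, score the rest, and hand the
# survivors to the knapsack solver (knapsack_layout from the earlier sketch).
# The profile record shape is an assumption for illustration.

def sram_mapping(profile, sram_bytes):
    """profile: list of (name, exec_pct, size_bytes, reuse_distance)."""
    hot = [f for f in profile if f[1] >= 1.0]        # Step 1: drop <1% funcs
    # Step 2: the slides' score, (execution % / size) / RUD.
    scored = [(name, (ep / size) / rud, size)
              for name, ep, size, rud in hot]
    return knapsack_layout(scored, sram_bytes)       # Step 3: fill L1 SRAM
```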

  22. Algorithm: L1 cache mapping
  Step 4: Form the call graph of the remaining functions and sort the edges by weight
  Step 5: Set the threshold on the maximum merged-node size (MNsize); this is equal to the size of one way of the cache
  Step 6: For all edges in the sorted list, start merging nodes while the merged-node size <= MNsize

  23. Algorithm: L1 cache mapping (continued)
  Let A and B be the nodes connected by an edge, and let SA and SB be their corresponding sizes. There are 4 cases, based on whether each node already belongs to a merged node:
  Step 7:
  case 1: neither A nor B is in a merged node, and SA + SB < MNsize: merge A and B and assign them a common merged-node id
  case 2: A is in a merged node and B is not: if SA + SB < MNsize, then merge B into A's merged node; else proceed to the next edge
  case 3: B is in a merged node and A is not: same as case 2, with the roles of A and B swapped
  case 4: A and B are in different merged nodes: if the total size of the two merged nodes is less than MNsize, merge the two merged nodes to form a bigger node; else proceed to the next edge
  Step 8: Map the resulting merged nodes into contiguous memory locations, starting with the merged node containing the heaviest edge
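  Below is a minimal sketch of this merging procedure under illustrative data shapes. One interpretation is made explicit: in cases 2-4, the size of an already-merged node is taken as its group's current total size.

```python
# Sketch of Steps 4-8: greedily merge caller/callee pairs along the heaviest
# call-graph edges, capping each merged node at one cache way (MNsize).
# Hypothetical shapes: edges = [(weight, a, b)], sizes = {func: bytes}.

def merge_for_cache(edges, sizes, mn_size):
    group_of = {}                   # function -> merged-node id
    members = {}                    # merged-node id -> [functions]
    next_id = 0

    def gsize(gid):                 # current total size of a merged node
        return sum(sizes[f] for f in members[gid])

    for _, a, b in sorted(edges, reverse=True):   # Step 4: heaviest first
        ga, gb = group_of.get(a), group_of.get(b)
        if ga is None and gb is None:             # case 1: both unmerged
            if sizes[a] + sizes[b] < mn_size:
                group_of[a] = group_of[b] = next_id
                members[next_id] = [a, b]
                next_id += 1
        elif ga is not None and gb is None:       # case 2: grow A's group
            if gsize(ga) + sizes[b] < mn_size:
                group_of[b] = ga
                members[ga].append(b)
        elif ga is None and gb is not None:       # case 3: roles swapped
            if gsize(gb) + sizes[a] < mn_size:
                group_of[a] = gb
                members[gb].append(a)
        elif ga != gb:                            # case 4: two groups
            if gsize(ga) + gsize(gb) < mn_size:
                for f in members.pop(gb):
                    group_of[f] = ga
                    members[ga].append(f)
    # Step 8: groups were created heaviest-edge first, so laying them out
    # in creation order places the heaviest merged node first.
    return list(members.values())
```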

  24-33. PGO Linker Framework [Block diagram, built up incrementally across these slides] The framework instruments the application program (program instrumentation module), reads its function symbols (read function symbol module), and gathers profile information (gather profile information module) to obtain a function call trace and temporal reuse distances. The call trace processing module derives the call graph, reuse distances, and execution-percentage/size data, which feed the code layout module. The code layout module generates a linker directive file, and the application is relinked.
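  A sketch of the final two stages might look like the following; the directive syntax shown is a hypothetical placeholder, since the slides do not show the actual VisualDSP++ linker directive format the tool emits.

```python
# Sketch of the last two framework stages: write a linker directive file
# placing chosen functions in L1 SRAM and each merged group contiguously in
# cacheable external memory. Directive keywords below are invented for
# illustration, not real VisualDSP++ .ldf syntax.

def write_directives(path, sram_funcs, cache_groups):
    with open(path, "w") as f:
        f.write("// generated by PGO linker sketch\n")
        for name in sram_funcs:                  # knapsack winners -> SRAM
            f.write(f"PLACE {name} IN L1_code_sram\n")
        for group in cache_groups:               # merged nodes stay adjacent
            f.write("BEGIN_CONTIGUOUS sdram_cacheable\n")
            for name in group:
                f.write(f"  PLACE {name}\n")
            f.write("END_CONTIGUOUS\n")
```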

  34. Methodology • Evaluated the algorithms on six consumer benchmark programs from the EEMBC suite

  35. Methodology
  • Configured L1 memory as L1 SRAM/Cache for all the benchmarks
  • All experiments were performed on the Blackfin 533 EZ-Kit hardware board
  • 4 different L1 memory configurations were considered and compared:
  • 12K L1, divided as 8K SRAM and 4K Cache
  • 16K L1, divided as 12K SRAM and 4K Cache
  • 16K L1, divided as 8K SRAM and 8K Cache
  • 80K L1, divided as 64K SRAM and 16K Cache

  36-38. Results [Performance charts omitted]

  39. Enhanced System Implementation Cycle [Flow diagram spanning system design, code development, successful debug, program optimization (compiler optimization and/or profile-guided compiler optimizations), and evaluation of L1 memory configurations and sizes within the PGO linker framework]

  40. Features of the framework
  • The process is completely automated: gather profiles, generate dynamic function call graphs, run the optimization algorithms, and re-link the project for an improved layout
  • Can be used with hardware, compiled simulation, or cycle-accurate simulation sessions in the VisualDSP++ development environment for BFxxx
  • Code mapping at function-level granularity
  • Efficient in run time

  41. Conclusion
  • We have developed a completely automated and efficient code layout framework for the configurable L1 code memory supported by the BFxxx
  • We show performance improvements from a minimum of 3% to a maximum of 33% (20% on average) for the six benchmark programs with a 12K L1 memory
  • We show that, by efficiently mapping code, a 16K L1 memory achieves performance similar to that of an 80K L1 memory

  42. Future work
  • The mapping can be extended to basic-block granularity
  • Code mapping to avoid external memory (SDRAM) bank contention can be incorporated
  • Code layout techniques for multi-core architectures, considering shared memory accesses, can be developed
  • The framework can be extended to data layout techniques

  43. References
  • Kaushal Sanghai, David Kaeli, and Richard Gentile, "Code and Data Partitioning on Blackfin for Partitioned Multimedia Benchmark Programs," Proceedings of the 2005 Workshop on Optimizations for DSP and Embedded Systems, March 2005
  • Kaushal Sanghai, David Kaeli, Alex Raikman, and Ken Butler, "A Code Layout Framework for Configurable Memory Systems in Embedded Processors," General Technical Conference, Analog Devices Inc., June 2006

  44. Command Line Interface
  • PGOLinker <dxefile> <linker directive output file(.asm)> -multicore -algorithm
  Sample Output:
  Algorithm Selected--> KNAPSACK
  Connecting to the IDDE and loading Program
  Connection to the IDDE established and Program loaded
  Gathering the function symbol information
  Function symbol information obtained
  No existing profile session. A new profile session will be created
  Application Running. Processor Halted
  Getting profile Information
  Analyzing the profile information obtained
  Analysis Done
  Total sample count collected is --> 905
  The total execution from L1 for 4KB of L1 is 98.232%
  Total functions in L1 14
  --------------------------------------------------------------------------------
  The total execution from L1 for 8KB of L1 is 100%
  Total functions in L1 22
