
Rules for Designing Multithreaded Applications



Presentation Transcript


  1. Rules for Designing Multithreaded Applications CET306 Harry R. Erwin University of Sunderland

  2. Texts • Clay Breshears (2009) The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications, O'Reilly Media. • Mordechai Ben-Ari (2006) Principles of Concurrent and Distributed Programming, Second edition, Addison-Wesley.

  3. Signup for Individual Feedback • Choose a 15 minute slot with your regular tutor. • There are several questions: • What is your approach (overview)? (4 marks) • How are you testing it? (4 marks) • How are you solving the first half—threading? (4 marks) • How are you solving the second half—cleaning up the overlapping reservations? (4 marks) • Looking back, what surprises have you encountered? (4 marks) • Mark scale: 0—non-engagement; 1—serious problems; 2—average; 3—good; 4—professional.

  4. Eight Rules of Concurrent Design • Identify Truly Independent Computations • Implement Concurrency at the Highest Level Possible • Plan Early for Scalability to Take Advantage of Increasing Numbers of Cores • Make Use of Thread-Safe Libraries Wherever Possible • Use the Right Threading Model • Never Assume a Particular Order of Execution • Use Thread-Local Storage Whenever Possible or Associate Locks to Specific Data • Dare to Change the Algorithm for a Better Chance of Concurrency. Concurrent programming remains more art than science!

  5. Identify Truly Independent Computations • You cannot execute something concurrently unless the operations in each thread are independent of each other! • Review the list in “What’s Not Parallel.” (Next Slide)

  6. What’s Not Parallel • Having a baby • Algorithms, functions, or procedures with persistent state. • Recurrence relations using data from loop t in loop t+1. If it’s loop t+k for k>1, you can ‘unwind’ the loop for some parallelism. • Induction variables incremented non-linearly with each loop pass. • Reductions computing a value from a vector. • Loop-carried dependence—where data generated in some previous loop iteration is used in the current iteration.
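
For illustration, a minimal C++ sketch (the function names are invented for this example, not from the slides) contrasting a loop whose iterations are independent with a loop-carried dependence of the kind listed above:

```cpp
#include <vector>

// Independent iterations: each a[i] depends only on b[i], so the loop can be
// split across threads (e.g. with OpenMP or TBB).
void scale(std::vector<double>& a, const std::vector<double>& b, double k) {
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] = k * b[i];             // no iteration reads another's result
}

// Loop-carried dependence: iteration i reads the value written by iteration
// i-1, so the iterations must run in order and cannot be distributed naively.
void prefixScale(std::vector<double>& a, double k) {
    for (std::size_t i = 1; i < a.size(); ++i)
        a[i] = k * a[i - 1];         // depends on the previous iteration
}
```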

  7. Implement Concurrency at the Highest Level Possible • Suppose you have serial code and wish to thread it. • You can work top-down or bottom-up. • In your initial analysis, you’re looking for the hot-spots that, when run in parallel, give you the best performance. • In bottom-up, you start with the hot-spots and move up. In top-down, you consider the whole application and break it down. • Placing concurrency at the highest possible level breaks the program up into naturally independent threads of work that are unlikely to share data. This provides structure for your more detailed threading.
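
As a hedged sketch of threading at the highest level, assuming a program whose outer unit of work is a set of independent files (processFile and the file list are illustrative, not from the module): rather than threading the hot inner loops inside processFile, each whole file becomes one task.

```cpp
#include <future>
#include <string>
#include <vector>

// Hypothetical per-file work: the whole call is independent of other files,
// which makes it a natural unit for concurrency at the highest level.
void processFile(const std::string& path) {
    // ... parse, transform, and write results for the one file at 'path' ...
    (void)path;   // placeholder body for the sketch
}

void processAll(const std::vector<std::string>& paths) {
    std::vector<std::future<void>> tasks;
    tasks.reserve(paths.size());
    for (const auto& p : paths)
        // One task per file, rather than threading the loops inside processFile.
        tasks.push_back(std::async(std::launch::async, processFile, p));
    for (auto& t : tasks)
        t.get();                      // wait for every file to finish
}
```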

  8. Plan Early for Scalability (Taking Advantage of the Added Cores) • The number of cores will only increase. Plan for it. This is not Moore’s Law: the speed-up is not automatic; you have to make it happen. • Scalability is the ability of your application to handle useful increases in system resources (cores, memory, bus performance). • Data decomposition methods give more scalable solutions. (Hint!) • Note that the project exploits data decomposition.
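
A minimal sketch of data decomposition, assuming a simple summation workload (the function is invented for this example): the input is split into one chunk per available core, so the same code keeps scaling as the core count rises.

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Data decomposition: split the input into one chunk per core and give each
// thread its own chunk and its own result slot, so nothing is shared.
double parallelSum(const std::vector<double>& data) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (data.size() + n - 1) / n;
    std::vector<double> partial(n, 0.0);       // one slot per thread
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = std::min<std::size_t>(std::size_t(t) * chunk, data.size());
        std::size_t end   = std::min(data.size(), begin + chunk);
        workers.emplace_back([&, begin, end, t] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}
```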

  9. Make Use of Thread-Safe Libraries Wherever Possible • Don’t reinvent the wheel, especially when it’s complicated. • Many libraries already take advantage of multicore processors • Intel Math Kernel Library (MKL) • Intel Integrated Performance Primitives (IPP) • Even more important—all library calls used should be thread-safe. Check the library documentation. • In your own libraries—routines should be reentrant.
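
To illustrate reentrancy (the functions are invented for this example): a routine that hides static state is not reentrant, while one that takes its state from the caller is. The C library pair strtok/strtok_r is the classic real-world case of the same distinction.

```cpp
// Not reentrant: hidden static state is shared by every caller, so two
// threads (or even nested calls) interfere with each other.
int nextIdUnsafe() {
    static int counter = 0;
    return ++counter;          // unsynchronised shared state
}

// Reentrant: all state is supplied by the caller, so each thread can use its
// own counter, or protect a genuinely shared one explicitly (lock or atomic).
int nextId(int& counter) {
    return ++counter;
}
```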

  10. Use the Right Threading Model • If threaded libraries are not sufficient and you need to manage your own threads, don’t use explicit threads when an implicit threading model will do. • OpenMP (data decomposition, threading loops running over large data sets) • Intel Threading Building Blocks • Keep it as simple as possible! • If you can’t use third-party libraries in the deliverable code, prototype with them first and then convert.
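
A minimal OpenMP sketch of implicit threading over a large data set (the saxpy kernel is illustrative): the pragma asks the runtime to decompose the loop across threads, and no thread objects are created by hand. Build with an OpenMP-enabled compiler flag such as -fopenmp (GCC/Clang) or /openmp (MSVC); without it the pragma is simply ignored and the loop runs serially.

```cpp
#include <vector>

// Implicit threading: the OpenMP runtime splits the independent iterations
// across however many threads are available.
void saxpy(std::vector<float>& y, const std::vector<float>& x, float a) {
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(y.size()); ++i)
        y[i] = a * x[i] + y[i];
}
```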

  11. Never Assume a Particular Order of Execution • The execution order of threads is non-deterministic. • There is no reliable way of predicting the ordering. If you assume an ordering, you will have data races, particularly when the hardware changes. • Let the threads run as fast as possible and design them to be unencumbered. • Synchronise only when necessary.
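
A small demonstration of non-deterministic ordering (illustrative only): the four threads below may print in any order from run to run, and the only synchronisation is the lock that keeps the output stream consistent.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::mutex coutLock;                 // protects std::cout only
    std::vector<std::thread> workers;
    for (int id = 0; id < 4; ++id)
        workers.emplace_back([id, &coutLock] {
            std::lock_guard<std::mutex> g(coutLock);
            std::cout << "thread " << id << " ran\n";   // order is unspecified
        });
    for (auto& w : workers) w.join();
}
```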

  12. Use Thread-Local Storage or Associate Locks to Specific Data • Synchronisation has a cost: don’t synchronise unless it’s needed for correctness. • Use thread-local storage or memory associated with specific threads. • Watch out for assumptions about the number of threads: don’t hard-code your design. • Avoid frequent updates to shared data. • If you must synchronise, use carefully designed locks, usually one-to-one with data structures or critical clumps of data. • Only one lock per data object. (Document it!)
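
A sketch of both ideas together, with invented names: each thread accumulates into thread-local storage and touches the shared structure, and its single documented lock, only once at the end.

```cpp
#include <map>
#include <mutex>
#include <string>
#include <vector>

// One lock per shared data structure, documented next to the data it guards.
struct Tally {
    std::mutex lock;                       // guards counts, and only counts
    std::map<std::string, int> counts;
};

// Thread-local storage: each thread accumulates privately and makes one
// brief, locked update to the shared Tally when it finishes.
void countWords(const std::vector<std::string>& words, Tally& shared) {
    thread_local std::map<std::string, int> local;
    local.clear();
    for (const auto& w : words) ++local[w];          // no synchronisation here
    std::lock_guard<std::mutex> g(shared.lock);      // one shared update
    for (const auto& [w, n] : local) shared.counts[w] += n;
}
```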

  13. Dare to Change the Algorithm for a Better Chance of Concurrency • The bottom line is execution time. • Analysis usually uses asymptotic performance (big-O notation, to be covered later). • However, the best serial algorithm may not be parallelisable. Then consider a suboptimal serial algorithm that you can parallelise. • Know where to find a good book on algorithms • Knuth (the ‘Bible’ of algorithm theory) • Sedgewick (3rd or 4th edition, 3rd edition is more advanced)
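
One hedged illustration of trading the best serial algorithm for a parallelisable one (a sketch, not an example from the texts): the obvious serial prefix sum is O(n) but every iteration depends on the previous one, while the block-based version below does roughly twice the arithmetic yet scans and then offsets each block in parallel.

```cpp
#include <algorithm>
#include <numeric>
#include <thread>
#include <vector>

// Inclusive prefix sum using block decomposition: more total work than the
// serial scan, but the per-block passes run concurrently.
std::vector<double> parallelScan(const std::vector<double>& in) {
    std::vector<double> out(in.size());
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (in.size() + n - 1) / n;

    // Pass 1: scan each block independently.
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t b = std::min(in.size(), std::size_t(t) * chunk);
        std::size_t e = std::min(in.size(), b + chunk);
        workers.emplace_back([&, b, e] {
            std::partial_sum(in.begin() + b, in.begin() + e, out.begin() + b);
        });
    }
    for (auto& w : workers) w.join();
    workers.clear();

    // Small serial step: running offset contributed by the earlier blocks.
    std::vector<double> offset(n, 0.0);
    for (unsigned t = 1; t < n; ++t) {
        std::size_t lastOfPrev = std::min(in.size(), std::size_t(t) * chunk);
        offset[t] = offset[t - 1] + (lastOfPrev ? out[lastOfPrev - 1] : 0.0);
    }

    // Pass 2: add the offsets, again one block per thread.
    for (unsigned t = 1; t < n; ++t) {
        std::size_t b = std::min(in.size(), std::size_t(t) * chunk);
        std::size_t e = std::min(in.size(), b + chunk);
        workers.emplace_back([&, b, e, t] {
            for (std::size_t i = b; i < e; ++i) out[i] += offset[t];
        });
    }
    for (auto& w : workers) w.join();
    return out;
}
```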

  14. Discussion
