Non-Blocking Concurrent Data Objects With Abstract Concurrency



  1. Non-Blocking Concurrent Data Objects With Abstract Concurrency • By Jack Pribble • Based on "A Methodology for Implementing Highly Concurrent Data Objects," by Maurice Herlihy

  2. Concurrent Object • A data structure shared by concurrent processes • Traditionally implemented using critical sections (locks) • In asynchronous systems, slow or halted processes can impede the progress of fast processes • Alternatives to critical sections include non-blocking implementations and the stronger wait-free implementations

  3. Non-Blocking Concurrency • Non-blocking: after a finite number of steps at least one process must complete • Wait-free: every process must complete after a finite number of steps • A system that is merely non-blocking is prone to starvation and thus should only be used when starvation is unlikely. • A wait-free system protects against starvation, so it should be used when some processes run slower than others.

  4. Methodology for Constructing Concurrent Objects • Data objects are implemented in a sequential fashion, adhering to certain conventions but with no explicit synchronization. • The sequential implementation cannot modify memory other than the concurrent object, and it must always leave the object in a legal state. • The sequential code is automatically transformed into concurrent code through synchronization and memory management techniques. • The transformation is simple enough for a compiler or preprocessor to handle.
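To make these conventions concrete, here is a minimal sketch of the kind of sequential operation the methodology starts from; the fixed-size heap layout and the names pqueue_type and pqueue_deq are assumptions chosen to match the code on slide 7.

    #define PQUEUE_SIZE 64

    typedef struct {
        int elements[PQUEUE_SIZE];   /* min-heap ordered array             */
        int size;                    /* number of elements currently used  */
    } pqueue_type;

    /* Remove and return the smallest element; -1 signals an empty queue.
     * The routine touches only the object it is passed, uses no locks, and
     * leaves the heap in a legal state on return, as the conventions require. */
    int pqueue_deq(pqueue_type *p)
    {
        if (p->size == 0)
            return -1;
        int result = p->elements[0];
        p->elements[0] = p->elements[--p->size];
        for (int i = 0; ; ) {                      /* sift the new root down */
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left  < p->size && p->elements[left]  < p->elements[smallest]) smallest = left;
            if (right < p->size && p->elements[right] < p->elements[smallest]) smallest = right;
            if (smallest == i) break;
            int tmp = p->elements[i];
            p->elements[i] = p->elements[smallest];
            p->elements[smallest] = tmp;
            i = smallest;
        }
        return result;
    }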

  5. Basic Concurrency Transformation • Each object holds a pointer to the current version of the object • Each process: • (1) Reads the pointer using load-linked • (2) Copies the indicated version of the object to a block of memory • (3) Applies the sequential operation to the copy of the object • (4) Uses store-conditional to swing the pointer from the old to the new version • If step 4 fails, the process restarts at step 1.
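A minimal sketch of this loop, assuming load_linked and store_conditional are available as primitives (as in the slide 7 code) and leaving out the consistency check that slide 6 adds; copy_object, apply_sequential_op, and the scratch block are placeholder names, not the paper's.

    typedef struct object object_type;        /* opaque stand-in for the data object */

    extern object_type *load_linked(object_type **ptr);
    extern int store_conditional(object_type **ptr, object_type *new_version);
    extern void copy_object(const object_type *from, object_type *to);
    extern void apply_sequential_op(object_type *obj);

    static object_type *scratch;              /* per-process block for the new version */

    void concurrent_op(object_type **shared)
    {
        for (;;) {
            object_type *old = load_linked(shared);    /* 1. read the current pointer */
            copy_object(old, scratch);                 /* 2. copy that version        */
            apply_sequential_op(scratch);              /* 3. sequential operation     */
            if (store_conditional(shared, scratch)) {  /* 4. swing the pointer        */
                scratch = old;                         /* recycle the displaced block */
                return;
            }
            /* store_conditional failed: another process installed a version first,
             * so restart from step 1. */
        }
    }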

  6. To ensure that a process is not accessing an incomplete state while another process updates the shared object, two version counters are used (check[0] and check[1]). • When a process modifies an object it updates check[0], does the modification, then updates check[1]. • When a process copies an object it reads check[1], copies the version, then reads check[0]. • The copy will only succeed if the modifying process has completed all modifications on the object, thus the object won't be left in an incomplete state.
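A self-contained sketch of both sides of this check. The versioned_object layout and the 64-word payload are illustrative assumptions; in the actual transformation the modifier is bracketing updates to its own new-version block while a possibly slow reader copies it.

    typedef struct {
        int data[64];          /* stand-in for the sequential object's state */
        unsigned check[2];     /* the two version counters described above   */
    } versioned_object;

    /* Modifier side: bump check[0] before touching the block, check[1] after. */
    void modify_version(versioned_object *obj)
    {
        obj->check[0]++;       /* a modification is now in progress */
        /* ... perform the copy and the sequential operation on obj->data ... */
        obj->check[1]++;       /* the modification is complete */
    }

    /* Reader side: read check[1], copy, then read check[0]; the copy is usable
     * only if the two match, i.e. no modification overlapped the copy. */
    int copy_version(const versioned_object *src, versioned_object *dst)
    {
        unsigned first = src->check[1];
        for (int i = 0; i < 64; i++)
            dst->data[i] = src->data[i];
        unsigned last = src->check[0];
        return first == last;  /* 1 = consistent copy, 0 = caller must retry */
    }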

  7. typedef struct {
         pqueue_type version;
         unsigned check[2];
     } Pqueue_type;

     static Pqueue_type *new_pqueue;

     int Pqueue_deq(Pqueue_type **Q)
     {
         Pqueue_type *old_pqueue;                  /* concurrent object */
         pqueue_type *old_version, *new_version;   /* sequential object */
         int result;
         unsigned first, last;
         while (1) {
             old_pqueue  = load_linked(Q);
             old_version = &old_pqueue->version;
             new_version = &new_pqueue->version;
             first = old_pqueue->check[1];         /* read before copying  */
             copy(old_version, new_version);
             last = old_pqueue->check[0];          /* read after copying   */
             if (first == last) {                  /* copy was consistent  */
                 result = pqueue_deq(new_version);
                 if (store_conditional(Q, new_version)) break;
             }
         }
         new_pqueue = old_pqueue;                  /* recycle the displaced block */
         return result;
     }

  8. Performance of a Simple Non-Blocking Implementation vs. a Simple Spin-Lock

  9. Performance with Backoff vs. Simple Spin-Lock and Spin-Lock with Backoff

  10. Wait-Free Implementation • Each process has an invocation structure, updated when it begins an operation, and a response structure, updated when its operation completes. • Invocation structure: the operation name, the argument value, and a toggle bit used to distinguish a new invocation from an old one • Response structure: the result value and a toggle bit
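Hypothetical C layouts for these two records; the field names and the int argument/result types are assumptions for illustration, not the paper's exact declarations.

    typedef struct {
        int      op_name;   /* which operation is being requested (e.g. enq or deq)  */
        int      arg;       /* the argument value                                     */
        unsigned toggle;    /* flipped on each new invocation so other processes can
                               tell a fresh request from an already-applied one       */
    } invocation_type;

    typedef struct {
        int      result;    /* result value of the completed operation                */
        unsigned toggle;    /* matches the toggle bit of the invocation it answers    */
    } response_type;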

  11. The concurrent object contains an array field called responses. • responses[P] records the result of the most recently completed operation of process P. • Processes share an array called announce. • Process P records its argument and operation name at announce[P] when starting a new operation, and it also complements the toggle bit.
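A sketch of how a process might post an operation and detect that it has been applied, assuming the invocation_type and response_type sketched after slide 10 and a fixed MAX_PROCS; the helping protocol that actually applies announced operations is omitted here.

    #define MAX_PROCS 32

    invocation_type announce[MAX_PROCS];   /* shared: each process's pending invocation */
    /* responses[] lives inside the concurrent object itself, one slot per process.     */

    void start_operation(int P, int op_name, int arg)
    {
        announce[P].op_name = op_name;
        announce[P].arg     = arg;
        announce[P].toggle ^= 1;           /* complement the toggle bit: a new request  */
    }

    int operation_applied(int P, const response_type responses[MAX_PROCS])
    {
        /* P's operation is complete once some process (possibly P itself) has written
         * a response whose toggle bit matches P's current announcement. */
        return responses[P].toggle == announce[P].toggle;
    }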

  12. Performance of Non-Blocking with Backoff vs. Wait-Free with Backoff

  13. Large Concurrent Objects • Cannot be copied in a single block • Represented by a set of blocks linked by pointers • The programmer is responsible for determining which blocks of the object need to be copied • The less of the object that must be copied, the better the code that operates on it performs
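As an illustration of copying only the blocks an operation actually changes, here is a sketch of a list-shaped object whose nodes are treated as immutable once linked in; a push builds its new version from one fresh block and shares the rest. The names are illustrative, not from the paper.

    typedef struct node {
        int          value;
        struct node *next;     /* shared with older versions, never modified in place */
    } node_type;

    typedef struct {
        node_type *head;       /* the only block a push has to replace */
    } list_type;

    /* Build the new version: link one new block in front of the old version's
     * nodes instead of copying the whole list. */
    list_type push_version(const list_type *old, int value, node_type *fresh_block)
    {
        fresh_block->value = value;
        fresh_block->next  = old->head;    /* reuse every existing block */
        list_type new_version = { fresh_block };
        return new_version;
    }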
