
Adaptive Functional Programming

This paper discusses the concept of adaptive functional programming and its application to solving the problem of computing the convex hull of moving points. It explores the use of dependency graphs and change propagation to optimize computations. The work is a collaboration between Robert Harper, Umut Acar, and Guy Blelloch from Carnegie Mellon University.


Presentation Transcript


  1. Adaptive Functional Programming Robert Harper Carnegie Mellon University (Joint work with Umut Acar and Guy Blelloch.)

  2. Convex Hull of Moving Points. Convex Hull = [a,b,c,d]. [Figure: eight moving points a–h and their hull.]

  3. Convex Hull of Moving Points. Convex Hull = [a,b,c,d]. [Figure: point g shown in its old and new positions; the hull is unchanged so far.]

  4. Convex Hull of Moving Points. Convex Hull = [a,g,b,c,d]. [Figure: after g's move, g joins the hull.]

  5. Adaptive Computing • Run a program on some data • Re-run on “slightly perturbed” data • Obtain the result faster than a complete rerun

  6. Previous Work • Atallah '85: Dynamic Computational Geometry • Eppstein, Galil, Italiano '99: Dynamic Graph Algorithms • Basch, Guibas, Hershberger '97: Data Structures for Mobile Data • Demers, Reps, Teitelbaum '81; Reps '82: Language-Based Editors (LBEs) • Hoover '87: Incremental Graph Evaluation • Pugh and Teitelbaum '87: Function Caching • Sundaresh and Hudak '91: Partial Evaluation • Liu and Teitelbaum, late '90s: Systematic Derivation • (and lots of others)

  7. Approaches • Dependency Graphs (DGs) • Build a DG to represent the computation (a "circuit") • When the input changes, perform a modify step and a propagate step • Modify Step: Add and delete dependencies • Propagate Step: Perform Change Propagation • Memoization • Cache results • Consult the cache before a function call
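The memoization approach can be sketched in Python (a toy stand-in, not the authors' ML code; `memo_call` and the `length` example are illustrative names):

```python
# Toy sketch of the memoization approach: cache results, and consult
# the cache before each function call.  (Illustrative only; real
# function caching must also hash arguments structurally.)

cache = {}

def memo_call(f, *args):
    key = (f.__name__, args)
    if key not in cache:          # consult the cache before calling
        cache[key] = f(*args)
    return cache[key]

def length(xs):
    """Recursive list length, with memoized recursive calls."""
    if not xs:
        return 0
    return 1 + memo_call(length, xs[1:])

print(memo_call(length, (1, 2, 3)))   # 3
# A slightly perturbed input shares the suffix (2, 3), which hits the cache.
print(memo_call(length, (9, 2, 3)))   # 3
```

Re-running on the perturbed input recomputes only the head call; the shared suffix is served from the cache.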

  8. Example: Dependency Graphs. fun f (a,b,c) = let u = a+b in if u > 0 then 1/(b+c) else u end [Figure: DG for inputs a = 1, b = 2, c = 3; nodes u (a+b = 3), v (b+c = 5), x (1/v = 0.2), and r (if u > 0 then x else u = 0.2).]

  9. Modify. [Figure: input a changed from 1 to 2; graph values not yet updated (a+b = 3, b+c = 5, 1/v = 0.2, r = 0.2).]

  10. Propagate. [Figure: after propagation, u = a+b = 4; b+c = 5, 1/v = 0.2, and r = 0.2 are unchanged.]

  11. Modify. [Figure: starting again from a = 1, b = 2, c = 3, the inputs are changed so that a+b = 0 and b+c = 0; graph values still stale (a+b = 3, b+c = 5, r = 0.2).]

  12. Propagate: Divide by 0 Exception. [Figure: propagation recomputes a+b = 0 and b+c = 0, so 1/v is undefined; the static scheme raises a divide-by-zero exception even though the program itself would take the else branch and simply return u = 0.]

  14. Modify: Delete Dependencies. [Figure: before and after graphs; the dependencies through v = b+c and x = 1/v are deleted, leaving only u = a+b and r = if u > 0 then x else u.]

  15. Propagate. [Figure: after propagation, u = a+b = 0 and r = u = 0; no division is attempted.]
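The static-DG walk-through above (slides 8–15) can be sketched in Python. This is an illustrative toy, not the paper's code: `Node`, `make_graph`, and `change_and_propagate` are assumed names, and the edges are fixed in advance, read off the program text.

```python
# Illustrative static dependency graph for
#   fun f (a,b,c) = let u = a+b in if u > 0 then 1/(b+c) else u end
# Change propagation re-runs the closure on each out-edge of a changed
# node, cascading whenever a target's value actually changes.

class Node:
    def __init__(self, value=None):
        self.value = value
        self.out_edges = []        # (closure, target) pairs

def make_graph(a0, b0, c0):
    a, b, c = Node(a0), Node(b0), Node(c0)
    u, v, x, r = Node(), Node(), Node(), Node()

    def calc_u(): u.value = a.value + b.value
    def calc_v(): v.value = b.value + c.value
    def calc_x(): x.value = 1 / v.value
    def calc_r(): r.value = x.value if u.value > 0 else u.value

    # static edges, determined once from the program text
    a.out_edges = [(calc_u, u)]
    b.out_edges = [(calc_u, u), (calc_v, v)]
    c.out_edges = [(calc_v, v)]
    u.out_edges = [(calc_r, r)]
    v.out_edges = [(calc_x, x)]
    x.out_edges = [(calc_r, r)]

    for f in (calc_u, calc_v, calc_x, calc_r):   # initial run
        f()
    return a, b, c, r

def change_and_propagate(node, value):
    node.value = value
    work = [node]
    while work:
        n = work.pop(0)
        for closure, target in n.out_edges:
            old = target.value
            closure()
            if target.value != old:
                work.append(target)

a, b, c, r = make_graph(1, 2, 3)
print(r.value)                 # 0.2, i.e. 1/(2+3)
change_and_propagate(b, -2)    # now u = -1, so r = u
print(r.value)                 # -1
# Changing c so that b+c = 0 would recompute 1/(b+c) and raise
# ZeroDivisionError -- even though f itself would just return u.
# This is exactly the weakness slides 11-12 illustrate.
```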

  16. Static DGs are not General • Static DGs remain the same during propagation • Determined directly from the program text. • Change propagation follows static dependencies. • Relies on some form of program analysis. • Dynamic DGs are modified during propagation. • Execution reveals the true dependencies. • Avoids problems by considering only actual dependencies. • No need for program analysis; always maintains true dependencies. • Our approach is based on dynamic DGs.

  17. Dynamic Dependency Graphs • The dependency graph is created during the initial execution. • Records which computations depend on which others. • Change propagation maintains an accurate DG. • Changes can invalidate old dependencies and create new dependencies. • Works for general functional programs. • No limitations on programming style. • Supports recursive types and recursive functions with no restrictions.

  18. Dynamic DG Example. fun f (a,b,c) = let u = a+b in if (u > 0) then g(u) else g(b+c) end [Figure: inputs a = 1, b = 2, c = 3; the a+b edge (closure 1+b) yields u = 3, the conditional edge calls g(u), and r = 0.33.]

  19. Dynamic DG Example. [Figure: input a changed to -2; graph not yet updated (u = 3, r = 0.33).]

  20. Dynamic Change Propagation. [Figure: re-running the a+b edge records the closure -2+b and u = 0; the conditional re-executes and takes the else branch, creating v = 5 and a g(v) edge; r = 0.5.]

  21. Dynamic Change Propagation. [Figure: the new b+c edge (closure 2+c) is recorded and the obsolete g(u) edge is deleted; r = 0.5.]

  22. Challenges Towards Dynamic DGs • How to build the initial DG? • How to determine what components to add? • How to determine what components to delete? • The major challenge is how to solve these problems efficiently.

  23. Execution Creates Dependencies • Each read creates an edge • Each edge is tagged with a closure • Example: Let a = 1, b = 2, and u = a+b. [Figure: execution trace — reading a produces the residual closure u = 1+b; reading b then computes u = 1+2 = 3; the DG has edges a→u (closure a+b) and b→u (closure 1+b), with node u = 3.]

  24. Re-execution Adds Edges • Re-run an edge = Re-execute its closure. • Source node is the input. • Target node receives the result. • Re-execution adds new edges to the DG. • If branch outcomes change, then dependencies change. • These are created just as for the initial execution. • Re-execution also deletes edges. • Obsolete dependencies must be eliminated. • Which ones to delete?

  25. Delete Contained Edges • An edge e is contained in another edge e' if e is created during the execution of e' • Before re-running an edge, delete all of the edges contained within it. • They might be re-created, if the dependencies do not change. • Execution determines the new dependencies. • How can we test containment efficiently? • Idea: use a time interval to represent the two “ends” of an edge. • Edge containment = containment of time intervals

  26. Containment and Intervals. [Figure: time-stamped execution of f(1,2,3) — step 1: u = a+b (interval 1-3); step 2: u = 1+b (closure interval 2-3); step 3: u = 1+2 = 3; step 4: r = if u>0 then g(u) else g(b+c) (interval 4-15); step 5: r = g(u) (interval 5-6); steps 6-15: the rest of the computation of r (final write in interval 14-15).]
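The interval containment test can be sketched in a few lines of Python, using the time stamps shown in the slide-26 trace (an illustrative sketch, not the paper's implementation):

```python
# Each edge's execution is stamped with a (start, stop) interval;
# edge e is contained in edge e' exactly when e's interval lies
# strictly inside e''s interval.

def contains(outer, inner):
    """True when `inner` falls strictly inside `outer`."""
    return outer[0] < inner[0] and inner[1] < outer[1]

# Intervals from the trace of f(1,2,3):
a_plus_b = (1, 3)     # the edge computing u = a+b
branch   = (4, 15)    # the edge for `if u > 0 then g(u) else g(b+c)`
g_of_u   = (5, 6)     # an edge created while the branch was running

print(contains(branch, g_of_u))    # True: g(u) ran inside the branch
print(contains(branch, a_plus_b))  # False: a+b ran before the branch
```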

  27. Dynamic Change Propagation. [Figure: the dynamic DG with interval labels — a+b (1-3), 1+b (2-3), u = 3, the conditional (4-15), g(u) (5-6), final edge (14-15).]

  28. Dynamic Change Propagation. [Figure: input a changed from 1 to -2; intervals and values unchanged so far.]

  30. Dynamic Change Propagation. [Figure: before re-running the a+b edge (interval 1-3), its contained edge 1+b (interval 2-3) is deleted.]

  31. Dynamic Change Propagation. [Figure: re-running a+b records the new closure -2+b; u = 0.]

  32. Dynamic Change Propagation. [Figure: since u changed, the conditional edge (4-15) must re-run; its contained edges, including g(u), are deleted first.]

  33. Dynamic Change Propagation. [Figure: the else branch executes inside the old interval 4-15 — new edges b+c (5-7) and 2+c (6-7) create v = 5; g(v) runs at 8-8.5 and the final write at 14.5-15, using fractional time stamps squeezed into the old interval.]

  34. Dynamic Change Propagation • The new computation takes place in the same interval as the old! • Change propagation “replaces” the then branch with the else branch. • There is no assurance that each branch takes the same number of steps! • How can we cram more instructions into the same time interval? • This is a central algorithmic problem for our approach.

  35. How to Maintain the Intervals • The “obvious” approach is to use real numbers. • Infinitely divisible into sub-intervals. • But requires unbounded precision, in general! • (Floats won't work.) • Instead we use linearly-ordered time stamps. • Maintain a linear ordering of stamps in an Order-Maintenance Structure • Dietz-Sleator: worst-case constant time for insertion, deletion, and comparison. • Containment: [a,b] is contained in [c,d] iff c < a and b < d
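A naive stand-in for the order-maintenance structure, to illustrate the interface only: the real Dietz-Sleator structure supports all three operations in worst-case constant time, whereas this Python-list version costs O(n) per operation, and the names `OrderMaintenance`, `new_after`, and `precedes` are assumptions.

```python
# Time stamps kept in order in a Python list.  A fresh stamp can always
# be inserted between two existing ones -- the property real numbers
# would give us, but without the unbounded-precision problem.

class OrderMaintenance:
    def __init__(self):
        self._stamps = []

    def new_after(self, stamp=None):
        """Create a fresh stamp just after `stamp` (or at the front)."""
        fresh = object()
        pos = 0 if stamp is None else self._stamps.index(stamp) + 1
        self._stamps.insert(pos, fresh)
        return fresh

    def precedes(self, s1, s2):
        return self._stamps.index(s1) < self._stamps.index(s2)

om = OrderMaintenance()
t1 = om.new_after()
t3 = om.new_after(t1)
t2 = om.new_after(t1)          # squeeze a stamp between t1 and t3
print(om.precedes(t1, t2), om.precedes(t2, t3))   # True True
```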

  36. Linguistic Support • We consider two ways of writing adaptive programs. • In ML, using an adaptivity library. • In AFL, with primitive adaptivity mechanisms. • Both approaches support selective adaptivity. • Programmer determines which aspects are adaptive. • Reduces overhead, increases expressive power. • Two modes of expression: • Changeable: Sensitive to modifications. • Stable: Insensitive to modifications.

  37. Modifiable References • Write-once reference cell of arbitrary type. • Type: t mod • Created using mod(e) • A modifiable reference is a stable value! • Changeable expressions write their values into a modifiable reference. • The “destination” of that changeable. • Any expression that depends on a changeable expression must read the associated modifiable. • Establishes dependency and re-execution code.

  38. Modifiables and Changeables • Changeables communicate through modifiables. [Figure: changeable EXP1 writes v into modifiable a; EXP2 reads a and writes v' into modifiable b.]

  39. Modifiables and Dynamic DGs • Modifiables = Nodes • Initialized by a changeable expression. • Reads = Edges • Labeled with re-execution code. • Time stamp reads and writes • Records time intervals for change propagation.

  40. The ML Interface • Here is a (slightly simplified) ML interface:

signature ADAPTIVE =
sig
  type 'a mod
  type 'a dest
  type chg
  val mod       : ('a dest -> chg) -> 'a mod
  val read      : 'a mod -> ('a -> chg) -> chg
  val write     : 'a dest * 'a -> unit
  val change    : 'a mod * 'a -> unit
  val init      : unit -> unit
  val propagate : unit -> unit
end
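The programming model behind this signature can be mimicked in Python. This is a toy: the function names mirror the ML interface, but the sketch omits time stamps, the priority queue, and deletion of contained edges, which the real library needs for correctness and efficiency.

```python
# Toy model of the ADAPTIVE interface.  A modifiable is a write-once
# cell plus the read-continuations that depend on it; `change` marks
# dependent reads dirty, and `propagate` re-runs them.

dirty = []                       # (modifiable, continuation) pairs

class Mod:
    def __init__(self):
        self.value = None
        self.readers = []        # continuations to re-run on change

def mod(init):
    """Create a modifiable; `init` fills it via its destination."""
    m = Mod()
    init(m)                      # here the destination is m itself
    return m

def read(m, k):
    """Apply `k` to m's value now, and again whenever m changes."""
    m.readers.append(k)
    k(m.value)

def write(dest, v):
    if dest.value != v:
        dest.value = v
        dirty.extend((dest, k) for k in dest.readers)

def change(m, v):
    write(m, v)

def propagate():
    while dirty:
        m, k = dirty.pop(0)
        k(m.value)

# r always holds a+1, and s always holds 2*(a+1).
a = mod(lambda d: write(d, 1))
r = mod(lambda d: read(a, lambda v: write(d, v + 1)))
s = mod(lambda d: read(r, lambda v: write(d, v * 2)))
print(r.value, s.value)          # 2 4
change(a, 10)
propagate()
print(r.value, s.value)          # 11 22
```

Note how `read` both establishes the dependency edge and registers the re-execution code, exactly as slide 39 describes.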

  41. The ML Interface • The ML interface requires run-time checks. • Enforce the write-once policy (at least once, at most once). • Ensure write-before-read. • Cannot easily ensure acyclic dependencies. • Could check at run time, but too inefficient. • Easily implemented in ML. • Only 100 lines of code for the adaptivity mechanism. • Uses priority queues and the Dietz-Sleator order-maintenance library. • Supports experimentation.

  42. Standard Quick Sort

datatype 'a list = NIL | CONS of ('a * 'a list)

fun qsort (l: 'a list) =
  let
    fun qs (l, rest) =
      case l of
        NIL => rest
      | CONS (h, t) =>
          let
            val less    = filter (fn x => x < h) t
            val bigger  = filter (fn x => x >= h) t
            val bigger' = qs (bigger, rest)
          in
            qs (less, CONS (h, bigger'))
          end
  in
    qs (l, NIL)
  end

  43. Adaptive Quick Sort

datatype 'a modlist = NIL | CONS of ('a * 'a modlist mod)

fun qsort (l: 'a modlist mod) =
  let
    fun qs (l, rest, d) =
      read (l, fn l' =>
        case l' of
          NIL => write (d, rest)
        | CONS (h, t) =>
            let
              val less    = filter (fn x => x < h) t
              val bigger  = filter (fn x => x >= h) t
              val bigger' = mod (fn d => qs (bigger, rest, d))
            in
              qs (less, CONS (h, bigger'), d)
            end)
  in
    mod (fn d => qs (l, NIL, d))
  end

  44. Changeable and Stable Expressions. [Same code as slide 43, with the changeable constructs (read, write, mod) highlighted to distinguish the changeable parts from the stable parts.]

  45. Adaptation

- val l = [1,0,3];
- val (modlist, last) = toModifiableList (l);
- val r = qsort' (modlist);                   (* [0,1,3] *)
(* Extend the list with the key 2 *)
- val _ = change (last, CONS (2, mod (write NIL)));
- val _ = propagate ();                       (* [0,1,2,3] *)

  46. Performance • Theorem: Adaptive Quick Sort updates its output in O(log n) average time upon an extension of its input by one new key. • Caveat: The logarithmic bound does not hold for insertions in the middle of the list! • To handle those requires memoization, which we are currently developing. • Related to function caching in Pugh et al.'s work.

  47. Modal Type System for Adaptivity • Modal type system based on Pfenning/Davies '99 • Two expression modes: Changeable and Stable • Modality: 'a mod • Properties of the type system: • Write each modifiable exactly once • No read before write • No lost dependencies • No cyclic dependencies

  48. Modal Types for Adaptivity • Two typing judgements: • Expression e is a stable expression of type t • Expression e is a changeable expression of type t
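The slide's judgement formulas did not survive transcription; one plausible rendering, with subscripts s and c marking the stable and changeable modes (the exact notation of the paper may differ), is:

```latex
% e is a stable expression of type \tau:
\Gamma \vdash_{\mathsf{s}} e : \tau
% e is a changeable expression of type \tau:
\Gamma \vdash_{\mathsf{c}} e : \tau
```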

  49. Some Typing Rules • Introduction (modality) • Elimination (read)
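The rule images were lost in transcription; rules along the following lines, reconstructed from the mod/read descriptions on the earlier slides (notation assumed, not verbatim from the paper):

```latex
% Introduction: a changeable expression is packaged as a stable modifiable
\frac{\Gamma \vdash_{\mathsf{c}} e : \tau}
     {\Gamma \vdash_{\mathsf{s}} \mathtt{mod}\ e : \tau\ \mathtt{mod}}

% Elimination: reading a modifiable binds its value in a changeable body
\frac{\Gamma \vdash_{\mathsf{s}} e : \tau\ \mathtt{mod}
      \qquad \Gamma, x{:}\tau \vdash_{\mathsf{c}} e' : \tau'}
     {\Gamma \vdash_{\mathsf{c}} \mathtt{read}\ e\ \mathtt{as}\ x\ \mathtt{in}\ e' : \tau'}
```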

  50. Some Typing Rules • Inclusion (write) • Some other constructs (see paper): • Stable and changeable conditionals. • Sequencing of stable within changeable. • Changeable and stable function spaces (for efficiency).
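The write rule's image was likewise lost; a plausible reconstruction in the same assumed notation:

```latex
% Inclusion: writing a stable value yields a changeable expression
\frac{\Gamma \vdash_{\mathsf{s}} e : \tau}
     {\Gamma \vdash_{\mathsf{c}} \mathtt{write}\ e : \tau}
```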
