
Component Approach to Distributed Multiscale Simulations




  1. Component Approach to Distributed Multiscale Simulations. Katarzyna Rycerz(1,2), Marian Bubak(1,3). (1) AGH University of Science and Technology, Institute of Computer Science, Mickiewicza 30, 30-059 Kraków, Poland; (2) ACC Cyfronet AGH, ul. Nawojki 11, 30-950 Kraków, Poland; (3) University of Amsterdam, Institute for Informatics, Amsterdam, The Netherlands

  2. Outline • Requirements of multiscale simulations • Motivation for a component model for such simulations • HLA-based component model: idea, design challenges and solutions • Experiment with Multiscale Multiphysics Scientific Environment (MUSE) • Execution in GridSpace VL (demo) • Summary

  3. Multiscale Simulations • Consist of modules of different scales • Examples: the virtual physiological human initiative, reacting gas flows, capillary growth, colloidal dynamics, stellar systems and many more... • Pictured example: the recurrence of stenosis, a narrowing of a blood vessel leading to restricted blood flow

  4. Multiscale Simulations - Requirements • Actual connection of two or more models together • obeying the laws of physics (e.g. conservation laws) • advanced time management: ability to connect modules with different time scales and internal time management (see the sketch below) • support for connecting models of different space scales • Composability and reusability of existing models of different scales • finding the existing models needed and connecting them either together or to new models • ease of plugging models into and unplugging them from a running system • standardized model connections + many users sharing their models = more chances for general solutions
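A minimal sketch (in Python; the module interfaces step, send and receive are hypothetical, not taken from any of the frameworks discussed below) of why coupling modules with different time scales requires explicit time management: the finer-scale module must take several steps per coarse step, and data can only be exchanged at agreed synchronization points.

    def run_coupled(macro, meso, t_end, dt_macro, dt_meso):
        """Advance a coarse (macro) and a fine (meso) module to t_end.

        The meso module takes dt_macro/dt_meso steps per macro step;
        data is exchanged only at the shared synchronization points.
        """
        t = 0.0
        while t < t_end:
            macro.step(dt_macro)                # one coarse step
            for _ in range(int(round(dt_macro / dt_meso))):
                meso.step(dt_meso)              # many fine steps per coarse step
            meso.receive(macro.send())          # exchange data at the sync point
            t += dt_macro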

  5. Motivation • To wrap simulations into recombinant components that can be selected and assembled in various combinations to satisfy the requirements of multiscale simulations • mechanisms specific to distributed multiscale simulation • adaptation of one of the existing solutions for distributed simulations – our choice – the High Level Architecture (HLA) • support for long-running simulations - setup and steering of components should also be possible during runtime • possibility to wrap legacy simulation kernels into components • Need for an infrastructure that facilitates cross-domain exchange of components among scientists • need for support for the component model • using Grid solutions (e-infrastructures) for crossing administrative domains

  6. Related Work • Model Coupling Toolkit • message passing (MPI) style of communication between simulation models • domain data decomposition of the simulated problem • support for advanced data transformations between different models • J. Larson, R. Jacob, E. Ong, "The Model Coupling Toolkit: A New Fortran90 Toolkit for Building Multiphysics Parallel Coupled Models", Int. J. High Perf. Comp. App., 19(3), 277-292, 2005. • Multiscale Multiphysics Scientific Environment (MUSE), now AMUSE, The Astrophysical Multi-Scale Environment • a scripting approach (Python) is used to couple models together (see the sketch after this list) • models include: stellar evolution, hydrodynamics, stellar dynamics and radiative transfer • S. Portegies Zwart, S. McMillan, et al., "A Multiphysics and Multiscale Software Environment for Modeling Astrophysical Systems", New Astronomy, 14(4), 369-378, 2009. • The Multiscale Coupling Library and Environment (MUSCLE) • a software framework to build simulations according to the complex automata theory • concept of kernels that communicate by unidirectional pipelines dedicated to passing a specific kind of data from/to a kernel (asynchronous communication) • J. Hegewald, M. Krafczyk, et al., "An agent-based coupling platform for complex automata", ICCS, LNCS vol. 5102, 227-233, Springer, 2008.
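To illustrate the scripting style of coupling used by MUSE, a hedged Python sketch follows; the method names (evolve, update_masses, masses) are invented for illustration and are not the actual MUSE/AMUSE API.

    def couple(evolution, dynamics, t_end, dt):
        """Drive two modules in lock-step from a Python script and
        pass mass updates from stellar evolution to stellar dynamics."""
        t = 0.0
        while t < t_end:
            evolution.evolve(t + dt)                    # macro-scale module
            dynamics.evolve(t + dt)                     # meso-scale N-body module
            dynamics.update_masses(evolution.masses())  # one-way data flow
            t += dt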

  7. Why High Level Architecture (HLA)? • Introduces the concept of simulation systems (federations) built from distributed elements (federates) – see the federate life-cycle sketch below • Supports joining models of different time scales - ability to connect simulations with different internal time management in one system • Supports data management (publish/subscribe mechanism) • Separates the actual simulation from the communication between federates • Partial support for interoperability and reusability (Simulation Object Model (SOM), Federation Object Model (FOM), Base Object Model (BOM)) • Well-known IEEE standard (IEEE 1516), including the Object Model Template (OMT) • Reference implementation – HLA Runtime Infrastructure (HLA RTI) • Open source implementations available – e.g. CERTI, ohla
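The sketch below (Python; a simplified pseudo-RTI interface, not the actual IEEE 1516 API) shows the life cycle of a federate: it joins a federation, declares what it publishes and subscribes to, and advances time only when the RTI grants it.

    def run_federate(rti, sim, t_end):
        rti.create_federation("muse")            # first federate creates it
        rti.join_federation("muse", "dynamics")  # later federates simply join
        rti.publish("Star", ["mass"])            # declare produced data
        rti.subscribe("Star", ["mass"])          # declare consumed data
        rti.enable_time_constrained()            # obey other federates' clocks

        t = 0.0
        while t < t_end:
            t = rti.time_advance_request(t + sim.dt)  # blocks until granted
            sim.step_to(t)                            # advance the local model
            for update in rti.received_updates():     # reflect subscribed data
                sim.apply(update)

        rti.resign_federation()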

  8. HLA Component Model • The model differs from common component models (e.g. CCA) – no direct connections, no remote procedure calls (RPC) • Components run concurrently and communicate using HLA mechanisms • Components use HLA facilities (e.g. time and data management) • Differs from the original HLA mechanism: • interactions can be dynamically changed at runtime by a user • a change of state is triggered from outside of any federate [Diagram: the CCA model (directly connected components) vs. the HLA model (components communicating indirectly via the RTI)]

  9. HLA Components Design Challenges • Transfer of control between many layers: • requests from the Grid layer outside the component • the simulation code layer • the HLA RTI layer • The component should be able to efficiently process concurrently: • the actual simulation, which communicates with other simulation components via the RTI layer • external requests changing the state of the simulation in the HLA RTI layer: start/stop, join/resign, set time policy, publish/subscribe (a sketch of this external interface follows) [Diagram: an HLA component on the Grid platform (H2O) – external requests enter through the Grid layer and reach the CompoHLA library, which sits between the simulation code and the HLA RTI]
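A sketch (Python; names are illustrative, not the actual CompoHLA API) of the external, Grid-facing interface such a component exposes; each request is forwarded either to the wrapped simulation kernel or to the HLA RTI layer.

    class HLAComponent:
        """Wraps a legacy simulation kernel; requests arrive from the
        Grid layer (H2O) while the simulation talks to the HLA RTI."""

        def __init__(self, kernel, compo_hla):
            self.kernel = kernel   # legacy simulation code
            self.hla = compo_hla   # bridge to the HLA RTI layer

        # external requests, callable at runtime from outside the federate
        def join(self, federation):      self.hla.join_federation(federation)
        def resign(self):                self.hla.resign_federation()
        def set_time_policy(self, p):    self.hla.set_time_policy(p)
        def publish(self, obj, attrs):   self.hla.publish(obj, attrs)
        def subscribe(self, obj, attrs): self.hla.subscribe(obj, attrs)
        def start(self):                 self.kernel.start()
        def stop(self):                  self.kernel.stop()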

  10. HLA RTI Concurrent Access Control • Uses the concurrent access exception handling available in HLA • Transparent to the developer • Synchronous mode - requests are processed as they come; the simulation runs in a separate thread (see the sketch below) • Dependent on the implementation of concurrency control in the HLA RTI used • Concurrency difficult to handle effectively • e.g. starvation of requests, which causes overhead in simulation execution [Diagram: external requests go through the CompoHLA library straight to the HLA RTI, which performs the concurrent access control]
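A sketch (Python; the exception name and interfaces are illustrative stand-ins for the concurrent-access error a real HLA RTI reports) of the synchronous mode: the simulation runs in its own thread and each external request is retried until the RTI accepts it, which is simple but can starve requests under load.

    import threading, time

    class ConcurrentAccessAttempted(Exception):
        """Stand-in for the RTI's concurrent-access error."""

    class SyncComponent:
        def __init__(self, rti, sim):
            self.rti, self.sim = rti, sim
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while self.sim.running:
                self.sim.step()               # talks to the RTI internally

        def handle_request(self, request):
            while True:                       # naive retry: may starve
                try:
                    return request(self.rti)  # e.g. change the time policy
                except ConcurrentAccessAttempted:
                    time.sleep(0.01)          # back off and try again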

  11. Advanced Solution - Use the Active Object Pattern • Requires calling only a single routine in the simulation loop • Asynchronous mode - separates invocation from execution • Requests are processed when the scheduler is called from the simulation loop (see the sketch below) • Independent of the behavior of the HLA implementation • Concurrency easy to handle • JNI used for communication between the simulation code, the scheduler and the CompoHLA library [Diagram: external requests are put into a queue; a scheduler inside the component executes them between simulation steps, on top of the CompoHLA library and the HLA RTI]
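A sketch of the Active Object pattern in Python (names are illustrative): external requests are queued immediately (invocation) and executed only when the simulation loop calls the scheduler (execution), so the RTI is never accessed concurrently.

    import queue

    class Scheduler:
        def __init__(self):
            self._requests = queue.Queue()

        def submit(self, request):
            """Called from the Grid layer; returns immediately."""
            self._requests.put(request)

        def process_pending(self, rti):
            """The single routine the simulation loop must call."""
            while not self._requests.empty():
                self._requests.get()(rti)    # run the request against the RTI

    def simulation_loop(sim, rti, scheduler, t_end):
        t = 0.0
        while t < t_end:
            t = rti.time_advance_request(t + sim.dt)
            sim.step_to(t)
            scheduler.process_pending(rti)   # safe point: no concurrent access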

  12. Interactions between Components • Modules taken from the Multiscale Multiphysics Scientific Environment (MUSE) • Multiscale simulation of dense stellar systems • Two modules of different time scales: • stellar evolution (macro scale) • stellar dynamics - an N-body simulation (meso scale) • Data management • masses of changed stars are sent from evolution (macro scale) to dynamics (meso scale) • no data is needed from dynamics to evolution • the data flow affects the whole dynamics simulation • Dynamics takes more steps than evolution to reach the same point of simulation time • Time management - the regulating federate (evolution) regulates the progress in time of the constrained federate (dynamics) • The maximal point in time which the constrained federate can reach at a certain moment (LBTS, Lower Bound on Time Stamp) is calculated dynamically according to the position of the regulating federate on the time axis (illustrated in the sketch below)
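A worked sketch in Python of this conservative time management (the LBTS computation is simplified and zero lookahead is assumed): evolution, the regulating federate, advances in large macro steps, and dynamics, the constrained federate, may never advance beyond the LBTS derived from evolution's position on the time axis.

    def lbts(regulating_time, lookahead):
        """Lower Bound on Time Stamp the constrained federate may reach."""
        return regulating_time + lookahead

    t_evo, dt_evo, lookahead = 0.0, 10.0, 0.0   # macro scale (regulating)
    t_dyn, dt_dyn = 0.0, 1.0                    # meso scale (constrained)

    while t_evo < 30.0:
        t_evo += dt_evo                          # evolution takes one macro step
        while t_dyn + dt_dyn <= lbts(t_evo, lookahead):
            t_dyn += dt_dyn                      # dynamics takes many meso steps
        # masses of changed stars flow from evolution to dynamics here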

  13. Experiment Results • H2O v2.1 as the Grid platform and HLA CERTI v3.2.4 (open source) • Experiment run on DAS-3 grid nodes in: • Delft (MUSE sequential version and the dynamics component) • Amsterdam UvA (the evolution component) • Leiden (the component client) • Amsterdam VU (the RTIexec control process) • Concurrent execution, conservative approach, of dynamics and evolution as HLA components: total time 18.3 sec • Pure calculations of the more computationally intensive (dynamics) component: 17.6 sec • Component architecture overhead: request processing (through the Grid and component layers) 4-14 msec depending on request type; request realisation (scheduler) 0.6 sec • HLA-based distribution overhead: synchronization with the evolution component 7 msec • Detailed results - in the paper

  14. HLA Components in GridSpace VL Demo

  15. Demo experiment – allocation of resources • The user runs a GridSpace Ruby script (snippet 1) which submits a PBS job to allocate nodes and start the H2O kernels [Diagram: user → GridSpace Ruby script (snippet 1) → PBS run job (start H2O kernel) → H2O kernels on node A and node B]

  16. Demo experiment – simulation setup • A JRuby script (snippet 2) creates the components and: • asks selected components to join the simulation system (federation) • asks them to publish or subscribe to data objects (stars) • asks the components to set their time policy (dynamics becomes constrained, evolution becomes regulating) • determines where output/error streams should go [Diagram: user → GridSpace Ruby script (snippet 1) → JRuby script (snippet 2) → Dynamics and Evolution HLA Components in H2O kernels on nodes A and B, communicating via HLA]

  17. Demo experiment – execution • Asks the components to start (snippet 3) • Alters the time policy at runtime - unsets the constrained and regulating policies (snippet 4) • Asks the components to stop • Star data objects are exchanged between the components, and output/error streams are returned to the user [Diagram: user → JRuby scripts (snippets 3 and 4) → start/stop and time-policy requests to the Dynamics and Evolution HLA Components on nodes A and B; Star data objects flow over HLA; dynamics view and evolution view show out/err]

  18. Demo experiment – cleaning up • A Ruby script (snippet 5) deletes the PBS job, which stops the H2O kernels and releases the nodes [Diagram: user → GridSpace Ruby script (snippets 1 and 5) → PBS delete job (stop H2O kernels) → node A and node B]

  19. Recorded demo: HLA Components in GridSpace VL

  20. Summary • The presented HLA component model enables the user to dynamically compose/decompose distributed simulations from multiscale elements residing on the Grid • The architecture of the HLA component supports steering of interactions with other components during simulation runtime • The presented approach differs from that in the original HLA, where all decisions about actual interactions are made by the federates themselves • The functionality of the prototype is shown on the example of a multiscale simulation of a dense stellar system – the MUSE environment • Experiment results show that the grid and component layers do not introduce much overhead • HLA components can be run and managed within the GridSpace Virtual Laboratory

  21. For more information see: http://dice.cyfronet.pl https://gs2.cyfronet.pl http://www.mapper-project.eu
