
Adaptive Multiscale Simulation Infrastructure - AMSI


Presentation Transcript


  1. Adaptive Multiscale Simulation Infrastructure - AMSI
• Overview:
  • Industry Standards
  • AMSI Goals and Overview
  • AMSI Implementation
  • Supported Soft Tissue Simulation
  • Results
W.R. Tobin, D. Fovargue, D. Ibanez, M.S. Shephard
Scientific Computation Research Center, Rensselaer Polytechnic Institute

  2. Current Industry Standards – Physical Simulations
• The overwhelming majority of numerical simulations conducted in HPC (and elsewhere) are single scale:
  • Continuum (e.g. Finite Element, Finite Difference)
  • Discrete (e.g. Molecular Dynamics)
• Phenomena at multiple scales can have profound effects on the eventual solution to a problem (e.g. fine-scale anisotropies)

  3. Current Industry Standards – Physical Simulations
• Typically a physical model or scale is simulated using a Single Program Multiple Data (SPMD) style of parallelism
• Quantities of interest (mesh, tensor fields, etc.) are distributed across the parallel execution space
[Figure: geometric model, partition model, and distributed mesh]

  4. Current Industry Standards – Physical Simulations
• Interacting physical models and scales introduce a much more complex set of requirements in our use of the parallel execution space
• Writing a new SPMD code for each new multiscale simulation would require intense reworking of legacy codes used for single-scale simulations (possibly many times over)
• We need an approach that can leverage the work that has gone into creating and perfecting legacy simulations in the context of massively parallel simulations with interacting physical models
[Figure: a primary SPMD code interacting with several auxiliary SPMD codes]

  5. AMSI Goals
• Take advantage of proven legacy codes to address the needs of multimodel problems
• Minimize the need to rework legacy codes to execute in a more dynamic parallel environment
  • The only desired edit/interaction points are those locations in the code where the values produced by multiscale interactions are needed
• Allow dynamic scale load-balancing and process scale reassignment to reduce process idle time when a scale is blocked or underutilized

  6. AMSI Goals
• Hierarchy of focuses:
  • Abstract-Level: Support for implementing multi-model simulations on massively parallel HPC machines
  • Simulation-Level: Allow dynamic runtime workflow management to implement versatile adaptive simulations
  • Theory-Level: Provide generic control algorithms (and hooks to allow specialization) supported by real-time minimal simulation meta-modeling
  • Developer-Level: Facilitate all of the above while minimizing AMSI system overheads and maintaining robust code
[Figure: adaptive simulation control loop - simulation goals and physical attributes feed simulation initialization, physics analysis, and simulation state control; scale/physics linking models, error estimates, the model hierarchy, and control limits based on measured parameters drive discretization, model, and scale-linking improvement]

  7. AMSI Goals
• A variety of end-users are targeted
• Application Experts:
  • Simulation end-users who want answers to various problems
• Modeling Experts:
  • Introduce codes expressing new physical models
  • Combine proven physical models in new ways to describe multiscale behavior
• Computational Experts:
  • Introduce new discretization methods
  • Introduce new numerical solution methods
  • Develop new parallel algorithms

  8. AMSI Overview
• General meta-modeling services
• Support for modeling computational scale-linking operations and data
• Model of scale-tasks and task-relations denoting multiscale data transfer
• Specializing this support will facilitate interaction with high-level control and decision-making algorithms
[Figure: scale linking between scaleX and scaleY - model relationships between math and computational models, geometric interactions between explicit and computational domains, field transformations between explicit and computational tensor fields]

  9. AMSI Overview
• Dynamic management of the parallel execution space
  • Process reassignment will use load balancing support for underlying SPMD distributed data, as well as the implementation of state-specific entry/exit vectors for scale-tasks
  • Load balancing of scale-coupling data is supported by the meta-model of that data in the parallel space
  • Other data requires support for dynamic load balancing in any underlying libraries
• Can be thought of as a hierarchy of load-balancing operations:
  • Multiple scale-task communication/computation balancing
  • Single scale-task load balancing (standard SPMD load balancing operators)

  10. AMSI Implementation
• AMSI::ControlService
  • Primary application interaction point for AMSI; tracks the overall state of the simulation
  • Higher-level control decisions use this object to implement those decisions and update the simulation meta-model
• AMSI::TaskManager
  • Maintains the computational meta-model of the parallel execution space and the various simulation models
• AMSI::RelationManager
  • Manages the computational scale-linking communication and load balancing required for dynamic management of the parallel execution space
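To make the division of labor concrete, here is a minimal hypothetical sketch of an application sitting on top of the control object. Only the class names above appear on the slide; the methods (init, onScale) and the even/odd rank split are illustrative assumptions, not the actual AMSI API.

// Hypothetical sketch; every method below is an illustrative
// stand-in for the real (unshown) AMSI interface.
#include <mpi.h>
#include <cstring>

namespace AMSI {
  class ControlService {
  public:
    void init(MPI_Comm world) { MPI_Comm_rank(world, &rank_); }
    // Stand-in scale assignment: even ranks act as "macro", odd ranks
    // as "micro"; the real TaskManager meta-model decides this.
    bool onScale(const char * scale) const {
      bool even = (rank_ % 2 == 0);
      return even ? (std::strcmp(scale, "macro") == 0)
                  : (std::strcmp(scale, "micro") == 0);
    }
  private:
    int rank_ = 0;
  };
}

int main(int argc, char * argv[]) {
  MPI_Init(&argc, &argv);
  AMSI::ControlService control;
  control.init(MPI_COMM_WORLD);
  if (control.onScale("macro")) {
    // run the macroscale legacy code, edited only at the points where
    // multiscale values are produced or consumed
  } else if (control.onScale("micro")) {
    // run the microscale legacy code
  }
  MPI_Finalize();
  return 0;
}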

  11. AMSI Implementation
• Real-time minimal simulation meta-model
  • Initialization actions:
    • Scale-tasks and their scale-linking relations
  • Runtime actions:
    • Data distributions representing discrete units of generic scale-linking data
    • Communication patterns determining the distribution of scale-linking communication down to individual data distribution units
• The shift to more dynamic scale management will require new control data to be reconciled across processes and scales
  • Changes initialization actions into (allowable) runtime actions
[Figure: initialization builds the scale-tasks (scaleX, scaleY, scaleZ) and their relations; runtime builds scale-linking data and communication patterns]

  12. AMSI Implementation
• Two forms of control data parallel communication:
  • Assembly is a scale-task collective process
  • Reconciliation is collective on the union of two scale-tasks associated by a communication relation
[Figure: assembly occurs within scaleX or scaleY individually; reconciliation spans both]
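The two collective scopes can be illustrated with plain MPI. The slides do not show how AMSI forms its scale-task communicators, so the split below is an assumption for demonstration only.

// Illustrative sketch of assembly vs. reconciliation in plain MPI;
// the communicator construction is an assumed stand-in.
#include <mpi.h>

int main(int argc, char * argv[]) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  // Assume the first half of the ranks form scaleX, the rest scaleY.
  int color = (rank < size / 2) ? 0 : 1;
  MPI_Comm scale_comm;  // the processes of one scale-task
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &scale_comm);
  // Assembly: collective over a single scale-task only, e.g. summing
  // a locally produced count of scale-linking data units.
  int local_units = 1, scale_units = 0;
  MPI_Allreduce(&local_units, &scale_units, 1, MPI_INT, MPI_SUM,
                scale_comm);
  // Reconciliation: collective over the union of the two related
  // scale-tasks (here simply MPI_COMM_WORLD), so both sides end up
  // with consistent control data.
  int reconciled = scale_units;
  MPI_Bcast(&reconciled, 1, MPI_INT, /*root=*/0, MPI_COMM_WORLD);
  MPI_Comm_free(&scale_comm);
  MPI_Finalize();
  return 0;
}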

  13. AMSI Implementation
• Scale linking communication patterns
  • Constructed via standard distribution algorithms, or
  • Hooks provided for user-implemented pattern construction, unique to each data distribution

CommPatternAlgo_Register(relation_id, CommPatternCreate_FuncPtr);
CommPattern_Create(dataDist_id, owner_scale_id, foreign_scale_id);

[Figure: a communication pattern mapping scaleX processes to scaleY processes]

  14. AMSI Implementation
• Scale-linking communication is handled, on both sides, via a single function call, which:
  • Determines whether the process belongs to the sending or receiving scale-task
  • Communicates scale-linking quantities guided by a communication pattern
• The buffer is a contiguous memory segment packed with POD data; the MPI_Datatype must describe that datatype
• At present a data distribution is limited to one POD representation

Communicate(relation_id, pattern_id, buffer, MPI_Datatype);
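A sketch of how the call above might be used, taking a deformation-gradient unit as the linking datum (matching the Biotissue coupling described later). The struct, the helper function, and the declared argument types of Communicate are assumptions beyond what the slide states.

#include <mpi.h>
#include <vector>

// Assumed argument types for the Communicate call shown above.
void Communicate(int relation_id, int pattern_id, void * buffer,
                 MPI_Datatype type);

// Assumed POD unit of scale-linking data: one deformation gradient
// (9 doubles) sent from a macroscale integration point to an RVE.
struct DeformationGradient { double F[9]; };

void sendDeformations(int relation_id, int pattern_id,
                      std::vector<DeformationGradient> & units) {
  // Describe the POD unit to MPI as 9 contiguous doubles.
  MPI_Datatype def_type;
  MPI_Type_contiguous(9, MPI_DOUBLE, &def_type);
  MPI_Type_commit(&def_type);
  // The same call is made on both scale-tasks; the communication
  // pattern decides which units travel to which foreign processes.
  Communicate(relation_id, pattern_id, units.data(), def_type);
  MPI_Type_free(&def_type);
}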

  15. AMSI Implementation
• The shift to phased communication and dynamic scale-task management will introduce new requirements:
  • It will reduce the number of explicit control data reconciliations
  • It will require the introduction of implicit control data reconciliations during scale-linking operations
• Primary simulation control points
[Figure: control points on scaleX and scaleY - assemble, reconcile, communicate]

  16. AMSI Implementation
• The shift to phased communication and dynamic scale-task management will introduce new requirements:
  • It will reduce the number of explicit control data reconciliations
  • It will require the introduction of implicit control data reconciliations during scale-linking operations
• Primary simulation control points
[Figure: control points on scaleX and scaleY - assemble, reconcile/communicate, compute]
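The computation/communication overlap that phased communication enables can be sketched with nonblocking MPI. AMSI's actual phased implementation is not shown on the slides, so everything below beyond the overlap idea is an assumption.

#include <mpi.h>
#include <cstddef>
#include <vector>

// Post receives for every expected microscale result, then process
// whichever RVE result arrives first instead of waiting on all of
// them, overlapping computation with the still-pending receives.
void receiveAndProcess(MPI_Comm comm, const std::vector<int> & micro_ranks,
                       std::vector<double> & results /* one per RVE */) {
  std::vector<MPI_Request> reqs(micro_ranks.size());
  for (std::size_t i = 0; i < micro_ranks.size(); ++i)
    MPI_Irecv(&results[i], 1, MPI_DOUBLE, micro_ranks[i], /*tag=*/0,
              comm, &reqs[i]);
  int remaining = static_cast<int>(reqs.size());
  while (remaining-- > 0) {
    int idx;
    MPI_Waitany(static_cast<int>(reqs.size()), reqs.data(), &idx,
                MPI_STATUS_IGNORE);
    // compute with results[idx] here, e.g. accumulate that RVE's
    // stress contribution, while the other receives complete
  }
}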

  17. Biotissue
• Multiscale soft-tissue mechanics simulation
• Engineering scale:
  • Macroscale (Finite Element Analysis)
• Fine scales controlling engineering-scale behavior:
  • Microscale Fiber-Only RVE (quasistatics)
  • Microscale Fiber-Matrix RVE (FEA)
  • (future project) Additional cellular scale(s) (FEA), an intermediate scale between the current scales
• Scale linking:
  • Deformations passed down to the RVE
  • Force/displacement passed up to the engineering scale
[Figure: macroscale mesh linked to a fiber-only RVE]

  18. Biotissue Implementation
• Scalable implementation with parallelized scale-tasks
[Figure: macroscale processes macro0 through macroN linked to microscale processes micro0 through microM]

  19. Biotissue Implementation
• Scalable implementation with parallelized scale-tasks (continued)
[Figure: macroscale processes macro0 through macroN linked to microscale processes micro0 through microM]

  20. Biotissue Implementation
• Scalable implementation with parallelized scale-tasks
• The ratio of macroscale mesh elements per macroscale process to the number of microscale processes determines the neighborhood of scale-linking communication (see the sketch below)
[Figure: macroscale-to-microscale communication neighborhoods]
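A toy calculation of that neighborhood, under assumed process counts and a simple block mapping; the slides state only that the ratio determines the neighborhood, so the mapping rule here is an assumption.

#include <cstdio>

int main() {
  // Assumed counts: a 20k-element mesh on 2 macro processes (as in
  // the results below) and an assumed 512 micro processes.
  const int macro_procs = 2;
  const int elems_per_macro = 10000;
  const int micro_procs = 512;
  // Assume one RVE per macroscale element; each macro process spreads
  // its RVEs over a contiguous block of micro processes.
  const int micros_per_macro = micro_procs / macro_procs;
  const int rves_per_micro =
      (elems_per_macro + micros_per_macro - 1) / micros_per_macro;
  std::printf("each macro process communicates with %d micro processes;\n"
              "each micro process handles ~%d RVEs\n",
              micros_per_macro, rves_per_micro);
  return 0;
}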

  21. Biotissue Implementation
• Macroscale - parallel Finite Element Analysis
  • Distributed partitioned mesh, distributed tensor fields defined over the mesh, distributed linear algebraic system
  • Stress field values characterize the macro-micro ratio
• Fiber-only microscale - quasistatics code
  • ~1k nodes per RVE
  • Rapid assembly and solve times per RVE in the serial implementation
  • Strong scaling with respect to macroscale mesh size
  • Initial results use fiber-only RVEs at every macroscale integration point to generate stress field values
• Fiber-matrix microscale - parallel FEA
  • An order of magnitude more nodes per RVE (~10k-40k)
  • More complex linear system assembly and longer (nonlinear) solve times necessitate a parallel implementation per RVE

  22. Biotissue Implementation
• Incorporating fiber-and-matrix microscale RVEs
• Hierarchy of parallelism:
  • Macroscale SPMD code
  • Microscale fiber-only code
  • Microscale fiber-matrix SPMD code
• Nonlinear problem
  • The macroscale to auxiliary-scales relation is more complex
• Constitutive relation:
  • Fiber-only RVE
  • Fiber-matrix RVE
  • Adaptive processes allow these relations to change over time
• An intermediate cellular scale will introduce even further complexities to this situation

  23. Results
• The Biotissue simulation was run with a test problem
  • Standard tensile-test macroscale geometry (dogBone)
  • Various discretizations of the geometry
  • Current results are for 20k and 200k elements; working on (microscale) memory issues for 2m elements and higher
• Holding the macroscale process count fixed, varying the microscale
• Holding the microscale process count fixed, varying the macroscale
• Varying both scales

  24. Results

  25. Results
• 1st iteration of the multiscale solver; 20k mesh, 2 macro processes
[Plot: time (s) vs. number of processes on the microscale]

  26. Results
• 1st iteration of the multiscale solver; 20k mesh, 2 macro processes
[Plot: time (%) vs. number of processes on the microscale]

  27. Results
• 1st iteration of the multiscale solver; 200k mesh, 7680 macro processes
• (varying the macroscale while holding the microscale fixed)
[Plot: time (s) vs. number of processes on the macroscale]

  28. Results
• 1st iteration of the multiscale solver; 200k mesh, time ratios
• Arrows indicate increasing macro size (4, 8, 16, 32, 64)
[Plot: communication time (%) vs. number of processes on the microscale]

  29. Results
• (weak scaling results)
[Plots: time (s) vs. number of processes on the macroscale]

  30. Closing Remarks
• Results are just starting to come out of the implementation
• We need to identify critical areas of each scale code to improve the overall performance of the multiscale code
• The shift to phased communication will allow the macroscale to process microscale results as they arrive, increasing computation/communication overlap
• The contributing microscale code needs memory footprint improvements to mitigate running out of memory during longer runs (larger meshes)
