
Harnessing Grid-Based Parallel Computing Resources for Molecular Dynamics Simulations



  1. Harnessing Grid-Based Parallel Computing Resources for Molecular Dynamics Simulations Josh Hursey

  2. Villin Folding

  3. [animation slide]

  4. [animation slide]

  5. Overview Folding@Clusters is an adaptive framework for harnessing low-latency parallel compute resources for protein folding research. It combines capability discovery, load balancing, process monitoring, and checkpoint/restart services to provide a platform for molecular dynamics simulations on a range of grid-based parallel computing resources, including clusters, SMP machines, and clusters of SMP machines (sometimes known as constellations).
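The load-balancing idea above can be sketched in a few lines: once capability discovery has benchmarked each node, work is split in proportion to measured speed. This is an illustrative toy, not the Folding@Clusters implementation; the function and node names are made up for the example.

```python
def balance_work(total_steps, node_speeds):
    """Split simulation time steps across nodes in proportion to each
    node's measured speed (capability-based load balancing, sketched).

    node_speeds: dict mapping node name -> relative benchmark score.
    Returns a dict mapping node name -> number of steps assigned.
    """
    total_speed = sum(node_speeds.values())
    shares = {}
    assigned = 0
    nodes = sorted(node_speeds.items())
    for i, (node, speed) in enumerate(nodes):
        if i == len(nodes) - 1:
            # Last node absorbs any rounding remainder so all steps are covered.
            shares[node] = total_steps - assigned
        else:
            shares[node] = int(total_steps * speed / total_speed)
            assigned += shares[node]
    return shares

# A node twice as fast gets twice the work:
# balance_work(1000, {"n0": 2.0, "n1": 1.0, "n2": 1.0})
# -> {"n0": 500, "n1": 250, "n2": 250}
```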

  6. Design Goals
  • Provide an easy-to-use, open source interface to significant computing resources for scientists performing molecular dynamics simulations on large biomolecular systems.
  • Automate the process of running molecular systems on a variety of parallel computing resources.
  • Handle failures gracefully and automatically.
  • Do not hinder performance possibilities.
  • Ease of use for scientists, system administrators, and contributors.
  • Provide low-friction install, configuration, and run-time interfaces.
  • Sustain tight linkage with the Folding@Home project.
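The "handle failures gracefully" goal rests on the checkpoint/restart service mentioned in the overview. A minimal sketch of the pattern, assuming a JSON-serializable state dictionary (the real Folding@Clusters checkpoint format is not shown in the slides and will differ):

```python
import json
import os
import tempfile

def save_checkpoint(path, state):
    """Write simulation state atomically: dump to a temp file, then rename.
    A crash mid-write can never leave a half-written checkpoint behind."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems

def load_checkpoint(path, default):
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return default
```

After a node failure, the framework can restart the run and call `load_checkpoint` to resume from the last completed step instead of losing the whole simulation.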

  7. Open Source Building Blocks
  • GROMACS: molecular dynamics software package; the primary scientific core.
  • FFTW: fast Fourier transform library, used internally by GROMACS.
  • LAM/MPI: Message Passing Interface implementation; supports the MPI-2.0 specification.
  • COSM: distributed computing library that aids portability; provides capability discovery, logging, and base utilities.
  • NetPIPE: common tool for measuring bandwidth and latency; used in capability discovery.
  • Folding@Home: large-scale distributed computing project; the foundation for this project.
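NetPIPE-style capability discovery boils down to timing message round trips. A self-contained sketch of the ping-pong idea over a local socket pair (purely illustrative; NetPIPE itself runs between real hosts and sweeps many message sizes):

```python
import socket
import time

def pingpong_latency(msg_size=64, rounds=100):
    """Estimate one-way latency by timing ping-pong exchanges, in the
    spirit of NetPIPE's probe. Uses a local socketpair for illustration."""
    a, b = socket.socketpair()
    payload = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(rounds):
        a.sendall(payload)   # ping
        b.recv(msg_size)
        b.sendall(payload)   # pong
        a.recv(msg_size)
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    # Each round is two one-way trips, so halve the per-round time.
    return elapsed / rounds / 2
```

Measurements like this, taken per link at startup, feed the load balancer so that slow interconnects receive proportionally less communication-heavy work.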

  8. Contributor Setup
  • Create a user to run Folding@Clusters.
  • Download and unpack the distribution.
  • Confirm the LAM/MPI installation and configuration.
  • Start LAM/MPI: $ lamboot
  • Configure Folding@Clusters using mother.conf.
  • Start Folding@Clusters: $ mpirun -np 1 bin/mother

  $ lamnodes
  n0 c1.cluster.earlham.edu:2:origin,this_node
  n1 c2.cluster.earlham.edu:2:
  n2 c3.cluster.earlham.edu:2:
  n3 c4.cluster.earlham.edu:2:
  n4 c5.cluster.earlham.edu:2:
  n5 c6.cluster.earlham.edu:2:
  n6 c7.cluster.earlham.edu:2:
  n7 c8.cluster.earlham.edu:2:
  n8 c9.cluster.earlham.edu:2:
  n9 c10.cluster.earlham.edu:2:

  $ cat conf/mother.conf
  [Network]
  LamHosts=n0,n1,n2,n3,n4,n5,n6
  LamMother=n0

  9. Testing Environment: Cairo
  • Network fabric: two Netgear GSM712 Gigabit (1000 Mb/s) switches, linked together by dual GBIC/1000BASE-T RJ45 modules
  • OS: Yellow Dog Linux (4.0 release, 2.6.8-1 SMP kernel)
  • GCC: 3.3.3-16

  10. Testing Environment: Molecules

  11. Performance [charts: Proteasome (Stable), DPPC, Villin]

  12. Future Directions
  • New scientific cores (Amber, NAMD, etc.)
  • Remove dependencies on pre-installed software
  • Extend the testing suite of molecules
  • Extend the range of parallel compute resources used in testing
  • Abstract the @Clusters framework
  • Investigate load balancing and resource usage improvements
  • Architecture addition: Grandmothers
  • Beta release!

  13. Future Directions

  14. About Us Josh Hursey • Charles Peck • Josh McCoy • John Schaefer • Vijay Pande • Erik Lindahl • Adam Beberg

  15. Questions

  16. Speedup [charts: Proteasome (Stable), DPPC, Villin]

  17. Testing Environment: Bazaar
  • Network fabric: two switches (3Com 3300XM 100 Mb/s, 3Com 3300 100 Mb/s), linked together by a 3Com MultiLink cable
  • OS: SuSE Linux (2.6.4-52 SMP kernel)
  • GCC: 3.3.3

  18. Motivation
