
Simulating materials with atomic detail at IBM: From biophysical to high-tech applications



  1. Simulating materials with atomic detail at IBM: From biophysical to high-tech applications
  Application Team: Glenn J. Martyna, Physical Sciences Division, IBM Research; Dennis M. Newns, Physical Sciences Division, IBM Research; Jason Crain, School of Physics, Edinburgh University; Andrew Jones, School of Physics, Edinburgh University; Razvan Nistor, Physical Sciences Division, IBM Research; Ahmed Maarouf, Physical Sciences Division, IBM Research and EGNC; Marcelo Kuroda, Physical Sciences Division, IBM and CS, UIUC
  Methods/Software Development Team: Glenn J. Martyna, Physical Sciences Division, IBM Research; Mark E. Tuckerman, Department of Chemistry, NYU; Laxmikant Kale, Computer Science Department, UIUC; Ramkumar Vadali, Computer Science Department, UIUC; Sameer Kumar, Computer Science, IBM Research; Eric Bohm, Computer Science Department, UIUC; Abhinav Bhatele, Computer Science Department, UIUC; Ramprasad Venkataraman, Computer Science Department, UIUC; Anshu Arya, Computer Science Department, UIUC
  Funding: NSF, IBM Research, ONRL, …

  2. “The quote” (Lincoln on Parallel Computing) One score years ago, our fore-PPLers brought forth upon this continent a new programming language, conceived in data driven execution and dedicated to the proposition that all chares are instantiated equal ... On the field of applications research, we are testing whether such a programming language, or any language so conceived and so dedicated, can long endure or shall perish from the earth.

  3. IBM: A history of multidisciplinary research
  2009 Nano MRI
  2008 World’s First Petaflop Supercomputer
  2005 Cell
  2004 Blue Gene/L (National Medal of Technology)
  2003 Carbon Nanotubes
  1997 Copper Interconnect Wiring
  1997 Deep Blue
  1994 Silicon Germanium (SiGe)
  1987 High-Temperature Superconductivity (Nobel)
  1986 Scanning Tunneling Microscope (Nobel)
  1980 RISC
  1971 Speech Recognition
  1970 Relational Database
  1967 Fractals
  1966 One-Device Memory Cell
  1957 FORTRAN (A mixed blessing)
  1956 RAMAC

  4. Materials Science: Exploratory Materials Synthesis and Processing; Advanced Instrumentation for Materials Characterization; Theory and Computational Modeling. Chemistry: New Methods for Synthesis of Electronic Materials. Physics: Nanoscale Device and Materials Physics; Thermal Physics; Statistical Physics; Biophysics: Theory and Simulation; Physics of Information; Quantum Information Theory. Nanoscale Engineering: Exploratory Process Integration; Exploratory Device Test and Characterization. IBM’s Physical Sciences: We’re not dead yet!

  5. IBM’s Blue Gene/L network torus supercomputer Our fancy apparatus: New models on the way or here. BG/Q, BlueWaters, 2-steps to Exascale ...

  6. Goal : The accurate treatment of complex heterogeneous systems to gain physical insight.

  7. Characteristics of current models Empirical Models: Fixed charge, non-polarizable, pair dispersion. Ab Initio Models: GGA-DFT, Self interaction present, Dispersion absent.

  8. Problems with current models (empirical) Dipole polarizability: Including dipole polarizability changes the solvation shells of ions and drives them to the surface. Higher polarizabilities: Quadrupolar and octopolar polarizabilities are NOT SMALL. All manybody dispersion terms: Surface tensions and bulk properties determined using accurate pair potentials are incorrect. Both are recovered using manybody dispersion together with an accurate pair potential; an effective pair potential destroys surface properties but reproduces the bulk. Force fields cannot treat chemical reactions.

  9. Problems with current models (DFT)
  • Incorrect treatment of self-interaction/exchange: errors in electron affinities, band gaps, …
  • Incorrect treatment of correlation: problematic treatment of spin states. The ground states of transition metals (Ti, V, Co) and the spin splitting in Ni are in error. Ni oxide is incorrectly predicted to be metallic when magnetic long-range order is absent.
  • Incorrect treatment of dispersion: both exchange and correlation contribute.
  • KS states are NOT physical objects: the bands of the exact DFT are problematic. TDDFT with a frequency-dependent functional (exact) is required to treat excitations, even within the Born-Oppenheimer approximation.

  10. Conclusion: Current Models • Simulations are likely to provide semi-quantitative accuracy/agreement with experiment. • Simulations are best used to obtain insight and examine physics, e.g., to promote understanding. Nonetheless, in order to provide truthful solutions of the models, simulations must be performed to long time scales!

  11. Limitations of ab initio MD (despite our efforts/improvements!)
  • Limited to small systems (100-1000 atoms)*.
  • Limited to short time dynamics and/or sampling times.
  • Parallel scaling only achieved for # processors <= # electronic states, until recent efforts by ourselves and others. Improving this will allow us to sample longer and learn new physics.
  *The methodology employed herein scales as O(N^3) with system size, due only to the orthogonality constraint.
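
The O(N^3) orthogonality cost noted above can be made concrete with a small numpy sketch (illustrative only, not the production implementation; all names are hypothetical). Löwdin orthonormalization forms the overlap matrix of the states and factorizes it, and both steps scale cubically with the number of states:

```python
import numpy as np

def lowdin_orthonormalize(C):
    """Orthonormalize column "states" via S^(-1/2), where S = C^T C.
    Forming and diagonalizing S are the O(N^3)-in-states steps."""
    S = C.T @ C                                # overlap matrix
    w, U = np.linalg.eigh(S)                   # S is symmetric positive definite
    return C @ (U @ np.diag(w ** -0.5) @ U.T)  # C S^(-1/2)

rng = np.random.default_rng(0)
C = rng.standard_normal((50, 8))               # 8 toy states on a 50-point basis
C_ortho = lowdin_orthonormalize(C)
print(np.allclose(C_ortho.T @ C_ortho, np.eye(8)))  # True: states are orthonormal
```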

  12. Solution: Fine-grained parallelization of CPAIMD. Scale small systems to 10^5 processors!! Study long time scale phenomena!! The charm++ QM/MM application is work in progress with Chris Harrison.

  13. IBM’s Blue Gene/L network torus supercomputer. The world’s fastest supercomputer! Its low-power architecture requires fine-grained parallel algorithms/software to achieve optimal performance.

  14. Density Functional Theory : DFT

  15. Electronic states/orbitals of water Removed by introducing a non-local electron-ion interaction.

  16. Plane Wave Basis Set: The # of states/orbitals ~ N, where N is the # of atoms. The # of points in g-space ~ N.

  17. Plane Wave Basis Set: Two spherical cutoffs in g-space (figure: the n(g) and ψ(g) spheres on the g-grid). n(g): radius 2g_cut. ψ(g): radius g_cut. g-space is a discrete regular grid due to the finite size of the system!!
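
As a rough illustration of the two cutoffs (a hypothetical toy, not production code), one can count the g-grid points inside each sphere; since the density sphere has twice the radius of the state sphere, it holds roughly 2^3 = 8 times as many points:

```python
import numpy as np

def gpoints_within(radius, nmax):
    """Count integer g-grid points with |g| <= radius (cubic grid, unit spacing)."""
    r = np.arange(-nmax, nmax + 1)
    gx, gy, gz = np.meshgrid(r, r, r, indexing="ij")
    return int(np.count_nonzero(gx**2 + gy**2 + gz**2 <= radius**2))

gcut = 4.0                           # toy cutoff in grid units
n_psi = gpoints_within(gcut, 8)      # psi(g): sphere of radius gcut
n_rho = gpoints_within(2 * gcut, 8)  # n(g): sphere of radius 2*gcut
print(n_rho / n_psi)                 # close to the volume ratio 2**3 = 8
```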

  18. Plane Wave Basis Set: The dense discrete real-space mesh (figure: n(r) and ψ(r) on the real-space grid). n(r) = Σ_k |ψ_k(r)|². ψ(r) = 3D-FFT{ψ(g)}. n(g) = 3D-IFFT{n(r)} exactly! Although r-space is a discrete dense mesh, n(g) is generated exactly!
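
The transform chain on this slide can be sketched with numpy FFTs (toy orbitals on a small mesh; sizes and names are illustrative, and the forward/inverse convention here is just numpy's):

```python
import numpy as np

mesh = 16                                      # toy dense real-space mesh
rng = np.random.default_rng(1)
psi_g = (rng.standard_normal((4, mesh, mesh, mesh))
         + 1j * rng.standard_normal((4, mesh, mesh, mesh)))  # 4 toy states in g-space

psi_r = np.fft.fftn(psi_g, axes=(1, 2, 3))     # psi(r) = 3D-FFT{psi(g)}
n_r = np.sum(np.abs(psi_r) ** 2, axis=0)       # n(r) = sum_k |psi_k(r)|^2
n_g = np.fft.ifftn(n_r)                        # n(g) = 3D-IFFT{n(r)}, exact

print(np.allclose(np.fft.fftn(n_g), n_r))      # True: no information is lost
```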

  19. Simple Flow Chart: Scalar Ops, Memory Penalty
  Object         | Comp      | Mem
  States         | N^2 log N | N^2
  Density        | N log N   | N
  Orthonormality | N^3       | N^2.33

  20. Flow Chart : Data Structures

  21. Parallelization under charm++ (flow diagram: a chain of transpose operations culminating in RhoR)

  22. Effective Parallel Strategy: • The problem must be finely discretized. • The discretizations must be deftly chosen to minimize the communication between processors and maximize the computational load on the processors. • NOTE: PROCESSOR AND DISCRETIZATION ARE SEPARATE CONCEPTS!!!!

  23. Ineffective Parallel Strategy • The discretization size is controlled by the number of physical processors. • The size of data to be communicated at a given step is controlled by the number of physical processors. • For the above paradigm : • Parallel scaling is limited to # processors=coarse grained parameter in the model. • THIS APPROACH IS TOO LIMITED TO ACHIEVE FINE GRAINED PARALLEL SCALING.

  24. Virtualization and Charm++ • Discretize the problem into a large number of very fine-grained parts. • Each discretization is associated with some amount of computational work and communication. • Each discretization is assigned to a lightweight thread or a ``virtual processor'' (VP). • VPs are rolled into and out of physical processors as physical processors become available (interleaving!). • The mapping of VPs to processors can be changed easily. • The Charm++ middleware provides the data structures and controls required to choreograph this complex dance.
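
A minimal sketch of the over-decomposition idea (purely illustrative; real Charm++ mappings are dynamic and topology-aware) assigns many VPs to a few physical processors, with the mapping kept separate from the decomposition:

```python
def round_robin_map(num_vps, num_procs):
    """Map many virtual processors onto few physical processors.
    (Hypothetical toy mapping; Charm++ can remap VPs at runtime.)"""
    return {vp: vp % num_procs for vp in range(num_vps)}

mapping = round_robin_map(num_vps=64, num_procs=6)
loads = [list(mapping.values()).count(p) for p in range(6)]
print(loads)   # VP counts per processor differ by at most one
```

Because the decomposition (64 VPs) is independent of the machine size (6 processors), changing `num_procs` rebalances the work without touching the discretization.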

  25. Parallelization by over partitioning of parallel objects : The charm++ middleware choreography! Decomposition of work Charm++ middleware maps work to processors dynamically Available Processors On a torus architecture, load balance is not enough! The choreography must be ``topologically aware’’.

  26. Challenges to scaling: • Multiple concurrent 3D-FFTs to generate the states in real space require AllToAll communication patterns. Communicate N^2 data points. • Reduction of states (~N^2 data points) to the density (~N data points) in real space. • Multicast of the KS potential computed from the density (~N points) back to the states in real space (~N copies to make N^2 data). • Applying the orthogonality constraint requires N^3 operations. • Mapping the chare arrays/VPs to BG/L processors in a topologically aware fashion. Scaling bottlenecks due to non-local and local electron-ion interactions were removed by the introduction of new methods!

  27. Topologically aware mapping for CPAIMD. Density: ~N^(1/12); States: ~N^(1/2); Gspace: ~N^(1/2) (~N^(1/3)). • The states are confined to rectangular prisms cut from the torus to minimize 3D-FFT communication. • The density placement is optimized to reduce its 3D-FFT communication and the multicast/reduction operations.

  28. Topologically aware mapping for CPAIMD : Details Distinguished Paper Award at Euro-Par 2009

  29. Improvements wrought by topologically aware mapping on the network torus architecture. Density (R) reduction and multicast to State (R) improved. State (G) communication to/from orthogonality chares improved. ``Operator calculus for parallel computing'', Martyna and Deng (2011), in preparation.

  30. Parallel scaling of liquid water* as a function of system size on the Blue Gene/L installation at YKT: *Liquid water has 4 states per molecule. • Weak scaling is observed! • Strong scaling on processor numbers up to ~60x the number of states! • IBM J. Res. Dev. (2009).

  31. Scaling Water on Blue Gene/L

  32. Topo-aware spanning tree: results from a 32-water-molecule simulation

  33. Software: Summary • Fine-grained parallelization of the Car-Parrinello ab initio MD method demonstrated on thousands of processors: # processors >> # electronic states. • Long time simulations of small systems are now possible on large massively parallel supercomputers.

  34. Instance parallelization • Many simulation types require fairly uncoupled instances of existing chare arrays. • Simulation types in this class include: 1) Path Integral MD (PIMD) for nuclear quantum effects. 2) k-point sampling for metallic systems. 3) Spin DFT for magnetic systems. 4) Replica exchange for improved atomic phase space sampling. • A full combination of all 4 simulation types is both physical and interesting.

  35. Replica Exchange: M classical subsystems, each at a different temperature, acting independently. The replica exchange uber index is active for all chares. Nearest-neighbor communication is required to exchange temperatures and energies.
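
The exchange step hinges on the standard Metropolis criterion for swapping neighboring-temperature replicas; a minimal sketch in reduced units (kB = 1; function and argument names are illustrative):

```python
import math

def swap_probability(E_i, E_j, T_i, T_j, kB=1.0):
    """Metropolis acceptance for exchanging two replicas at
    neighboring temperatures (reduced units, kB = 1)."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_j - E_i)
    return min(1.0, math.exp(-delta))

# Moving the lower energy to the colder replica is always accepted:
print(swap_probability(E_i=5.0, E_j=3.0, T_i=1.0, T_j=2.0))  # 1.0
```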

  36. PIMD: P classical subsystems connected by harmonic bonds (figure: classical particle vs. quantum particle as a ring of beads). The PIMD uber index is active for all chares. Uber communication is required to compute the harmonic interactions.
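
The harmonic ring-polymer bonds can be sketched for a single one-dimensional quantum particle (toy constants; in real PIMD the spring frequency depends on P, temperature, and mass):

```python
import numpy as np

def ring_polymer_energy(beads, mass=1.0, omega_P=1.0):
    """Harmonic link energy of a closed ring of P beads:
    (1/2) m omega_P^2 sum_s (x_s - x_{s+1})^2, with bead P+1 = bead 1."""
    diffs = beads - np.roll(beads, -1)   # cyclic nearest-neighbor bonds
    return 0.5 * mass * omega_P**2 * np.sum(diffs**2)

x = np.array([0.0, 1.0, 0.0, -1.0])      # P = 4 beads of one quantum particle
print(ring_polymer_energy(x))            # 0.5 * (1 + 1 + 1 + 1) = 2.0
```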

  37. K-points: N states are replicated and given a different phase (k0, k1). Atoms are assumed to be part of a periodic structure and are shared between the k-points (crystal momenta). The k-point uber index is not active for atoms and the electron density. Uber reduction communication is required to form the e-density and the atom forces.

  38. Spin DFT: States and electron density are given a spin-up and spin-down index. The spin uber index is not active for atoms. Uber reduction communication is required to form the atom forces.

  39. ``Uber'' charm++ indices • Chare arrays in OpenAtom now possess 4 uber ``instance'' indices. • Appropriate section reductions and broadcasts across the ``Ubers'' have been enabled. • All physics routines are working. • We expect full results this month.

  40. Goal : The accurate treatment of complex heterogeneous systems to gain physical insight.

  41. 2004 Frequency Extrapolation: CMOS clock speed no longer scales!! Oh my! (figure: clock speed of microprocessors; Haensch, 2008)

  42. CMOS Constant Field Scaling Rules, R. H. Dennard et al., IEEE J. Solid State Circuits (1974).
  SCALING: Voltage: V/a; Oxide: t_ox/a; Wire width: W/a; Gate length: L/a; Diffusion: x_d/a; Substrate doping: a·N_A.
  RESULTS: Higher density: ~a^2; Higher speed: ~a; Power/ckt: ~1/a^2; Active power density: ~constant!
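
The constant-field rules amount to dividing device dimensions and voltage by the scaling factor a; a small sketch with illustrative device values (not taken from the slide):

```python
def dennard_scale(params, a):
    """Constant-field scaling: dimensions and voltage shrink by a,
    so density rises as a^2, speed as a, and power/ckt falls as 1/a^2."""
    return {name: value / a for name, value in params.items()}

# Hypothetical device values for illustration:
device = {"V": 1.2, "t_ox": 5.0, "W": 90.0, "L": 45.0}
half = dennard_scale(device, a=2.0)
print(half["L"], half["V"])   # 22.5 0.6: every dimension and the voltage halve
```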

  43. New device concepts are required for truly low-voltage or low-power digital logic. The energy dissipated in each switching operation is ½CV². To reduce C: make the device smaller. To reduce V: we must now change the device physics! Why???
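
The ½CV² relation is easy to quantify; a sketch with hypothetical node values (not from the slides) shows why lowering V pays off quadratically while shrinking C pays off only linearly:

```python
def switching_energy(C_farads, V_volts):
    """Energy dissipated per switching event: E = (1/2) C V^2."""
    return 0.5 * C_farads * V_volts**2

# Hypothetical numbers: a 1 fF node at 1.0 V versus 0.3 V.
E_high = switching_energy(1e-15, 1.0)
E_low = switching_energy(1e-15, 0.3)
print(E_high / E_low)   # ~11x saving from the quadratic dependence on V
```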

  44. New device concepts are required for truly low-voltage or low-power digital logic. (figure: log current vs. gate voltage, marking I_on, I_off, the subthreshold slope, and the supply range 0 to V_dd) For transport controlled by thermal emission over a barrier, the subthreshold current is proportional to exp[-ψ_s(e/kT)], so dψ_s/d(log10 I) = 60 mV/decade at room temperature. Oh no! I am stuck at ~1V or I leak like crazy!!!
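
The 60 mV/decade figure follows from the thermionic limit (kT/e)·ln 10; a quick numerical check at room temperature (function name is illustrative):

```python
import math

def subthreshold_slope_mV_per_decade(T_kelvin):
    """Thermionic limit of the subthreshold slope: (kT/e) * ln(10)."""
    k_over_e = 8.617333262e-5   # Boltzmann constant / electron charge, in V/K
    return k_over_e * T_kelvin * math.log(10) * 1000.0  # in mV/decade

print(subthreshold_slope_mV_per_decade(300.0))   # ~59.5, the familiar "60 mV/decade"
```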

  45. How low can the supply voltage be? Supply voltage based on voltage-noise constraints: minimum 8σ Johnson-Nyquist noise criterion ( ) for static memory/logic. Theis and Solomon, Proceedings of the IEEE, Dec. 2010. Estimated average capacitance for logic includes both device and wiring capacitance, and for static memory, the internal capacitance of a 6-device cell.

  46. Proposed Paths to Low-Voltage Switching (Theis and Solomon, Proceedings of the IEEE, Dec. 2010) • Energy Filtering: The Tunnel-FET cuts off the high-energy tail of the Boltzmann distribution. • Internal Potential Step-up: The Ferroelectric-Gate (or Negative Capacitance) FET allows a small gate voltage swing to produce a larger swing in ψ_s. • Internal Transduction: The Spin-FET transduces a voltage to a spin state which is gated and then transduced back to a voltage state; the NEMS switch transduces a voltage to a mechanical state and back. These concepts are more general than the “exemplar” devices! Wow! A green light to explore new physics from the CMOS guys!

  47. Novel Switches by Phase Change Materials The phases are interconverted by heat!

  48. Novel Switches by Phase Change Materials Ge0.15Te0.85 or Eutectic GT

  49. Novel Switches by Phase Change Materials GeSb2Te4 or GST-124

  50. Piezoelectrically driven fast, cool memory?? A pressure-driven mechanism would be fast, cool, and scalable! Can we find suitable materials using a combined exp/theor approach?
