
Presentation Transcript


  1. Electromagnetic Systems Simulation-1 (ESS) DOE HENP SciDAC Project: “Advanced Computing for 21st Century Accelerator Science and Technology” Cho Ng, Advanced Computations Department, Stanford Linear Accelerator Center. SciDAC Meeting, 8/11 – 8/12, 2004, at Fermilab. *Work supported by U.S. DOE ASCR & HENP Divisions under contract DE-AC03-76SF00515

  2. Outline of Presentation • Overview • Parallel Code Development • Accelerator Modeling • ISICs and SAPP Collaborations (Rich Lee’s presentation)

  3. SciDAC ESS Team Advanced Computations Department: Accelerator Modeling – V. Ivanov, A. Kabel, K. Ko, Z. Li, C. Ng; Computing Technologies – N. Folwell, A. Guetz, G. Schussman, R. Uplenchwar; Computational Mathematics – R. Lee, L. Ge, L. Xiao, M. Kowalski, S. Chen (Stanford). ISICs Collaborators & SAPP Partners: LBNL – E. Ng, W. Guo, P. Husbands, X. Li, A. Pinar, C. Yang, Z. Bai; LLNL – L. Diachin, K. Chand, B. Henshaw, D. White; SNL – P. Knupp, T. Tautges, K. Devine, E. Boman; Stanford – G. Golub; UCD – K. Ma, H. Yu, E. Lum; RPI – M. Shephard, E. Seol, A. Bauer; Columbia – D. Keyes; Carnegie Mellon U – O. Ghattas, V. Akcelik; U. of Wisconsin – H. Kim

  4. Parallel Unstructured EM Codes • Omega3P – Eigenmode Calculation • S3P – Scattering Matrix Evaluation • Tau3P/T3P – Time Domain Simulation With Excitations • Track3P – Particle Tracking with Surface Physics • V3D – Visualization/Animation of Meshes, Particles & Fields • Discretizations: Generalized Yee Grid (Tau3P) and Finite-Element (Omega3P, S3P, T3P)

  5. Accelerator Modeling The SciDAC tools are being used to improve existing machines and to optimize future facilities across the Office of Science: PEP-II IR (HEP), NLC Structures (HEP), RIA RFQ (NP), PSI Ring Cyclotron, LCLS RF Gun (BES)

  6. Outline of Presentation • Overview • Parallel Code Development • Accelerator Modeling • ISICs and SAPP Collaborations (Rich Lee’s presentation)

  7. Omega3P – Progress & Plans Progress includes: • Complex solver to model lossy cavities • Linear solver framework to facilitate direct methods • Hierarchical preconditioner (up to 6th-order bases) • New eigensolvers (SAPP/TOPS) • AMR to accelerate convergence (TSTT) • Shape optimization (TOPS/TSTT) Next steps: • Waveguide boundary conditions leading to a quadratic (nonlinear) eigenvalue problem (SAPP/TOPS) • Adaptive p refinement combined with AMR
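
Waveguide (open-port) boundary conditions make the eigenvalue appear both linearly and quadratically, giving a problem of the form (K + λC + λ²M)x = 0. One standard way to attack it, sketched below with small dense matrices (an illustration only, not the Omega3P/SAPP solver), is to linearize it into a generalized eigenproblem of twice the size.

    import numpy as np
    from scipy.linalg import eig

    def solve_quadratic_eig(K, C, M):
        """Solve (K + lam*C + lam^2*M) x = 0 by companion linearization.

        Returns eigenvalues lam and eigenvectors x (top block of the
        linearized eigenvectors).  Small dense matrices only; a production
        FEM code works with large sparse matrices and iterative solvers.
        """
        n = K.shape[0]
        I = np.eye(n)
        Z = np.zeros((n, n))
        # First companion form:  A z = lam * B z,  with z = [x; lam*x]
        A = np.block([[Z, I], [-K, -C]])
        B = np.block([[I, Z], [Z, M]])
        lam, z = eig(A, B)
        return lam, z[:n, :]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 5
        K = rng.standard_normal((n, n)); K = K + K.T   # symmetric "stiffness"
        M = np.eye(n)                                   # "mass"
        C = 0.1 * np.eye(n)                             # port/loss coupling
        lam, X = solve_quadratic_eig(K, C, M)
        # residual check: ||(K + lam*C + lam^2*M) x|| should be near zero
        r = [np.linalg.norm((K + l * C + l**2 * M) @ X[:, i])
             for i, l in enumerate(lam)]
        print("max residual:", max(r))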

  8. Omega3P – Solving Large Problems SLAC, Stanford and LBL are developing new algorithms to improve the accuracy, convergence and scalability of Omega3P/S3P for increasingly large problems. Advances: • Use of direct solvers provides 50-100x faster solution time • Higher-order hierarchical bases (up to p=6) and preconditioners increase accuracy and convergence • AV FEM formulation accelerates CG convergence by 5x to 10x • Largest problem solved is 93 million DOFs [Plot: problem size, degrees of freedom]
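
The speedup from direct solvers is easiest to see in a shift-invert eigensolver: every Lanczos step needs a solve with (K - σM), so factoring that matrix once with a sparse direct method and reusing the factors dominates the cost. A minimal SciPy sketch of the idea on a toy 1-D operator (SciPy's serial SuperLU, not the distributed solvers used with Omega3P):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh, splu, LinearOperator

    def interior_modes(K, M, sigma, k=5):
        """Find k generalized eigenpairs K x = lam M x nearest the shift sigma.

        Shift-invert: each Lanczos step solves with (K - sigma*M).  Factoring
        it once with a sparse direct solver and reusing the factors is what
        makes large interior eigenproblems fast (done explicitly here; eigsh
        would otherwise factor it internally).
        """
        n = K.shape[0]
        lu = splu((K - sigma * M).tocsc())            # one sparse LU factorization
        OPinv = LinearOperator((n, n), matvec=lu.solve)
        return eigsh(K, k=k, M=M, sigma=sigma, OPinv=OPinv, which="LM")

    if __name__ == "__main__":
        # 1-D Laplacian as a stand-in "stiffness" matrix, identity "mass"
        n = 2000
        K = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        M = sp.identity(n, format="csr")
        vals, vecs = interior_modes(K, M, sigma=0.01, k=5)
        print(np.sort(vals))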

  9. T3P – Progress & Plans Progress includes: • Unconditionally stable time-stepping scheme • Mesh-independent particle trajectories • High-order spatial discretizations on tetrahedral elements Next steps: • Waveguide boundary conditions • Model entire PEP-II IR beam line complex • Runtime parallel performance tuning • Particle-in-cell capability
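
For a second-order FEM wave equation M x'' + K x = f, one standard route to an unconditionally stable scheme is an implicit Newmark-beta integrator with beta = 1/4, so the time step is set by accuracy rather than by the smallest element. The sketch below is a generic dense 1-D illustration of that scheme, not T3P's actual integrator or basis.

    import numpy as np

    def newmark_beta(M, K, f, x0, v0, dt, nsteps, beta=0.25, gamma=0.5):
        """Implicit Newmark-beta integration of  M x'' + K x = f(t).

        With beta = 1/4, gamma = 1/2 the scheme is unconditionally stable.
        Dense 1-D sketch only; a production FEM code factors the sparse
        effective matrix once and reuses it every step.
        """
        x, v = x0.copy(), v0.copy()
        a = np.linalg.solve(M, f(0.0) - K @ x)          # initial acceleration
        A_inv = np.linalg.inv(M + beta * dt**2 * K)     # constant effective matrix
        history = [x.copy()]
        for step in range(1, nsteps + 1):
            t = step * dt
            # predictors
            x_pred = x + dt * v + (0.5 - beta) * dt**2 * a
            v_pred = v + (1.0 - gamma) * dt * a
            # solve for new acceleration, then correct
            a = A_inv @ (f(t) - K @ x_pred)
            x = x_pred + beta * dt**2 * a
            v = v_pred + gamma * dt * a
            history.append(x.copy())
        return np.array(history)

    if __name__ == "__main__":
        # 1-D wave equation on a coarse mesh, driven at one end;
        # dt = 2.5 is well above the explicit leapfrog limit (~1 here),
        # yet the implicit scheme remains bounded.
        n = 50
        K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        M = np.eye(n)
        drive = lambda t: np.eye(n)[0] * np.sin(0.5 * t)
        X = newmark_beta(M, K, drive, np.zeros(n), np.zeros(n), dt=2.5, nsteps=200)
        print("max |x| over run:", float(np.abs(X).max()))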

  10. T3P – Null Space Suppression • Standard formulations excite modes in the null space of the curl-curl operator. • T3P, in contrast, correctly models the null space of the curl-curl operator, removing the need for a Poisson-type correction to the static electric field. [Figure: field comparison, Standard Formulation vs. T3P]
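
The null space of the curl-curl operator is spanned by gradient fields, so a discretization that satisfies curl∘grad = 0 exactly keeps those modes from being excited spuriously. The incidence-matrix sketch below checks that identity on a simple 2-D structured grid; it illustrates the principle only, since T3P works with higher-order elements on tetrahedra.

    import numpy as np
    import scipy.sparse as sp

    def grad_and_curl(nx, ny):
        """Discrete gradient (nodes->edges) and curl (edges->faces) incidence
        matrices on an nx-by-ny structured 2-D grid.  The exact identity
        curl @ grad = 0 is what keeps gradient (null-space) fields from
        polluting the solution.
        """
        node = lambda i, j: i + j * nx
        ex = (nx - 1) * ny          # number of x-directed edges
        ey = nx * (ny - 1)          # number of y-directed edges

        G = sp.lil_matrix((ex + ey, nx * ny))
        e, xedge, yedge = 0, {}, {}
        for j in range(ny):
            for i in range(nx - 1):
                G[e, node(i + 1, j)], G[e, node(i, j)] = 1, -1
                xedge[(i, j)] = e
                e += 1
        for j in range(ny - 1):
            for i in range(nx):
                G[e, node(i, j + 1)], G[e, node(i, j)] = 1, -1
                yedge[(i, j)] = e
                e += 1

        C = sp.lil_matrix(((nx - 1) * (ny - 1), ex + ey))
        f = 0
        for j in range(ny - 1):
            for i in range(nx - 1):
                C[f, xedge[(i, j)]] = 1       # bottom edge
                C[f, yedge[(i + 1, j)]] = 1   # right edge
                C[f, xedge[(i, j + 1)]] = -1  # top edge
                C[f, yedge[(i, j)]] = -1      # left edge
                f += 1
        return G.tocsr(), C.tocsr()

    if __name__ == "__main__":
        G, C = grad_and_curl(6, 5)
        print("max |curl @ grad| entry:", np.abs((C @ G).toarray()).max())  # exactly 0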

  11. Tau3P – Progress & Plans Progress includes: • Curved beam paths for application to PEP-II IR • Dielectric and lossy materials • Checkpoint and restart capabilities • Parallel performance improvement through partitioning schemes in Zoltan (SAPP) Next steps: • Further advances in parallel performance through Zoltan • Apply curved beam paths to model the entire IR complex

  12. Tau3P – Curved Beam Paths • Curved beam paths are needed to accurately model beam transits in the PEP-II IR, which consists of two beam lines with a finite crossing angle at the Interaction Point. [Figures: snapshots of beam transits; curved section of beam line]

  13. Outline of Presentation • Overview • Parallel Code Development • Accelerator Modeling • ISICs and SAPP Collaborations (Rich Lee’s presentation)

  14. NLC – DDS Wakefields from Tau3P H60VG3 55-cell DDS with power and HOM couplers; Tau3P simulation with beam. • 1st wakefield analysis of an actual DDS prototype • 1st direct verification of DDS wakefield suppression by end-to-end simulation [Plots: transverse wakefields, DS vs. DDS]

  15. NLC – DDS Wakefields from Omega3P The Omega3P complex solver is used to compute modes in the 1st and 3rd dipole bands with HOM damping included: • Quadratic elements • 2.3 million DOFs • 94.5 million non-zeros • 3179 sec for 120 eigenpairs on 512 CPUs [Figures: 1st band – mode with high Q that couples to the upstream damping manifold; 3rd band – mode with low Q that couples downstream to the HOM coupler]

  16. NLC – DDS Wakefields Comparison [Plots: wakefields behind the leading bunch and mode spectrum, Omega3P vs. Tau3P]
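
The comparison rests on the standard resonator-mode picture: each damped dipole mode found by Omega3P contributes a decaying sinusoid to the transverse wake, and summing them should match the Tau3P time-domain result. A sketch of that mode sum, with placeholder frequencies, kick factors and Qs rather than the actual DDS mode table:

    import numpy as np

    C_LIGHT = 299_792_458.0  # m/s

    def transverse_wake(s, f_ghz, kick, Q):
        """Transverse wake behind the driving bunch as a sum of damped
        dipole modes (one common convention):
            W(s) = sum_n 2*k_n * sin(w_n s/c) * exp(-w_n s / (2 Q_n c))
        f_ghz, kick (V/pC/mm/m) and Q are per-mode arrays.
        """
        w = 2 * np.pi * np.asarray(f_ghz) * 1e9         # rad/s
        s = np.asarray(s)[:, None]                      # distances (m), column
        phase = w * s / C_LIGHT
        decay = np.exp(-w * s / (2 * np.asarray(Q) * C_LIGHT))
        return (2 * np.asarray(kick) * np.sin(phase) * decay).sum(axis=1)

    if __name__ == "__main__":
        # three hypothetical dipole modes near 15 GHz with damped Qs
        f = [14.9, 15.1, 15.4]        # GHz
        k = [40.0, 35.0, 25.0]        # V/pC/mm/m
        Q = [800.0, 1200.0, 500.0]    # loaded Q after HOM damping
        s = np.linspace(0.0, 1.0, 5)  # m behind the driving bunch
        for d, wake in zip(s, transverse_wake(s, f, k, Q)):
            print(f"s = {d:.2f} m : W = {wake:+.2f} V/pC/mm/m")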

  17. NLC – Dark Current from Track3P Track3P is used to model the dark current in the X-band 30-cell constant-impedance structure for comparison with experiment. Dark current computed at 3 pulse rise times: 10 ns, 15 ns and 20 ns. [Plots: Track3P vs. measured data at each rise time]
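
Dark current originates from field emission off high-field surfaces, usually described by a Fowler-Nordheim law. The sketch below uses one commonly quoted simplified form of that law; the field-enhancement factor and emitter area are illustrative guesses, not parameters fitted by Track3P or taken from the experiment.

    import numpy as np

    def fowler_nordheim_current(E_surf, beta=50.0, phi=4.5, area=1e-12):
        """Field-emitted (dark) current from a metal surface, simplified
        Fowler-Nordheim form:
            I = 1.54e-6 * 10**(4.52/sqrt(phi)) * A * (beta*E)**2 / phi
                * exp(-6.53e9 * phi**1.5 / (beta*E))
        E_surf in V/m, phi (work function) in eV, area A in m^2.
        beta and area are illustrative, not measured values.
        """
        E_eff = beta * np.asarray(E_surf, dtype=float)
        pre = 1.54e-6 * 10 ** (4.52 / np.sqrt(phi)) * area * E_eff**2 / phi
        return pre * np.exp(-6.53e9 * phi**1.5 / E_eff)

    if __name__ == "__main__":
        # surface fields from 50 to 200 MV/m: dark current rises very sharply
        for E in [50e6, 100e6, 150e6, 200e6]:
            print(f"E = {E/1e6:5.0f} MV/m -> I = {fowler_nordheim_current(E):.3e} A")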

  18. PEP-II – IR Heating with Tau3P/Omega3P Tau3P/Omega3P are being used to study beam heating in the Interaction Region and the absorber design for damping trapped modes. [Figures: wall loss and Q; absorber and damped Q]
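
As rough bookkeeping of where a trapped mode's power goes: for stored energy U the dissipated power is P = ωU/Q, and an absorber adds a parallel loss channel, 1/Q_loaded = 1/Q_wall + 1/Q_abs, moving most of the dissipation off the chamber wall. A back-of-envelope sketch with made-up numbers, not PEP-II IR design values:

    import math

    def dissipation_split(f_hz, U_joule, Q_wall, Q_abs):
        """Power balance for a trapped mode with stored energy U.

        P_total = omega * U * (1/Q_wall + 1/Q_abs); the wall and absorber
        shares scale as 1/Q_wall and 1/Q_abs respectively.
        """
        omega = 2 * math.pi * f_hz
        P_wall = omega * U_joule / Q_wall
        P_abs = omega * U_joule / Q_abs
        return P_wall + P_abs, P_wall, P_abs

    if __name__ == "__main__":
        # hypothetical mode: 2 GHz, 1 mJ stored, wall Q 30000, absorber Q 1000
        P, P_wall, P_abs = dissipation_split(2.0e9, 1.0e-3, 3.0e4, 1.0e3)
        print(f"total {P:.1f} W, wall {P_wall:.1f} W, absorber {P_abs:.1f} W")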

  19. LCLS – Coupler Design with S3P Dual-feed racetrack coupler design modeled with S3P: • Dual feed to remove dipole fields • Racetrack cavity shape to minimize quadrupole fields • Rounded coupling iris to reduce RF heating [Figure: S3P model of the S-band structure showing the dual feed, racetrack shape and iris]
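
Judging how well the dual feed and racetrack shape suppress dipole and quadrupole content amounts to an azimuthal Fourier decomposition of the field sampled on a circle around the beam axis. A numpy sketch of that decomposition on a synthetic field (not S3P output):

    import numpy as np

    def azimuthal_multipoles(field_on_circle, max_order=4):
        """Azimuthal Fourier amplitudes |c_m| of a field sampled at equally
        spaced angles on a circle around the beam axis: m=0 is the
        accelerating (monopole) term, m=1 the dipole kick the dual feed
        removes, m=2 the quadrupole term the racetrack shape suppresses.
        """
        c = np.fft.rfft(field_on_circle) / len(field_on_circle)
        return np.abs(c[: max_order + 1])

    if __name__ == "__main__":
        theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
        # hypothetical field on the sampling circle: strong monopole plus
        # small dipole and quadrupole contamination from the coupler
        Ez = 1.0 + 0.02 * np.cos(theta) + 0.005 * np.cos(2 * theta)
        amps = azimuthal_multipoles(Ez)
        for m, a in enumerate(amps):
            print(f"m={m}: relative amplitude {a / amps[0]:.4f}")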

  20. LCLS – RF Gun Design with Omega3P/S3P • Rounded iris to lower pulse heating • Racetrack dual-feed coupler design to minimize dipole and quadrupole fields • 1.6-cell Cubit mesh model [Plot: quadrupole component, (βr)/mm]

  21. RIA – RFQ with Omega3P • RIA will consist of many RFQs in its low-frequency linacs • Tuners are needed to cover cavity frequency deviations of 1% for lack of better predictions • Omega3P improves accuracy by an order of magnitude • This can significantly reduce the number of tuners and their tuning range, helping to lower the linac cost df/f: MWS = 1.5%, Omega3P = -0.2% [Figures: solid model, mesh & wall loss]

  22. RIA – Hybrid RFQ with Omega3P The hybrid RFQ has disparate spatial scales that are difficult to model, requiring high mesh adaptivity and higher-order elements (h-p refinement) to obtain fast convergence. [Plot: frequency vs. 4th power of grid size]
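
The "frequency vs. 4th power of grid size" plot suggests the usual way to quote a converged frequency: assume the eigenfrequency error scales as h^4, fit f(h) = f0 + a*h^4, and read off f0 at h -> 0. A sketch of that extrapolation with synthetic data, not the RFQ results:

    import numpy as np

    def extrapolate_frequency(h, f):
        """Fit f(h) = f0 + a * h**4 by least squares and return the
        extrapolated zero-mesh-size frequency f0, using the h^4 error
        scaling implied by the slide's plot.
        """
        A = np.column_stack([np.ones_like(h), h ** 4])
        (f0, a), *_ = np.linalg.lstsq(A, f, rcond=None)
        return f0, a

    if __name__ == "__main__":
        h = np.array([4.0, 3.0, 2.0, 1.5])            # representative mesh sizes (mm)
        f = 57.50 + 0.8e-3 * h ** 4                    # synthetic frequencies (MHz)
        f = f + np.random.default_rng(1).normal(0, 1e-4, h.size)   # small noise
        f0, slope = extrapolate_frequency(h, f)
        print(f"extrapolated frequency: {f0:.4f} MHz (slope {slope:.2e} MHz/mm^4)")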

  23. PSI – Ring Cyclotron with Omega3P (Lukas Stingelin PhD work – PSI) First-ever mode analysis of an entire ring cyclotron, only possible with parallel computing and unstructured grids. Omega3P model: 1.2 M elements, 6.9 M DOFs. [Figure: layout, top view]

  24. PSI – Ring Cyclotron with Omega3P (Lukas Stingelin PhD work – PSI) • Omega3P finds the tightly clustered modes in 45 min for 20 modes on 32 CPUs (IBM-SP4) using about 120 GB • 280 eigenmodes are calculated with eigenfrequencies close to beam harmonics Mode breakdown (cavity, vacuum chamber & mixed modes): • 44 modes: low frequency, high gap voltage • 18 modes: medium frequency, low gap voltage, vertical electric field • 218 modes: high frequency, low gap voltage

  25. Outline of Presentation • Overview • Parallel Code Development • Accelerator Modeling • ISICs and SAPP Collaborations (Rich Lee’s presentation)
