
Setting up Small Grid Testbed & Using Globus, MPICH-G2



  1. Setting up Small Grid Testbed & Using Globus, MPICH-G2
     Computational Fluid Dynamics Lab., Div. of Aerospace Engineering,
     Korea Advanced Institute of Science and Technology
     Dehee Kim, 2002. 9. 25

  2. Contents • Introduction to GT 2.0 and MPICH-G2 • How to install Globus • CFD Lab. Grid Testbed • Numerical Test on Testbed • About Network Bandwidth • Concluding Remarks

  3. GT 2.0 • Globus Toolkit 2.0 • Major improvements over the Globus Toolkit 1.1.3 and 1.1.4 releases • Data Grid Components • MDS Components • GRAM Components • Packaging Technology • Security Components • Various changes for supporting MPICH-G2

  4. MPICH-G2 ● What is MPICH-G2? • A grid-enabled implementation of the MPI v1.1 standard • Converts data in messages sent between machines of different architectures • Supports multiprotocol communication ● How does MPICH-G2 differ from MPICH-G? • Increased bandwidth • Reduced latency for intra-machine messaging • Reduced latency for inter-machine (TCP) messaging
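     From the user's side, MPICH-G2 is driven much like any MPICH installation once a Globus proxy exists. The following is a minimal single-site sketch; the jobmanager contact string and the solver source file are assumptions, not details taken from these slides.

        # Authenticate to the Grid first (GSI proxy)
        grid-proxy-init

        # Compile with the MPICH-G2 wrapper compilers
        # (the install prefix matches the one used later in this talk)
        /usr/local/mpich-1.2.4-g2/bin/mpif90 -o solver solver.f90

        # "machines" file in the working directory: one quoted gatekeeper
        # contact plus a node count per line (contact string is an assumption)
        #   "cluster.hpcnet.ne.kr/jobmanager-pbs" 4

        # Start 4 processes through the Globus gatekeeper
        /usr/local/mpich-1.2.4-g2/bin/mpirun -np 4 ./solver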

  5. Construction of Grid Testbed • Installation Procedure 1. Set up small PC cluster system - rsh, NFS, automount, ntp, … - back end nodes with hard disk 2. Install F77, F90 compiler - Absoft F90, pgf90, etc. - Set environment variables and path
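     As an illustration of step 2, a typical way to make the Fortran compilers visible to every build step is a few lines in a login script. The settings below assume the Absoft compiler lives under /opt/absoft, matching the paths used in the MPICH-G2 configure command on slide 7.

        # ~/.bashrc (or /etc/profile.d/absoft.sh on every node) -- assumed install path
        export ABSOFT=/opt/absoft                              # compiler install root (assumed)
        export PATH=$ABSOFT/bin:$PATH                          # so f77 / f90 are found by configure
        export LD_LIBRARY_PATH=$ABSOFT/lib:$LD_LIBRARY_PATH    # runtime libraries for built codes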

  6. Construction of Grid Testbed • If you do not install a Fortran compiler before installing GT 2.0, you will see messages like the following:
     .....
     Checking for minix/config.h
     Checking for volatile... yes
     Running device-specific setup program
     *# Globus device overrode C compiler setting
     *# F90 compiler not present; disabling F90 support
     Disabling long double (not supported by Globus data conversion library)
     Checking whether cross-compiling...
     .....

  7. Construction of Grid Testbed 3. Install a job queuing system - PBS, Condor, LSF, etc. - Two RPM files (for PBS) - Front end : /usr/spool/pbs/server_priv/nodes - Back end : /usr/spool/pbs/mom_priv/config, containing
        $clienthost cluster.hpcnet.ne.kr
     and /usr/spool/pbs/default_server
     4. Install GT 2.0 - Using SimpleCA
     5. Install MPICH-G2
        ./configure -device=globus2:-flavor=gcc32dbg \
                    -fc=/opt/absoft/bin/f77 \
                    -f90=/opt/absoft/bin/f90 \
                    --prefix=/usr/local/mpich-1.2.4-g2
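     A quick sanity check after steps 4 and 5 is to create a proxy from the SimpleCA-signed user certificate and push a trivial job through the gatekeeper. The commands below are standard GT 2.0 client tools; the host name is taken from the PBS configuration above, while the jobmanager names are assumptions.

        # Create a short-lived proxy from the user certificate
        grid-proxy-init

        # Trivial job through the default (fork) jobmanager
        globus-job-run cluster.hpcnet.ne.kr /bin/date

        # Same job routed through the PBS jobmanager set up in step 3
        globus-job-run cluster.hpcnet.ne.kr/jobmanager-pbs /bin/hostname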

  8. Construction of Grid Testbed • A trial and error : /etc/xinetd.d/globus-gatekeeper
     With whitespace around the "=", xinetd parses the entry correctly (O.K.):
        service globus-gatekeeper {
            socket_type = stream
            .....
        }
     Without the whitespace, xinetd rejects it (Parsing Error!):
        service globus-gatekeeper {
            socket_type=stream
            .....
        }
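     For reference, a complete gatekeeper entry usually contains more than the two attributes shown above. The sketch below assumes GT 2.0 installed under /usr/local/globus, so the paths and the port registration are assumptions to be adapted to the local GLOBUS_LOCATION.

        # /etc/xinetd.d/globus-gatekeeper (assumed GLOBUS_LOCATION=/usr/local/globus)
        service globus-gatekeeper
        {
            socket_type  = stream
            protocol     = tcp
            wait         = no
            user         = root
            env          = LD_LIBRARY_PATH=/usr/local/globus/lib
            server       = /usr/local/globus/sbin/globus-gatekeeper
            server_args  = -conf /usr/local/globus/etc/globus-gatekeeper.conf
            disable      = no
        }
        # The service name must also resolve to the gatekeeper port (2119/tcp),
        # e.g. through an /etc/services entry, before xinetd is restarted.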

  9. Construction of Grid Testbed 6. Modify some scripts if necessary - If the environment variables related to the jobmanager are not set, set them in files such as
     $GLOBUS_LOCATION/libexec/globus-sh-tools.sh
     $GLOBUS_LOCATION/libexec/globus-gram-job-manager-tools.sh
     .....
     GLOBUS_GRAM_JOB_MANAGER_MPIRUN=/usr/local/mpich-1.2.4-g2/bin/mpirun
     GLOBUS_GRAM_JOB_MANAGER_QDEL=/usr/local/bin/qdel
     GLOBUS_GRAM_JOB_MANAGER_QSTAT=/usr/local/bin/qstat
     GLOBUS_GRAM_JOB_MANAGER_QSUB=/usr/local/bin/qsub
     GLOBUS_GRAM_JOB_MANAGER_QSELECT=/usr/local/bin/qselect
     ......
     7. Configure the firewall policy for Globus (see the sketch below)
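     One common way to handle step 7 is to pin Globus to a fixed ephemeral port range and open only the well-known GT 2.0 service ports plus that range. The rules below are a sketch under the assumption of a plain iptables firewall and an arbitrarily chosen range; they are not taken from the testbed itself.

        # Restrict dynamically allocated Globus ports to a known range
        # (export for clients and add to the gatekeeper's xinetd "env" line)
        export GLOBUS_TCP_PORT_RANGE=40000,40100

        # Open the standard GT 2.0 service ports and the chosen range
        iptables -A INPUT -p tcp --dport 2119 -j ACCEPT          # GRAM gatekeeper
        iptables -A INPUT -p tcp --dport 2811 -j ACCEPT          # GridFTP
        iptables -A INPUT -p tcp --dport 2135 -j ACCEPT          # MDS (GRIS/GIIS)
        iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT   # GLOBUS_TCP_PORT_RANGE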

  10. CFD Lab. Grid Testbed • OS : Linux 2.4.x, 2.2.x • KAIST CFD Lab. – 1 front end, 4 execution nodes (1.8 GHz, 512 MB RAM) • KISTI Supercomputing Center – 1 front end, 4 execution nodes (450 MHz, 256 MB RAM) • Globus Toolkit 2.0, MPICH-G2, Absoft F90 • Job scheduler – Portable Batch System (PBS)
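     A job that spans both sites is typically described to MPICH-G2 as a DUROC multi-request RSL, one subjob per cluster, and launched through mpirun. The sketch below is illustrative only: the KISTI gatekeeper name, jobmanager names, and executable paths are hypothetical placeholders, not the testbed's actual values.

        # job.rsl -- two subjobs, one per site (hostnames/paths are placeholders)
        +
        ( &(resourceManagerContact="cluster.hpcnet.ne.kr/jobmanager-pbs")
           (count=4)
           (label="subjob 0")
           (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0))
           (executable="/home/user/design_opt")
        )
        ( &(resourceManagerContact="kisti-frontend.example.re.kr/jobmanager-pbs")
           (count=4)
           (label="subjob 1")
           (environment=(GLOBUS_DUROC_SUBJOB_INDEX 1))
           (executable="/home/user/design_opt")
        )

        # Hand the RSL directly to the MPICH-G2 mpirun:
        mpirun -globusrsl job.rsl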

  11. CFD Lab. Grid Testbed

  12. Numerical Test on Testbed Design Optimization : 2-D design • 2-D adjoint sensitivity analysis • 2-D airfoil design • Design for drag minimization of the RAE 2822 airfoil • Grid system : 383 x 65, C type • Flow conditions : M = 0.729, AoA = 2.31°, Re = 6.5 x 10^6 • 10 Hicks-Henne functions (generic form sketched below) [Figures: pressure distribution and airfoil shape before and after design]
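     For readers unfamiliar with the design variables, the Hicks-Henne bump functions that perturb the surface are commonly written as below. The peak locations and widths actually used in this study are not given on the slides, so only the generic form is shown.

        \Delta y(x) = \sum_{k=1}^{N} a_k\, b_k(x), \qquad
        b_k(x) = \Big[ \sin\big( \pi\, x^{\,m_k} \big) \Big]^{t_k}, \qquad
        m_k = \frac{\ln 0.5}{\ln x_k}

     where the a_k are the design variables (N = 10 here, 50 in the 3-D case on the next slide), x_k in (0,1) is the chordwise location of the k-th bump peak, and t_k controls the bump width.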

  13. Numerical Test on Testbed Design Optimization : 3-D design • 3-D adjoint sensitivity analysis • 3-D wing design • Design for drag minimization of the ONERA M6 wing • Grid system : 193 x 49 x 33, C-O type • Flow conditions : M = 0.84, AoA = 3.06°, Re = 11.7 x 10^6 • 50 Hicks-Henne functions [Figures: ONERA M6 wing and designed wing]

  14. Design Optimization : Computation Time
     • Flow analysis around a 2-D airfoil and design optimization - computation time:

        Resource | Flow analysis (ch_p4) | Design (ch_p4) | Flow analysis (globus2) | Design (globus2)
        I        | 158.0                 | 467.7          | 158.7                   | 478.9
        II       | 388.8                 | 1166.7         | 392.9                   | 1170.4
        III      | -                     | -              | 410.9                   | 1432.1

        I   : Pentium 4 1.7 GHz CPU, 4 nodes, 512 MB RAM
        II  : Pentium II 450 MHz CPU, 4 nodes, 256 MB RAM
        III : I & II

     • Flow analysis around a 3-D wing and design optimization (case III) : Flow analysis 2047.7, Design 15674.0

  15. DFVLR Axial Fan – Dr. J. S. Yoon • 3-D compressible Navier-Stokes solver • k-ω turbulence model • 3-stage Runge-Kutta time marching & central scheme • 28 blades (45 x 19 x 19 grid) • MPICH-G [Figure: surface pressure contours]
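     For context, the explicit multi-stage Runge-Kutta time marching referred to above is usually written in the Jameson form below; the stage coefficients used in this particular solver are not stated on the slide, so the expression is generic.

        W^{(0)} = W^{\,n}, \qquad
        W^{(k)} = W^{(0)} - \alpha_k\, \Delta t\, R\big(W^{(k-1)}\big), \quad k = 1,\dots,3, \qquad
        W^{\,n+1} = W^{(3)}

     where W is the vector of conserved variables, R(W) the central-scheme residual (including artificial dissipation), and α_k the stage coefficients.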

  16. Computation Time at Various Network Bandwidths

  17. Varying Efficiency with Time of Day • PC cluster front end ↔ IBM SP2 • 1:1 CPU, 200 iterations • Efficiency varies markedly with the time of day • Need for proper QoS and CPU reservation [Figure: variation of computation time]

  18. Concluding Remarks • Setup of a small Grid testbed • Tests with design optimization applications • Need to secure vast computing resources • Implementation on diskless clusters - public IP / private IP issues (e.g. PACX-MPI)
