
Grid Computing in Multidisciplinary CFD optimization problems




  1. Grid Computing in Multidisciplinary CFD optimization problems. The challenge of Multi-physics Industrial Applications. Parallel CFD Conference, Moscow (RU). Toan NGUYEN, Project OPALE. May 13-15th, 2003

  2. OUTLINE • INRIA • STATE OF THE ART • PARALLEL CFD OPTIMIZATION • MULTIDISCIPLINARY APPLICATIONS • CURRENT ISSUES • FUTURE TRENDS & CONCLUSION

  3. PART 1 http://www.inria.fr

  4. INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (National Research Institute for Computer Science and Automatic Control). Created in 1967. A French scientific and technological public institute under the Ministry of Research and the Ministry of Industry.

  5. INRIA MISSIONS • Fundamental and applied research • Design experimental systems • Technology transfer to industry • Knowledge transfer to academia • International scientific collaborations • Contribute to international programs • Technological assessment • Contribute to standards organizations

  6. PERSONNEL 2,500 in six Research Centers: Rocquencourt, Lorraine, Rennes, Rhône-Alpes, Futurs, Sophia Antipolis • 900 permanent staff • 400 researchers • 500 engineers, technicians and administrative personnel • 500 researchers from other organizations • 600 trainees, PhD and post-doctoral students • 100 external collaborators • 400 visiting researchers from abroad • Budget: 120 MEuros (tax not incl.), 25% self-funding through 600 contracts

  7. CHALLENGES • Expertise to program, compute and communicate using the Internet and heterogeneous networks • Design new applications using the Web and multimedia databases • Expertise to develop robust software • Design and master automatic control for complex systems • Combine simulation and virtual reality

  8. APPLICATIONS • Telecommunications and multimedia • Healthcare and biology • Engineering • Transportation • Environment

  9. RESEARCH PROJECTS • Teams of approx. 20 researchers • Medium-term objectives and work program (4 years) • Scientific and financial independence • Links and partnerships with scientific and industrial partners on national and international basis • Regular assessment of results during given time-scale

  10. PROJECTS • 99 Projects in four themes: • 1. Networks and Systems • 2. Software Engineering and Symbolic Computing • 3. Human-Computer Interaction, Image Processing, Data Management, Knowledge Systems • 4. Simulation and Optimization of Complex Systems

  11. INTERNATIONAL COOPERATION • Develop collaborations with European research centres and industries & strengthen the European scientific community in Information & Communication Technologies • Increase international collaborations and enhance exchanges • Cooperations with the United States, Japan, Russia • Relations with China, India, Brazil, etc. • Partnerships with developing countries • World Wide Web Consortium (W3C) • Work with the best industrial partners worldwide

  12. OPALE • INRIA project (January 2002), follow-up to the SINUS project • Located in Sophia-Antipolis & Grenoble • Areas: NUMERIC OPTIMISATION (genetic, hybrid, …), MODEL REDUCTION (hierarchic, multi-grids, …), INTEGRATION PLATFORMS (coupling, distribution, parallelism, grids, clusters, ...) • APPLICATIONS: aerospace, electromagnetics, …

  13. PART 2 STATE OF THE ART

  14. GRID COMPUTING • THE GRIDBUS PROJECT (Univ. Melbourne, Australia)

  15. GRID COMPUTING • RESOURCE MANAGEMENT • INFORMATION SERVICES • DATA MANAGEMENT

  16. APPLICATIONS National Partnership for Advanced Computational Infrastructure

  17. GRID COMPUTING • HIGH PERFORMANCE COMPUTING • HIGH THROUGHPUT COMPUTING • PETA-DATA MANAGEMENT • LONG DURATION APPLICATIONS

  18. GRID COMPUTING • HIGH-PERFORMANCE PROBLEM SOLVING ENVIRONMENTS • BUSINESS TO BUSINESS & E-COMMERCE • LARGE SCALE SCIENTIFIC APPLICATIONS • ENGINEERING, BIO-SCIENCES, EARTH & CLIMATE MODEL. • AFFORDABLE HIGH-PERFORMANCE COMPUTING • IRREGULAR AND DYNAMIC BEHAVIOR APPLICATIONS

  19. GRID COMPUTING • OPTIMALGRID PROJECT (IBM Almaden Research Center)

  20. GRID COMPUTING PERFORMANCE-DIRECTED MANAGEMENT • DISCOVERY, SHARING, COORDINATED USE, MONITORING • DISTRIBUTED HETEROGENEOUS DYNAMIC RESOURCES & SERVICES • PERFORMANCE, SECURITY, SCALABILITY, ROBUSTNESS • DYNAMIC MONITORING • ADAPTIVE RESOURCE CONTROL • ERROR AMPLIFIER SYNDROME

  21. GRID COMPUTING • PLANNING & ADAPTING DISTRIBUTED APPLICATIONS LOCATION TRANSPARENCY, MULTIPLE PROTOCOL BINDINGS CREATE & COMPOSE DISTRIBUTED SYSTEMS • NEED ENQUIRY, REGISTRATION PROTOCOLS GRID SERVICES (OGSA) • BROKERING, FAULT DETECTION & TROUBLESHOOTING COMPATIBLE UNDERLYING PLATFORMS • CACHING, MIGRATING, REPLICATING DATA APPLICATIONS : HIGH ENERGY PHYSICS (DATAGRID, PPDG, GriPhyN)

  22. GRID COMPUTING GRID Research, Integration, Deployment & Support center • NSF Middleware Initiative : Globus, Condor-G, NWS, KX509, GSI-SSH, GPT • ISI, Univ. Chicago, NCSA, SDSC, Univ. Wisconsin Madison • NSF, Dept Energy, DARPA, NASA GOAL : « national middleware infrastructure to permit seamless resource sharing across virtual organizations » PHILOSOPHY : « the whole is greater than the sum of its parts » APPLICATIONS : NEES, GriPhyN, Intl Virtual Data Grid Lab (ATLAS)

  23. GRID COMPUTING Incentives • SOFTWARE DEV.: FREE OPEN SOURCE (Linux, FreeBSD) • PARALLEL & DISTRIBUTED PROGRAMMING • BEOWULF CLUSTERS • HIGH-SPEED GIGABIT/SEC NETWORKS • COMPONENT PROGRAMMING • DEVELOPMENT OF LARGE DISTRIBUTED DATA FILE SYSTEMS

  24. BEOWULF CLUSTER PC-cluster at INRIA Rhône-Alpes (216 Pentium III procs.)

  25. GRIDS vs. CLUSTERS «Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across multiple administrative domains, based on their (resources) availability, capability, performance, cost and users' quality-of-service requirements. If distributed resources happen to be managed by a single, global centralised scheduling system, then it is a cluster. In cluster, all nodes work cooperatively with common goal and objective as the resource allocation is performed by a centralised, global resource manager. In Grids, each node has its own resource manager and allocation policy. » Rajkumar Buyya (Grid Infoware)

  26. DISTRIBUTION vs. PARALLELISM • PARALLELISM IS NOT DISTRIBUTION: YOU CAN RUN PARALLEL CODES SEQUENTIALLY • DISTRIBUTION SUPPORTS A LIMITED FORM OF PARALLELISM: YOU CAN DISTRIBUTE SEQUENTIAL CODES, YOU CAN RUN SEQUENTIAL CODES IN « PARALLEL » • PARALLELISM ALLOWS DISTRIBUTION: YOU CAN DISTRIBUTE PARALLEL CODES • GLOBUS WILL NOT PARALLELIZE YOUR CODE
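The distinction on this slide can be made concrete with a small sketch (illustrative only, not from the talk): the same parallel-structured map over independent tasks can execute serially or via a worker pool, so parallel structure in the code does not by itself imply distributed execution.

```python
# Illustrative sketch: a parallel-structured map run either serially or
# concurrently. The toy "solve" and all names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def solve(n):
    # stand-in for one independent sub-computation
    return sum(i * i for i in range(n))

def run_sequential(tasks):
    # parallel-structured code executed serially, one task after another
    return [solve(n) for n in tasks]

def run_concurrent(tasks, workers=4):
    # the same tasks handed to a worker pool; results are identical
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve, tasks))
```

With process pools or MPI ranks the workers could equally live on distinct machines, which is where distribution, as opposed to mere parallel structure, enters.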

  27. WHERE WE ARE TODAY Bits and pieces… • 1980: one year of CPU time • 1992: one month • 1997: four days • 2002: one hour Moore’s law results… • Earth Sim (Japan): 5,120 NEC procs • ASCI Q (LANL): 11,968 HP Alpha procs • ASCI White (LLNL): 8,192 IBM SP Power 3 procs • MCR Linux (LLNL): 2,304 Intel 2.4 GHz Xeon procs

  28. DISTRIBUTED SIMULATION PLATFORM What is required... • MULTI-DISCIPLINE PROBLEM SOLVING ENVIRONMENTS • HIGH-PERFORMANCE & TRANSPARENT DISTRIBUTION • USING CURRENT COMMUNICATION STANDARDS • USING CURRENT PROGRAMMING STANDARDS • WEB-LEVEL USER INTERFACES • OPTIMIZED LOAD BALANCING & COMMUNICATION FLOW

  29. INTEGRATION PLATFORMS What they are... • COMMON DEFINITION, CONFIGURATION, DEPLOYMENT, EXECUTION & MONITORING ENVIRONMENT • COLLABORATIVE APPLICATIONS Distributed tasks interacting dynamically in a controlled and formally provable way • CODE-COUPLING FOR HETEROGENEOUS SOFTWARE • DISTRIBUTED: LAN, WAN, HSN... • TARGET HARDWARE: NOW, COW, PC clusters, ... • TARGET APPLICATIONS: multidiscipline engineering, ...

  30. DISTRIBUTED OBJECTS ARCHITECTURE SOFTWARE COMPONENTS • COMPONENTS ENCAPSULATE CODES • COMPONENTS ARE DISTRIBUTED OBJECTS • WRAPPERS AUTOMATICALLY (?) GENERATED • DISTRIBUTED PLUG & PLAY
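The wrapping idea on this slide can be sketched as follows; all names are invented for illustration, not CAST's actual interfaces. A legacy routine is encapsulated behind a uniform component interface that the platform can invoke like any other distributed object.

```python
# Hedged sketch of wrapping a legacy code as a plug-and-play component;
# every name here is hypothetical.
def legacy_solver(mach, aoa):
    # stand-in for a compiled CFD code callable only positionally
    return 0.01 + 0.002 * mach * aoa

class ComponentWrapper:
    """Uniform interface the platform sees, whatever code is inside."""
    def __init__(self, fn, inputs):
        self.fn = fn
        self.inputs = inputs          # declared input names, in order

    def run(self, **kwargs):
        # map named platform inputs onto the legacy positional signature
        return self.fn(*(kwargs[name] for name in self.inputs))

drag = ComponentWrapper(legacy_solver, ["mach", "aoa"])
```

A real platform would generate such wrappers (semi-)automatically from an interface description, which is what the slide's "(?)" hedges about.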

  31. « CAST » INTEGRATION PLATFORM [Diagram: a CAST server coupling optimizer and solver modules through wrappers over CORBA]

  32. SOFTWARE COMPONENTS • BUSINESS COMPONENTS LEGACY SOFTWARE • OBJECT-ORIENTED COMPONENTS C++, PACKAGES, ... • DISTRIBUTED OBJECTS COMPONENTS Java RMI, EJB, CCM, ... • CASUAL METACOMPUTING COMPONENTS ?

  33. DISTRIBUTED OBJECTS ARCHITECTURE SOFTWARE CONNECTORS • COMPONENTS COMMUNICATE THROUGH SOFTWARE CONNECTORS • CONNECTORS ARE SYNCHRONISATION CHANNELS • SEVERAL PROTOCOLS - SYNCHRONOUS METHOD INVOCATION - ASYNCHRONOUS EVENT BROADCAST • CONNECTORS = DATA COMMUNICATION CHANNELS
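A minimal sketch of the two connector protocols named on this slide, synchronous method invocation and asynchronous event broadcast; the class and method names are illustrative, not the platform's real interface.

```python
# Hypothetical connector sketch: components communicate only through
# this channel, either by blocking calls or by queued event broadcast.
import queue

class Connector:
    """Minimal data-communication channel between components."""
    def __init__(self):
        self._queues = []

    # synchronous method invocation: the caller blocks until the result
    def invoke(self, component, method, *args):
        return getattr(component, method)(*args)

    # asynchronous event broadcast: events are queued per subscriber,
    # and consumers drain their queue whenever they are ready
    def subscribe(self):
        q = queue.Queue()
        self._queues.append(q)
        return q

    def broadcast(self, event):
        for q in self._queues:
            q.put(event)

class Solver:
    def residual(self, x):
        return abs(x) * 0.5           # stand-in for a solver call
```

The synchronous path suits request/reply coupling between solver and optimizer; the broadcast path suits convergence notifications that many components observe.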

  34. PARALLEL APPLICATIONS The good news…. • PARALLEL and/or DISTRIBUTED HARDWARE • // SOFTWARE LIBRARIES : MPI, PVM, SciLab //, ... • NEW APPLICATION METHODOLOGIES DOMAIN DECOMPOSITION GENETIC ALGORITHMS GAME THEORY HIERARCHIC MULTI-GRIDS • NESTING SEVERAL DEGREES PARALLELISM

  35. NESTING PARALLELISM LEVERAGE OPTIMISATION STRATEGIES • COMBINE SEVERAL APPROACHES DOMAIN DECOMPOSITION GENETIC ALGORITHMS … • // SOFTWARE LIBRARIES : MPI, ... • GRIDS PC-CLUSTERS
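The nesting idea above can be sketched as two levels of worker pools (illustrative only): an outer level evaluating genetic-algorithm candidates independently, and an inner level decomposing each evaluation across subdomains.

```python
# Hedged sketch of two nested degrees of parallelism; the toy functions
# stand in for real GA fitness evaluation and subdomain solves.
from concurrent.futures import ThreadPoolExecutor

def solve_subdomain(candidate, subdomain):
    # stand-in for the local flow solve on one subdomain
    return (candidate - subdomain) ** 2

def evaluate(candidate, subdomains):
    # inner level: domain decomposition of a single evaluation
    with ThreadPoolExecutor() as inner:
        return sum(inner.map(lambda s: solve_subdomain(candidate, s),
                             subdomains))

def evaluate_population(population, subdomains):
    # outer level: independent fitness evaluations of the GA population
    with ThreadPoolExecutor() as outer:
        return list(outer.map(lambda c: evaluate(c, subdomains),
                              population))
```

On a grid, the outer level maps naturally onto clusters and the inner level onto the processors of each cluster, which is the combination the slide advocates.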

  36. ADVANCES IN HARDWARE The best news…. • HIGH-SPEED NETWORKS : ATM, FIBER OPTICS... Gigabits/sec networks available (2.5, 10, …) • PC & Multiprocs CLUSTERS : thousands GHz procs... • Lays the ground for GRIDS and METACOMPUTING GLOBUS, LEGION CONDOR, NETSOLVE

  37. CLUSTER COMPUTING PC-cluster at INRIA Rhône-Alpes (216 Pentium III + 200 Itanium procs. Linux)

  38. PART 3 PARALLEL CFD OPTIMIZATION

  39. « CAST » INTEGRATION PLATFORM COLLABORATIVE APPLICATIONS SPECIFICATION TOOL GOALS • “DECISION” CORBA INTEGRATION PLATFORM COLLABORATIVE MULTI-DISCIPLINE OPTIMISATION • DESIGN FUTURE HPCN OPTIMISATION PLATFORMS • TESTBED GENETIC & PARALLEL OPTIMISATION ALGORITHMS CODE COUPLING FOR CFD, CSM SOLVERS & OPTIMISERS

  40. The front stage….

  41. PROCESS ALGEBRA

  42. TEST CASE • SHOCK-WAVE INDUCED DRAG REDUCTION • WING PROFILE OPTIMISATION (RAE2822) • Euler eqns (Mach 0.84, aoa = 2°) + BCGA (100 gen.) • 2D MESH: 14,747 nodes, 29,054 triangles • 4.5 hours CPU time (SUN MicroSPARC 5, Solaris 2.5) • 2.5 minutes CPU time (PC cluster, 40 bi-proc nodes, Linux), a roughly 100-fold speedup
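For readers unfamiliar with BCGA, a toy binary-coded genetic algorithm loop looks like the following; the objective here is a stand-in placed at an arbitrary optimum, not the Euler drag computation of the test case, and all parameters are illustrative.

```python
# Toy binary-coded GA (BCGA) sketch: elitist selection, one-point
# crossover, bit-flip mutation. Everything here is hypothetical.
import random

def fitness(bits):
    x = int("".join(map(str, bits)), 2) / 255.0   # decode 8 bits to [0, 1]
    return (x - 0.6) ** 2                          # pretend drag, min at 0.6

def bcga(generations=100, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # lower "drag" is better
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, 8)
            child = a[:cut] + b[cut:]              # one-point crossover
            if rng.random() < 0.1:                 # bit-flip mutation
                i = rng.randrange(8)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

Each fitness call in the real test case is a full Euler solve, which is why evaluating the population in parallel on a cluster turns 4.5 hours into minutes.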

  43. TEST CASE WING PROFILE OPTIMISATION

  44. CAST DISTRIBUTED INTEGRATION PLATFORM [Diagram: PC clusters at RENNES, GRENOBLE and NICE, running n CFD solvers, a GA optimiser and the CAST software, linked by the VTHD Gbit/s network]

  45. APPLICATION EXAMPLE MULTI-ELEMENT WING PROFILE OPTIMISATION

  46. APPLICATION EXAMPLE WING GEOMETRY

  47. APPLICATION EXAMPLE OPTIMISATION STRATEGY

  48. APPLICATION EXAMPLE PERFORMANCE DATA [Chart: elapsed time reduced from 1 h 35 min to 6 min]

  49. APPLICATION EXAMPLE PERFORMANCE DATA

  50. APPLICATION EXAMPLE PERFORMANCE DATA
