
High-Performance Scientific Computations: State of the Art


Presentation Transcript


  1. High-Performance Scientific Computations: State of the Art Professor Vladimir A. Cheverda Head of Department of Computational Methods in Geophysics, Trofimuk Institute of Petroleum Geology and Geophysics, Siberian Branch of RAS (Novosibirsk)

  2. Outline • Preliminaries • Top 500 list • Science paradigms: from the first to the fourth Kazakh-Britain Technical University

  3. Outline • Preliminaries • Top 500 list • Science paradigms: from the first to the fourth Kazakh-Britain Technical University

  4. Definitions • Flop = floating-point operation; • Flops = Flop/sec; • Byte = 8 bits; • Single precision = 4 bytes per floating-point number; • Double precision = 8 bytes per floating-point number. Kazakh-Britain Technical University
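
A minimal C check of these sizes, added here purely as an illustration (on IEEE 754 platforms, which covers essentially all current HPC systems, float is 4 bytes and double is 8 bytes):

#include <stdio.h>

int main(void) {
    /* Single precision (float): 4 bytes; double precision (double): 8 bytes. */
    printf("float  : %zu bytes\n", sizeof(float));
    printf("double : %zu bytes\n", sizeof(double));
    return 0;
}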

  5. Parallel computing Kazakh-Britain Technical University

  6. Hardware: memory and communication A shared memory system of three computers Kazakh-Britain Technical University

  7. Hardware: memory and communication A distributed memory system of three computers Kazakh-Britain Technical University

  8. Hardware: memory and communication Non-uniform memory access (NUMA): two nodes, each with four processor units Kazakh-Britain Technical University

  9. Hardware: classes of parallel computers Multi-core computing is done by multi-core processors. A multi-core processor is a single computing component with two or more independent central processing units (called "cores"). An AMD Athlon X2 6400+ dual-core processor. Kazakh-Britain Technical University

  10. Hardware: classes of parallel computers Kazakh-Britain Technical University

  11. Hardware: classes of parallel computers A computer cluster consists of a set of loosely connected or tightly connected computers that work together so that in many respects they can be viewed as a single system. Kazakh-Britain Technical University

  12. Hardware: classes of parallel computers Massively parallel computing refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel. InfiniBand switch 3D torus interconnect topology for 8 processors Kazakh-Britain Technical University

  13. Kazakh-Britain Technical University

  14. Hardware: general view Kazakh-Britain Technical University

  15. Hardware: Peak Performance How to calculate peak flop/s: multiply together the number of processors, the number of cores per processor, the number of floating-point operations each core can perform per clock cycle, and the clock rate in cycles per second. Example: 16K processors, quad-core, dual FMADD, 850 MHz: (16 x 1024) x 4 x 2 x 2 x (850 x 10^6) ≈ 0.222 x 10^15 flop/s Kazakh-Britain Technical University
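
As a sketch, the same peak-performance arithmetic in C; the machine parameters are the ones from the example above and are not tied to any particular system named here:

#include <stdio.h>

int main(void) {
    double processors      = 16.0 * 1024.0;  /* 16K processors                     */
    double cores_per_proc  = 4.0;            /* quad-core                          */
    double flops_per_cycle = 2.0 * 2.0;      /* dual FMADD: 2 units x 2 flops each */
    double clock_hz        = 850.0e6;        /* 850 MHz                            */

    double peak = processors * cores_per_proc * flops_per_cycle * clock_hz;
    printf("Peak performance: %.3e flop/s\n", peak);   /* ~0.22 x 10^15 flop/s */
    return 0;
}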

  16. Software OpenMP (Open Multi-Processing) is an Application Programming Interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++ and Fortran. OpenMP is an implementation of multithreading, a method of parallelization whereby a master thread (a series of instructions executed consecutively) forks a specified number of slave threads and a task is divided among them. The threads then run concurrently, with the runtime environment allocating threads to different processors. Kazakh-Britain Technical University
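
A minimal OpenMP sketch in C (the loop and array are illustrative, not taken from the presentation): the master thread forks a team of threads at the parallel region, the loop iterations are divided among them, and the threads join again at the end of the region. Compile with, e.g., gcc -fopenmp.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    /* The master thread forks slave threads here; loop iterations are
       divided among them and executed concurrently. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * i + 1.0;
        sum += a[i];
    }

    printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}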

  17. Software Message Passing Interface (MPI) is a language-independent communications protocol used to program parallel computers. Both point-to-point and collective communication are supported. MPI "is a message-passing application programmer interface, together with protocol and semantic specifications for how its features must behave in any implementation." MPI's goals are high performance, scalability, and portability. MPI remains the dominant model used in high-performance computing today. Kazakh-Britain Technical University
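
A minimal MPI sketch in C showing both kinds of communication mentioned above: point-to-point (MPI_Send/MPI_Recv) and collective (MPI_Reduce). The message contents are illustrative only; run with, e.g., mpirun -np 4.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: rank 1 sends one value to rank 0. */
    if (size > 1) {
        double msg = 3.14;
        if (rank == 1)
            MPI_Send(&msg, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        else if (rank == 0) {
            MPI_Recv(&msg, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %f from rank 1\n", msg);
        }
    }

    /* Collective: sum one number from every rank onto rank 0. */
    double local = (double)rank, total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks 0..%d = %f\n", size - 1, total);

    MPI_Finalize();
    return 0;
}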

  18. Outline • Preliminaries • Top 500 list • Science paradigms: from the first to the fourth Kazakh-Britain Technical University

  19. Kazakh-Britain Technical University

  20. Top 500 list (after prof. Jack Dongarra) Kazakh-Britain Technical University

  21. Top 500 list (after prof. Jack Dongarra) Kazakh-Britain Technical University

  22. Top 500 list (after prof. Jack Dongarra) Kazakh-Britain Technical University

  23. Top 500 list (after prof. Jack Dongarra) Kazakh-Britain Technical University

  24. Top 500 list (after prof. Jack Dongarra) Kazakh-Britain Technical University

  25. Top 500 list (after prof. Jack Dongarra) Kazakh-Britain Technical University

  26. Zetta-scale, anyone? Kazakh-Britain Technical University

  27. Outline • Preliminaries • Top 500 list • Science paradigms: from the first to the fourth Kazakh-Britain Technical University

  28. Science paradigms Kazakh-Britain Technical University

  29. Science paradigms The first paradigm: many hundreds of years ago, the main business of science was the description of natural phenomena. Kazakh-Britain Technical University

  30. Science paradigms The second paradigm: over the last few hundred years, a theoretical branch emerged, built on generalizations of experimental data and mathematical models. The main breakthrough of that time was the development of differential and integral calculus. Kazakh-Britain Technical University

  31. Science paradigms The third paradigm: next, theoretical models grew too complicated to solve analytically, and over the last few decades an intensive computational branch has emerged. Computational science is now a third leg: experiment, theory and computation carry equal weight! Kazakh-Britain Technical University

  32. Illustration Small Scale Subsurface Heterogeneities: Seismic Modelling and Imaging Kazakh-Britain Technical University

  33. Galina Reshetova: Institute of Computational Mathematics and Mathematical Geophysics, Novosibirsk; Vadim Lisitsa and Vladimir Tcheverda: Institute of Petroleum Geology and Geophysics, Novosibirsk; Vladimir Pozdnyakov: Siberian Federal University, Krasnoyarsk; Valery Shilikov: KrasNIPIneft, Krasnoyarsk. Kazakh-Britain Technical University

  34. Content • Motivation. • Presentation of the modelling and imaging techniques. • Synthetic examples. • Field examples. • Conclusion and road map. Kazakh-Britain Technical University

  35. 1. Motivation Kazakh-Britain Technical University

  36. Cavernous/fractured reservoirs A common situation for reservoirs in the carbonate environment: oil is accumulated in caverns, but permeability is determined mainly by fractures. The rock matrix itself is not permeable. Kazakh-Britain Technical University

  37. Cavernous/fractured reservoirs: core sample Kazakh-Britain Technical University

  38. Uncovered fracture corridor Kazakh-Britain Technical University

  39. Fracture corridors Recovery of fracture corridors is of great importance for effective oil field development. Kazakh-Britain Technical University

  40. Variety of fractures in the carbonate environment (following J.-P. Petit et al.) FC – fracture corridors BFC – bed-controlled fractures MBF – multi-bed fractures HPF – highly persistent fractures Kazakh-Britain Technical University

  41. Multiscale 3D heterogeneous model Kazakh-Britain Technical University

  42. What is our goal? Simulation of wave propagation in realistic 3D heterogeneous media, taking into account the microstructure (fractures, cracks, caverns, etc.), in order to learn how scattered energy propagates. How are we going to do this? Time-domain explicit finite-difference methods with local grid refinement in time and space. Kazakh-Britain Technical University
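
The scheme in the presentation is 3D with local grid refinement; as a much-simplified illustration of the explicit time-domain finite-difference idea, here is a 1D constant-velocity acoustic update on a single uniform grid (grid size, velocity and time step are illustrative assumptions, not the authors' actual scheme):

#include <stdio.h>

#define NX 1001

int main(void) {
    double c = 3000.0, dx = 5.0;            /* velocity [m/s], grid step [m]       */
    double dt = 0.8 * dx / c;               /* time step respecting the CFL limit  */
    double r2 = (c * dt / dx) * (c * dt / dx);
    static double prev[NX], cur[NX], next[NX];

    cur[NX / 2] = 1.0;                      /* initial point disturbance */

    for (int n = 0; n < 1000; n++) {
        /* Explicit second-order update of the 1D acoustic wave equation. */
        for (int i = 1; i < NX - 1; i++)
            next[i] = 2.0 * cur[i] - prev[i]
                    + r2 * (cur[i + 1] - 2.0 * cur[i] + cur[i - 1]);
        for (int i = 0; i < NX; i++) { prev[i] = cur[i]; cur[i] = next[i]; }
    }
    printf("u at centre after 1000 steps: %e\n", cur[NX / 2]);
    return 0;
}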

  43. Artifacts The estimated amplitude of the scattered waves (from the single-scattering approximation) is about 0.001 – 0.01 with respect to the incident one!! Artificial reflections must therefore be kept around 10^-3 – 10^-4 with respect to the incident wave. Kazakh-Britain Technical University

  44. 2. Presentation of the method for finite-difference modelling Kazakh-Britain Technical University

  45. Local grid refinement • A fine grid should be used only where caverns/cracks/fractures are present, in order to avoid unrealistic demands on computer resources. • Different grids cause artificial interface reflections due to different numerical dispersion. • These artificial reflections must be around 10^-3 – 10^-4 with respect to the incident wave. • The finite-difference scheme must be stable. Kazakh-Britain Technical University

  46. Parallel implementation via domain decomposition The fine-grid area can be placed anywhere within the reference model, regardless of the specific domain decomposition used in the coarse-grid model. Kazakh-Britain Technical University
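
As a sketch of the domain-decomposition idea (not the authors' actual coarse/fine-grid coupling), here is a 1D MPI decomposition with a one-cell ghost (halo) exchange between neighbouring subdomains; the array size and values are illustrative:

#include <stdio.h>
#include <mpi.h>

#define NLOC 100   /* interior cells per process (illustrative) */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double u[NLOC + 2];                               /* interior + 2 ghost cells */
    for (int i = 0; i < NLOC + 2; i++) u[i] = (double)rank;

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Ghost-cell exchange: each process sends its first interior cell to
       the left and its last interior cell to the right, and receives the
       neighbours' boundary cells into its own ghost cells. */
    MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                 &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOC],     1, MPI_DOUBLE, right, 1,
                 &u[0],        1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d ghosts: left = %f, right = %f\n", rank, u[0], u[NLOC + 1]);
    MPI_Finalize();
    return 0;
}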

  47. Decomposition of the computational domain Kazakh-Britain Technical University

  48. Experimental strong and weak scalability Strong scalability Weak scalability Kazakh-Britain Technical University
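
Behind these plots are the standard definitions: for strong scalability the total problem size is fixed and the speedup is S(p) = T(1)/T(p) with efficiency E(p) = S(p)/p; for weak scalability the work per process is fixed and the efficiency is T(1)/T(p). A tiny sketch of this arithmetic in C, using made-up placeholder timings rather than the measured data from the slide:

#include <stdio.h>

int main(void) {
    /* Strong scaling: fixed total problem size (placeholder timings). */
    double t1 = 100.0, tp = 14.0;
    int    p  = 8;
    double speedup    = t1 / tp;        /* S(p) = T(1)/T(p) */
    double efficiency = speedup / p;    /* E(p) = S(p)/p    */
    printf("strong scaling: S = %.2f, E = %.2f\n", speedup, efficiency);

    /* Weak scaling: fixed work per process (placeholder timings). */
    double t1w = 10.0, tpw = 11.5;
    printf("weak scaling efficiency: %.2f\n", t1w / tpw);
    return 0;
}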

  49. 3. Synthetic example. Kazakh-Britain Technical University

  50. Model Kazakh-Britain Technical University
