
NCCS Hardware


Presentation Transcript


  1. NCCS Hardware
     Jim Rogers, Director of Operations, National Center for Computational Sciences

  2. NCCS resources: October 2007 summary
     7 systems connected by a 1 GigE control network, 10 GigE network routers, and the UltraScience network.
     Supercomputers: 51,110 CPUs, 68 TB memory, 336 TFlops.
     • Cray XT4 "Jaguar": 23,662 processors @ 2.6 GHz, 45 TB memory, 900 TB disk
     • Cray X1E "Phoenix": 1,024 processors @ 0.5 GHz, 2 TB memory
     • Track II: 4,512 quad-core processors @ 2.3 GHz, 18,048 GB memory
     • Blue Gene/P: 2,048 quad-core processors @ 850 MHz, 4,096 GB memory
     • IBM Linux NSTG: 56 processors @ 3 GHz, 76 GB memory
     • Visualization cluster: 128 processors @ 2.2 GHz, 128 GB memory
     • Additional local disk: 44 TB, 4.5 TB, 5 TB, and 9 TB; total shared disk 250.5 TB
     • IBM HPSS backup storage: 5 PB data storage; many storage devices supported
     • Scientific visualization lab: 35-megapixel Powerwall
     • Test systems and evaluation platforms: 96-processor Cray XT3, 144-processor Cray XD1 with FPGAs, SRC Mapstation, Clearspeed, BlueGene (at ANL)
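     A quick check on the summary figures: the 51,110-CPU total matches the per-system counts above if each quad-core processor is counted as four CPUs (the slide does not state how the tally is made, so this counting convention is assumed):

         23,662 + (4 × 4,512) + (4 × 2,048) + 56 + 128 + 1,024
       = 23,662 + 18,048 + 8,192 + 56 + 128 + 1,024
       = 51,110 CPUs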

  3. Hardware roadmap As it looks to the future, the National Center for Computational Sciences expects to lead the accelerating field of high-performance computing. Upgrades will boost Jaguar’s performance to 250 teraflops by the end of 2007, followed by installation of two separate petascale systems in 2009.

  4. Jaguar system specifications

  5. Phoenix – Cray X1E
     Cray X1E: 1,024 vector processors, 18.5 TF
     • Ultra-high bandwidth
     • Globally addressable memory
     • Addresses large-scale problems that cannot be done on any other computer
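     To make the "globally addressable memory" bullet concrete, below is a minimal one-sided communication sketch in C using SHMEM-style calls, one of the programming models used on machines of this class. The slide does not name a programming model, so the choice of SHMEM (rather than, say, Co-Array Fortran or UPC) and the array names are illustrative assumptions only.

         /* Sketch: one PE writes directly into another PE's memory with a
          * one-sided put. Illustrative only; not from the presentation. */
         #include <stdio.h>
         #include <shmem.h>

         static long src[8], dst[8];   /* global arrays are symmetric, i.e. remotely addressable */

         int main(void)
         {
             start_pes(0);                      /* classic SHMEM startup */
             int me   = shmem_my_pe();
             int npes = shmem_n_pes();

             for (int i = 0; i < 8; i++)
                 src[i] = 100 * me + i;

             /* PE 0 writes into PE 1's dst[] without PE 1 participating */
             if (me == 0 && npes > 1)
                 shmem_long_put(dst, src, 8, 1);

             shmem_barrier_all();               /* ensure the put is complete and visible */

             if (me == 1)
                 printf("PE 1 sees dst[3] = %ld\n", dst[3]);
             return 0;
         }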

  6. Jaguar – Cray XT4
     Today:
     • 120 TF Cray XT4
     • 2.6 GHz dual-core AMD Opteron processors
     • 11,508 compute nodes
     • 900 TB disk
     • 124 cabinets
     • Currently partitioned as 96 cabinets of Catamount and 32 cabinets of Compute Node Linux
     • November 2007: all cabinets Compute Node Linux
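     The 120 TF figure is consistent with a back-of-the-envelope peak estimate, assuming each dual-core Opteron core retires two double-precision floating-point operations per cycle (an assumption about the processor generation, not stated on the slide):

         11,508 nodes × 2 cores × 2.6 GHz × 2 flops/cycle ≈ 119.7 TF ≈ 120 TF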

  7. Jaguar – Cray XT4 architecture
     Cray XT4 scalable architecture:
     • Designed to scale to 10,000s of processors
     • Measured MPI bandwidth of 1.8 GB/s
     • 3-D torus topology
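     The slide quotes a measured MPI bandwidth of 1.8 GB/s but not how it was measured. The C sketch below estimates point-to-point bandwidth with a simple ping-pong loop, the usual way such figures are obtained; the message size, repetition count, and timing method are illustrative choices, not the actual benchmark behind the quoted number.

         /* Minimal MPI ping-pong bandwidth sketch; run with 2 ranks. */
         #include <mpi.h>
         #include <stdio.h>
         #include <stdlib.h>

         int main(int argc, char **argv)
         {
             const int nbytes = 1 << 22;            /* 4 MB message */
             const int reps   = 100;
             char *buf = malloc(nbytes);
             int rank;

             MPI_Init(&argc, &argv);
             MPI_Comm_rank(MPI_COMM_WORLD, &rank);

             double t0 = MPI_Wtime();
             for (int i = 0; i < reps; i++) {
                 if (rank == 0) {
                     MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                     MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                 } else if (rank == 1) {
                     MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                     MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
                 }
             }
             double t1 = MPI_Wtime();

             if (rank == 0) {
                 /* each repetition moves nbytes each way; report one-way bandwidth */
                 double gb_per_s = (2.0 * reps * nbytes) / (t1 - t0) / 1e9;
                 printf("approx. point-to-point bandwidth: %.2f GB/s\n", gb_per_s);
             }

             free(buf);
             MPI_Finalize();
             return 0;
         }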

  8. Phoenix – Cray X1E
     [Chart: Phoenix utilization by discipline. Y-axis: utilization (%), 0–100; X-axis: Oct through Aug; disciplines: Astro, Atomic, Biology, Chemistry, Climate, Fusion, Industry, Materials, Other.]

  9. Jaguar – Cray XT4
     [Chart: Jaguar utilization by discipline. Y-axis: utilization (%), 0–100; X-axis: Oct through Aug; disciplines: Astro, Biology, Chemistry, Climate, Combustion, Fusion, Industry, Materials, Nuclear, Other.]

  10. Contact
      • Jim Rogers
      • Director of Operations
      • National Center for Computational Sciences
      • (865) 576-2978
      • jrogers@ornl.gov
      Rogers_NCCS_Hardware_SC07
