
Computational Infrastructure




  1. Computational Infrastructure
  Ion I. Moraru

  2. UConn Health HPC Facility
  • Originated out of the computational needs of another NIH P41 grant (NRCAM, continuously funded since 1998)
  • Developed a large-scale biological simulation service well before SaaS approaches (Virtual Cell, a.k.a. VCell, distributed architecture)
  • Incorporates enterprise IT services and support, including extensive virtualization infrastructure for both mission-critical and research applications
  • Since 2010: dedicated new datacenter, major upgrades, virtualization
  • 2014 – Science DMZ: cross-campus 100 GbE, private cloud

  3. Scope
  • Kinetic modeling and simulation platform (see the simulation sketch below)
    • Compartmental or spatially resolved (1D/2D/3D)
    • Stochastic, deterministic, hybrid
    • Reaction-diffusion, advection, electrophysiology
  • Major emphasis on reuse and reproducibility
    • SaaS: VCell simulations from 2001 are still 100% reproducible
    • Standards development: SBML, SED-ML Editors, HARMONY (see the libsbml example below)

  Service statistics:
  Total Registered VCell Users     17,048
  Users Who Ran Simulations         4,030
  Currently Stored Models          58,798
  Currently Stored Simulations    353,603
  Publicly Available Models           597
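To make the slide's "spatially resolved, deterministic, reaction-diffusion" wording concrete, here is a minimal sketch of that class of problem: one species diffusing and decaying on a 1D domain, solved with explicit finite differences. This is an illustration only (NumPy, with made-up parameter values), not VCell's solver or API.

```python
# Minimal sketch of a deterministic 1D reaction-diffusion problem:
#   du/dt = D * d2u/dx2 - k*u
# solved with explicit finite differences. Parameter values are
# illustrative, not taken from VCell.
import numpy as np

nx, length = 100, 10.0       # grid points, domain length (um)
dx = length / (nx - 1)
D, k = 1.0, 0.1              # diffusion (um^2/s), first-order decay (1/s)
dt = 0.4 * dx**2 / (2 * D)   # below the explicit stability limit dx^2/(2D)

u = np.zeros(nx)
u[nx // 2] = 1.0 / dx        # unit amount concentrated at the midpoint

for _ in range(5000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # interior Laplacian
    u += dt * (D * lap - k * u)                         # diffusion + decay

print("mass remaining:", u.sum() * dx)  # decays roughly as exp(-k*t)
```

The time step is chosen under the explicit scheme's stability limit (dt <= dx^2 / 2D); a production solver like VCell's would use more sophisticated, adaptive methods.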
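The standards work mentioned above (SBML, SED-ML) is what makes stored models portable across tools. As a hedged example, a model exported in SBML can be inspected with the python-libsbml bindings; the file name model.xml below is a hypothetical placeholder.

```python
# Hedged example of inspecting an SBML model with python-libsbml
# ("pip install python-libsbml"). "model.xml" is a hypothetical
# placeholder, not a file from the presentation.
import libsbml

doc = libsbml.readSBMLFromFile("model.xml")
if doc.getNumErrors() > 0:
    doc.printErrors()        # report any parse/validation errors

model = doc.getModel()
print("species:", model.getNumSpecies(), "reactions:", model.getNumReactions())
for i in range(model.getNumSpecies()):
    s = model.getSpecies(i)
    print(s.getId(), s.getInitialConcentration())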

  4. HPC Facility Resources
  • Storage (> 1 PB):
    • Main shared scale-out storage cluster (330 TB EMC Isilon "SmartPools")
    • Multiple dedicated 30/50 TB "scratch" storage areas for primary applications (VCell, NGS pipeline, etc.)
    • Private cloud object store (650 TB AmpliStor), geo-dispersed across 3 datacenters
  • Compute (> 40 Tflops):
    • Large CPU-only and hybrid CPU/GPGPU compute clusters + OSG cluster
    • Currently 40+ Tflops of compute capacity and 5.8 TB RAM
    • Choice of 3 batch scheduler systems (PBS, SGE, MJS); see the submission sketch below
  • Virtualization infrastructure:
    • Redundant VMware server and desktop virtualization hosts (456 CPU cores, 1 TB RAM) hosting 100+ Windows/Linux virtual machines
    • SSD high-IOPS performance cache tier
  • Datacenter infrastructure:
    • UPS- and generator-backed power (160 kW), redundant cooling (50 tons)
    • Dedicated 3x40 GbE dark fiber connection to off-site DR location
  • Network (100+ GbE):
    • Fully non-oversubscribed 10/40 GbE datacenter network core layer
    • BioScienceCT Research Network – 100 GbE to Storrs and Internet2
    • NSF CC-NIE Science DMZ – low latency, non-firewalled
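To illustrate the batch scheduler choices listed above, here is a minimal sketch of submitting a job through PBS/TORQUE's qsub from Python. The job name, resource requests, and simulation command are hypothetical placeholders; SGE submission via its own qsub is analogous.

```python
# Minimal sketch of submitting a simulation job to a PBS/TORQUE
# scheduler. Assumptions: "qsub" is on PATH; the job name, resource
# requests, and "./run_simulation" command are hypothetical.
import subprocess

job_script = """#!/bin/bash
#PBS -N vcell_sim
#PBS -l nodes=1:ppn=8
#PBS -l walltime=02:00:00
cd "$PBS_O_WORKDIR"
./run_simulation --input model.vcml
"""

# With no script file argument, qsub reads the job script from stdin
# and prints the new job's ID on stdout.
result = subprocess.run(
    ["qsub"],
    input=job_script,
    text=True,
    capture_output=True,
    check=True,
)
print("Submitted job:", result.stdout.strip())
```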

  5. Science DMZ and UConn Cloud
