
Tier 2 Prague Institute of Physics AS CR



  1. Tier 2 Prague, Institute of Physics AS CR: Status and Outlook • J. Chudoba, M. Elias, L. Fiala, J. Horky, T. Kouba, J. Kundrat, M. Lokajicek, J. Svec, P. Tylka • Presented by M. Lokajicek at NEC2013, Varna

  2. Outline • Institute of Physics AS CR (FZU) • Computing Cluster • Networking • LHCONE • Looking for new resources • CESNET National Storage Facility • IT4I supercomputing project • Outlook

  3. Institute of Physics AS CR (FZU) • Institute of Physics of the Academy of Sciences of the Czech Republic • 2 locations in Prague, 1 in Olomouc • In 2012: 786 employees (281 researchers + 78 doctoral students) • 6 divisions: • Division of Elementary Particle Physics • Division of Condensed Matter Physics • Division of Solid State Physics • Division of Optics • Division of High Power Systems • ELI Beamlines Project Division • Department of Networking and Computing Techniques (SAVT)

  4. FZU – SAVT • The institute's networking and computing service department • Several server rooms • Computing clusters: • Golias – particle physics, Tier-2 • A few nodes in operation already before EDG • WLCG interim MoU signed 4 July 2003 • New server room since 1 November 2004 • WLCG MoU from 28 April 2008 • Dorje – solid state and condensed matter physics • Luna, Thsun – smaller group clusters

  5. Main server room • Main server room (at FZU, Na Slovance) • 62 m², ~20 racks, 350 kVA motor generator, 200 + 2 x 100 kVA UPS, 108 kW air cooling, 176 kW water cooling • Continuously evolving • Hosts computing servers and central services

  6. Cluster Golias • Upgraded every year: several (9) sub-clusters, each of identical HW • 3800 cores, 30 700 HS06 • 2 PB disk space • Tapes used only for local backups (125 LTO4 cassettes, max 500) • Serving: ATLAS, ALICE, D0 (NOvA), Auger, STAR, … • WLCG Tier-2 = Golias@FZU + xrootd servers@REZ (NPI)

  7. Utilization • Very high average utilization • Several different projects, different production tools: • D0 – production submitted locally by 1 user • ATLAS – PanDA, Ganga, local users; DPM • ALICE – VO box; xrootd
[Chart: running jobs per experiment (D0, ATLAS, ALICE), peaking near 3.5 k]

  8. "RAW" Capacities

  9. 2012 D0, ATLAS and ALICE usage
• D0: 290 M tasks; 90 MHEPSPEC06 hours; 13% contribution to D0 in 2012
• ATLAS: 2.2 M tasks; 90 MHEPSPEC06 hours; 1.9 PB disk space; data transfer of 1.2 PB to the farm and 0.9 PB from the farm; 2% contribution to ATLAS
• ALICE: 2 M simulation tasks; 60 MHEPSPEC06 hours; data transfer of 4.7 PB to the farm and 0.5 PB from the farm; 5% contribution to ALICE task processing; 140 TB disk space in INF (Tier-3)
[Chart: data transfers inside the farm – monthly means to and from worker nodes, in TB]
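As a rough consistency check (a back-of-the-envelope sketch using the 30 700 HS06 capacity from slide 6, not a figure taken from the slides), the ~240 MHEPSPEC06 hours delivered to the three experiments can be compared with the cluster's theoretical annual capacity:

```python
# Back-of-the-envelope 2012 utilization check. Assumes the full
# 30,700 HS06 capacity (slide 6) was available for the whole year.
capacity_hs06 = 30_700
hours_per_year = 365 * 24                            # 8,760 h

theoretical = capacity_hs06 * hours_per_year / 1e6   # MHS06 hours per year
delivered = 90 + 90 + 60                             # D0 + ATLAS + ALICE

print(f"theoretical: {theoretical:.0f} MHS06 hours/year")   # ~269
print(f"delivered:   {delivered} MHS06 hours "
      f"({delivered / theoretical:.0%})")                   # ~89%
```

The ~89% result is consistent with the "very high average utilization" claim on slide 7.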

  10. Network – CESNET, z. s. p. o. • FZU Tier-2 network connections: • 1 Gbps to FNAL, BNL, Taipei • 10 Gbps to the commodity network • 1–10 Gbps to collaborating Tier-3 institutes • 10 Gbps to LHCONE (GEANT), since 18 July 2013 • 10 Gbps to KIT, from 1 September 2013 • http://netreport.cesnet.cz/netreport/hep-cesnet-experimental-facility2/

  11. LHCONE – network transition • Link to KIT was saturated at the 1 Gbps end-to-end line • LHCONE since 18 July 2013, over 10 Gbps infrastructure • Also relieves the 10 Gbps commodity network

  12. ATLAS tests • Testing upload speed of files > 1 GB to all Tier-1 centres • After the LHCONE connection, only 2 sites remain below 5 MB/s • Prague Tier-2 ready for validation as T2D
[Chart: upload rates to Tier-1 sites]
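For illustration, a minimal stand-in for such a test (the real ATLAS measurements run through the experiment's transfer machinery; here a single xrootd copy is simply timed against the 5 MB/s threshold, and the endpoint URL is a placeholder):

```python
import os
import subprocess
import time

SRC = "/tmp/testfile_1GB"                        # local test file > 1 GB
DEST = "root://t1-se.example.org//test/upload"   # placeholder Tier-1 endpoint

size_mb = os.path.getsize(SRC) / 1e6
t0 = time.time()
subprocess.check_call(["xrdcp", "--force", SRC, DEST])  # copy via xrootd
rate_mb_s = size_mb / (time.time() - t0)

# The slide's pass criterion: sites below 5 MB/s are flagged.
print(f"{rate_mb_s:.1f} MB/s", "OK" if rate_mb_s >= 5 else "below 5 MB/s")
```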

  13. LHCONE – trying to understand monitoring • Prague – DESY and DESY – Prague: very asymmetric throughput • LHCONE optical line cut at 4:00; one-way latency improved afterwards
[Charts: Prague–DESY throughput and one-way latency around the LHCONE line cut]

  14. International contribution of the Prague centre to ATLAS + ALICE LCG Tier-2 centres • http://accounting.egi.eu/ • Grid + local tasks • Long-term decline until regular financing was received in 2008 • The original 3% target is not achievable with current financial resources • Necessary to look for other resources

  15. Remote storage • CESNET – Czech NREN + other services • New project: National Storage Facility • Three distributed HSM-based storage sites • Designed for the research and science community • 100 TB offered for both the ATLAS and Auger experiments • Implemented as a remote Storage Element with dCache • disk <-> tape migration • FZU Tier-2 in Prague <-> CESNET storage site in Pilsen: ~100 km, 10 Gbit link with ~3.5 ms latency
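For intuition about that link (a quick calculation, not from the slides), the bandwidth-delay product tells how much data must be in flight to keep a 10 Gbit/s, ~3.5 ms path full; the sketch below assumes the quoted latency is one-way:

```python
# Bandwidth-delay product for the Prague <-> Pilsen link.
# Assumes the quoted ~3.5 ms is one-way latency, so RTT ~ 7 ms.
link_bps = 10e9           # 10 Gbit/s
rtt_s = 2 * 3.5e-3        # ~7 ms round trip

bdp_bytes = link_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1e6:.1f} MB")   # ~8.8 MB in flight per stream

# A single TCP stream needs a window at least this large (or the client
# needs several parallel streams) to saturate the link.
```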

  16. Remote storage – influence of distributed Tier-2 data storage on physics analysis • TTreeCache in ROOT helps a lot, both for local and for remote reads • Remote jobs with TTreeCache are faster than local jobs without the cache
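For illustration, a minimal PyROOT sketch of turning on TTreeCache for reads over xrootd; the file URL and tree name are placeholders, not the site's actual setup:

```python
import ROOT

# Placeholder xrootd URL and tree name; substitute a real remote SE path.
f = ROOT.TFile.Open("root://se.example.org//atlas/user/sample.root")
tree = f.Get("physics")

tree.SetCacheSize(30 * 1024 * 1024)   # 30 MB TTreeCache buffer
tree.AddBranchToCache("*", True)      # prefetch all branches read below

for event in tree:
    pass  # analysis code; reads now arrive as large, batched requests
```

With the cache enabled, many small branch reads are coalesced into a few large requests, which is what hides the WAN latency to the remote storage.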

  17. Outlook • After the LHC restart in 2015: • Higher data production • Flat financing will not be sufficient • Computing can become an item of M&O A (Maintenance & Operations, category A) • Search for new financial resources or new unpaid capacities is necessary • CESNET: • Crucial free delivery of network infrastructure • Unpaid external storage, but for how long? • IT4I, the Czech supercomputing project: search for computing capacity (free cycles), relying on other projects to find out how to use it

  18. 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT) • http://www.particle.cz/acat2014 • Topics: • Computing Technology for Physics Research • Data Analysis – Algorithms and Tools • Computations in Theoretical Physics: Techniques and Methods

  19. Backup
