Status Report on LHC Computing at ICEPP, University of Tokyo

This report discusses the computing infrastructure at the International Center for Elementary Particle Physics (ICEPP) at the University of Tokyo for the ATLAS experiment. It covers data transfer, data access, upgrades, and future plans.





Presentation Transcript


  1. Status Report on LHC_2: ATLAS computing • International Center for Elementary Particle Physics (ICEPP), the University of Tokyo • Hiroyuki Matsunaga • Workshop FJPPL’09, May 20, 2009 @ Tsukuba

  2. ATLAS distributed computing • Cloud model • Each cloud consists of 1 Tier-1 + n Tier-2s • Tier-2s are associated with only one Tier-1 • ICEPP (Tokyo) is a large and the farthest Tier-2 in the FR cloud • Tier-0: CERN; 10 Tier-1 sites (and clouds): NG, PIC, RAL, SARA, CNAF, ASGC, LYON, FZK, TRIUMF, BNL • FR cloud Tier-2s include Clermont, Tokyo, LAPP, Romania, GRIF, Beijing

  3. LCG-France • Foreign sites in ATLAS French cloud: • Tokyo • Beijing • Romania

  4. Members in 2008 (*leader)

  5. Budget Plan in 2008

  6. Visits in 2008 • Visit to ICEPP (Feb. 2008) • E. Lançon, G. Rahal • Visit to LAPP Tier-2 (Annecy) (Apr. 2008) • I. Ueda, H. Matsunaga • Visit to IRFU and LAL Tier-2s (Paris); participation in FJPPL WS (May 2008) • T. Mashimo, I. Ueda, T. Kawamoto • Visit to ICEPP (Dec. 2008) • E. Lançon, S. Jézéquel, F. Chollet, E. Fede • I. Ueda has been a visiting researcher at LAPP since July • Strengthened communication • Email is the main communication tool

  7. Activities in 2008 • Data transfer in the ATLAS framework • MC and cosmic data • Participation in ATLAS and WLCG events • Milestone run • Full Dress Rehearsal (FDR) • Common Computing Readiness Challenge (CCRC 08) • User analysis tests • Stress tests on file servers • Data access from many clients in LAN

  8. Network path between Lyon and Tokyo • Academic network (SINET + GEANT + RENATER) • 10 Gbps bandwidth for the entire path • RTT (round trip time) ~290 ms • Route: Lyon – RENATER – GEANT – New York – SINET – Tokyo
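The long RTT matters because of the bandwidth-delay product: to saturate the link, the senders must keep BDP bytes in flight. A quick back-of-the-envelope calculation from the figures above:

```python
# Bandwidth-delay product (BDP) of the Lyon-Tokyo path:
# at 10 Gbps and ~290 ms RTT, this many bytes must be in flight
# to saturate the link.
bandwidth_bps = 10e9   # 10 Gbps
rtt_s = 0.290          # ~290 ms round trip time
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1e6:.1f} MB")
```

A single TCP stream would need a ~360 MB window, which is why the transfers rely on many parallel streams (see the FTS settings below in the upgrades slide).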

  9. Traceroute • 1 Lyon-OPN (193.48.99.100) 0.288 ms 0.180 ms 1.036 ms • 2 Lyon-INTER (134.158.224.4) 0.232 ms 0.182 ms 0.187 ms • 3 vl3114-paris1-rtr-021.noc.renater.fr (193.51.186.178) 5.427 ms 46.457 ms 27.939 ms • 4 vl89-te0-0-0-3-paris1-rtr-001.noc.renater.fr (193.51.189.37) 5.824 ms 5.523 ms 5.779 ms • 5 renater.rt1.par.fr.geant2.net (62.40.124.69) 5.577 ms 5.523 ms 5.480 ms • 6 so-3-0-0.rt1.lon.uk.geant2.net (62.40.112.106) 12.914 ms 12.910 ms 12.918 ms • 7 so-2-0-0.rt1.ams.nl.geant2.net (62.40.112.137) 20.999 ms 21.048 ms 20.954 ms • 8 nyc-gate1-RM-GE-7-2-0-207.sinet.ad.jp (150.99.188.201) 104.516 ms 104.661 ms 104.621 ms • 9 tokyo1-dc-RM-P-2-3-0-11.sinet.ad.jp (150.99.203.57) 296.255 ms 296.252 ms 296.210 ms • 10 UTnet-1.gw.sinet.ad.jp (150.99.190.102) 297.354 ms 296.754 ms 296.957 ms • 11 bwtest1.icepp.jp (157.82.112.61) 296.706 ms 296.652 ms 296.661 ms

  10. Upgrades in 2008 • SINET-GEANT link in New York (Feb. 2008) • 2.4 to 10 Gbps • Grid middleware upgraded at ICEPP (May 2008) • Improved GridFTP performance • Added file servers at ICEPP (May 2008) • Increased number of parallel files (20) and streams (10) • Configured in FTS (File Transfer Service), a scheduler of GridFTP file transfer
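One way to read the parallel-file and stream settings: spreading a channel's traffic across many TCP streams shrinks the per-stream window needed to fill a high-BDP path. Rough, illustrative arithmetic only (the 10 Gbps and 290 ms figures are from the network slide; this is not FTS configuration syntax):

```python
# With 20 parallel files x 10 streams = 200 concurrent TCP streams,
# each stream only needs BDP/200 of TCP window to help fill the
# 10 Gbps, ~290 ms Lyon-Tokyo path.
files, streams = 20, 10
bdp_bytes = 10e9 / 8 * 0.290                 # path bandwidth-delay product
per_stream_window = bdp_bytes / (files * streams)
print(f"~{per_stream_window / 1e6:.2f} MB of window per TCP stream")
```

That brings the per-stream requirement down to roughly the default window sizes achievable with standard TCP tuning.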

  11. FTS monitor for IN2P3-TOKYO • Channel managers can change channel parameters and status • Recent transfer details and statistics are shown in the monitor page

  12. Data transfer from Lyon to Tokyo • Data export from Tier-0 (CERN) -> Tier-1 (Lyon) -> Tier-2 (Tokyo) runs “quasi-online” • In May 2008 • As part of the CCRC08 activity • >500 MB/s achieved • In May 2009 • Normal activity of MC data export • ~400 MB/s sustained for many hours

  13. Data access in LAN • User analysis is becoming more important towards the LHC start-up • Mostly performed at the Tier-2 sites • Tests have been performed in the French cloud for direct IO and file copy modes • Direct IO (rfio) • High load on the Data Storage System • Troublesome to use in ATLAS software • File copy (gridFTP) • Retrieval of a whole file before processing • Need scratch space on worker node
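The two access modes above differ mainly in where the read happens. A minimal local sketch of the contrast, with plain file I/O standing in for the grid protocols (hypothetical stand-ins; real jobs would read via rfio:// or stage in with GridFTP tools, not shutil):

```python
# Sketch of the two data-access modes compared in the French-cloud tests.
import os
import shutil
import tempfile

def analyze(path):
    """Stand-in for an ATLAS analysis job reading one input file."""
    with open(path, "rb") as f:
        return len(f.read())

# A small local file stands in for a dataset on the storage element.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"x" * 1024)
src.close()

# Direct IO mode: the job reads straight from the storage element.
direct_bytes = analyze(src.name)

# File-copy mode: stage the whole file to local scratch first, then read.
scratch = tempfile.mkdtemp()
local = os.path.join(scratch, "input.data")
shutil.copy(src.name, local)     # needs scratch space on the worker node
copied_bytes = analyze(local)

assert direct_bytes == copied_bytes  # same data, different access pattern
```

The copy mode trades scratch-space usage and an up-front transfer for sequential whole-file reads, which is what makes the local-cache optimization on the rfio-vs-GridFTP slide possible.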

  14. Stress test at ICEPP (rfio mode) • Submit user analysis jobs • reading many input data files • High load on name services (BDII, SE) at job start-up • The physical file name must be resolved from the logical file name • ~2.8 GB/s peak transfer rate (with 13 file servers)
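Dividing the quoted peak rate evenly over the file servers gives a rough per-server figure (illustrative arithmetic only; the real load was not necessarily uniform):

```python
# Average per-server rate implied by the 2.8 GB/s peak over 13 servers.
peak_rate_gbs = 2.8
n_servers = 13
per_server_mbs = peak_rate_gbs * 1000 / n_servers
print(f"~{per_server_mbs:.0f} MB/s per file server")
```

That is comfortably below the ~750 MB/s single-server limit quoted in the later rfio-reading test.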

  15. rfio vs. GridFTP • With GridFTP, performance can be improved by the local cache on the worker node • There may be room for improvement in rfio performance with optimization • ICEPP is the best-performing site • Event processing rate: 15 Hz (rfio) vs. 20 Hz (GridFTP) • CPU/walltime: 70% vs. >90% • Probably due to good hardware configuration
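The quoted event rates and CPU/walltime efficiencies are consistent with a similar CPU cost per event in both modes, the gap coming from I/O wait in the rfio case. A small check of that reading (taking ">90%" as 90% for the arithmetic):

```python
# CPU time per event implied by the slide's numbers for the two modes.
rate_rfio, rate_gridftp = 15.0, 20.0     # event processing rates (Hz)
wall_rfio = 1 / rate_rfio                # walltime per event (s)
wall_gridftp = 1 / rate_gridftp
cpu_rfio = 0.70 * wall_rfio              # CPU/walltime 70% for rfio
cpu_gridftp = 0.90 * wall_gridftp        # CPU/walltime ~90% for GridFTP
print(f"{cpu_rfio * 1000:.1f} ms vs {cpu_gridftp * 1000:.1f} ms CPU per event")
```

The two CPU-per-event figures come out nearly equal (~47 ms vs ~45 ms), supporting the view that the rate difference is I/O-bound rather than compute-bound.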

  16. More tests on rfio reading • Data access to one file server • ~750 MB/s is the system limit (network, disk) • Number of parallel clients (rfcp) increased: • 1, 2, 4, 8, 16, 32, 64, and 128 • Performance degradation seen for >32 clients
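A scaling test like this one can be sketched as a loop over client counts, each spawning N concurrent readers against one server. Here local threads and a local file stand in for the rfcp clients and the file server (illustrative structure only; absolute rates on a laptop say nothing about the rfio results):

```python
# Sketch of the client-scaling test: N concurrent readers hit one "server".
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

# A 1 MiB local file stands in for a data file on the file server.
data_file = tempfile.NamedTemporaryFile(delete=False)
data_file.write(b"\0" * (1 << 20))
data_file.close()

def read_all(_):
    """One client (stand-in for rfcp) reading the whole file."""
    with open(data_file.name, "rb") as f:
        return len(f.read())

for n_clients in (1, 2, 4, 8, 16, 32, 64, 128):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        sizes = list(pool.map(read_all, range(n_clients)))
    elapsed = time.perf_counter() - start
    rate = sum(sizes) / elapsed / 1e6
    print(f"{n_clients:4d} clients: {rate:8.1f} MB/s aggregate")
```

On a real file server the aggregate curve would plateau at the system limit and then degrade once contention (disk seeks, server threads) dominates, which matches the >32-client behaviour reported above.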

  17. Activities in 2009 • LHC will start collecting data this year • More users will come onto the Grid • Consolidate the system • Stability and reliability • Monitoring • Major upgrade of the computer system at ICEPP in the coming winter • More realistic R&D from the point of view of physics analysis

  18. Plan In 2009 • New member: C. Biscarat (IN2P3)
