
Presentation Transcript


  1. Legnaro T2 status M. Biasotto, S. Fantinel – INFN Legnaro

  2. Manpower
  • 1.5 FTE (M. Biasotto, S. Fantinel): site responsible, CMS operations and LCG operations (SC3 included)
  • 0.5 FTE (S. Badoer, M. Dalla Silvestra): hardware and system management
  • 0.2 FTE (L. Berti): network administration

  3. Hardware resources
  • Computing: ~100 WNs
    • 5 Blade Centers, each with 14 dual Xeon 2.x GHz nodes with 2 GB RAM
    • ~30 older machines of various types
  • Network
    • central switch HP-J4904A with 48 GE ports
    • 1 GE link to each Blade Center, upgradable to 4
    • older nodes connected to two 24-port Fast Ethernet switches
    • to WAN: connected to the GARR Giga-POP, 1 Gb/s max speed
    • peak of 400 Mb/s reached during a Phedex test (see the transfer-time sketch below)
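The WAN figures above allow a quick back-of-envelope check. The Python sketch below estimates how long a bulk transfer would take at the observed 400 Mb/s peak versus the full 1 Gb/s GARR link; the 2 TB dataset size is illustrative (it matches the import request mentioned later), and the rates are the nominal figures from this slide, not sustained measurements.

```python
# Back-of-envelope WAN transfer estimate using the nominal figures above:
# the 1 Gb/s GARR link and the 400 Mb/s peak seen during the Phedex test.
# The 2 TB dataset size is illustrative, not a measured workload.

def transfer_hours(dataset_tb: float, rate_mbps: float) -> float:
    """Hours needed to move dataset_tb terabytes at rate_mbps megabits/s."""
    bits = dataset_tb * 1e12 * 8          # TB -> bits (decimal units)
    return bits / (rate_mbps * 1e6) / 3600

for rate_mbps in (400, 1000):             # observed peak vs. full link speed
    print(f"2 TB at {rate_mbps} Mb/s: ~{transfer_hours(2.0, rate_mbps):.1f} hours")
```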

  4. Storage resources
  • Disk storage: ~25 TB
  • Direct Attached Storage: standard PATA or SATA disks (200-250 GB) with 3ware RAID controllers
    • 10 servers, each with two or three 8- or 12-port controllers, configured in RAID-5 (rough capacity arithmetic sketched below)
  • 2 servers configured as LCG Storage Elements
    • 1 partition per host due to middleware limitations
  • the others used for local access only, mainly via rfio (sometimes nfs)
  • plan to add another 2-3 servers as SEs
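As a rough cross-check of the ~25 TB figure, the sketch below works out the usable capacity of a single RAID-5 array for the two controller/disk combinations named on this slide. How many arrays each of the 10 servers actually hosts is not stated, so no exact site total is claimed.

```python
# A minimal sketch of the RAID-5 arithmetic behind the ~25 TB figure.
# Array sizes are the two combinations named on this slide (8- or 12-port
# 3ware controllers, 200-250 GB disks); the per-server array count is not
# specified, so only per-array usable space is computed.

def raid5_usable_tb(n_disks: int, disk_gb: int) -> float:
    """RAID-5 stores one disk's worth of parity, leaving n-1 disks usable."""
    return (n_disks - 1) * disk_gb / 1000.0

print(f"8 x 200 GB disks in RAID-5 : ~{raid5_usable_tb(8, 200):.2f} TB usable")
print(f"12 x 250 GB disks in RAID-5: ~{raid5_usable_tb(12, 250):.2f} TB usable")
```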

  5. Current storage allocation
  • 6 TB CMS data (available to users for analysis)
  • 6 TB backup on disk of CMS data
    • part of these may be deleted, but some datasets are not replicated anywhere else; now that Phedex is running, we would really like to copy them off-site and get rid of the backup
  • 1 TB in the old LCG-2.3.1 SE
  • 2 TB dedicated to the LCG-2.4.0 SE
  • 8 TB free
    • but we already have a request to import some datasets currently being produced on LCG (~2 TB?); the bookkeeping is sketched below
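The bookkeeping behind these numbers is simple enough to write down explicitly. The sketch below sums the allocation listed above and shows the headroom left once the ~2 TB import request lands; all figures are taken from this slide.

```python
# Bookkeeping of the allocation listed above.  All figures come from this
# slide; the pending ~2 TB import is the slide's own estimate for the
# datasets currently being produced on LCG.

allocation_tb = {
    "CMS data (analysis)":     6,
    "CMS data backup on disk": 6,
    "old LCG-2.3.1 SE":        1,
    "LCG-2.4.0 SE":            2,
    "free":                    8,
}

pending_import_tb = 2

print(f"allocated + free      : {sum(allocation_tb.values())} TB")
print(f"free after the import : {allocation_tb['free'] - pending_import_tb} TB")
print(f"reclaimable if the backup copy is dropped: {allocation_tb['CMS data backup on disk']} TB")
```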

  6. Tools and components
  • Standard LCG-2.4.0 middleware
    • We still support LCG-2.3.1 as well because there is a lot of activity on it; we foresee switching all resources to 2.4.0 once we are sure that all CMS applications are stable on it.
  • CMS-specific tools:
    • Phedex installed and running (but used only for testing so far)
    • PubDB
  • SRM not yet installed: only a "classic SE" at the moment
    • We are investigating adding SRM-interfaced storage (dCache, DPM, ...)

  7. Current activities
  • LCG production
    • supported VOs: cms, atlas, alice, lhcb
  • Local CMS Monte Carlo production
    • a lot of activity in the past, but now most of CMS production has moved to LCG, so local production should be limited to a few special cases
  • CMS analysis
    • 6 TB of CMS data stored on the servers
    • available for analysis via CMS tools (CRAB, PubDB)
  • Will SC3 interfere with these activities?

  8. SC3 dedicated resources
  • If necessary, we can dedicate some resources exclusively to SC3, decoupled from the production farm (a rough capacity check is sketched below):
    • standalone machines for services
    • a specific queue in the local batch system with dedicated WNs
    • 1-2 disk servers, configured with SRM support
  • Availability of storage space for SC3 depends on the possibility of freeing up the space currently occupied by the CMS data backups
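To make the trade-off concrete, the sketch below runs a rough capacity check for such a carve-out. The particular split used (10 WNs and 2 disk servers reserved for SC3) is a hypothetical example; the slides only state that a dedicated split is possible if necessary.

```python
# Rough capacity check for the carve-out described above.  The split used
# here (10 WNs and 2 disk servers reserved for SC3) is a hypothetical
# example, not a decision stated in these slides.

TOTAL_WNS = 100           # ~100 worker nodes (slide 3)
TOTAL_DISK_SERVERS = 10   # disk servers (slide 4)
CMS_BACKUP_TB = 6         # backup copy that could be freed for SC3 (slide 5)

sc3_wns = 10
sc3_disk_servers = 2

print(f"WNs left for production   : {TOTAL_WNS - sc3_wns}")
print(f"disk servers left for CMS : {TOTAL_DISK_SERVERS - sc3_disk_servers}")
print(f"space freed for SC3 if the backup copy is dropped: ~{CMS_BACKUP_TB} TB")
```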
