IdF Tier2



Presentation Transcript


1. IdF Tier2
Michel Jouvin, LAL / IN2P3, jouvin@lal.in2p3.fr
IdF Tier2 - HEPix - FZK 2005

2. Background
• The Tier2 setup must start now to be ready on time
  • The LCG (France) effort has concentrated on the Tier1 until now
  • Technical and financial challenges take time to solve
• French HEP institutes are quite small
  • 100-150 people each, with little computing manpower
• IdF (Île-de-France, the Paris region) has a large concentration of big HEP labs and physicists
  • 6 labs, among which DAPNIA: 600 people and LAL: 350
• DAPNIA and LAL have been involved in the Grid effort since the beginning of EDG
  • 3 EGEE contracts (2 for operations support)

3. Objectives
• Build a Tier2 facility for simulation and analysis
  • 80% for the 4 LHC experiments, 20% for EGEE and local use
  • 2/3 of the LHC share for analysis
  • Analysis requires a large amount of storage
• Be ready at LHC startup (2nd half of 2007)
• Resource goals (a rough hardware sizing sketch follows this slide)
  • Based on the experiments' computing models, exceed any single experiment's Tier2 requirement (in fact 2/3 of the total)
  • CPU: 1500 kSI2K (1 kSI2K ~ P4 Xeon 2.8 GHz), largest single requirement = CMS: 800
  • Storage: 350 TB of disk (no MSS planned), largest single requirement = CMS: 220
  • Network: 10 Gb/s backbone inside the Tier2, 1 or 10 Gb/s external link
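As a rough, back-of-the-envelope illustration of what these targets mean in hardware terms, the Python sketch below converts the CPU and disk goals into machine counts. The per-node CPU count and per-server disk capacity are assumptions made only for illustration; the 1 kSI2K ≈ one P4 Xeon 2.8 GHz equivalence is the one quoted on the slide.

```python
# Back-of-the-envelope sizing for the Tier2 resource goals (2005 figures).
# Assumptions not in the slides: dual-CPU worker nodes and ~6 TB usable per
# disk server. The 1 kSI2K ~ one P4 Xeon 2.8 GHz equivalence is from slide 3.

import math

CPU_GOAL_KSI2K = 1500          # total CPU target (largest single-experiment need: CMS, 800)
KSI2K_PER_CPU = 1.0            # ~1 kSI2K per P4 Xeon 2.8 GHz (slide 3)
CPUS_PER_NODE = 2              # assumed dual-CPU worker nodes

STORAGE_GOAL_TB = 350          # total disk target (largest single-experiment need: CMS, 220)
TB_PER_DISK_SERVER = 6.0       # assumed usable capacity per disk server

worker_nodes = math.ceil(CPU_GOAL_KSI2K / (KSI2K_PER_CPU * CPUS_PER_NODE))
disk_servers = math.ceil(STORAGE_GOAL_TB / TB_PER_DISK_SERVER)

print(f"~{worker_nodes} dual-CPU worker nodes for {CPU_GOAL_KSI2K} kSI2K")
print(f"~{disk_servers} disk servers for {STORAGE_GOAL_TB} TB of disk")
```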

4. Organization
• Partnership of 3 labs: DAPNIA (CEA, Saclay), LAL (IN2P3, Orsay), LPNHE (IN2P3, Paris)
• LHC experiment coverage: all
• Resources (CPU, storage) distributed over the 3 sites
  • 10 Gb/s inter-site links
• One technical team with people from the 3 labs
  • 5 FTE in 2005 and 2006, more in 2007
  • Currently 10-15 people involved (several part time)
  • M. Jouvin (chairman), P. Micout, P.F. Honoré…
• One scientific committee
  • J.P. Meyer (DAPNIA/Atlas, chairman), G. Wormser (LAL), …

5. Finances
• Total budget estimated at 1.6 M€ (2005-2007)
  • 1/2 from the Regional council
  • 1/6 from CEA
  • 1/6 from LAL (CNRS and Paris 11 University)
  • 1/6 from Paris 6 University (LPNHE)
• No significant support from IN2P3 / LCG France (focused on the Tier1)
• Progressive investment: no hardware replacement before 2009
  • 2005: 150 K€
  • 2006: 450 K€
  • 2007: 1 M€
• 2009 and beyond: 300 K€/year expected from IN2P3 / LCG France

6. Storage Challenge
• Efficient use and management of a large amount of storage is seen as the main challenge
• Decided to start a partnership with HP on SFS/Lustre in the Grid (LCG) context
  • Performance with a large number of clients (see the test sketch after this slide)
  • Replication of critical data (metadata) among sites
  • Multi-site configuration: WAN impact on performance and operations
  • Integration with the SE: coexistence with / replacement for DPM
• The HP partnership will provide free licenses and access to support
• Interested in others' experience with Lustre
  • CASPUR, ???
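One of the open questions above is Lustre performance with a large number of clients. Purely as an illustration of the kind of measurement involved, here is a minimal Python sketch that writes files from many processes in parallel against a shared mount point and reports the aggregate streaming throughput. The mount path, client count, and file size are placeholders, and a real evaluation would run the clients on separate worker nodes and would also exercise metadata operations.

```python
# Minimal many-client streaming I/O test against a shared filesystem mount.
# The mount point is a placeholder; in a real Lustre evaluation the clients
# would run on many worker nodes, not as processes on a single host.

import os
import time
from multiprocessing import Pool

MOUNT = "/lustre/test"      # placeholder path for the shared filesystem
N_CLIENTS = 16              # simulated clients (processes)
FILE_MB = 256               # data written per client
CHUNK = b"\0" * (1 << 20)   # 1 MiB write chunk

def one_client(i: int) -> float:
    """Write FILE_MB MiB to a per-client file and return the elapsed time."""
    path = os.path.join(MOUNT, f"client_{i}.dat")
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())
    return time.time() - start

if __name__ == "__main__":
    t0 = time.time()
    with Pool(N_CLIENTS) as pool:
        times = pool.map(one_client, range(N_CLIENTS))
    wall = time.time() - t0
    total_mb = N_CLIENTS * FILE_MB
    print(f"aggregate write: {total_mb / wall:.1f} MB/s over {N_CLIENTS} clients")
    print(f"slowest client:  {max(times):.1f} s for {FILE_MB} MB")
```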

7. 2005: Prototype
• Main goals
  • Build a multi-site configuration and the technical team
  • Gain experience with the gLite middleware
  • Evaluate Lustre as a candidate for the storage component
  • Number of gatekeepers (1 vs. 3)
  • Multi-site SEs (1 SE / experiment) vs. 1 SE per site (1 site = 1 experiment?)
• Resources (currently being acquired)
  • 4-8 TB (dedicated to the Tier2) per site
  • 40 CPUs per site
  • 1 Gb/s inter-site link (already there, not dedicated to the Tier2)
• Miscellaneous
  • Batch scheduler review (LSF experience at LAL, interested in SGE…)
  • Batch session welcome
  • Participation in SC3

8. 2006: Mini Tier2
• Main goal: build a mini Tier2 (~1/4 of the final configuration)
  • Final choice of batch scheduler
  • Final choice of SE architecture (going with Lustre?)
  • Setup of monitoring tools
  • Integration with local operations at each site
  • Evaluation of 10 Gb/s link feasibility and effectiveness (an illustrative throughput-test sketch follows this slide)
• Resources expected per site
  • Storage: 25 TB
  • CPUs: 120
  • Network: 1 Gb/s inter-site link
• Miscellaneous
  • Active participation in the Service Challenges
  • Impact on computer rooms (electrical power, air cooling…)
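For the 10 Gb/s link evaluation, a dedicated tool such as iperf would normally be used; the sketch below is only an illustration of the principle, running a memory-to-memory TCP transfer between two hosts with the Python standard library. The port number and transfer size are arbitrary placeholders.

```python
# Memory-to-memory TCP throughput check between two hosts (illustrative only;
# a dedicated tool like iperf would normally be used). Start "server" on one
# host and "client <server_host>" on the other. Port and size are placeholders.

import socket
import sys
import time

PORT = 5001
TOTAL_MB = 1024
CHUNK = b"\0" * (1 << 20)   # 1 MiB send chunk

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            while True:
                data = conn.recv(1 << 20)
                if not data:
                    break
                received += len(data)
            print(f"received {received / 1e6:.0f} MB from {addr[0]}")

def client(host: str) -> None:
    start = time.time()
    with socket.create_connection((host, PORT)) as conn:
        for _ in range(TOTAL_MB):
            conn.sendall(CHUNK)
    elapsed = time.time() - start
    # 1 MiB chunks, so this is an approximate Mb/s figure
    print(f"sent {TOTAL_MB} MB in {elapsed:.1f} s -> ~{TOTAL_MB * 8 / elapsed:.0f} Mb/s")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```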

9. 2007: Production Tier2
• Main goal: have the final configuration ready at the end of 2007
  • Deployment of the final configuration
  • Operations: full integration with local staff at each site, integration into the LCG support model
• Resources per site (cross-checked against the slide 3 goals in the sketch below)
  • Storage: 120 TB
  • CPUs: 500
  • Network: 10 Gb/s backbone at each site and inter-site, maybe a 10 Gb/s external link (probably a cost-based decision)
• Not clear whether every site will have exactly the same size…
  • Manpower availability should be taken into account
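As a quick cross-check, the per-site figures above, multiplied by the three sites, reproduce the overall resource goals from slide 3; the small sketch below just performs that multiplication, assuming roughly 1 kSI2K per CPU as quoted on slide 3.

```python
# Cross-check: 2007 per-site resources times the three sites (DAPNIA, LAL,
# LPNHE) against the overall Tier2 goals from slide 3.

N_SITES = 3
PER_SITE = {"storage_TB": 120, "cpus": 500}
GOALS = {"storage_TB": 350, "cpus": 1500}   # slide 3: 350 TB, 1500 kSI2K (~1 kSI2K per CPU)

for key, per_site in PER_SITE.items():
    total = N_SITES * per_site
    print(f"{key}: 3 x {per_site} = {total} (goal: {GOALS[key]})")
# storage_TB: 3 x 120 = 360  (goal: 350)
# cpus:       3 x 500 = 1500 (goal: 1500)
```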
