
Overview of UK HEP Grid



Presentation Transcript


1. Overview of UK HEP Grid
S.L.Lloyd, Grid/Globus Meeting, Oxford, 21/2/01
• Historical Overview
• Current Facilities
• Proposed evolution
• Proposal to PPARC

2. Historical Perspective
Historical development of computing facilities for HEP in the UK:
• For many years: a Central Facility at RAL (funded via CNAP) plus group facilities at the Institutes (funded by grants).
• Recently supplemented by large facilities at some universities for specific experiments (funded by large awards from other funding schemes: JIF, JREI etc.).

3. Current Facilities
Central facilities at RAL:
• 20 dual 450 MHz + 40 dual 600 MHz machines, plus (March) 30-40 dual 1 GHz
• About to install a 330 TB capacity tape robot, initially using 30-40 TB
• 2 TB of disk space
• Supports all UK HEP experiments (to varying degrees); a rough aggregate of the CPU capacity is sketched below
Liverpool (MAP):
• 300 × 350 MHz CPUs + 4 TB disk
• Monte Carlo production for ATLAS, LHCb, DELPHI, . .
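
A quick back-of-the-envelope aggregate of the RAL farm listed above, as a worked Python sketch. The machine counts come from the slide; the "30-40 dual 1 GHz" delivery is assumed to be 35 machines, and a plain GHz sum is only a crude capacity proxy, not a benchmark:

```python
# Rough aggregate of the RAL CPU farm; counts from the slide, with the
# 30-40 dual 1 GHz machines arriving in March taken as 35 (an assumption).
farm = [
    (20, 2, 0.45),  # 20 dual 450 MHz machines
    (40, 2, 0.60),  # 40 dual 600 MHz machines
    (35, 2, 1.00),  # 30-40 dual 1 GHz machines, assumed 35
]
cpus = sum(boxes * per_box for boxes, per_box, _ in farm)
ghz = sum(boxes * per_box * clock for boxes, per_box, clock in farm)
print(f"{cpus} CPUs, ~{ghz:.0f} GHz aggregate")  # 190 CPUs, ~136 GHz
```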

4. BaBar
At RAL:
• 1 × 6-CPU Sun E6500 + 5 TB disk
• 1 × 4-CPU Sun E4500 + 4 × 4-CPU Sun 450
At Edinburgh:
• 1 × 2-CPU Sun E420 + 0.5 TB disk
• 1 × Sun Ultra50 + 4 × 4-CPU Sun Ultra80
At IC, Liverpool, Manchester, Bristol:
• 1 × 3/4-CPU Sun E450 + 1 TB disk
• 1 × 80-CPU Linux farm
At Birmingham, QMW, RHUL:
• 1 × 2-CPU Sun E420 + 0.5 TB disk
• 1 × 80-CPU Linux farm
At Brunel:
• 1 × 2-CPU Sun E420 + 0.5 TB disk

5. FermiLab Experiments
CDF/MINOS:
• At Oxford, Glasgow, UCL: 1 × 8-CPU machine + 1 TB disk
• At RAL: 2 × 8-CPU machines + 5 TB disk
D0:
• At Lancaster: 200 × 733 MHz CPUs + 2.5 TB disk
• Tape robot: 600 TB capacity, 30 TB loaded

6. LHC
In Scotland:
• 128 CPUs at Glasgow
• 5 TB datastore + server at Edinburgh
ATLAS/LHCb:
• At IC: 80 CPUs + 1 TB disk
CMS:
• At Birmingham: 13 CPUs + 1 TB disk
ALICE:
• . . .

7. Current Summary
• In summary, we have shared central facilities at RAL and several distributed facilities for specific experiments.
• In general these are not yet Grid-aware.
• In addition, many groups have fledgling Grid nodes: a few PCs acting as gateways, CPU servers and disk servers running Globus (a minimal usage sketch follows this slide).
• The aim is to integrate all these facilities into one 'Grid for Particle Physics':
• Prepare middleware and testbeds for the LHC.
• Stress-test using LHC mock data challenges and real analysis of current experiments.
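
To make the "running Globus" point concrete, here is a minimal sketch of how such a gateway node might be exercised from Python. The hostnames are hypothetical (the talk does not name the machines), and the sketch assumes the Globus Toolkit client tools (globus-job-run) are installed and that a valid proxy credential has already been created with grid-proxy-init:

```python
import subprocess

# Hypothetical gateway hosts; the actual UK node names are not given here.
GATEWAYS = [
    "gate.ral.example.ac.uk",
    "gate.gla.example.ac.uk",
    "gate.ic.example.ac.uk",
]

def gateway_up(contact: str) -> bool:
    """Submit a trivial job through GRAM and report whether it ran.

    Relies on the Globus Toolkit's globus-job-run client; a proxy
    credential (from grid-proxy-init) must already exist.
    """
    try:
        result = subprocess.run(
            ["globus-job-run", contact, "/bin/hostname"],
            capture_output=True, text=True, timeout=60,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False  # client missing, host unreachable, or job timed out
    return result.returncode == 0

if __name__ == "__main__":
    for contact in GATEWAYS:
        print(contact, "up" if gateway_up(contact) else "unreachable")
```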

8. Evolving Model
• Prototype Tier-1 Regional Centre at RAL (old numbers, need updating):
• 10% of the 2001 hardware is on order.
• ~4 regional Tier-2 centres: Scotland, North West, 'Midlands', London.
• ~16 Tier-3 centres, one at each Institute (the hierarchy is sketched below).
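
Purely for illustration, the proposed tier hierarchy can be written down as a small data structure. The Tier-2 region names come from the slide; the Centre class, field names and counting helper are assumptions, and the institute-to-region mapping for the ~16 Tier-3 centres is not specified in the talk:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Centre:
    name: str
    tier: int
    children: list[Centre] = field(default_factory=list)

# Tier-2 regions named on the slide; the ~16 Tier-3 institute centres
# would hang off these, but the mapping is not given in the talk.
ral = Centre("RAL", tier=1, children=[
    Centre(region, tier=2)
    for region in ("Scotland", "North West", "Midlands", "London")
])

def count(c: Centre) -> int:
    """Count all centres in the hierarchy rooted at c."""
    return 1 + sum(count(child) for child in c.children)

print(count(ral))  # 5: one Tier-1 plus four Tier-2
```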

9. Grid Collaboration
• A Collaboration Board (one member per Institute) has been formed to bid for PPARC's (£26M) e-Science money (Chair: SLL).
• A Proposal Board will write the proposal: experimental representatives + work group contacts (Chair: PWJ).
• Expect to form a number of work groups to develop the required tools (middleware etc.), probably based on the DataGrid work packages (but not necessarily).

10. Proposal
Aims to include:
• All UK HEP computing activities
• DataGrid contributions
• Collaboration with CERN
• Collaboration with the NSF in the US (ex DataGrid)
• Cross-disciplinary activities: CS, industry etc.
Notes:
• The timescale is very short: submit by 2nd April.
• Most of the money is expected to go on manpower + Tier-1 hardware.
• Important to get as much Tier-2/3 hardware as possible from SRIF, JREI etc.
