
The LHC Computing Challenge


Presentation Transcript


  1. The LHC Computing Challenge. Tim Bell, Fabric Infrastructure & Operations Group, Information Technology Department, CERN, 2nd April 2009

  2. The Four LHC Experiments…
  • ATLAS: general purpose; origin of mass, supersymmetry; 2,000 scientists from 34 countries
  • CMS: general purpose; origin of mass, supersymmetry; 1,800 scientists from over 150 institutes
  • ALICE: heavy-ion collisions, to create quark-gluon plasmas; 50,000 particles in each collision
  • LHCb: to study the differences between matter and antimatter; will detect over 100 million b and b-bar mesons each year

  3. … generate lots of data … The accelerator generates 40 million particle collisions (events) every second at the centre of each of the four experiments’ detectors

  4. … generate lots of data … These collisions are reduced by online computers to a few hundred “good” events per second, which are recorded on disk and magnetic tape at 100-1,000 MegaBytes/sec: ~15 PetaBytes per year for all four experiments. (A back-of-envelope check of these figures is sketched below.)
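  A rough consistency check of the quoted rate and yearly volume, as a minimal Python sketch. The recorded event rate, average event size and yearly live time used here are illustrative assumptions, not numbers from the presentation:

```python
# Back-of-envelope check of the ~15 PB/year figure quoted on the slide.
# The event rate, event size and live time below are illustrative
# assumptions, not values taken from the presentation.

RECORDED_EVENTS_PER_SEC = 300      # "a few hundred good events per second"
EVENT_SIZE_MB = 1.5                # assumed average raw event size
LIVE_SECONDS_PER_YEAR = 1e7        # a typical accelerator year of running
EXPERIMENTS = 4

rate_mb_per_sec = RECORDED_EVENTS_PER_SEC * EVENT_SIZE_MB
yearly_pb = rate_mb_per_sec * LIVE_SECONDS_PER_YEAR * EXPERIMENTS / 1e9

print(f"per-experiment rate : {rate_mb_per_sec:.0f} MB/s")   # ~450 MB/s, within 100-1,000 MB/s
print(f"all four, per year  : {yearly_pb:.1f} PB")           # ~18 PB, same order as ~15 PB
```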

  5. CERN Data Handling and Computation for Physics Analysis
  [Data-flow diagram: detector → event filter (selection & reconstruction) → raw data → reconstruction → event summary data → batch physics analysis → analysis objects (extracted by physics topic) → interactive physics analysis; event simulation and event reprocessing feed back into the chain.]

  6. … leading to a high box count
  [Chart: ~2,500 PCs plus another ~1,500 boxes, covering CPU, disk and tape.]

  7. Computing Service Hierarchy
  • Tier-0 – the accelerator centre
    • Data acquisition & initial processing
    • Long-term data curation
    • Distribution of data → Tier-1 centres
  • Tier-1 – “online” to the data acquisition process → high availability
    • Managed mass storage
    • Data-heavy analysis
    • National, regional support
    • Centres: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Forschungszentrum Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) and Brookhaven (NY)
  • Tier-2 – ~100 centres in ~40 countries
    • Simulation
    • End-user analysis – batch and interactive

  8. The Grid
  • Timely technology!
  • Deploy to meet LHC computing needs.
  • Challenges for the Worldwide LHC Computing Grid Project due to
    • worldwide nature
    • competing middleware…
    • newness of technology
    • competing middleware…
    • scale
    • …

  9. Interoperability in action

  10. Reliability
  [Chart: site reliability for Tier-2 sites; 83 Tier-2 sites being monitored.]

  11. Why Linux?
  • 1990s – Unix wars – 6 different Unix flavours
  • Linux allowed all users to align behind a single OS which was low cost and dynamic
  • Scientific Linux is based on Red Hat with extensions for key usability and performance features
    • AFS global file system
    • XFS high-performance file system
  • But how to deploy without proprietary tools?
  See the EDG/WP4 report on current technology (http://cern.ch/hep-proj-grid-fabric/Tools/DataGrid-04-TED-0101-3_0.pdf) or “Framework for Managing Grid-enabled Large Scale Computing Fabrics” (http://cern.ch/quattor/documentation/poznanski-phd.pdf) for reviews of the various packages.

  12. Deployment
  • Commercial management suites
    • (Full) Linux support was rare (5+ years ago…)
    • Much work needed to deal with specialist HEP applications; insufficient reduction in staff costs to justify license fees.
  • Scalability
    • 5,000+ machines to be reconfigured
    • 1,000+ new machines per year
    • Configuration change rate of 100s per day (the declarative approach to handling this is sketched after this slide)
  See the EDG/WP4 report on current technology (http://cern.ch/hep-proj-grid-fabric/Tools/DataGrid-04-TED-0101-3_0.pdf) or “Framework for Managing Grid-enabled Large Scale Computing Fabrics” (http://cern.ch/quattor/documentation/poznanski-phd.pdf) for reviews of the various packages.
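  A minimal sketch of the general idea that makes this scale manageable: declarative, idempotent configuration convergence, as used by fabric-management tools such as quattor. The desired_state dict, node names and the apply step here are illustrative assumptions, not quattor's API or CERN's actual tooling:

```python
# Minimal sketch of declarative, idempotent configuration convergence.
# The package list and install action are stand-ins for illustration only.

desired_state = {"ntp": "4.2", "afs-client": "1.4", "xfs-tools": "2.9"}

def converge(node_name: str, installed: dict) -> dict:
    """Bring one node's installed packages in line with desired_state."""
    for pkg, version in desired_state.items():
        if installed.get(pkg) != version:
            print(f"{node_name}: installing {pkg}-{version}")
            installed[pkg] = version          # stand-in for the real install action
    return installed

# Re-running converge() on an already-correct node changes nothing, which is
# what makes pushing hundreds of configuration changes a day to 5,000+ nodes
# tractable: every run moves a node toward the same declared state.
node = converge("lxbatch001", {"ntp": "4.1"})
node = converge("lxbatch001", node)           # second run is a no-op
```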

  13. Dataflows and rates
  [Dataflow diagram – remember this figure. It shows scheduled work only, with average flows of 700 MB/s, 420 MB/s, 700 MB/s, 1120 MB/s and 1430 MB/s (peaks of 1600 MB/s and 2000 MB/s shown in parentheses). These are averages: the system needs to be able to support 2x these rates for recovery (see the sketch below).]
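  A small illustration of the "support 2x the average for recovery" rule of thumb, using the average rates quoted on the figure (the mapping of each rate to a specific link is not preserved here):

```python
# Provisioning headroom: to catch up after a backlog or outage, each link
# must be able to run at roughly twice its average rate.

average_flows_mb_s = [700, 420, 700, 1120, 1430]   # averages from the slide

for avg in average_flows_mb_s:
    required = 2 * avg                              # recovery headroom
    print(f"average {avg:5d} MB/s  ->  provision for {required:5d} MB/s")
```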

  14. Volumes & Rates
  • 15 PB/year; peak rate to tape >2 GB/s
    • 3 full SL8500 robots/year
  • Requirement in the first 5 years to reread all past data between runs
    • 60 PB in 4 months: 6 GB/s
    • Drives can run at a sustained 80 MB/s
    • 75 drives flat out merely for this controlled access (the arithmetic is worked through below)
  • Data volume has an interesting impact on the choice of technology
    • Media use is advantageous: high-end technology (3592, T10K) favoured over LTO.
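  The reread numbers check out directly from the figures on the slide; a short worked calculation:

```python
# Rereading 60 PB in 4 months, with tape drives sustaining 80 MB/s each.

PB = 1e15                                    # bytes (decimal petabyte)
volume_bytes = 60 * PB
window_seconds = 4 * 30 * 24 * 3600          # ~4 months

rate_bytes_s = volume_bytes / window_seconds
drives = rate_bytes_s / 80e6                 # 80 MB/s sustained per drive

print(f"required rate : {rate_bytes_s / 1e9:.1f} GB/s")   # ~5.8 GB/s, i.e. ~6 GB/s
print(f"drives needed : {drives:.0f}")                    # ~72, consistent with ~75 flat out
```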

  15. Castor Architecture
  [Detailed component diagram: client and request handler, scheduler, stager database with Job, Query and Error services, GC, StagerJob and MigHunter; a disk-cache subsystem with movers on the disk servers; a tape-archive subsystem with RTCPClientD, RTCPD and the tape daemon on the tape servers; and central services including the NameServer, VMGR and VDQM.]

  16. Castor Performance

  17. Long lifetime
  • LEP, CERN’s previous accelerator, started in 1989 and was shut down in 2000.
    • First data were recorded to IBM 3480s; at least 4 different tape technologies were used over the period.
    • All data ever taken, right back to 1989, were reprocessed and reanalysed in 2001/2.
  • LHC starts in 2007 and will run until at least 2020.
    • What technologies will be in use in 2022 for the final LHC reprocessing and reanalysis?
  • Data repacking onto new media is required every 2-3 years (a rough timing estimate is sketched below).
    • Time consuming
    • Data integrity must be maintained
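  To see why repacking is "time consuming", a rough illustrative estimate. The archive size, number of drive pairs and per-drive rate used here are assumptions for the sake of the calculation, not figures from the presentation:

```python
# Rough estimate of how long a media repack takes.  Each repack stream needs
# a read drive and a write drive; values below are illustrative assumptions.

archive_pb = 15                 # assume roughly one year's worth of data
drive_pairs = 10                # concurrent read/write drive pairs
drive_rate_mb_s = 80            # sustained rate per drive, as quoted for tape

total_mb = archive_pb * 1e9
seconds = total_mb / (drive_pairs * drive_rate_mb_s)
days = seconds / 86400

print(f"~{days:.0f} days of continuous drive time")   # ~217 days under these assumptions
```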

  18. Disk capacity & I/O rates
      Year   Drive capacity   Drive I/O rate   Drives per 1 TB   Aggregate I/O per TB
      1996   4 GB             10 MB/s          250 x 10 MB/s     2,500 MB/s
      2000   50 GB            20 MB/s          20 x 20 MB/s      400 MB/s
      2006   500 GB           60 MB/s          2 x 60 MB/s       120 MB/s
  • CERN now purchases two different storage server models: capacity-oriented and throughput-oriented.
    • Fragmentation increases management complexity
    • (Purchase overhead has also increased…)
  (The trend in the last column is worked through in the sketch below.)
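  The point of the 1 TB column is that as drives grew, the aggregate I/O bandwidth available per terabyte of stored data fell sharply; a short sketch using the values from the table:

```python
# Aggregate I/O per TB across disk generations, from the slide's table.

generations = {          # year: (capacity_gb, io_mb_s)
    1996: (4, 10),
    2000: (50, 20),
    2006: (500, 60),
}

for year, (capacity_gb, io_mb_s) in generations.items():
    drives_per_tb = 1000 / capacity_gb          # drives needed to hold 1 TB
    aggregate = drives_per_tb * io_mb_s         # total MB/s across those drives
    print(f"{year}: {drives_per_tb:5.0f} drives/TB -> {aggregate:6.0f} MB/s per TB")
# 1996:   250 drives/TB ->   2500 MB/s per TB
# 2000:    20 drives/TB ->    400 MB/s per TB
# 2006:     2 drives/TB ->    120 MB/s per TB
```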

  19. .. and backup – TSM on Linux
  • Daily backup volumes of around 18 TB go to 10 Linux TSM servers (the per-server rate is worked out below).
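  For scale, what 18 TB/day spread over 10 servers means as a sustained per-server rate; the even spread is an assumption, since real load is rarely uniform:

```python
# Sustained per-server backup rate, assuming the daily volume is spread evenly.

daily_tb = 18
servers = 10

per_server_mb_s = daily_tb * 1e6 / servers / 86400
print(f"~{per_server_mb_s:.0f} MB/s sustained per TSM server")   # ~21 MB/s
```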

  20. Capacity Requirements

  21. Power Outlook

  22. Summary
  • Immense challenges & complexity
    • Data rates, developing software, lack of standards, worldwide collaboration, …
  • Considerable progress in the last ~5-6 years
    • The WLCG service exists
    • Petabytes of data have been transferred
  • But more data is coming in November…
    • Will the system cope with chaotic analysis?
    • Will we understand the system enough to identify problems and fix underlying causes?
    • Can we meet requirements given the power available?
