
  1. Trigger & Data Acquisition for LHC Upgrades. International Workshop on Future Hadron Colliders, Fermilab, October 16-18, 2003. Andrew J. Lankford, University of California, Irvine

  2. Acknowledgements • My presentation draws heavily upon: • Work performed by contributors to the preliminary studies reported in: Physics Potential and Experimental Challenges of the LHC Luminosity Upgrade (hep-ph/0204087). • A recent presentation by Nick Ellis at Erice. Lankford – Trigger & Data Acquisition

  3. SLHC: Implications of Higher Luminosity • Higher Luminosity => • Increased detector occupancy • Increased trigger rates • Increased radiation effects • Assumed SLHC luminosity = 10³⁵ cm⁻²s⁻¹ Lankford – Trigger & Data Acquisition

  4. SLHC: Implications of Higher Luminosity - 2 • Higher Luminosity => Increased detector occupancy • Assuming 12.5 ns bunch spacing, • 125 interactions/crossing • Occupancy of tracks increases 5-fold (10-fold for some sub-detectors) • Radiation-induced hits increase 10-fold. • Pile-up noise increases 2.2-fold (3-fold for some sub-detectors) • Pile-up degrades performance of trigger algorithms • Reduction in efficiency of e/gamma isolation cuts • Increased muon candidate rates due to radiation-induced accidentals • Larger event size to read out • Increased by factor between 5 and 10 • Demands reduced trigger rate or increased data bandwidth Lankford – Trigger & Data Acquisition
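As a rough cross-check of the factors quoted on this slide, a back-of-the-envelope calculation; the LHC design baseline values (10³⁴ cm⁻²s⁻¹, 25 ns spacing, ~25 interactions per crossing) are assumptions, not numbers taken from the slide:

```python
# Rough cross-check of the pile-up scaling quoted above.  Baseline values for
# LHC design conditions (1e34 cm^-2 s^-1, 25 ns spacing, ~25 interactions per
# crossing) are assumptions, not numbers from the slide.
lhc_lumi, slhc_lumi = 1e34, 1e35              # cm^-2 s^-1
lhc_spacing, slhc_spacing = 25e-9, 12.5e-9    # bunch spacing in seconds
lhc_pileup = 25.0                             # interactions per crossing at LHC design

# Interactions per crossing scale with luminosity times bunch spacing.
slhc_pileup = lhc_pileup * (slhc_lumi / lhc_lumi) * (slhc_spacing / lhc_spacing)
print(f"SLHC interactions per crossing ~ {slhc_pileup:.0f}")      # ~125

# Pile-up noise grows roughly as the square root of the in-time pile-up.
noise_growth = (slhc_pileup / lhc_pileup) ** 0.5
print(f"Pile-up noise growth ~ x{noise_growth:.1f}")              # ~2.2
```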

  5. SLHC: Implications of Higher Luminosity - 3 • Higher Luminosity => Increased trigger rates • Arising from: • Increased interaction rates • Occupancy/pile-up induced trigger degradation • Less rejection at fixed efficiency • Need more selective triggers for same trigger rate • Increased thresholds • More exclusive selection (less inclusive trigger) • Fortunately, more selective triggers are okay • (see later) • Need even more data acquisition bandwidth if higher trigger rate. Lankford – Trigger & Data Acquisition

  6. SLHC: Implications of Higher Luminosity - 4 • Higher Luminosity => Increased radiation effects • Directly affects on-detector trigger logic • Mechanisms: • Permanent damage • Single-event-upset effects • (Note that radiation effects upon detectors and their electronics must be handled by trigger and by data acquisition.) Lankford – Trigger & Data Acquisition

  7. Enabling Technologies • Enabling technologies for readout and trigger: • Integrated circuits • Custom ICs, FPGAs, memories, etc. • Commodity computing • processors, networking, memory, storage, fiberoptics • These technologies enabled current generation of expts. • Including the nearly-deadtimeless, integrated trigger and data acquisition systems that are now standard. • These technologies will also be the technologies that enable successful upgrades & future experiments. • R&D programs to develop these technologies for HEP applications were crucial to current generation of expts. • A vital R&D program will be crucial to SLHC upgrades. Lankford – Trigger & Data Acquisition

  8. Data Transfer & Data Processing • Trigger & Data Acquisition systems provide: • Data transfer • Data processing • Limitations or costs of these functions define limitations and costs of Trigger/DAQ systems. • e.g. First Level Triggers (FLT) • Data input to FLT limits density of FLT electronics. • Causes extensive interconnections betw. modules and complex backplanes. • Needed to connect steps in FLT selection. • Needed to seamlessly find tracks and clusters in different sections of detector (to avoid loss of efficiency). Lankford – Trigger & Data Acquisition

  9. SLHC Trigger Menu • Need for 3 types of triggers foreseen: • Discovery physics • Very high-pT (thresholds as high as hundreds of GeV) • Completion of LHC physics program • e.g. precise measurements of Higgs sector • Lepton/photon/jet thresholds as low as for LHC • Final states known => exclusive selection possible • Control / Calibration triggers • e.g. W’s, Z’s, top • Low thresholds needed, but can be pre-scaled • None of the above pose rate problems. Lankford – Trigger & Data Acquisition

  10. Inclusive Triggers: samples & rates Note that the inclusive e/γ trigger dominates the rate. (†Added degradation from pile-up not included in the quoted rates.) Lankford – Trigger & Data Acquisition

  11. FLT: Importance of FLT upgrades • Upgrades to First Level Trigger (FLT) are of central importance. • They can reduce new demands on: • Front-end electronics (FEE) • Retaining 100 kHz FLT rate avoids changes to FEE systems that do not require upgrade for other reasons. • Data Acquisition (DAQ) • By reducing new demands for data bandwidth. • High Level Triggers (HLT) • By reducing new demands for algorithms and processing • FLT upgrades deserve early consideration. Lankford – Trigger & Data Acquisition

  12. FLT: New Demands on Processing • New demands on FLT processing posed by: • Increased event pile-up & occupancy • New sub-detectors or readout • 12.5 ns crossing period • Some (at least) new processing will be needed. Lankford – Trigger & Data Acquisition

  13. FLT: Impact of 12.5 ns crossings • LHC FLTs: • pipelined processors • driven at 40 MHz LHC crossing rate • identify crossing of interest • Should SLHC FLTs run at 80 MHz Xing rate? • Can FLTs remain at 40 MHz ? • Can FLTs work at both 40 & 80 MHz ? Lankford – Trigger & Data Acquisition

  14. FLT: Can SLHC FLTs operate at 40 MHz ? • Concept of readout ‘time frames’ • FLT identifies pair of crossings (or more) for read out • e.g. PEP II / BABAR • PEP II runs at 250 MHz (4 ns). • BABAR system clock = 60 MHz (16 ns). • L1T runs at multiples of 16 ns. • DAQ reads out time frames as large as a few microseconds, as appropriate to subdetector technology. • May not be possible for many FEE systems • Increases data volume on FEE links & through DAQ • Yes, SLHC FLTs could operate at 40 MHz, • But, 40 MHz FLTs will generally suffer more from pile-up. Lankford – Trigger & Data Acquisition
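A minimal sketch of the ‘time frame’ idea, with illustrative function and parameter names that are not taken from any experiment's readout software: a 40 MHz FLT accept tags a 25 ns clock tick, and the DAQ reads out the pair of 12.5 ns crossings covered by that tick, optionally widened by a guard window.

```python
# Illustrative sketch of 'time frame' readout: a 40 MHz FLT accept tags a 25 ns
# clock tick; the DAQ reads out all 12.5 ns crossings covered by that tick
# (plus an optional guard window), rather than a single identified crossing.

FLT_TICK_NS = 25.0      # 40 MHz first-level trigger clock
CROSSING_NS = 12.5      # SLHC bunch-crossing period

def crossings_in_frame(flt_tick: int, guard_crossings: int = 0) -> list[int]:
    """Return the bunch-crossing numbers read out for a given FLT clock tick."""
    first = int(flt_tick * FLT_TICK_NS / CROSSING_NS)    # first crossing in the tick
    per_tick = int(FLT_TICK_NS / CROSSING_NS)            # = 2 crossings per 25 ns tick
    return list(range(first - guard_crossings,
                      first + per_tick + guard_crossings))

print(crossings_in_frame(100))                      # [200, 201]: the pair of crossings
print(crossings_in_frame(100, guard_crossings=1))   # [199, 200, 201, 202]: wider frame
```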

  15. FLT: Should SLHC FLTs operate at 80 MHz ? • Advantages: • Reduced pile-up effects • more effective algorithms • less data volume (for detectors that identify Xing during r/o) • Ability to identify crossing that caused trigger • Allows more processing steps within fixed latency • 80 MHz electronics feasible (Portions already at 80MHz.) • Disadvantages: • Requires FLT upgrades • Increased data bandwidth into & within FLT • FEE may not be able to deliver data to FLT at 80MHz • Study (cost/benefit) needed. Lankford – Trigger & Data Acquisition

  16. FLT: Can SLHC FLTs operate at both 40 & 80 MHz ? • Portions of FLT could operate at 40 MHz while other portions operate at 80 MHz. • Note: Identification of 12.5 ns crossing can be derived from 40 MHz samples for calorimeters. • Time resolution of calorimeter pulses is much better than 25 ns. • Timing of pulses is derived by digital filtering of multiple samples. • Such a scheme already used in ATLAS CSCs w/ sampling at 20 MHz • Some such hybrid likely to be a good solution. Lankford – Trigger & Data Acquisition
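A toy illustration of the principle (not the actual ATLAS/CMS filter coefficients or algorithm): sub-sample pulse timing is interpolated from 25 ns samples and then rounded to the nearest 12.5 ns crossing.

```python
# Toy illustration of deriving the 12.5 ns crossing from 40 MHz (25 ns) samples:
# interpolate the pulse peak time from three samples around the maximum, then
# round to the nearest 12.5 ns crossing.  Real systems use optimal/digital
# filtering with detector-specific coefficients; this shows only the principle.

def peak_time_ns(samples: list[float], sample_ns: float = 25.0) -> float:
    """Parabolic interpolation of the peak position of a sampled pulse."""
    i = max(range(1, len(samples) - 1), key=lambda k: samples[k])
    y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # in units of the sample spacing
    return (i + offset) * sample_ns

def crossing_id(samples: list[float], crossing_ns: float = 12.5) -> int:
    """Assign the pulse to the nearest 12.5 ns bunch crossing."""
    return round(peak_time_ns(samples) / crossing_ns)

pulse = [0.0, 0.1, 0.8, 1.0, 0.6, 0.2]    # pedestal-subtracted 25 ns samples
print(peak_time_ns(pulse), crossing_id(pulse))   # ~70.8 ns -> crossing 6
```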

  17. High-Level Triggers & Data Acquisition • Commercial computing & networking technologies will provide the advances necessary for High Level Triggers (HLTs) and Data Acquisition (DAQ) systems to perform at SLHC rates. • e.g. Moore’s Law will provide a ~x10 improvement in price:performance ratio, relative to LHC start-up, within ~5 years of start-up. • e.g. Appetite for high-bandwidth graphical computing applications will drive networking capabilities up and costs down. • R&D is necessary to develop technology advances for HEP applications. • Data transfer: • Data links • HLT/DAQ networks • Data Sources / Readout Buffers • Data processing: • New HLT algorithms to maintain HLT selectivity with increased pile-up • “Complexity handling” Lankford – Trigger & Data Acquisition
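The quoted factor of ~10 is roughly what a price:performance doubling time of about 18 months gives over five years; the doubling time is an assumed figure, not stated on the slide.

```python
# Price:performance growth assuming a doubling time of ~18 months (an assumption,
# not a number from the slide): five years gives roughly a factor of 10.
years, doubling_time_years = 5.0, 1.5
improvement = 2 ** (years / doubling_time_years)
print(f"~x{improvement:.0f} price:performance after {years:.0f} years")   # ~x10
```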

  18. Data Links • New challenges at SLHC for data links from Front-end Electronics to Trigger & DAQ • Higher bandwidths due to higher occupancies (& FLT rates?) • Radiation effects at transmitting end for some subdetectors, e.g. systems optimized for LHC • Also to increase input capabilities of FLTs • Limitations on data input often limit FLT capability. • R&D needed • Applications of commercial developments • Possible custom developments Lankford – Trigger & Data Acquisition

  19. HLT/DAQ Networks • Networks connect HLT processors to data sources (figure credit: CMS / S. Cittolin). • Every processor connects to every source. • Data moves from sources to processors in parallel (“parallel event building”) to handle high trigger rates. Lankford – Trigger & Data Acquisition
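A schematic sketch of parallel event building, with illustrative sizes and names not taken from CMS or ATLAS: successive Level-1 accepts are assigned round-robin to builder/HLT nodes, and each node gathers its event's fragments from every readout source, so many events are in flight through the network at once.

```python
# Schematic parallel event building: successive events are assigned round-robin
# to builder/HLT nodes, and each node gathers that event's fragments from every
# readout source.  Sizes and names are illustrative only.
from collections import defaultdict

N_SOURCES, N_BUILDERS = 4, 3                      # illustrative sizes
sources = {s: {} for s in range(N_SOURCES)}       # source -> {event_id: fragment}

def readout(event_id: int) -> None:
    """Each source buffers its fragment of the event after a Level-1 accept."""
    for s in sources:
        sources[s][event_id] = f"frag(src={s},evt={event_id})"

def build(event_id: int) -> list[str]:
    """The assigned builder pulls one fragment per source and assembles the event."""
    return [sources[s].pop(event_id) for s in sources]

events_on_builder = defaultdict(list)
for event_id in range(10):                        # ten Level-1 accepts
    readout(event_id)
    builder = event_id % N_BUILDERS               # round-robin event assignment
    events_on_builder[builder].append(build(event_id))

print({b: len(evts) for b, evts in events_on_builder.items()})   # {0: 4, 1: 3, 2: 3}
```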

  20. HLT/DAQ Networks • Commercial networking equipment provides the infrastructure for interconnection (switches) and data transfer (links). • At present, the cost of this equipment determines how much data experiments can afford to move to HLT processors. • Thus, cost can limit trigger rate capability. • ATLAS has adopted an “RoI-based” Level 2 trigger in order to reduce overall data bandwidth requirements. • CMS has developed a scalable event building architecture. • SLHC network bandwidth at least 5-10 times LHC b/w. • Even if SLHC FLTs can provide same rate as at LHC • Network bandwidth requirements grow with occupancy. Lankford – Trigger & Data Acquisition
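A rough illustration of the bandwidth stakes, using assumed numbers that are not quoted on the slide: full event building at the FLT rate versus an RoI-guided Level 2 that requests only a small fraction of each event.

```python
# Rough bandwidth illustration with assumed numbers (none of these figures are
# quoted on the slide): full event building at the FLT rate versus an RoI-guided
# Level 2 that requests only a small fraction of each event.
flt_rate_hz   = 100e3        # retained 100 kHz first-level trigger rate
event_size_mb = 1.0 * 7      # assume an LHC-era ~1 MB event grown ~7x at SLHC
roi_fraction  = 0.02         # assume ~2% of each event lies inside Regions of Interest

full_bw_mb_s = flt_rate_hz * event_size_mb          # MB/s into a full event builder
roi_bw_mb_s  = full_bw_mb_s * roi_fraction          # MB/s of RoI data requests
print(f"full event building ~{full_bw_mb_s/1e3:.0f} GB/s, "
      f"RoI requests ~{roi_bw_mb_s/1e3:.0f} GB/s")   # ~700 GB/s vs ~14 GB/s
```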

  21. HLT/DAQ Networks Complete HLT/DAQ systems require large networks. Individual network switches not yet of size required for full interconnectivity. Lankford – Trigger & Data Acquisition

  22. HLT/DAQ Networks R&D • R&D should track evolution of network technology • Seek switches with requisite number of ports to avoid multiple switches & extra ports • The technical challenge in manufacturing switches with many high-speed ports is the bandwidth capability of the switch ‘fabric’ (backplane) that interconnects the ports. • Switches from different vendors behave differently within HEP systems • Due to internal differences (e.g. buffer sizes) • R&D will sometimes require very large-scale testbeds, which could be provided by the large farms foreseen for LHC computing and Grid projects. • If commodity technologies are to be used, anticipate use of 10 Gigabit Ethernet, as well as Gigabit Ethernet, in SLHC systems. Lankford – Trigger & Data Acquisition

  23. Data Sources / Readout Buffers • These are the electronics that buffer detector data at the input of HLT/DAQ systems and that source data into HLT/DAQ networks. • They tend to operate at the highest rates (relative to other HLT/DAQ components). • Cannot benefit from ‘event parallelism’, as exploited by other components • Each data source must function at full FLT rate. • Increased occupancy (and increased FLT rate) increase data source internal bandwidth requirements. • These elements likely to need upgrade for SLHC rates. • Compare performance with SLHC requirements Lankford – Trigger & Data Acquisition
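A minimal sketch (class, method, and capacity names are illustrative) of why these elements see the full rate: every Level-1 accept deposits a fragment in every source, so each buffer must absorb and serve data at the full FLT rate, with no event-level parallelism to share the load.

```python
# Minimal readout-buffer sketch: every Level-1 accept lands in *every* source,
# so each buffer must absorb fragments and serve builder requests at the full
# FLT rate; unlike the HLT farm, it cannot spread the load across events.
class ReadoutBuffer:
    def __init__(self, source_id: int, capacity: int = 1024):
        self.source_id = source_id
        self.capacity = capacity
        self.fragments: dict[int, bytes] = {}     # event_id -> fragment

    def store(self, event_id: int, fragment: bytes) -> None:
        """Called once per Level-1 accept, i.e. at the full FLT rate."""
        if len(self.fragments) >= self.capacity:
            raise OverflowError(f"source {self.source_id} full: backpressure -> deadtime")
        self.fragments[event_id] = fragment

    def serve(self, event_id: int) -> bytes:
        """Ship the fragment to an event builder / HLT node over the network."""
        return self.fragments[event_id]

    def clear(self, event_id: int) -> None:
        """Discard the fragment once the event is built or rejected."""
        self.fragments.pop(event_id, None)
```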

  24. Data Sources / Readout Buffers R&D • Upgrade directions: • Internally, these elements tend to employ busses for data transfer. • Reasonable for O(80 Mword/s) on a circuit board • Challenging for higher bandwidths between groups of modules (e.g. on backplanes) • High-speed serial connections may provide better data transfer in upgrades. • Technology trends in this direction • FEE inputs to these elements already on serial links. • Serial output links to HLT/DAQ network with bandwidth comparable to bandwidth of input links will remove bottleneck. • I.e., push network technology closer to FEE, to exploit: • High-speed serial links • Parallelism afforded by networking Lankford – Trigger & Data Acquisition
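A quick throughput comparison, with assumed serial-link parameters (the slide quotes only the O(80 Mword/s) bus figure): a few serial lanes already match a wide shared bus and scale further by adding lanes.

```python
# Bus vs serial-link throughput, with assumed link parameters (the slide only
# quotes the O(80 Mword/s) bus figure): a handful of serial lanes already
# matches a wide parallel bus and scales by adding lanes.
bus_words_per_s, word_bytes = 80e6, 4
bus_bw = bus_words_per_s * word_bytes                      # ~320 MB/s shared bus

lane_gbit, encoding_eff, n_lanes = 2.5, 0.8, 4             # assume 2.5 Gbit/s, 8b/10b, 4 lanes
serial_bw = n_lanes * lane_gbit * 1e9 * encoding_eff / 8   # payload bytes/s
print(f"bus ~{bus_bw/1e6:.0f} MB/s, {n_lanes} serial lanes ~{serial_bw/1e6:.0f} MB/s")
```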

  25. HLT Algorithms • New HLT algorithms needed to maintain selectivity in face of increased occupancy and pile-up • e.g. Electron triggers • Dominate FLT output rate, unless thresholds raised high • FLT rate = 22 kHz @ 30 GeV threshold at LHC • FLT selectivity degraded because pile-up blurs isolation • HLT algorithms must recover selectivity despite degraded isolation. • How can this be accomplished? Will refined tracking information need to be brought to bear early in the selection sequence? • e.g. Muon triggers • Degraded by occupancy • Degraded by loss of momentum resolution at higher threshold • HLT algorithms must recover selectivity. • HLT algorithms must accommodate upgraded muon detectors. • Moore’s Law will provide data processing power for new algorithms. Lankford – Trigger & Data Acquisition
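One illustrative way an HLT algorithm can partly recover isolation selectivity under pile-up (an assumption for illustration, not necessarily the approach studied for SLHC): subtract the estimated pile-up energy in the isolation cone before applying the cut.

```python
# Illustrative pile-up-corrected isolation cut (not necessarily the approach
# studied for SLHC): subtract the expected pile-up contribution in the
# isolation cone before cutting on the cone energy.
import math

def isolation_passes(cone_et_gev: float,
                     pileup_density_gev_per_area: float,
                     cone_radius: float = 0.2,
                     threshold_gev: float = 4.0) -> bool:
    """Cut on cone ET after subtracting the expected pile-up energy in the cone."""
    cone_area = math.pi * cone_radius ** 2
    corrected_et = cone_et_gev - pileup_density_gev_per_area * cone_area
    return corrected_et < threshold_gev

# With heavy pile-up, the raw cone energy alone would fail an isolated electron:
print(isolation_passes(cone_et_gev=9.0, pileup_density_gev_per_area=50.0))  # True
print(isolation_passes(cone_et_gev=9.0, pileup_density_gev_per_area=0.0))   # False
```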

  26. Complexity Handling • HLT/DAQ systems are extremely complex. • Large numbers (thousands) of processors • Even larger numbers of software processes & tasks • Highly distributed, heterogeneous system • “Real-time” demands not present in offline systems • Complex control (e.g. startup & shutdown) procedures • Remote access required for monitoring & troubleshooting • Very high reliability required • Robustness, redundancy, fault tolerance • Including robustness of complex selection algorithms • Note: Technology evolution may mean that SLHC HLT/DAQ systems are no larger than for LHC. In this case, SLHC ‘complexity’ stays the same as at LHC. Lankford – Trigger & Data Acquisition

  27. Complexity Handling R&D • R&D can develop solutions to manage complexity. • R&D can track development of tools for ‘complexity handling’ in very large commercial and other applications. • E.g. web tools from e-commerce for high-level controls and user interfaces • Similar remote access, security, database issues Lankford – Trigger & Data Acquisition

  28. Summary • Higher Luminosity => • Increased event pile-up & detector occupancy • Increased trigger rates & data volume • Increased radiation effects • Enabling technologies: • Integrated circuits: custom, FPGAs, memories, etc. • Commodity computing & networking • Challenges arise in data transfer & in data processing • Processing challenges felt mainly in First Level Triggers • Data transfer challenges felt in both FLTs and HLT/DAQ • Trigger rates manageable by: • Increasing thresholds on inclusive triggers • Using exclusive triggers where low thresholds needed • Technical solutions exist, but R&D is required. Lankford – Trigger & Data Acquisition

  29. R&D Summary • First Level Triggers • Operation with 80 MHz crossing rate • Coping with increased occupancy (muons) & pile-up (calorimeter) • Adapting to new sub-detectors or readout • Radiation-tolerant on-detector FLT electronics • Input data links from Front-end Electronics • High Level Triggers & Data Acquisition • Coping with increased data bandwidth • Input data links, HLT/DAQ networks, data sources / readout buffers • Coping with increased occupancy & pile-up (& new detectors or r/o) • New HLT algorithms • Complexity handling Lankford – Trigger & Data Acquisition
