
ATLAS Trigger Development


Presentation Transcript


  1. ATLAS Trigger Development. Corrinne Mills, Harvard DOE Review, August 14-15, 2008

  2. The ATLAS Trigger System [Block diagram of the trigger chain: Level One (hardware: calo + muon triggers, 75 kHz) → Region of Interest Builders → Level Two (software: partial reco., 3.5 kHz) → Event Filter (software: full reco., 200 Hz) → Event Builder → tape]
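
The three levels cut the rate in successive steps; as a quick arithmetic check, here are the rejection factors implied by the rates quoted on the slide (a sketch using the slide's approximate numbers, not official specifications):

```python
# Rejection factors implied by the trigger rates quoted on the slide.
# These are the slide's approximate numbers, not official ATLAS specifications.
l1_rate_hz = 75_000   # Level One output (hardware: calorimeter + muon)
l2_rate_hz = 3_500    # Level Two output (software: partial reconstruction)
ef_rate_hz = 200      # Event Filter output (software: full reconstruction)

print(f"L1 -> L2 rejection: {l1_rate_hz / l2_rate_hz:.1f}x")   # ~21x
print(f"L2 -> EF rejection: {l2_rate_hz / ef_rate_hz:.1f}x")   # ~17.5x
print(f"Overall rejection:  {l1_rate_hz / ef_rate_hz:.0f}x")   # ~375x
```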

  3. The ATLAS Trigger System [Same trigger-chain diagram with the proposed Fast TracKer (FTK) added: SCT and pixel data feed the FTK in parallel with Level One (hardware: calo + muon triggers, 75 kHz); the FTK delivers tracks to the High Level Trigger (HLT), i.e. Level Two (software: partial reco., 3.5 kHz, fed through the Region of Interest Builders) and the Event Filter (software: full reco., 200 Hz), with the Event Builder writing to tape]

  4. A Fast Tracker for ATLAS • "Free" tracks at start of Level 2 processing • Tracking exists in L2, but strongly limited by bandwidth • Exacerbated by high-luminosity conditions • FTK frees up CPU time for other tasks or more events • Applications: displaced tracks from b hadrons, tau leptons, track-based isolation for leptons • Ongoing R&D project • Harvard (Franklin, Mills, Morii) joined last summer • Collaborating institutions: Chicago, Frascati, Illinois, Pisa • Cleared for year-long study leading to TDR in March 2009 • Timeline: installation in 2012 (LHC → SLHC) • Past year: intensive effort to develop and validate simulation

  5. How does FTK work?

  6. Simplified Pattern Recognition Group hits into "superhits" (100/module rather than 100s) [diagram: a particle track crossing the silicon layers, with its hits grouped into coarse superhits]
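
The grouping of full-resolution hits into coarser superhits can be pictured as an integer binning of the local hit coordinate. A minimal sketch, with a bin width chosen purely for illustration (not the actual FTK granularity):

```python
# Toy sketch of coarse "superhit" binning (illustrative bin width, not the FTK value).
SUPERSTRIP_WIDTH = 16  # strips per superhit bin -- an assumed number for illustration

def superhit_id(module_id: int, strip: int) -> tuple[int, int]:
    """Map a full-resolution hit to its coarse superhit (module, coarse bin)."""
    return module_id, strip // SUPERSTRIP_WIDTH

# Several nearby strips in one module collapse to the same superhit:
hits = [(42, 130), (42, 133), (42, 141), (7, 900)]
print(sorted({superhit_id(m, s) for m, s in hits}))
# -> [(7, 56), (42, 8)]
```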

  7. The Associative Memory • Parallel processing • Pre-loaded bank of most probable hit patterns • Dedicated hardware • Look for a match, like a bingo game
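
The bingo analogy can be emulated in software: each stored pattern is one superhit per layer, and it "fires" when the incoming superhits mark off its layers. A minimal sequential sketch (the real AM checks every pattern in parallel in dedicated hardware; treating missing layers as tolerable at this stage is an assumption made configurable here):

```python
# Minimal software emulation of associative-memory pattern matching.
# A pattern is one superhit per detector layer; the hardware checks all
# patterns in parallel, this loop checks them sequentially.

def match_patterns(pattern_bank, event_superhits, max_missing=1):
    """Return indices of patterns ("roads") whose layers are hit,
    allowing up to `max_missing` layers without a matching superhit."""
    fired = []
    for i, pattern in enumerate(pattern_bank):
        missing = sum(1 for layer, superhit in enumerate(pattern)
                      if superhit not in event_superhits[layer])
        if missing <= max_missing:
            fired.append(i)
    return fired

# Tiny example: 3 layers, superhits indexed per layer.
bank = [(3, 7, 2), (3, 8, 2), (5, 1, 9)]
event = [{3, 5}, {7}, {2}]          # superhits seen in each layer
print(match_patterns(bank, event))  # -> [0, 1]  (pattern 1 has one missing layer)
```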

  8. Linearized Track Finding • Within roads, the track-finding problem is simplified • Tractable combinatorics • Linear approximations to χ² and track parameter extraction • The Associative Memory associates a set of these linear maps with each pattern • FTK track finding within a road: • Calculate χ² for all permutations of full-resolution hits within the road, allowing one layer to have a missing hit • Take the hit pattern with the best χ², preferentially with no missing layers • Compute track parameters from the hit pattern • This step is done by DSPs • Output to Level 2 is a list of tracks
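
The linearized fit can be written as matrix algebra: stacking the full-resolution hit coordinates within a road into a vector x, the χ² is approximated by |S x + h|² and the track parameters by C x + q, with the constants (S, h, C, q) stored per pattern. A numpy sketch with random placeholder constants, showing only the try-every-hit-combination logic (the missing-layer case is omitted for brevity):

```python
import itertools
import numpy as np

# Sketch of the linearized fit: for each pattern ("road") constants are stored
# such that  chi2 ~ |S @ x + h|^2  and  params ~ C @ x + q,
# where x stacks the full-resolution hit coordinates in the road.
# The matrices below are random placeholders, not real FTK constants.

rng = np.random.default_rng(0)
n_coords, n_constraints, n_params = 6, 3, 5
S, h = rng.normal(size=(n_constraints, n_coords)), rng.normal(size=n_constraints)
C, q = rng.normal(size=(n_params, n_coords)), rng.normal(size=n_params)

def linear_fit(x):
    chi2 = float(np.sum((S @ x + h) ** 2))
    params = C @ x + q
    return chi2, params

# Try every combination of one full-resolution hit per coordinate,
# keep the combination with the best chi2.
hits_per_coord = [rng.normal(size=2) for _ in range(n_coords)]  # 2 candidate hits each
best = min((linear_fit(np.array(combo)) for combo in itertools.product(*hits_per_coord)),
           key=lambda result: result[0])
print(f"best chi2 = {best[0]:.2f}, track parameters = {best[1].round(2)}")
```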

  9. Forward Tracking Performance

  10. Harvard FTK work • Trigger simulation development effort this year • CM extended the trigger simulation to the disks • Simulation software originally developed for the SVT • CDF geometry is purely cylindrical • Tracking looks for a hit on each layer • Extend the layers forward using the disks • Ideal is to have one hit per "layer" for each track • Difficulty came not from the software modification but from the description of the geometry [R-z view of the silicon tracker with the η = 2.5 line marked; axes R (mm) vs. z (mm)]

  11. FTK Tracking Efficiency • Efficiency is truth tracks matched to FTK-reconstructed tracks, divided by all truth tracks • Theoretical maximum efficiency: the fraction of tracks crossing enough layers [efficiency plotted vs. η]
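
The efficiency definition on the slide is a simple ratio per η bin: matched truth tracks over all truth tracks. A sketch of that bookkeeping, assuming the truth-to-FTK matching decision (e.g. a χ² or ΔR criterion) is provided upstream as a boolean flag:

```python
import numpy as np

# Sketch of the per-eta-bin efficiency described on the slide:
# efficiency = (truth tracks matched to an FTK track) / (all truth tracks).
# `truth_eta` and `is_matched` are assumed inputs; the matching criterion
# itself is decided upstream.

def efficiency_vs_eta(truth_eta, is_matched, bins):
    total, _ = np.histogram(truth_eta, bins=bins)
    matched, _ = np.histogram(truth_eta[is_matched], bins=bins)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(total > 0, matched / total, np.nan)

rng = np.random.default_rng(1)
truth_eta = rng.uniform(-2.5, 2.5, size=1000)
is_matched = rng.random(1000) < 0.9          # toy 90% efficiency
bins = np.linspace(-2.5, 2.5, 11)
print(efficiency_vs_eta(truth_eta, is_matched, bins).round(2))
```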

  12. Track Parameter Resolution • Impact parameter (d0) resolution not significantly degraded • b hadrons produce tracks with large d0 (> 0.05 cm) • z0 resolution worsens with dip angle but remains good • Compare to the size of the interaction region: 10 cm (but many interactions…) • Track isolation for leptons
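
The d0 and z0 resolutions quoted here are widths of residual distributions (FTK minus truth) for matched tracks. A sketch of estimating such a width, with a toy smearing value chosen only for illustration:

```python
import numpy as np

# Sketch of a track-parameter resolution estimate: the width of the
# (reconstructed - truth) residual distribution for d0 or z0.
# Toy residuals below; real inputs would come from matched FTK/truth pairs.

def resolution(reco, truth):
    residuals = np.asarray(reco) - np.asarray(truth)
    return residuals.std()          # or a Gaussian fit / robust width in practice

rng = np.random.default_rng(2)
truth_d0 = rng.normal(0.0, 0.01, size=5000)               # cm
reco_d0 = truth_d0 + rng.normal(0.0, 0.0035, size=5000)   # assumed smearing, for illustration
print(f"d0 resolution ~ {resolution(reco_d0, truth_d0) * 1e4:.0f} um")
```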

  13. Other HLT Work

  14. High Level Trigger [chart: event size breakdown, ~100 kB total] • Trigger efficiency measurement • Unified structure to monitor all triggers • Quick turnaround for early data validation • Could develop into a working group • "Navigation" structure • Information leading to the trigger decision • Ideally stored for end-user analysis, but it is > 10 kB/event (where the whole event is 100 kB or smaller) • Task: trim out unneeded objects • Example: a group using high-pT leptons does not need jet trigger details • Short-term project, quick payoff, could lead to more work with the HLT group (incl. Navigation)
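
The trimming task can be illustrated generically: keep only the trigger features a given analysis group needs and drop the rest to shrink the per-event payload. A schematic sketch with invented chain and feature names; this is not the ATLAS navigation API:

```python
# Schematic sketch of the navigation "trimming" idea: keep only the trigger
# features a given analysis needs, drop the rest to cut the per-event payload.
# The dictionaries and chain/feature names are invented for illustration.

def trim_navigation(navigation, keep_features):
    """Return a slimmed copy keeping only the requested feature types per chain."""
    return {
        chain: {feat: objs for feat, objs in features.items() if feat in keep_features}
        for chain, features in navigation.items()
    }

event_navigation = {
    "e25_loose": {"EmCluster": ["cl1"], "TrackParticles": ["trk1", "trk2"]},
    "j100":      {"JetRoI": ["jet1", "jet2"], "CaloCells": ["cells..."]},
}
# A high-pT lepton analysis keeps electron-related features, drops jet details.
print(trim_navigation(event_navigation, keep_features={"EmCluster", "TrackParticles"}))
```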

  15. Summary and Outlook • Significant contribution to FTK development • Coding and validation of forward tracking in the trigger simulation, doubling acceptance • Good tracking efficiency through the full pseudorapidity range • Track parameter resolution comparable to central tracks • Cleared for TDR, working toward 2012 installation • Starting up other HLT work • Franklin, Mills, Morii, Smith, potentially more grad students • Potential for real impact on ATLAS performance on a short (~1 yr) timescale • Synergy between working knowledge of the existing trigger infrastructure and the FTK development and integration efforts

  16. Backup

  17. FTK Architecture [Block diagram of the FTK data flow: pixel & SCT RODs send raw data over S-links at a 50~100 kHz event rate to the Data Formatter (DF), which does cluster finding and splits hits by layer; hits go to the Data Organizer, which forms super bins for the pipelined Associative Memory (events #1…#N); the AM returns roads, the Track Fitter fits the full-resolution hits within each road, and the track data are sent to a ROB alongside the raw-data ROBs]
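
Read as software, the diagram is a four-stage pipeline. A schematic analogy of the data flow only (in FTK each stage is a dedicated board running in parallel at the Level-1 event rate; the data structures and pattern format below are invented for illustration):

```python
# Schematic software analogy of the FTK data flow; the structures below are
# placeholders for illustration, not the real board interfaces.

def data_formatter(raw_hits):
    """Cluster raw pixel/SCT data and split it by logical layer."""
    hits_by_layer = {}
    for layer, coord in raw_hits:                 # raw hit = (layer, local coordinate)
        hits_by_layer.setdefault(layer, []).append(coord)
    return hits_by_layer

def data_organizer(hits_by_layer, superstrip_width=16):
    """Keep full-resolution hits and derive the coarse superhits sent to the AM."""
    superhits = {layer: {c // superstrip_width for c in coords}
                 for layer, coords in hits_by_layer.items()}
    return hits_by_layer, superhits

def process_event(raw_hits, pattern_bank, fit_tracks):
    """Data Formatter -> Data Organizer -> Associative Memory -> Track Fitter."""
    hits_by_layer = data_formatter(raw_hits)
    full_hits, superhits = data_organizer(hits_by_layer)
    roads = [pattern for pattern in pattern_bank  # AM match: pattern = {layer: superhit}
             if all(sh in superhits.get(layer, set()) for layer, sh in pattern.items())]
    return fit_tracks(roads, full_hits)           # linearized fits -> track list to Level 2
```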

  18. Global Design Considerations • Tradeoffs: • Road size vs. bank size • Big roads = more computation for track finding (time) • Narrow roads = bigger pattern banks / AM (cost) • Efficiency • Minimum reconstructable pT (0.5 GeV? 1 GeV?) [plot: pattern bank size for 1/4 of the detector, with reference lines at 1, 2, and 10 AM boards]
