ALICE – First paper

Presentation Transcript


  1. ALICE – First paper

  2. ALICE Set-up • Size: 16 x 26 meters • Weight: 10,000 tons • Detector systems: TOF, TRD, HMPID, ITS, PMD, Muon Arm, PHOS, TPC

  3. ALICE TPC • Large-volume gas detector: drift volume with MWPCs at the end caps • 3-dimensional, “continuous” tracking device for charged particles: x, y from the pad position, z derived from the drift time • Designed to record up to 20000 tracks • Event rate: about 1 kHz • Typical event size for a central Pb+Pb collision: about 75 MByte
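
The slide's key point is that the TPC measures x and y from the pad position and derives z from the drift time. The following minimal sketch (not ALICE code) illustrates that conversion; the drift velocity and time-bin width used here are illustrative assumptions, not the calibrated ALICE TPC values.

```python
# Minimal sketch (not ALICE code) of how a TPC space point is formed:
# x, y come from the pad position, z is derived from the drift time.
# The drift velocity and time-bin width are illustrative assumptions only.

DRIFT_VELOCITY_CM_PER_US = 2.7   # assumed drift velocity, cm per microsecond
TIME_BIN_US = 0.1                # assumed width of one time bin, microseconds

def space_point(pad_x_cm, pad_y_cm, time_bin, t0_bin=0):
    """Return an (x, y, z) point: x, y from the pad, z from the drift time."""
    drift_time_us = (time_bin - t0_bin) * TIME_BIN_US
    z_cm = drift_time_us * DRIFT_VELOCITY_CM_PER_US
    return (pad_x_cm, pad_y_cm, z_cm)

print(space_point(85.0, 12.5, time_bin=400))  # roughly (85.0, 12.5, 108.0)
```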

  4. ALICE TPC: 5 years of construction

  5. Trigger system • Minimal requirements: detect collisions; initialise readout of detectors; initialise data transfer to data acquisition (DAQ); protection against pile-up • High-level requirements: select interesting events; needs real-time processing of raw data and extraction of physics observables • Why? Interaction rate (e.g. 8 kHz for Pb+Pb) > detector readout rate (e.g. 1 kHz for TPC) > DAQ archiving rate (50-100 Hz) • Data flow (slide diagram): trigger detector → trigger system → readout electronics → raw data → high-level trigger → DAQ → processed data
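
The rate hierarchy quoted on the slide is the whole motivation for a trigger: collisions occur far more often than the detector can be read out or the data archived. A back-of-the-envelope sketch of the required rejection factors, using only the slide's numbers:

```python
# Rejection factors implied by the rates quoted on the slide
# (8 kHz interactions, ~1 kHz TPC readout, 50-100 Hz to tape).
# The rates are the slide's numbers; everything else is illustration.

interaction_rate_hz = 8000.0   # Pb+Pb interaction rate (slide)
readout_rate_hz     = 1000.0   # TPC readout limit (slide)
daq_rate_hz         = 100.0    # DAQ archiving rate, upper value (slide)

print(f"rejection needed before readout:   {interaction_rate_hz / readout_rate_hz:.0f}x")
print(f"rejection needed before archiving: {interaction_rate_hz / daq_rate_hz:.0f}x")
```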

  6. What to trigger on? • Every central Pb+Pb collision produces a QGP - no need for a QGP trigger • But hard probes are (still) rare at high momentum • In addition, the reconstruction efficiency of heavy-quark probes is very low • E.g. detection of hadronic charm decays: D0 → K– π+ • about 1 D0 per event (central Pb-Pb) in ALICE acceptance • after cuts: signal/event = 0.001, background/event = 0.01
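
The slide's numbers (0.001 signal and 0.01 background per event after cuts) imply that a very large number of events must be inspected to collect a useful D0 sample, which is the argument for a dedicated trigger. A small sketch of that arithmetic; the target sample size is an assumption for illustration:

```python
# Yield arithmetic behind the slide's numbers: with 0.001 reconstructed D0
# per event after cuts, a usable sample requires processing very many events.
# The per-event yields are from the slide; the target sample size is assumed.

signal_per_event     = 0.001   # D0 candidates surviving cuts (slide)
background_per_event = 0.01    # combinatorial background per event (slide)

target_signal = 10000          # assumed size of a useful D0 sample
events_needed = target_signal / signal_per_event

print(f"events to process:  {events_needed:.0f}")
print(f"signal/background:  {signal_per_event / background_per_event:.1f}")
```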

  7. PHOS L0 trigger • PbWO4 crystal calorimeter for photons and neutral mesons, 1 to > 100 GeV • Readout chain (slide diagram): array of crystals + APD + preamp + trigger logic + readout, feeding the L0/L1 trigger and DAQ • L0 trigger tasks: shower finder, energy sum • Implementation: FPGA, VHDL firmware
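
As a rough illustration of the two L0 tasks named on the slide (shower finder and energy sum), the sketch below scans a sliding window over a grid of per-crystal energies and fires when any window sum exceeds a threshold. This is not the VHDL firmware; the window size, threshold and grid are assumptions for illustration.

```python
# Illustrative sketch of an L0-style shower finder: slide a small window over
# the crystal array, sum the energies, and fire if any sum passes a threshold.
# Window size and threshold are assumed values, not the PHOS settings.

def l0_decision(energies, window=2, threshold_gev=2.0):
    """energies: 2D list of per-crystal energies in GeV.
    Return True if any window x window sum exceeds the threshold."""
    rows, cols = len(energies), len(energies[0])
    for r in range(rows - window + 1):
        for c in range(cols - window + 1):
            window_sum = sum(energies[r + dr][c + dc]
                             for dr in range(window) for dc in range(window))
            if window_sum > threshold_gev:
                return True
    return False

# A 3 GeV photon shower spread over two neighbouring crystals fires the trigger.
grid = [[0.0] * 8 for _ in range(8)]
grid[3][4], grid[3][5] = 1.8, 1.2
print(l0_decision(grid))  # True
```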

  8. PHOS – muon tracks

  9. D0 trigger • Detection of hadronic charm decays: D0 → K– π+ (6.75%), cτ = 124 μm • HLT code: TPC tracker, TPC+ITS track fitter, displaced decay vertex finder • D0 finder: cut on d0(K)·d0(π) • Preliminary result: invariant mass resolution is within a factor of two of the offline reconstruction
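
A minimal sketch of the selection cut named on the slide, d0(K)·d0(π): daughter tracks from a displaced D0 decay vertex tend to have impact parameters of opposite sign, so their product is large and negative. The cut value and the candidate interface shown here are assumptions for illustration, not the HLT code.

```python
# Sketch of an impact-parameter-product cut for D0 -> K pi candidates.
# The cut value is an assumed illustration, not the HLT setting.

def passes_d0_cut(d0_kaon_cm, d0_pion_cm, cut_cm2=-1e-6):
    """Accept the pair if d0(K) * d0(pi) is below the (negative) cut."""
    return d0_kaon_cm * d0_pion_cm < cut_cm2

# Tracks from a displaced vertex: opposite-sign impact parameters.
print(passes_d0_cut(+120e-4, -110e-4))  # True  (product ~ -1.3e-4 cm^2)
# Prompt-like pair: small, same-sign impact parameters.
print(passes_d0_cut(+5e-4, +3e-4))      # False
```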

  10. Introducing the High Level Trigger • ALICE data rates (example: TPC). The TPC is the largest data source, with 570132 channels, 512 time bins and 10-bit ADC values • Central Pb+Pb collisions: event rate ~200 Hz (past/future protected); event size ~75 MByte (after zero suppression); data rate ~15 GByte/sec • The TPC data rate alone exceeds by far the total DAQ bandwidth of 1.25 GByte/sec • HLT tasks: event selection based on a software trigger; efficient data compression
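
The slide's data-rate figures can be cross-checked with a short calculation from the quoted channel count, time bins and ADC width, plus the event rate and zero-suppressed event size. All inputs below are the slide's numbers; the arithmetic is only a consistency check.

```python
# Back-of-the-envelope check of the slide's TPC numbers: raw event size from
# channels x time bins x ADC width, and the data rate at 200 Hz with the
# ~75 MB zero-suppressed event size.

channels = 570132
timebins = 512
adc_bits = 10

raw_bytes = channels * timebins * adc_bits / 8
print(f"raw TPC event size: {raw_bytes / 1e6:.0f} MB")   # before zero suppression

event_rate_hz  = 200
zs_event_bytes = 75e6
print(f"TPC data rate: {event_rate_hz * zs_event_bytes / 1e9:.0f} GB/s")  # ~15 GB/s
```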

  11. HLT requirements • Full event reconstruction in real time • Main task: reconstruction of up to 10000 charged-particle trajectories • Method: pattern recognition in the TPC (cluster finder, track finder, track fit), global track fit in ITS-TPC-TRD, vertex finder • Event analysis • Trigger decision
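
The reconstruction chain listed on the slide (cluster finder, track finder, track fit, vertex finder, then a trigger decision) can be pictured as a staged pipeline. The sketch below shows only that staging; every function body is a placeholder and the high-pT selection is an assumed example, not the actual HLT trigger logic.

```python
# Structural sketch of the reconstruction chain from the slide
# (cluster finder -> track finder -> track fit -> vertex finder -> trigger
# decision). All function bodies are placeholders; only the staging and the
# data flow between stages are meant to mirror the slide.

def find_clusters(raw_tpc_data):
    return ["cluster A", "cluster B"]      # placeholder cluster finder

def find_tracks(clusters):
    return ["track candidate"]             # placeholder track finder

def fit_tracks(track_candidates):
    return [{"pt_gev": 2.1}]               # placeholder track fit

def find_vertex(fitted_tracks):
    return (0.0, 0.0, 0.1)                 # placeholder vertex finder

def trigger_decision(fitted_tracks, vertex):
    # Assumed example selection: accept events with at least one high-pT track.
    return any(track["pt_gev"] > 2.0 for track in fitted_tracks)

def hlt_process(raw_tpc_data):
    clusters = find_clusters(raw_tpc_data)
    tracks = fit_tracks(find_tracks(clusters))
    vertex = find_vertex(tracks)
    return trigger_decision(tracks, vertex)

print(hlt_process(b"...raw event..."))  # True
```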

  12. HLT architecture • The HLT is a generic high-performance computing cluster • Data flow (slide diagram): Detectors → HLT → DAQ → Mass storage

  13. HLT building blocks (1) • Hardware • Nodes: sufficient computing power for p+p • 121 front-end PCs: 968 CPU cores, 1.935 TB RAM, equipped with a custom PCI card for receiving detector data • 51 computing PCs: 408 CPU cores, 1.104 TB RAM • Network: InfiniBand backbone, Gigabit Ethernet • Infrastructure: 20 redundant servers for all critical systems

  14. HLT building blocks (2) • Software • Cluster management and monitoring • Data transport and process synchronisation framework • Interfaces to online systems: experiment control system, detector control system, offline DB, ... • Event reconstruction and trigger applications

  15. First paper

  16. First paper

  17. First paper

  18. First paper

  19. First paper

  20. Planning pp run • November: 200 collisions @ 900 GeV • December: 10^6 collisions @ 900 GeV, some collisions @ 2.4 TeV • February → collisions @ 7 TeV
