The ANTARES Data Acquisition System
S. Anvar, F. Druillole, H. Le Provost, F. Louis, B. Vallage (CEA)
ACTAR Workshop, 10 June 2008
The ANTARES detector http://antares.in2p3.fr
The "0.1 km²" project
• 12 detection lines (400 m each)
• 300 acquisition nodes (25 per line)
• 900 photomultipliers (3 per node)
• 1800 data sources @ 20 Mb/s max
• System spread over 30,000,000 m³ at 2500 m depth
• Onshore: farm of 80 processing nodes
[Schematic: offshore acquisition nodes (3 photomultipliers each) along the detection lines with a line control module, linked through electromechanical cables and a junction box to the shore station via an electro-optical cable; slow-control, data, clock and energy paths run between offshore and onshore.]
Photomultiplier counting rates
• High fluctuations in counting rates
[Plots (April 2003 data*): the baseline rate over days and a bioluminescence burst over seconds, both in kHz.]
*Data published on the ANTARES site, http://antares.in2p3.fr
The photomultiplier signal processing (ARS ASIC: Analog Ring Sampler)
• 1 single photo-electron (SPE): charge + time stamp = 6 bytes
• 1 full waveform (anode): anode + clock samples = 263 bytes
• Waveforms are used for detector calibration and PM signal analysis (very useful)
• The online trigger (neutrino track finder) uses only SPE events
• The detector operates in SPE mode: ~10 Mb/s per PM on average, × 900 PMs ≈ 9 Gb/s for the full detector
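For concreteness, a minimal C++ sketch of what a 6-byte SPE record could look like. The field split (charge, coarse time, fine time) is an assumed layout for illustration, not the documented ARS word format:

    // Hypothetical packed layout for a 6-byte SPE hit (charge + time stamp).
    #include <cstdint>

    #pragma pack(push, 1)
    struct SpeHit {
        std::uint8_t  charge;      // integrated anode charge in ADC counts (assumed width)
        std::uint32_t coarseTime;  // clock ticks within the timeslice (assumed width)
        std::uint8_t  fineTime;    // sub-clock time interpolation (assumed width)
    };
    #pragma pack(pop)

    static_assert(sizeof(SpeHit) == 6, "an SPE record must stay 6 bytes");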
Offshore: the on-board system
• Thermal dissipation (titanium container)
• Limited room (~15 boards of 12 cm)
• Limited electrical power (~35 W per storey)
• Very limited access (one intervention every 3 years)
• Numerous modules: an MTBF problem (comparable to operating 400 "satellites" in space)
Detector readout: an Ethernet network
Responsibilities:
• IN2P3: slow control / database
• CEA: submarine data acquisition
• NIKHEF: data wavelength multiplexing for each sector (DWDM); run control / onshore acquisition
Network hierarchy (LCM: Local Control Module; MLCM: Master LCM; SCM: String Control Module):
• LCM → MLCM: 100 Mb/s Ethernet (1 fibre, WDM, 45 m max), carrying data and slow control; each storey holds a processor board
• MLCM → SCM: 1 Gb/s Ethernet (2 fibres, DWDM, 330 m max); 5 sectors per line, each MLCM adding a switch board
• SCM → junction box: 5× 1 Gb/s + 1× 100 Mb/s Ethernet (2 fibres, DWDM, 100 m); 12 lines
• Junction box → shore station: 60× 1 Gb/s + 12× 100 Mb/s Ethernet (24 fibres, DWDM, 40 km)
• A separate proprietary network time-stamps the PM signals (1 ns precision)
Offshore acquisition node: a dedicated processor board
• Processor: Motorola MPC860P @ 80 MHz, running an RTOS with a slow-control task and a data task
• Flash memory (4 MB): boot / local file system
• SDRAM memory (64 MB)
• Programmable logic (FPGA) receiving the data from the storey (ARS ASICs)
• 100 Mb/s Ethernet link to the shore station
• Also handles the slow control for the storey
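A minimal sketch of that two-task split, assuming the standard VxWorks 5.x taskLib API (the RTOS selected two slides below); task names, priorities, stack sizes and the entry functions are illustrative, not the ANTARES sources:

    // Spawn the data task at higher priority (lower number) than slow
    // control, so monitoring traffic cannot stall the 104 ms data flow.
    #include <vxWorks.h>
    #include <taskLib.h>

    void dataTask()        { for (;;) taskDelay(1); }  // placeholder: read ARS frames, ship to shore
    void slowControlTask() { for (;;) taskDelay(1); }  // placeholder: serve storey monitoring/config

    void startAcquisitionNode()
    {
        taskSpawn((char*)"tData",    50, 0, 16 * 1024, (FUNCPTR)dataTask,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
        taskSpawn((char*)"tSlowCtl", 100, 0, 16 * 1024, (FUNCPTR)slowControlTask,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }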
LCM data / slow-control processing
• The FPGA multiplexes the six ARS chips (ARS 0 to 5) into buffers in dynamic memory
• Data types per ARS: 1/ status, 2/ time reset (RAZ), 3/ counters, 4/ SPE, 5/ anode waveform, 6/ anode waveform + dynode counters
• Data are framed into 104 ms windows synchronised to the experiment clock
• The processor ships each 104 ms frame to shore, successive frames going to different shore PCs (shore PC 1, shore PC 2, ...)
• Slow control (periodic / on demand / configuration) travels as a slow-control object over the 100 Mb/s Ethernet port
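A sketch of the framing step in C++, under the assumption that the FPGA leaves one filled buffer per ARS for the current 104 ms window and the processor concatenates them into one timeslice frame; all type and function names here are ours:

    // Concatenate the six per-ARS buffers of one 104 ms window into a
    // single frame, in fixed ARS order, ready to be sent to shore.
    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr int kNumArs = 6;  // ARS 0..5 on one LCM

    struct Frame {
        std::uint32_t sliceNumber;          // which 104 ms window
        std::vector<std::uint8_t> payload;  // multiplexed ARS data
    };

    Frame buildFrame(std::uint32_t sliceNumber,
                     const std::array<std::vector<std::uint8_t>, kNumArs>& arsBuffers)
    {
        Frame f{sliceNumber, {}};
        for (const auto& buf : arsBuffers)
            f.payload.insert(f.payload.end(), buf.begin(), buf.end());
        return f;
    }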
Offshore processor board
• Power: 4 W at full load
• TCP/IP network throughput:
  • Linux / MontaVista: 25 Mb/s
  • vxWorks / Wind River: 30 Mb/s, or 50 Mb/s with the "zero copy buffer" option (sketched below)
• Operational configuration:
  • Selected RTOS: vxWorks
  • No DAQ performance drop due to slow control
  • Measured data rate: 50 Mb/s
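A hedged sketch of the "zero copy buffer" send path, assuming the VxWorks 5.x zbuf socket API (zbufSockLib); apart from zbufSockBufSend itself, every name here is illustrative:

    // Hand the frame buffer to the TCP stack without an intermediate
    // copy; the stack calls releaseFrame once it no longer needs the
    // buffer. Avoiding one copy per 104 ms frame is what the jump from
    // 30 Mb/s to 50 Mb/s on this slide refers to.
    #include <vxWorks.h>
    #include <zbufSockLib.h>

    void releaseFrame(char* buf, int /*arg*/)
    {
        // return the frame buffer to the acquisition pool here
    }

    int sendFrameZeroCopy(int sockFd, char* frame, int len)
    {
        return zbufSockBufSend(sockFd, frame, len,
                               (VOIDFUNCPTR)releaseFrame, 0, 0);
    }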
Global DAQ architecture
• Offshore: robustness, frozen hardware
  • 300 detection nodes
  • No trigger: all data sent to shore (Ethernet switch routing)
  • Nodes synchronised by a global clock (physics-event time stamp)
  • Every node sends a given 104 ms of data to the same onshore processing node (see the sketch below)
• Onshore: processing power, upgradable
  • 80 processing nodes
  • Each node treats a full 104 ms detector view: neutrino track finding
  • Fully software trigger
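The slice-to-node rule this implies can be made concrete: if all 300 offshore nodes apply the same deterministic mapping from slice number to processing node, every fragment of a given 104 ms window converges on one onshore PC, which then holds the full detector view. The modulo rule below is an assumption for illustration; in practice the assignment can also be steered by run control:

    // Deterministic slice-to-node mapping: identical on every offshore
    // node, since all nodes share the global clock and slice numbering.
    #include <cstdint>

    constexpr std::uint32_t kProcessingNodes = 80;  // onshore farm size

    std::uint32_t processingNodeFor(std::uint32_t sliceNumber)
    {
        return sliceNumber % kProcessingNodes;  // assumed round-robin rule
    }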
Online trigger principle
• No dead time
• "Self-triggered" SPE signals sent to shore
• All data sent to shore; no hardware trigger (a local storey level-1 trigger is implemented but not used)
• High fluctuations (bioluminescence) absorbed in the front-end processor board; max storey rate: 120 Mb/s
• A high-rate veto is applied per PM, typically at 400 kHz (sketched below)
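A minimal sketch of such a per-PM veto; the 400 kHz threshold comes from the slide, while the counting window and the reset logic are assumptions for illustration:

    // Drop a PM's further SPE hits once it exceeds the veto rate in the
    // current window, so a glowing storey cannot saturate its uplink.
    #include <cstdint>

    constexpr double kVetoRateHz = 400e3;   // typical threshold (slide)
    constexpr double kWindowSec  = 0.104;   // assumed: one 104 ms timeslice
    constexpr std::uint32_t kMaxHitsPerWindow =
        static_cast<std::uint32_t>(kVetoRateHz * kWindowSec);  // ~41600

    struct PmVeto {
        std::uint32_t hitsInWindow = 0;

        bool accept()                       // true: keep hit; false: veto
        {
            return ++hitsInWindow <= kMaxHitsPerWindow;
        }

        void newWindow() { hitsInWindow = 0; }
    };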
Network topology
• Sector switch (in: 5× 100 Mb/s, out: 1× 1 Gb/s); average load ~21 Mb/s per LCM
• Used as a data concentrator: multiple input ports into one single output port; congestion risk, so it needs intelligent control
• Onshore switch (in: 60× 1 Gb/s, out: 100× 1 Gb/s) fanning out to the PC farm
Acquisition data flow
• PMs (×3 per storey), high-rate veto at 400 kHz
• Offshore processor (×5 per sector): 50 Mb/s max
• Offshore Ethernet switch (×60): 250 Mb/s max
• Onshore Ethernet backbone (×1)
• Data filter (×80)
• Output: counting rate data (12 GB/day); neutrino candidates at 10 Hz (2.6 GB/day); offline: ~3 ascending neutrinos/day
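A back-of-envelope consistency check of the quoted stage capacities, written as compile-time C++ arithmetic; every constant is a figure from the slides above:

    // Five offshore processors at 50 Mb/s feed one sector switch,
    // matching its 250 Mb/s figure; 60 switches (12 lines x 5 sectors)
    // bound the worst-case detector output at 15 Gb/s, against the
    // ~9 Gb/s average quoted earlier.
    constexpr int  kProcessorMbps       = 50;
    constexpr int  kProcessorsPerSector = 5;
    constexpr int  kSectorMbps          = kProcessorMbps * kProcessorsPerSector;
    constexpr int  kSectorSwitches      = 60;   // 12 lines x 5 sectors
    constexpr long kDetectorMbpsMax     = static_cast<long>(kSectorMbps) * kSectorSwitches;

    static_assert(kSectorMbps == 250, "matches the per-switch maximum");
    static_assert(kDetectorMbpsMax == 15000, "15 Gb/s worst-case offshore output");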
Conclusion
• No hardware trigger: the "all data to shore" concept (pushed by NIKHEF / DWDM)
• The offshore system is fully configurable from shore (firmware, software, RTOS image)
• The onshore trigger is fully upgradable
• The concept works well: 12 lines currently operating at 2500 m depth
• Development time was not reduced: software development and debugging can be as time-consuming as the hardware work