
LHC Controls Infrastructure and the timing system

Hermann Schmickler, on behalf of the CERN Accelerator and Beams Controls Group



  1. LHC Controls Infrastructure and the timing system. Hermann Schmickler, on behalf of the CERN Accelerator and Beams Controls Group

  2. Outline
  • LHC controls infrastructure – overview
  • Readiness report on:
  - generic services (logging, alarms, software interlocks, sequencer…) + LSA core applications
  - LHC applications (LSA)  talk of Mike Lamont
  - industrial controls, mainly cryogenics controls
  - machine protection systems
  - databases
  - post mortem system
  - hardware installations, front-end computers (FEC)
  • Controls security
  • The timing system
  • Outlook, injector renovation, summary

  3. The 3-tier LHC Controls Infrastructure

  4. The Resource Tier
  • VME crates dealing with high-performance acquisition and real-time processing; e.g. the LHC beam instrumentation and the LHC beam interlock systems use VME front-ends
  • PC-based gateways interfacing systems where a large quantity of identical equipment is controlled through fieldbuses (LHC power converters and the LHC Quench Protection System)
  • Programmable Logic Controllers (PLCs) driving various sorts of industrial actuators and sensors (e.g. for the LHC cryogenics and vacuum systems)
  • Supported fieldbuses for local connections: MIL-1553, WorldFIP, Profibus

  5. The Middle Tier
  • Application servers hosting the software required to operate the LHC beams and running the Supervisory Control and Data Acquisition (SCADA) systems
  • Data servers containing the LHC layout and the controls configuration, as well as all the machine settings needed to operate the machine or to diagnose machine behavior
  • File servers containing the operational applications
  • Central timing, which provides the cycling information for the whole complex of machines involved in the production of the LHC beam, as well as the timestamp reference

  6. The Presentation Tier
  • At the control room level, consoles running Graphical User Interfaces (GUIs) will allow machine operators to control and optimize the LHC beams and to supervise the state of key industrial systems
  • Software for operations (LSA, generic services, SCADA systems…)
  • Dedicated fixed displays will also provide real-time summaries of key machine parameters
  • Operational console specifications:
  - PCs with 2 GB RAM running either Linux or Windows
  - one PC, one keyboard, one mouse, and up to 3 screens
  - each console capable of running any type of GUI software for the LHC (and PS and SPS), such as Java, web, SCADA or X-Motif applications

  7. Outline
  • LHC controls infrastructure – overview
  • Readiness report on:
  - generic services (logging, alarms, software interlocks, sequencer…) + LSA core applications
  - LHC applications (LSA)  talk of Mike Lamont
  - industrial controls, mainly cryogenics controls
  - machine protection systems
  - databases
  - post mortem system
  - hardware installations, front-end computers (FEC)
  • Controls security
  • The timing system
  • Outlook, injector renovation, summary

  8. Applications: the LSA core framework

  9. Applications Readiness
  • The core controls software is in place
  • The technology is well established and tested
  • The core controls have been tested and deployed successfully in:
  - LEIR
  - the SPS transfer lines: TT40, CNGS, TI8, TI2
  - Hardware Commissioning
  - the LHC power converter tests
  - SM18
  • Generic services: systems for alarms, software interlocks, fixed displays, SDDS and logging are already deployed and have reached operational state in other machines
  • Operational applications:
  - generic equipment state and equipment monitoring in place
  - core control applications for settings generation and management, trims, machine sequencer and beam steering are operational (the trim concept is sketched below)
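To make the "trims" item above concrete, here is a minimal, hypothetical Java sketch of the concept: a trim is an incremental, history-tracked change to a machine setting that can be rolled back. All class and method names are illustrative assumptions, not the real LSA API.

```java
// Hypothetical sketch of the "trim" concept: an incremental,
// history-tracked change to a machine setting. Names are illustrative.
import java.util.ArrayDeque;
import java.util.Deque;

public class TrimSketch {

    /** A named setting with its current value, e.g. a tune. */
    static final class Setting {
        final String parameter;
        double value;
        Setting(String parameter, double value) {
            this.parameter = parameter;
            this.value = value;
        }
    }

    /** Records every trim so operators can inspect and roll back changes. */
    static final class TrimHistory {
        private final Deque<Double> deltas = new ArrayDeque<>();

        void apply(Setting s, double delta, String comment) {
            s.value += delta;
            deltas.push(delta);
            System.out.printf("TRIM %-12s %+8.4f (%s) -> %.4f%n",
                    s.parameter, delta, comment, s.value);
        }

        /** Undo the most recent trim. */
        void rollback(Setting s) {
            if (!deltas.isEmpty()) {
                s.value -= deltas.pop();
                System.out.printf("ROLLBACK %-8s -> %.4f%n", s.parameter, s.value);
            }
        }
    }

    public static void main(String[] args) {
        Setting tune = new Setting("Q_horizontal", 64.28); // illustrative value
        TrimHistory history = new TrimHistory();
        history.apply(tune, +0.005, "compensate measured tune shift");
        history.rollback(tune); // the setting returns exactly to its previous state
    }
}
```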

  10. Generic utilities example: logging
  • Reconstruction of a beam incident from logged data
  • SPS at 450 GeV (some 10^13 protons)
  • Conclusion: the MSE current appears to be ~2.5% low at extraction (see the sketch below)
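The reconstruction above boils down to comparing a logged signal against its reference. Below is a hedged Java sketch of that check; the API, the reference value and the logged sample are all assumed for illustration and do not reflect the actual TIMBER interfaces.

```java
// Illustrative sketch (not the real logging API) of comparing a logged
// signal to its reference to reconstruct an incident: here, spotting
// that the extraction septum (MSE) current runs ~2.5% low.
public class LoggingCheck {

    /** One logged sample: timestamp (ms) and value. */
    record Sample(long timeMs, double value) {}

    /** Relative deviation of a measured sample from the reference. */
    static double relativeError(Sample measured, double reference) {
        return (measured.value() - reference) / reference;
    }

    public static void main(String[] args) {
        double referenceCurrent = 10_000.0;                  // assumed MSE setting, A
        Sample logged = new Sample(1_196_000_000L, 9_750.0); // assumed logged value

        double err = relativeError(logged, referenceCurrent);
        if (Math.abs(err) > 0.01) { // 1% tolerance, illustrative
            System.out.printf("MSE current off by %.1f%% at extraction%n", 100 * err);
        }
    }
}
```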

  11. LHC Hardware Commissioning

  12. Industrial Controls
  • Industrial controls have reached a high level of maturity, and the systems deployed for HWC in 2007 are presently being consolidated. The SCADA applications will be ported to Linux, and the front-end software will be ported from LynxOS to Linux.
  • UNICOS framework:
  - used by several CERN groups under the guidance of AB-CO
  - used by AB-CO for:
  • machine protection (PIC, WIC, QPS, circuits)
  • the collimator environment package: a first application using PLC and PVSS was deployed and qualified during the collimator test; this solution is now deployed in the LHC
  • cryogenic controls (next page)

  13. UNICOS Framework

  14. Cryogenic controls
  • Cryogenic process (plants and sector):
  - The first cooldown of sector 7-8 has shown a large dependence of the cryogenics controls on the availability of the technical network. This risk is considered too high, so new hardware is presently being installed to run the essential cryogenics control loops locally (see the sketch below).
  - The control system deployed using Siemens PLCs, holding the complete control for the sector, has also been successfully tested.
  • Cryogenic instrumentation expert system:
  - The FESA development to interface all the WorldFIP sensors and actuators, together with an expert tool for cryogenics instrumentation engineers (CIET), has been completed and homogenized with the QPS model to ease maintenance. The new version was deployed successfully in LHC sector 7-8.
  - The whole software production chain is handled by automated production tools using data from the LHC instrumentation DB.
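As a rough illustration of what "running the essential control loops locally" means, here is a toy PID regulator in Java. The real loops run in Siemens PLCs, and all gains, variables and the plant response below are invented for the sketch.

```java
// Minimal sketch of a local regulation loop: a PID controller holding a
// helium temperature at its set point. Purely illustrative; the real
// loops are implemented in PLCs, not in Java.
public class LocalLoopSketch {

    static final class Pid {
        final double kp, ki, kd;
        double integral, previousError;
        Pid(double kp, double ki, double kd) { this.kp = kp; this.ki = ki; this.kd = kd; }

        /** One regulation step: returns the new valve-opening command. */
        double step(double setPoint, double measured, double dtSeconds) {
            double error = measured - setPoint; // positive when too warm
            integral += error * dtSeconds;
            double derivative = (error - previousError) / dtSeconds;
            previousError = error;
            // larger command = valve further open = more cooling
            return kp * error + ki * integral + kd * derivative;
        }
    }

    public static void main(String[] args) {
        Pid loop = new Pid(2.0, 0.1, 0.0); // illustrative gains
        double temperature = 4.5;           // K, measured (invented)
        for (int i = 0; i < 5; i++) {
            double command = loop.step(1.9, temperature, 1.0);
            temperature -= 0.02 * command;  // toy plant response
            System.out.printf("valve=%.3f  T=%.3f K%n", command, temperature);
        }
    }
}
```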

  15. LHC sector 7-8: successful cooldown
  • The cryogenics controls system was fully operational throughout this first cooldown.

  16. (image-only slide)

  17. (image-only slide)

  18. Machine Interlocks
  • For protecting superconducting and normal-conducting magnets:
  - Powering Interlock System (PLC based)
  - Warm Magnet Interlock System (PLC based)
  • For protecting the equipment and for beam operation:
  - Beam Interlock System (VME based) + Safe Machine Parameters system (VME based)
  - Fast Magnet Current change Monitor (FMCM)

  19. Powering Interlock System & Warm Magnet Interlock System
  • In accordance with the LHC coordination schedule, the installation and the individual system tests of both systems are nearly completed:
  • Powering Interlock Controllers: 36 units of the PLC-based system protecting ~800 LHC electrical circuits
   36 out of 36 are installed and regularly participate in the different phases of the HWC powering tests
  • Warm Magnet Interlock Controllers: 8 units of the PLC-based system protecting ~150 LHC normal-conducting magnets
   6 out of 8 units are installed and operational for the HWC (the last 2 will be installed in Jan. 08)

  20. Beam Interlock System
  • During 2007, the system was fully validated for operation in the SPS ring and its extraction lines, with 10 Beam Interlock Controllers managing in total ~120 connections with: BLM, vacuum, BPM, power converters, …
  • Ready to be installed in the LHC machine:
  - 19 units are going to be deployed at the beginning of 2008
  - ~200 connections with most of the LHC systems
  • Several weeks are needed to perform the BIS commissioning (performed in parallel with HWC); the core permit logic is sketched below
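A minimal sketch of the permit logic mentioned above, assuming the essential behavior is the logical AND of all user-system permits. The real Beam Interlock System is dedicated VME hardware; the Java below is purely conceptual.

```java
// Sketch of the core logic of a Beam Interlock Controller: beam is
// permitted only while every connected user system (BLM, vacuum, power
// converters, ...) gives its permit. Conceptual only.
import java.util.LinkedHashMap;
import java.util.Map;

public class BeamInterlockSketch {

    private final Map<String, Boolean> userPermits = new LinkedHashMap<>();

    void setPermit(String userSystem, boolean ok) {
        userPermits.put(userSystem, ok);
    }

    /** Global beam permit: the logical AND of all user permits. */
    boolean beamPermit() {
        return !userPermits.isEmpty()
                && userPermits.values().stream().allMatch(Boolean::booleanValue);
    }

    public static void main(String[] args) {
        BeamInterlockSketch bic = new BeamInterlockSketch();
        bic.setPermit("BLM", true);
        bic.setPermit("Vacuum", true);
        bic.setPermit("PowerConverters", true);
        System.out.println("beam permit: " + bic.beamPermit()); // true

        bic.setPermit("Vacuum", false); // any single failure removes the permit
        System.out.println("beam permit: " + bic.beamPermit()); // false
    }
}
```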

  21. FMCM: Fast Magnet Current Change Monitors
  • Fruitful collaboration with DESY (DESY development + CERN adaptation)
  • First units successfully used during the SPS extraction tests in 2007 (10 units already installed on septum and dipole magnets)
  • In total, 12 monitors are going to be deployed in the LHC (+ 14 in the transfer lines), covering ALL septa families
  • 1st version of the FESA class and Java supervision available since June 2007; consolidation in progress
  [Plots: power converter current I (A) vs. time (ms) on 10 ms and 500 ms scales; the FMCM triggers at 3984.4 A on a current drop of ~0.1%]
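Conceptually, an FMCM compares the measured magnet current with its expected value and requests a beam dump when the relative deviation exceeds a tiny threshold. The sketch below uses the ~0.1% figure from the plots; the threshold handling and numbers are illustrative, not the DESY/CERN implementation.

```java
// Conceptual sketch of an FMCM check: watch the magnet current and
// trigger as soon as it deviates from its expected value by more than
// a small fraction (~0.1% here, matching the plot on this slide).
public class FmcmSketch {

    static final double RELATIVE_THRESHOLD = 1e-3; // ~0.1%, illustrative

    /** True if the measured current deviates enough to dump the beam. */
    static boolean triggers(double expectedAmps, double measuredAmps) {
        return Math.abs(measuredAmps - expectedAmps) / expectedAmps
                > RELATIVE_THRESHOLD;
    }

    public static void main(String[] args) {
        double expected = 3988.4; // A, illustrative septum current
        System.out.println(triggers(expected, 3988.0)); // false: within tolerance
        System.out.println(triggers(expected, 3984.4)); // true: ~0.1% drop
    }
}
```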

  22. Post Mortem system overview (in place for Hardware Commissioning)
  • Data collection and pre-processing: LHC equipment  PM collector  PM converter  PM data storage (SDDS)  logging DB
  • Trigger handling: PM Event Builder (PM.EB), HWC sequencer, request handler (PM.RH)
  • Data viewing and analysis: PM Event Analyser (PM.EA), powering interlock analysis (PM.PIC), crowbar analysis (PM.CB), Quench Protection analysis (PM.QPS), powering-to-nominal analysis (PM.PNO); results of tests go to MTF

  23. Post Mortem readiness
  • Primary data saving reviewed and implemented (data volume, parallel servers, physical location of servers)
  • Minimal set of tools ready for Hardware Commissioning; sector 4-5 is being commissioned now
  • Improvements needed in:
  - event recognition (provoked quench, training quench, …)
  - a well-defined GUI for each test step (currently some GUIs are re-used, which can confuse)
  - test results electronically signed by role (QPS expert, MPP expert, EIC, …), not fully functional yet
  • Additional functionality (event builder) requested; a sketch of the idea follows below
  • The present system only covers the systems relevant for HWC. For beam-related PM the following strategy is proposed:
  - assure data saving
  - have a generic browsing tool
  - wait for 6 months of beam operation to define the specifications for the analysis programs
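The requested event-builder functionality referenced in the list above can be pictured as grouping buffers from many systems by trigger timestamp into one analyzable event. The following Java sketch rests on that assumption; all names are hypothetical, not the real PM system API.

```java
// Illustrative event-builder sketch: after a post-mortem trigger,
// buffers arriving from many equipment systems are grouped by trigger
// timestamp into one analyzable event.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PmEventBuilderSketch {

    record PmBuffer(String system, long triggerTimeMs, String data) {}

    private final Map<Long, List<PmBuffer>> events = new HashMap<>();

    /** Collect one buffer; buffers sharing a trigger time form one event. */
    void collect(PmBuffer buffer) {
        events.computeIfAbsent(buffer.triggerTimeMs(), t -> new ArrayList<>())
              .add(buffer);
    }

    /** An event is complete once all expected systems have delivered. */
    boolean isComplete(long triggerTimeMs, int expectedSystems) {
        return events.getOrDefault(triggerTimeMs, List.of()).size() >= expectedSystems;
    }

    public static void main(String[] args) {
        PmEventBuilderSketch builder = new PmEventBuilderSketch();
        long t = 1_200_000_000L; // invented trigger timestamp
        builder.collect(new PmBuffer("QPS", t, "voltage snapshots"));
        builder.collect(new PmBuffer("PIC", t, "interlock states"));
        builder.collect(new PmBuffer("PC", t, "converter currents"));
        System.out.println("complete: " + builder.isComplete(t, 3)); // true
    }
}
```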

  24. Post mortem view of PC tracking
  [Plot: tracking between the three main circuits of sector 7-8, within 2 ppm. Courtesy F. Bordry]

  25. Databases
  • Layout databases:
  - scope extended from the ring to the injection and dump lines
  - racks & electronics largely incorporated
  - layout data is now used as the foundation for the controls system
  - tools and a policy for data maintenance still to be put in place, to keep it up to date
  - more data to be captured, i.e. layout + assets (e.g. PLCs in AB, AT, TS)
  • Operational settings:
  - data model stabilized; foreseen to cover all accelerators
  • Logging:
  - sustained increasing logging requirements for HWC (>10x volume increase per year)
  - improved data retrieval tool (TIMBER), in line with end-user needs
  - common logging infrastructure for the complete accelerator chain
  - new database hosting purchased (in place March 2008)

  26. Hardware installation, front-end computers
  • Basically all controls equipment is installed, following the LHC installation schedule
  • Major problems with quality assurance, in particular with cabling: 15 man-months(!) of effort are still needed to fix all non-conformities of the WorldFIP (fieldbus) cabling
  • OS of the PC gateways moved to Linux (from LynxOS)
  • VME still on LynxOS; the transition to Linux is unclear
  • Front End Software framework (FESA) still under development; version 2.10 to be released in January 2008
  • Front-end software is developed by the AB equipment groups; AB-CO supports the 3 last FESA releases

  27. Outline
  • LHC controls infrastructure – overview
  • Readiness report on:
  - generic services (logging, alarms, software interlocks, sequencer…) + LSA core applications
  - LHC applications (LSA)  talk of Mike Lamont
  - industrial controls, mainly cryogenics controls
  - machine protection systems
  - databases
  - post mortem system
  - hardware installations, front-end computers (FEC)
  • Controls security
  • The timing system
  • Outlook, injector renovation, summary

  28. Communications
  • Technical Network (TN), for operational equipment:
  - formal connection and access restrictions
  - limited services available (e.g. no mail server, no external web browsing)
  - authorization based on MAC addresses
  - network monitored by the CERN IT Department
  • General Purpose Network (GPN), for office, mail, www, development:
  - no formal connection restrictions

  29. In addition to CNIC: RBAC
  • … for accelerator equipment access: implement 'role-based' access to equipment in the communication infrastructure
  • Depending on WHICH action is made, WHO is making it, from WHERE the action-call is issued and WHEN it is executed, access will be granted or denied (see the sketch below)
  • This allows filtering, control and traceability of the settings modifications to the equipment
  • Implementation state of RBAC:
  - A1 (authentication process) and A2 (authorization process) developed, based on passing encrypted digital keys
  - database for access roles prepared
  - system not deployed; a motivation campaign driven by AB-CO will start in January 2008
  • Significant loopholes remain from parallel access to equipment with expert tools. RBAC will NOT protect against sabotage or malicious access!
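The WHICH/WHO/WHERE/WHEN decision referenced above can be sketched as a simple rule match. The rule structure below is an assumption for illustration; the real RBAC implementation relies on encrypted digital keys (A1 authentication, A2 authorization), not on this toy check.

```java
// Sketch of a role-based access decision: grant only if role (WHO),
// action (WHICH), origin (WHERE) and time (WHEN) all match a rule.
import java.time.LocalTime;

public class RbacSketch {

    record Rule(String role, String action, String location,
                LocalTime from, LocalTime until) {}

    static boolean isGranted(Rule rule, String role, String action,
                             String location, LocalTime when) {
        return rule.role().equals(role)
                && rule.action().equals(action)
                && rule.location().equals(location)
                && !when.isBefore(rule.from())
                && !when.isAfter(rule.until());
    }

    public static void main(String[] args) {
        // All role, action and location names are invented for the example.
        Rule rule = new Rule("LHC-Operator", "SET_CURRENT", "CCC",
                LocalTime.of(0, 0), LocalTime.of(23, 59));

        System.out.println(isGranted(rule, "LHC-Operator", "SET_CURRENT",
                "CCC", LocalTime.of(10, 30)));    // true
        System.out.println(isGranted(rule, "Guest", "SET_CURRENT",
                "Office", LocalTime.of(10, 30))); // false: wrong role and origin
    }
}
```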

  30. Outline
  • LHC controls infrastructure – overview
  • Readiness report on:
  - generic services (logging, alarms, software interlocks, sequencer…) + LSA core applications
  - LHC applications (LSA)  talk of Mike Lamont
  - industrial controls, mainly cryogenics controls
  - machine protection systems
  - databases
  - post mortem system
  - hardware installations, front-end computers (FEC)
  • Controls security
  • The timing system
  • Outlook, injector renovation, summary

  31. Timing system major components
  • LHC central timing generation
  • LHC injector chain central timing
  • Timing distribution networks
  • Timing reception in distributed front ends (sketched below)
  • Transmission of the LHC safe machine parameters (SMP)
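Timing reception in a front end, as listed above, can be pictured as dispatching broadcast event frames to subscribed equipment software. The frame layout and names in this Java sketch are assumptions, not the actual CERN timing protocol.

```java
// Hedged sketch of timing reception in a front end: the central timing
// broadcasts small event frames (code + payload + timestamp) and each
// receiver dispatches registered callbacks. Purely conceptual.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class TimingReceiverSketch {

    record TimingEvent(int code, long payload, long timestampNs) {}

    private final Map<Integer, List<Consumer<TimingEvent>>> handlers = new HashMap<>();

    /** Equipment software subscribes to the event codes it cares about. */
    void subscribe(int eventCode, Consumer<TimingEvent> handler) {
        handlers.computeIfAbsent(eventCode, c -> new ArrayList<>()).add(handler);
    }

    /** Called for every frame arriving on the timing network. */
    void onFrame(TimingEvent event) {
        handlers.getOrDefault(event.code(), List.of())
                .forEach(h -> h.accept(event));
    }

    public static void main(String[] args) {
        TimingReceiverSketch receiver = new TimingReceiverSketch();
        int INJECTION_WARNING = 42; // invented event code
        receiver.subscribe(INJECTION_WARNING,
                e -> System.out.println("arm acquisition at t=" + e.timestampNs()));
        receiver.onFrame(new TimingEvent(INJECTION_WARNING, 0L, System.nanoTime()));
    }
}
```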

  32. SMP: Safe Machine Parameters
  • Generation and distribution of critical parameters over the timing network: beam energy, Safe Beam Flags, Stable Beam Flag, etc.
  • Interfaces with: kicker systems (BEM), the BCT system, the BLM system, the experiments and the Beam Interlock System
  • Only one controller is needed (a VME crate with several dedicated boards)
  • Two SMP variants: one for the SPS and one for the LHC
  • A first version (with the necessary features) of both will be ready in May 2008; the final version will be available in 2009
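One plausible use of the distributed parameters is deriving a Safe Beam Flag by comparing the beam intensity against an energy-dependent damage threshold. The scaling law below is invented for the sketch and is not the real machine-protection rule.

```java
// Sketch of deriving a Safe Beam Flag from the distributed beam energy
// and intensity. The threshold curve is a made-up illustration.
public class SafeBeamFlagSketch {

    /** Illustrative: the higher the energy, the lower the tolerable intensity. */
    static double maxSafeIntensity(double energyGeV) {
        return 1e12 * (450.0 / energyGeV); // protons, toy scaling
    }

    static boolean safeBeamFlag(double energyGeV, double intensityProtons) {
        return intensityProtons < maxSafeIntensity(energyGeV);
    }

    public static void main(String[] args) {
        System.out.println(safeBeamFlag(450.0, 5e11));  // true: pilot-like beam
        System.out.println(safeBeamFlag(7000.0, 5e11)); // false: too intense at 7 TeV
    }
}
```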

  33. LHC central timing generation
  • The initial requirements are specified in https://edms.cern.ch/file/780566/0.1/LHC-CT-ES-0004-00-10.pdf, which defines the basic functionality of the LHC central timing and its presentation to the LSA layer
  • Status: implemented and functioning OK
  - all safe beam behavior, beam energies and intensities are simulated, waiting for the SMP hardware; delivery and final cabling will be completed before the end of January 2008
  - post mortem and XPOC behavior is implemented, but the final cabling depends on the SMP module
  - triggering LHC injection from SPS extraction is implemented
  • Timing distribution hardware: OK
  • Timing receivers in FECs: OK

  34. LHC injector chain central timing
  • The behavior of the LHC injector chain during LHC filling is defined in "FILL THE LHC": A USE CASE FOR THE LHC INJECTOR CHAIN, EDMS LHC-0000001835, https://edms.cern.ch/file/839438/1/FS-LHCUseCase.doc
  • This has been fully integrated into the CBCM, but the complete chain from LSA across the LHC central timing to the CBCM has not been operationally tested. This will not be possible in the dry runs scheduled for December 2007, due to missing interlock conditions in the injectors.
  • Some cabling remains, to be completed when the SMP hardware arrives.

  35. One page on resources/outlook
  • The present level of effort on controls is enormous in terms of human resources:
  - AB-CO is about 80 staff and about 60 temporary collaborators
  - 60% on the LHC, i.e. ~70 FTE
  • This high level is expected to persist through the first year of LHC running
  • Significant continuity problems are expected when the temporary collaborators leave
  • A major problem of legacy controls in the LHC injector chain persists:
  - only a few staff members in AB-CO remain knowledgeable about the injector controls
  - continuing demands for modifications from OP, or driven by hardware modifications by the equipment groups
  • Creation of a new project in 2007, driven by AB-CO: renovation of the injector chain controls using LHC technology
  - SPS mostly done "on the fly" in 2006 and 2007
  - INCA (= Injector Controls Architecture) project for the injectors; expected duration 2008-2010; scope: all 3 tiers
  • The need for a high level of resources will persist until the end of the INCA project

  36. Deployment view (AB/CO Technical Committee, PSCCSR results & proposal)

  37. Technical Summary
  • No major issues with LHC controls
  • Security tools developed, but not deployed
  • Major parts of the controls system are entering the phase of consolidation, performance tuning and upgrades. This is not true for the applications, where development is still needed.
  • "By doing it", many specifications are being revisited and existing solutions have to be re-engineered
  • We expect another boost of modification requests during LHC beam commissioning
  • For the past 2 years AB-CO has mainly concentrated on providing functionality on time. Documentation "is what it is", and the basis for training a first-line intervention team (piquet) is not in place. Hence the first year of LHC exploitation will be supported by experts on a best-effort basis.
  • During 2007 AB-CO launched a group-wide effort to produce a diagnostics and monitoring tool (DIAMON), which will allow operations and CO experts to monitor (1st priority) and repair (2nd priority) controls problems from an integrated tool (see the sketch below). The first version of this tool is scheduled to be ready in March 2008.
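The DIAMON idea referenced above (monitor first, repair second, from one integrated tool) might be pictured as follows; the status model, checks and names are assumptions about such a tool, not the DIAMON design.

```java
// Sketch of an integrated monitoring view: front ends report a status,
// and anything not OK is surfaced with a repair action. Illustrative only.
import java.util.LinkedHashMap;
import java.util.Map;

public class DiamonSketch {

    enum Status { OK, WARNING, FAULT }

    private final Map<String, Status> frontEnds = new LinkedHashMap<>();

    void report(String fec, Status status) {
        frontEnds.put(fec, status);
    }

    /** Priority 1: monitoring - show every front end that needs attention. */
    void showProblems() {
        frontEnds.forEach((fec, status) -> {
            if (status != Status.OK) {
                System.out.println(fec + ": " + status + " -> repair action available");
            }
        });
    }

    public static void main(String[] args) {
        DiamonSketch diamon = new DiamonSketch();
        diamon.report("fec-sr7-qps1", Status.OK);    // invented FEC names
        diamon.report("fec-ua43-pic2", Status.FAULT);
        diamon.showProblems();
    }
}
```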
