15 Years of Expert Judgement at TUDelft
Louis Goossens & Roger Cooke and Andrew Hale & Ljiljana Rodic
WOS2006, Zeewolde, 12-15 September 2006
!! A TRIBUTE TO ANDREW HALE Professor emeritus in spe in Safety Science at TUDelft
The Delft Method of Expert Judgement Elicitation • Developed by Roger Cooke in the early 1990s • Support from • Ministry of Housing, Physical Planning and the Environment (the Netherlands) • European Commission • Expert Judgement Procedures Guide (EUR 18820, 2000) • Main goal: RATIONAL CONSENSUS in Decision-Making
Applications of the Delft Method • In total (we mostly elicit the 5th, 50th and 95th percentile assessments of unknown variables): • 880 experts • 4,339 variables (“the unknowns and knowns”) • 82,585 elicitations (total number of questions)
?? WHAT HAS ANDREW HALE GOT TO DO WITH EXPERT JUDGEMENT?
Applications of the PC Method • Separate assessments (pairwise comparisons): • 293 experts • 202 variables • 14,826 elicitations • Two major projects: • SAFETY MANAGEMENT developments: ANDREW HALE • RELIABILITY of LANDFILL LINERS: LJILJANA RODIC
How does the PC Method work? If you want to compare 3 attributes (A, B, C) with respect to their impact on safety improvement for a particular safety management strategy, you ask the experts to assess every possible pair of attributes: • A and B, A and C, and B and C • Which attribute in each pair has the higher safety-improvement potential? • In general, n attributes require n(n-1)/2 assessments (see the sketch after this list)
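As a minimal sketch (the attribute names and answers are invented, not data from the project), the bookkeeping behind one expert's pairwise-comparison round looks like this in Python; the full PC analysis additionally fits scale values to the comparisons and checks each expert's consistency:

from itertools import combinations

attributes = ["A", "B", "C"]  # n = 3, so n(n-1)/2 = 3 pairs

# Hypothetical answers from one expert: for each pair, the attribute
# judged to have the higher safety-improvement potential.
preferred = {("A", "B"): "A", ("A", "C"): "A", ("B", "C"): "B"}

# Tally how often each attribute wins a comparison and rank accordingly.
wins = {a: 0 for a in attributes}
for pair in combinations(attributes, 2):
    wins[preferred[pair]] += 1

print(sorted(wins.items(), key=lambda kv: -kv[1]))
# [('A', 2), ('B', 1), ('C', 0)], i.e. the ranking A > B > C. Had the
# answers formed a cycle (A over B, B over C, C over A), that circular
# triad would flag an inconsistent expert.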
Safety Management and PC • The I-Risk project expressed safety impact as risk reduction • The risk was calculated with Master Logic Diagrams containing parameters such as failure rates, inspection intervals, and maintenance parameters • Safety management can influence the parameter data in the risk calculation by introducing safety measures; the question then is which safety measure should have priority • The PC Method is available to compare priorities among safety measures (see the illustration after this list)
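To make that link concrete, a standard first-order reliability approximation (not the I-Risk model itself; the rates and times are invented) shows how a maintenance-management measure such as shortening a test interval feeds straight into the computed risk:

# Mean unavailability of a periodically tested component,
# q ~ lambda * T / 2 + lambda * Tr (standard approximation, illustrative only).
def mean_unavailability(failure_rate, test_interval, repair_time):
    return failure_rate * test_interval / 2 + failure_rate * repair_time

print(mean_unavailability(1e-5, 8760, 24))  # yearly test:      ~0.0440
print(mean_unavailability(1e-5, 4380, 24))  # half-yearly test: ~0.0221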
One example from Andrew’s work • Maintenance management (MM) in chemical plants • Risk has the following parameters for MM: • Tm (time for maintenance) • Tr (time for repair) • Im (time of maintenance interval) • It (time of test interval) • L/L (probability of failure to replace like with like) • RDe (respect of the equipment design envelope) • Hem (human error in maintenance) • Hei (human error in inspection)
Safety management • Safety management consists of generic management areas, which determine the quality of completion of safety-critical tasks • In I-Risk the generic management areas were: • Availability of suitable personnel • Competence of those personnel • Commitment of those personnel to safety • Communication and coordination • Conflict resolution (priority of safety vs other goals) • Interface design • Procedures and plans • Delivery of correct spares and replacements
Ask the right questions • To connect the 8 generic management areas with the 8 risk parameters, a protocol is applied: • detail the 8 parameters • describe relevant scenarios for deviations from the optimal values of these parameters • define a detailed task list to manage these deviations • detail the 8 generic management areas (an illustrative encoding follows this list)
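A purely illustrative encoding of that linkage (the parameter codes come from the MM slide above; the scenarios and task assignments are invented for the example):

# Hypothetical linkage: each risk parameter gets deviation scenarios and
# the safety-critical tasks that manage them; each task is then judged
# against the eight generic management areas via pairwise comparisons.
protocol = {
    "Im": {  # time of maintenance interval
        "scenarios": ["interval stretched under production pressure"],
        "tasks": ["plan and prioritise maintenance work",
                  "schedule maintenance work"],
    },
    "Hem": {  # human error in maintenance
        "scenarios": ["wrong part fitted", "procedure step skipped"],
        "tasks": ["prepare maintenance work", "do maintenance"],
    },
}
management_areas = [
    "availability of personnel", "competence", "commitment to safety",
    "communication and coordination", "conflict resolution",
    "interface design", "procedures and plans", "delivery of spares",
]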
Example of task list • Time for preventive maintenance is determined by the following main tasks: • decide maintenance concept • plan and prioritise maintenance work • schedule maintenance work • prepare maintenance work • prepare area • do maintenance • recommission plant • record experience • Phrase these main tasks as questions, leading to 8 attributes to compare with expert judgement
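With 8 attributes, each expert thus answers 8 x 7 / 2 = 28 pairwise questions per comparison exercise.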
Conclusions Although views on safety management have changed gradually over the past years, the Paired Comparisons Protocol, developed mainly by Andrew Hale, has not needed to change at all.
Performance measures • Calibration (statistical likelihood) • Information (with respect to a background measure; a sketch of both scores follows this list) • Range graphs, expertwise
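A minimal sketch of the two scores as defined in Cooke's classical model, assuming the usual 5/50/95 elicitation (the bin counts and ranges in the examples are invented):

import numpy as np
from scipy.stats import chi2

# Probability mass the expert assigns to the four inter-quantile bins:
# below the 5% quantile, 5-50%, 50-95%, above the 95% quantile.
P = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(bin_counts):
    # P-value of 2*N*I(s, p) against chi-square with 3 df, where s is the
    # empirical distribution of the realizations over the quantile bins.
    counts = np.asarray(bin_counts, dtype=float)
    n = counts.sum()
    s = counts / n
    mask = s > 0  # empty bins contribute 0 to the divergence
    kl = float(np.sum(s[mask] * np.log(s[mask] / P[mask])))
    return chi2.sf(2 * n * kl, df=len(P) - 1)

def information_score(quantiles, lo, hi):
    # Relative information of one item's piecewise-uniform density (through
    # the 5/50/95 quantiles) with respect to a uniform background measure
    # on the intrinsic range [lo, hi].
    edges = np.array([lo, *quantiles, hi], dtype=float)
    widths = np.diff(edges)
    return float(np.sum(P * np.log(P * (hi - lo) / widths)))

print(calibration_score([1, 6, 6, 1]))   # well calibrated on 14 items: ~0.97
print(calibration_score([10, 3, 1, 0]))  # overconfident expert: p ~ 1e-9
print(information_score([0.3, 0.5, 0.9], 0.0, 2.0))  # tight quantiles: ~0.87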
EU-USNRC Dry Deposition, 03/07/2003
Results of scoring experts
Bayesian Updates: no   Weights: global   DM Optimisation: yes
Significance Level: 0.00169   Calibration Power: 1

Nr | Id       | Calibr.   | Mean rel. inf. total | Mean rel. inf. realiz. | N real | Unnormaliz. weight | Normaliz. w. without DM | Normaliz. w. with DM
 1 | Expert1  | 3.064E-5  | 0.9411 | 0.7044 | 14 | 0         | 0        | 0
 2 | Expert2  | 0.5271    | 0.3593 | 0.1661 | 14 | 0.08754   | 0.9339   | 0.4675
 3 | Expert3  | 0.00169   | 0.679  | 0.41   | 14 | 0.000693  | 0.007393 | 0.003701
 4 | Expert4  | 0.00169   | 0.7177 | 0.7231 | 14 | 0.001222  | 0.01304  | 0.006527
 5 | Expert5  | 2.054E-8  | 0.789  | 0.7201 | 14 | 0         | 0        | 0
 6 | Expert6  | 0.002203  | 1.188  | 1.341  | 14 | 0.002955  | 0.03152  | 0.01578
 7 | Expert7  | 0.00169   | 0.6474 | 0.7826 | 14 | 0.001323  | 0.01411  | 0.007064
 8 | Expert8  | 0.0008759 | 0.9759 | 0.5431 | 14 | 0         | 0        | 0
 9 | perf wgt | 0.6587    | 0.2429 | 0.142  | 14 | 0.09351   |          | 0.4994
10 | eq wgt   | 0.00169   | 0.1524 | 0.1677 | 14 | 0.0002834 |          | 0.002998

(c) 1999 TU Delft
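The weight columns can be reproduced from the calibration and information columns: under the global-weight scheme, an expert's unnormalized weight is calibration times mean relative information on the realizations, zeroed when calibration falls below the significance level. A sketch using the figures copied from the table above:

sig = 0.00169           # significance level: calibrations below it are zeroed
dm_weight = 0.09351     # unnormalized weight of the performance-weight DM
experts = {             # id: (calibration, mean rel. info on realizations)
    "Expert1": (3.064e-5, 0.7044), "Expert2": (0.5271, 0.1661),
    "Expert3": (0.00169, 0.41),    "Expert4": (0.00169, 0.7231),
    "Expert5": (2.054e-8, 0.7201), "Expert6": (0.002203, 1.341),
    "Expert7": (0.00169, 0.7826),  "Expert8": (0.0008759, 0.5431),
}
raw = {e: (c * i if c >= sig else 0.0) for e, (c, i) in experts.items()}
total = sum(raw.values())
for e, w in raw.items():
    # normalized without the DM, then with the DM's own weight included
    print(e, f"{w:.6f}", f"{w / total:.4f}", f"{w / (total + dm_weight):.4f}")
# Expert2: raw 0.0876, then 0.9340 without DM and 0.4676 with DM, matching
# the table's 0.9339 and 0.4675 up to rounding of the inputs.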
Item no.: 61   Item name: DD-E-1 1.6 mu   Scale: LOG
[Itemwise range graph: 5-50-95 percentile bars for Experts 1-8 and for the performance-weight and equal-weight DMs; the realization, 0.38, is marked on a log axis running from 0.002 to 18.]
Dry Deposition Range Graphs: itemwise