
Reliability Theory and its Application to Healthcare



  1. Reliability Theory and its Application to Healthcare

  2. Aims of session
  • Introduction to reliability theory – the framework and the three-step model
  • Highly reliable organisations – who are they? Can we learn from them?
  • Healthcare as a highly reliable industry – designing reliable systems of care
  • Care bundles – a reliability approach

  3. Reliability in healthcare
  • Healthcare is a high-hazard industry
  • We are not able to reliably deliver healthcare to all of our patients all of the time
  • Approximately 10% of patients admitted to hospital (around 900,000) experience an incident
  • 72,000 of these incidents/adverse events contribute to the death of patients
  • Many go unrecognised

  4. Patient safety – a global issue

  5. Impact
  Direct costs:
  • in England, healthcare-associated infections are estimated to cost over £1 billion per year
  • on average, preventable drug events resulted in an additional 4.6 days in length of stay
  • the estimated cost of preventable adverse events in the USA is $10.1 billion (Leape et al 1993)

  6. Is medicine a high-reliability industry?
  • The practice of medicine involves complex systems in which humans play a key role
  • Procedures are very technical and sometimes risky
  • Medicine should be a high-reliability industry
  • Unfortunately, the literature shows that it is fraught with error, can be unsafe, and at times is not effective
  • The potential for error and system failure is always there
  • Things happen on a daily basis: staff go off sick, equipment doesn't work, people forget to do something – we are all human, no matter how diligent
  • This is a normal part of a complex healthcare system

  7. What is reliability science?
  • Reliability principles are used successfully in industries such as manufacturing and air travel to evaluate, calculate and improve the overall reliability of complex systems
  • They can be used to design systems that compensate for the limits of human ability, improving safety and the rate at which a system consistently produces the desired outcomes

  8. How is it measured?
  • Reliability is measured as the inverse of the system's failure rate
  • A system with a defect rate of one in ten (10%) performs at a level of 10⁻¹
  • Reliability is defined as failure-free operation over time
  • Reliability = number of actions that achieve the intended result ÷ total number of actions taken
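To make the arithmetic concrete, here is a minimal sketch (hypothetical Python with made-up audit counts, not from the deck) that computes reliability from action counts and maps the defect rate onto the 10⁻ⁿ scale used in the next two slides:

```python
import math

def reliability(successes: int, total: int) -> float:
    """Reliability = actions achieving the intended result / total actions."""
    return successes / total

def level(successes: int, total: int) -> str:
    """Express the defect rate as an order of magnitude, e.g. '10^-1'."""
    failures = total - successes
    if failures == 0:
        return "no observed failures"
    return f"10^{math.floor(math.log10(failures / total))}"

# Hypothetical audit: 90 of 100 doses given as intended -> one failure
# in ten opportunities; 95 of 100 -> '5 failures or fewer out of 100'.
print(reliability(90, 100), level(90, 100))   # 0.9 10^-1
print(reliability(95, 100), level(95, 100))   # 0.95 10^-2
```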

  9. A reliability framework
  • 10⁻¹ performance on process measures indicates no articulated common process and an emphasis on training and reminders (international studies of adverse events in hospitals show an error rate of around 10%, suggesting this is the level at which most organisations currently perform)
  • 10⁻² performance on process measures indicates processes intentionally designed with tools and concepts based on the principles of human factors engineering
  • 10⁻³ or better performance on process measures indicates a well-designed system with attention to process structures and their relationship to outcomes

  10. Examples
  • 10⁻¹ = 80 or 90% success, 1 or 2 failures out of 10 opportunities – e.g. beta-blockers after acute MI
  • 10⁻² = 5 failures or fewer out of 100 opportunities – e.g. mortality in general surgery
  • 10⁻³ = 5 failures or fewer out of 1,000 opportunities – e.g. mortality in routine anaesthesia
  • 10⁻⁴ = 5 failures or fewer out of 10,000 opportunities
  A chaotic process is one that fails in more than 20% of opportunities. Almost all studies investigating how reliably clinical evidence is applied conclude that the level is 10⁻¹.

  11. Improving reliability
  • Level I – intent, vigilance and hard work
  • Level II – design systems for reliability: constraints, decision aids, reminders, checklists, bundles
  • Level III – Prevent (design for reliability), Identify (make failures visible), Mitigate (prevent or treat harm due to failures)

  12. How to reduce variability
  • Standardisation – care bundles, ICPs, guidelines
  • Checklists
  • Improve access to information
  • Reduce reliance on memory
  • Constraints
  • Reduce handovers
  • Simplify processes

  13. Standardisation concepts
  • Standardisation is done to provide the appropriate infrastructure
  • The 'what' we standardise should be based on good medical evidence
  • The 'how' does not need to be based on medical evidence, but rather on systems knowledge

  14. In a broader context
  • Aviation passenger safety is measured at 10⁻⁶
  • Nuclear power plants must demonstrate a design capable of operating at 10⁻⁶ before they can be built

  15. IHI three-tiered strategy for designing reliable care systems
  1. Prevent failure
  2. Identify and mitigate failure – identify failure when it occurs and intercede before harm is caused, or mitigate the harm caused by failures that are not detected
  3. Redesign the process based on the critical failures identified

  16. Designing effective and reliable systems
  • Have simple rules – complex systems are best handled this way
  • Feature redundancy – offers multiple layers of defence against error
  • Incorporate forcing functions – mechanisms that make it easy to do the right thing and hard to do the wrong thing (e.g. on a plane, the toilet light cannot be turned on without locking the door first); see the sketch after this list
  • Ensure people cannot work around the system – understand why people develop workarounds
  • Minimise reliance on human memory
  • Allow the expertise of the people performing the work to be used – standardised protocols provide a systematic approach
  • Incorporate technology where possible
  • Communicate the advantages of the system to clinicians – if staff do not see these, they will develop workarounds
  • Consider what happens if the system fails – be prepared
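As a toy illustration of a forcing function (a hypothetical Python sketch of the lavatory example above, not code from any real system), the design below makes the unsafe combination unrepresentable: the only way to turn the light on is to lock the door.

```python
class LavatoryDoor:
    """Toy forcing function: the light switch is wired through the
    door lock, so 'light on while door unlocked' cannot happen."""

    def __init__(self) -> None:
        self.locked = False
        self.light_on = False

    def lock(self) -> None:
        self.locked = True
        self.light_on = True   # light comes on as a side effect of locking

    def unlock(self) -> None:
        self.light_on = False  # unlocking forces the light off again
        self.locked = False

# There is deliberately no independent light switch: the only path to
# a lit lavatory is through lock(), so the safe state is guaranteed.
door = LavatoryDoor()
door.lock()
assert door.locked and door.light_on
door.unlock()
assert not door.light_on
```

The safe behaviour holds by construction rather than by asking people to remember a rule – the difference between Level I vigilance and Level II/III design in the framework above.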

  17. How Hazardous Is Healthcare? (Leape and Amalberti)

  18. Highly reliable organisations
  • A highly reliable organisation (HRO) is one that is known to be complex and risky, yet safe and effective
  • These organisations acknowledge the complexity of their systems, create an environment in which individuals can communicate openly about concerns, and design systems that make it difficult for failures to occur
  • HROs ask 'what happens when the system fails?', not 'what if the system fails?'

  19. Examples of highly reliable organisations
  • Aviation
  • Nuclear power plants
  • Air traffic control centres
  • Nuclear aircraft carriers

  20. Learning from highly reliable organisations
  • Other highly technical industries bear a similarity to medicine
  • Airline industry – thousands of flights take place every day in varying weather conditions; if a significant error occurred, the consequences would be dire
  • So why is the error rate in aviation not the subject of public and media interest?

  21. Lessons learned the hard way!

  22. The airline industry
  • The aviation industry recognised years ago that human error is an inevitable part of doing business
  • The industry chose to address error prevention and safety by improving communication, flattening team hierarchy and implementing fail-safe systems
  • These actions have made aviation a highly reliable industry

  23. High reliability organisations
  • Strong organisational culture of reliability
  • Continuous learning
  • Effective and varied patterns of communication
  • Human resource management practices that support reliability
  • Adaptable decision-making dynamics
  • Managing technology
  • System and human redundancy

  24. The need to apply a systems approach
  • Failure is predictable and can be detected
  • Failure arises out of systemic and organisational factors – not just the erratic behaviour of individuals
  • High-reliability departments create safety by anticipating and planning for unexpected events and future surprises

  25. Can reliability be applied to healthcare?
  • Although healthcare is not currently highly reliable, it has the potential to be
  • IHI and others believe that applying reliability principles to healthcare has the potential to reduce defects in care or care processes, increase the consistency with which appropriate care is delivered, and improve patient outcomes
  • To move in that direction we must overcome one of the largest barriers – the culture of medicine

  26. There is hope
  • One bright light in the field of healthcare with regard to high reliability is anaesthetics – no other medical discipline has come as close
  • The realisation was that the weak link in the process was the people, not the technology (in 1984 Cooper published a review of 329 incidents involving anaesthesia in a Massachusetts hospital, which identified that nearly 70% of these incidents related to human error)
  • Anaesthetists have learned lessons and implemented changes that the rest of the healthcare field is only beginning to acknowledge
  • In 1954, one out of every 1,500 patients died as a result of problems with their anaesthetic; by 2001 that risk had dropped to one in every 250,000

  27. Using care bundles to improve reliability
  • Bundles demand 'all or none' thinking and measurement
  • Bundles facilitate the identification of failures
  • Failures are actively used to redesign the process
  • Teamwork and communication have been shown to improve

  28. What are they?
  • A series of interventions relating to a particular treatment or procedure – ventilator bundle, central line bundle, tracheostomy bundle, etc.
  • When implemented together, they achieve significantly better outcomes than when implemented individually (IHI 2005)

  29. Why?
  • A way of reducing the gap between research and practice in clinical areas
  • Promotes evidence-based change
  • The bundle of care has a greater effect on positive patient outcomes than the same elements used in isolation
  • Reduces variation from unit to unit and clinician to clinician

  30. Care bundles
  Based on reliability principles – all-or-nothing compliance:
  • Plane takes off ok, one engine fails during flight, descends ok, lands ok = 75%
  • Plane takes off ok, one engine fails during flight, descends badly, crashes on landing = 25%
  • Plane takes off ok, engines ok during flight, descends ok, lands ok = 100%
  • Overall flight 'compliance' – 66%
  Would you want to travel on this airline? (See the sketch below.)
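To see why the bundle view matters, here is a minimal sketch (hypothetical Python, scoring the three flights above on four steps) comparing the per-step average with all-or-none measurement. The per-step figure matches the slide's roughly 66%, yet only one flight in three completed every step:

```python
# Each flight is scored on four steps: take-off, engines, descent, landing.
flights = [
    [True, False, True, True],    # engine failure in flight          -> 75%
    [True, False, False, False],  # engine failure, crash on landing  -> 25%
    [True, True, True, True],     # every step ok                     -> 100%
]

# Element-level compliance: average success over all individual steps.
steps = [step for flight in flights for step in flight]
element_level = sum(steps) / len(steps)

# All-or-none compliance: a flight counts only if EVERY step succeeded.
all_or_none = sum(all(flight) for flight in flights) / len(flights)

print(f"element-level: {element_level:.0%}")  # ~67% - looks tolerable
print(f"all-or-none:   {all_or_none:.0%}")    # 33% - 1 safe flight in 3
```

This is the 'all or none' measurement that bundles demand: the averaged figure hides the fact that only one flight in three went entirely to plan.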

  31. Evidence
  • IHI estimates that it could be possible to achieve an 80% reduction in surgical site infections (of which 3% could be fatal) and a 50% reduction in deaths from acute myocardial infarction
  • They also estimate that an average-sized U.S. hospital could save 18 lives from SSI and 108 lives from AMI each year as a result of implementing care bundles

  32. An example
  Level of reliability across all 4 elements of the ventilator bundle vs reduction in ventilator-acquired pneumonia:
  • < 95% compliance → 46% reduction in ventilator-acquired pneumonia
  • > 95% compliance → 59% reduction in ventilator-acquired pneumonia

  33. Outcomes
  • Evidence that the unit is achieving quality care and doing the right thing for the right patient
  • Average length of stay is falling
  • Sedation costs reduced – financial savings

  34. Central line bundle

  35. Central line infection rate

  36. Making the move
  • Need to move towards a culture focused on safety and reliability
  • Leadership-driven, with staff focused on safe and reliable care
  • Adopt standardised methods of communication, and create an environment in which people interact collaboratively and feel free to speak up if they see something worrying
  • Engineer systems with redundancy and safeguards that make doing the wrong thing difficult
  • Create a learning environment in which little problems are seen as indicators of deeper potential faults, to be addressed proactively
