
Designing Reliable Health Care Systems




Presentation Transcript


  1. Designing Reliable Health Care Systems

  2. No longer Murphy’s law but Langewiesche’s: “Everything that can go wrong usually goes right, and then we draw the wrong conclusion.”

  3. Definition of “Reliability” for Health Care • Reliability is failure-free operation over time. • For health care: the measurable capability of a process, procedure, or health service to perform its intended function in the required time under existing conditions. Berwick/Nolan, 12/03.
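As a minimal illustration (not from the slides) of how this definition can be quantified, the sketch below counts failures over opportunities and maps the result onto the performance bands used later in this deck (<90% and >95%). The function names and the example numbers are hypothetical.

```python
# Minimal sketch: reliability as failure-free operation over a
# series of opportunities. Thresholds follow this deck's levels
# (<90% = Level 1 territory, >=95% = Level 2); names are invented.

def reliability(failures: int, opportunities: int) -> float:
    """Fraction of opportunities in which the process performed
    its intended function (i.e., did not fail)."""
    return 1.0 - failures / opportunities

def design_level(r: float) -> str:
    if r < 0.90:
        return "Level 1 territory: intent, vigilance, hard work"
    elif r < 0.95:
        return "between levels: constrained human performance"
    return "Level 2+: design informed by reliability science"

# Hypothetical audit: 12 missed doses in 200 administrations
r = reliability(failures=12, opportunities=200)
print(f"{r:.2%} -> {design_level(r)}")   # 94.00% -> between levels
```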

  4. Three-level Design of Safe and Reliable Systems of Care • Prevent: design the system to prevent failure • Identify: design procedures and relationships to make failures visible when they do occur, so that they may be intercepted before causing harm • Mitigate: design procedures and build capabilities for fixing failures when they are identified, or for mitigating the harm caused by failures when they are not detected and intercepted. Earl Weiner, U of Miami; Nolan, BMJ, March 2000; Espinosa/Nolan, BMJ, March 2000.
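A hedged way to see how the three layers combine: treat prevent, identify, and mitigate as successive filters, so the chance that a failure reaches the patient is the product of the per-layer miss probabilities. All numbers below are invented for illustration, and real layers are rarely independent.

```python
# Sketch of prevent/identify/mitigate as three layers of defense.
# Probabilities are illustrative, not measured values; the
# independence assumption is a simplification.

p_failure     = 0.05   # prevention layer misses: the process fails
p_undetected  = 0.20   # failure is not made visible / intercepted
p_unmitigated = 0.50   # harm from the failure is not mitigated

p_harm = p_failure * p_undetected * p_unmitigated
print(f"P(harm reaches the patient) = {p_harm:.4f}")   # 0.0050
```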

  5. René Amalberti: how safe is health care? [Figure: activities plotted by fatal risk per exposure along an axis from “very unsafe” (10⁻²) to “ultra safe” (10⁻⁶), beyond which no system operates. Health care spans the whole axis: total medical risk and cardiac surgery on ASA 3–5 patients sit toward 10⁻², fatal iatrogenic adverse events mid-scale, and anesthesiology on ASA 1 patients and blood transfusion near 10⁻⁶, alongside civil aviation, railways (France), and the nuclear industry. Other reference points: Himalaya mountaineering, microlight or helicopter spreading activity, road safety, chartered flight, and the chemical industry (total). The barriers to increasing safety margins are excessive autonomy of actors, a craftsmanship attitude, ego-centered safety protections with vertical conflicts, and loss of visibility of risk with freezing of actions; crossing them means accepting limits on discretion, becoming a team player, agreeing to become “equivalent actors,” accepting the residual risk, and accepting that changes can be destructive.] Amalberti, Safety Science, 2001.
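To make the figure's axis concrete, the short loop below converts each order of magnitude into expected fatal events per 100,000 exposures; the rates are simply the axis values, nothing from the figure beyond that.

```python
# The span of Amalberti's scale: expected fatal events per
# 100,000 exposures at each order of magnitude on the axis.

for k in range(2, 7):
    rate = 10 ** -k
    print(f"fatal risk 1e-{k}: {rate * 100_000:>7.1f} per 100,000 exposures")
# 1e-2 -> 1000.0 ... 1e-6 -> 0.1
```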

  6. Design for Reliability • Level 1: Intent, vigilance and hard work • Level 2: Design informed by reliability science and research in human factors • Level 3: Design of integrated systems and high reliability organizations

  7. Improving Level 1 (<90%) Performance: Intent, Vigilance and Hard Work • Common equipment (and other structural standardization) • Standard order sheets • Personal checklists • Feedback of information on compliance • Awareness and training
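A minimal sketch of two of these tactics together, a personal checklist plus feedback of compliance information; the checklist items and numbers are invented examples, not a clinical standard.

```python
# Hypothetical personal checklist with compliance feedback,
# combining two Level 1 tactics. Items are invented examples.

CHECKLIST = [
    "verify patient identity",
    "confirm allergy status",
    "check dose against the standard order sheet",
]

def compliance(completed: set[str]) -> float:
    """Fraction of checklist items observed as completed."""
    return sum(item in completed for item in CHECKLIST) / len(CHECKLIST)

audit = {"verify patient identity", "confirm allergy status"}
print(f"checklist compliance: {compliance(audit):.0%}")   # 67%
```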

  8. Limits of Level 1 Design • There is a ceiling on human performance • Unconstrained human performance, guided by personal discretion alone, will get you no higher than 90% • Constrained human performance can reach 95% and higher
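A short worked example of why this ceiling matters: per-step reliability compounds across a multi-step process, so a process built only on unconstrained individual performance fails most of the time once it has many steps. Independence of steps is assumed, which is a simplification.

```python
# Per-step reliability compounds: a 10-step process at 90% per
# step succeeds end-to-end only about a third of the time.

def process_reliability(per_step: float, steps: int) -> float:
    return per_step ** steps

for r in (0.90, 0.95, 0.99):
    print(f"per-step {r:.0%}: 10-step process succeeds "
          f"{process_reliability(r, steps=10):.1%} of the time")
# 90% -> 34.9%, 95% -> 59.9%, 99% -> 90.4%
```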

  9. Moving to Level 2 (>95%) Performance: design informed by reliability science and research in human factors. Design concepts: • Standardization of processes • Building decision aids and reminders into the system • Taking advantage of existing habits and patterns • Making the desired action the default (based on evidence) • Creating redundancy • Scheduling using proper operations theory
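One of these concepts, creating redundancy, reduces to two lines of arithmetic: two independent checks that each catch 90% of failures together catch 99%. Independence is assumed; correlated failures (for example, both checkers trusting the same mislabeled order) erode the gain.

```python
# Redundancy sketch: probability that at least one of two
# independent 90%-reliable checks catches a failure.

p_catch = 0.90
p_miss_both = (1 - p_catch) ** 2              # both checks miss
print(f"one check:  {p_catch:.1%}")           # 90.0%
print(f"two checks: {1 - p_miss_both:.1%}")   # 99.0%
```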

  10. High Reliability Organizations • Sophisticated design of human interactions and working relationships. Weick’s attributes: 1. Preoccupation with failure (Prevent) 2. Sensitivity to operations (Prevent) 3. Reluctance to simplify interpretations (Identify) 4. Deference to expertise (Identify/Mitigate) 5. Commitment to resilience (Mitigate). Weick & Sutcliffe, Managing the Unexpected, 2001.

  11. Preoccupation with Failure • Any lapse of any size = symptom that something is wrong with the system • Reaction to even the most minor failures = swift and far-reaching • Success breeds over-confidence in current operations and intolerance of opposing views. Weick & Sutcliffe, Managing the Unexpected, 2001.

  12. Low-reliability organizations regard a near miss as evidence of their successful ability to avert disaster. This reinforces the belief that current operations are adequate and that no system changes are needed.

  13. Reluctance to Simplify • Accept and act as if there is more than one possibility • Encourage diverse opinions and experiences • Understand that simplifications produce blind spots. Weick & Sutcliffe, Managing the Unexpected, 2001.

  14. Commitment to Resilience • HROs understand that errors and unforeseen situations will arise • It is impossible to write procedures to cover every situation • They have the capacity for swift adaptation, speedy communication and innovative solutions • Build skills in simulation. Weick & Sutcliffe, Managing the Unexpected, 2001.

  15. Sensitivity to Operations • Understanding of the latent conditions that lead to failure • Clear alignment of purpose throughout the organization • Any difficulty in operations is attended to immediately • Well-developed situational awareness: everyone in the organization understands what is happening and why. Weick & Sutcliffe, Managing the Unexpected, 2001.

  16. Deference to Expertise • HROs disavow the role that system hierarchies and power play in safety • Nobody in the organization is reluctant to ask for internal or external help. Weick & Sutcliffe, Managing the Unexpected, 2001.

  17. References
  Amalberti, R. (2001). The paradoxes of almost totally safe transportation systems. Safety Science, 37, 109–126.
  Dekker, S. (2005). Ten Questions About Human Error. Lawrence Erlbaum, Mahwah, NJ.
  Weick, K. (1995). Sensemaking in Organizations. Sage, Thousand Oaks.
  Weick, K., & Sutcliffe, K. (2001). Managing the Unexpected. Jossey-Bass.
