High Reliability Organizations

Last edit: 17 Mar 2023 21:52 UTC by Raemon

High Reliability Organizations (HROs) are organizations that operate in high-risk domains but reliably avoid catastrophic failures. Examples include nuclear plants, air traffic control, and aircraft carriers. Research into HROs aims to determine how they achieve extreme reliability and whether these lessons apply to AI companies working on dangerous technologies.

Key HRO insights include: tracking failures to learn; avoiding oversimplification; staying operationally sensitive; committing to resilience; deferring to expertise; and an “informed culture” that reports issues, avoids blame, and fosters flexibility and learning.

The HRO literature may offer useful principles for AI companies, but differences in feedback loops and job functions may limit how far they transfer. Research into related fields, such as biotech, may also be relevant.

(Written by Claude, using "High Reliability Orgs, and AI Companies" as input. Feel free to rewrite.)

High Reliability Orgs, and AI Companies

Raemon · 4 Aug 2022 5:45 UTC · 86 points · 7 comments · 12 min read · LW link · 1 review

"Carefully Bootstrapped Alignment" is organizationally hard

Raemon · 17 Mar 2023 18:00 UTC · 258 points · 22 comments · 11 min read · LW link

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Andrew_Critch · 31 Mar 2021 23:50 UTC · 271 points · 64 comments · 22 min read · LW link · 1 review

Robust Artificial Intelligence and Robust Human Organizations

Gordon Seidoh Worley · 17 Jul 2019 2:27 UTC · 17 points · 2 comments · 2 min read · LW link

Do we have a plan for the "first critical try" problem?

Christopher King · 3 Apr 2023 16:27 UTC · −3 points · 14 comments · 1 min read · LW link