There’s a crack the speaker touches on that I’d like to know more about. None of the people able to understand that the safety reports meant “FIX THIS NOW OR IT WILL FALL DOWN” had the authority to direct the money to fix it. I’m thinking moral mazes rather than cracks.
Applying this to AI safety, do any of the organisations racing towards AGI have anyone authorised to shut a project down on account of a safety concern?