@Lukeprog, can you (1) update us on your working answers to the posed questions, in brief, and (2) share your current confidence (and, if you would like to, by proxy, MIRI's confidence as an organisation) in each of the 3:
1. Elites often fail to take effective action despite plenty of warning.
2. I think there's a >10% chance AI will not be preceded by visible signals.
3. I think the elites' safety measures will likely be insufficient.
Thank you for your diligence.