FWIW I also have >10% credence on x-risk this century, but below 1% on x-risk from an individual AI system trained in the next five years, in the sense Eliezer means it (probably well below 1% but I don’t trust that I can make calibrated estimates on complex questions at that level). That may help explain why I am talking about this policy in these harsh terms.