Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) kept my probability down to about .4. (I am glad I found out about Eliezer’s assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
Some of us would consider that a good outcome (relative to what we think is likely to happen) because at least humanity does not go extinct (a point Carl Shulman made on this site back in about 2013). We just consider it unlikely that any person retains enough control to keep even himself and his friends alive as AI becomes sufficiently capable.
To be precise, it is not strictly necessary for any person to retain any degree of control. The crisper way to say it is that for any part of humanity to survive, AI (i.e., all the models considered as a system that has some effect on the world) must care at least a tiny bit about what at least one person wants. Sadly, this property of caring at least a tiny bit is IMHO unlikely to be satisfied, because no one knows how to create an AI with that property and no one is likely to figure out how in time. We started calling it “AI alignment” about 12 years ago (before “alignment” came to mean “corporate brand safety”), but we could’ve called it “AI caring” or “AI inter-species regard”.
I believe that what I just wrote applies whether the AIs “win out in one crushing step, or win out in a trillion small familiar ways,” to quote from Katja’s final sentence.