Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Till then, my confidence in MIRI (and my knowledge that MIRI had enough funding to employ many researchers) was keeping my probability down to about .4. (I am glad I found out about Eliezer’s assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
The trouble with the choice of phrase “hyperintelligent machine sociopath” is that it gives the other side of the argument an easy rebuttal, namely, “But that’s not what we are trying to do: we’re not trying to create a sociopath.” In contrast, if the accusation is that (many of) the AI labs are trying to create a machine smarter than people, then the other side cannot truthfully use the same easy rebuttal. Our side can then continue with, “and they don’t have a plan for how to control this machine, at least not any plan that stands up to scrutiny.” The phrase “unaligned superintelligence” is an extremely condensed version of the argument I just outlined (where the verb “control” has been replaced with “align” to head off the objection that control would not even be desirable, because people are not wise enough and not ethical enough to be given control over something so powerful).