I basically endorse the article in full. I like the concluding bit too.
This brings me to my own contribution to the already-full genre of recommendations for people who want to contribute to AI safety:
Don’t work for a company that’s making frontier fully-autonomous AI capabilities progress even faster.
Don’t live in the San Francisco Bay Area.
Cheers,
Gabe
Crossposting Eliezer’s comments.
Thread between Eliezer and Vitalik
Eliezer Yudkowsky replying on Twitter:
Vitalik Buterin:
Eliezer Yudkowsky:
Vitalik Buterin: Agree!
Eliezer Yudkowsky:
Eliezer Yudkowsky used ROT13 to hide an example rationalization.
After thinking about it yourself, hover over the box below to reveal his example.
There were a lot of atoms in Jesus’s body and some of them are no doubt in this saltshaker, given Avogadro’s number and the volume of Earth’s surface.
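For readers who want to see the kind of arithmetic this rationalization leans on, here is a rough back-of-envelope sketch. All figures are order-of-magnitude assumptions of mine (body mass, average atomic weight, size of the mixing reservoir, saltshaker contents), and the dubious premise of uniform mixing over ~2000 years is doing all the work:

```python
# Back-of-envelope estimate of the arithmetic behind the rationalization.
# All figures are rough order-of-magnitude assumptions, not sourced values.

AVOGADRO = 6.022e23

# Assume a ~70 kg human body made of light atoms averaging ~10 g/mol.
atoms_in_body = 70_000 / 10 * AVOGADRO            # ~4e27 atoms

# Assume those atoms have mixed uniformly through a near-surface reservoir
# (oceans plus upper crust) of roughly 2e21 kg, again at ~10 g/mol.
reservoir_atoms = 2e24 / 10 * AVOGADRO            # ~1e47 atoms (2e21 kg = 2e24 g)

fraction = atoms_in_body / reservoir_atoms        # ~4e-20

# A saltshaker holding ~50 g of NaCl (~58.4 g/mol, 2 atoms per formula unit).
saltshaker_atoms = 50 / 58.4 * 2 * AVOGADRO       # ~1e24 atoms

expected_overlap = saltshaker_atoms * fraction
print(f"Expected shared atoms in the saltshaker: ~{expected_overlap:.0e}")
# With these assumptions, on the order of 1e4-1e5 atoms -- so the claim is
# numerically plausible only if you grant the uniform-mixing premise.
```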
Related writing by Eliezer
https://www.lesswrong.com/posts/f4CZNEHirweN3XEjs/teachable-rationality-skills?commentId=F837zHgkrTA2rqusn
https://www.reddit.com/r/rational/comments/dk4hh8/comment/f4po393
https://x.com/ESYudkowsky/status/1900621719225987188
See Also: Against Devil’s Advocacy
AI risk is not common knowledge. There are many people who do not believe there’s any risk. I really wish people who make arguments of the following form:
would acknowledge this fact. It is simply not true that everyone in frontier labs thinks this way. You can ask them! They’ll tell you!
It would be nice if we were just in a prisoner’s dilemma-type coordination problem. But when someone is publicly saying “hitting DEFECT has no downsides whatsoever, I plan on doing that as much as possible”, you need to take this into consideration.
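As a toy illustration of the distinction being drawn here, compare a standard prisoner’s dilemma with a player who assigns no cost at all to mutual defection. The payoff numbers below are hypothetical, chosen only to make the structure visible:

```python
# Toy payoff matrices (hypothetical numbers, illustrative only).
# Entries are the row player's payoff for (row_action, column_action).

# Classic prisoner's dilemma: defecting dominates, yet everyone prefers
# mutual cooperation to mutual defection -- so a binding deal helps everyone.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# A player who genuinely believes defecting has no downside: for them,
# mutual defection is no worse than mutual cooperation, so there is no
# deal they have any reason to want or to keep.
no_downside = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 5}

for name, payoffs in [("prisoner's dilemma", pd),
                      ("no perceived downside", no_downside)]:
    gain_from_deal = payoffs[("C", "C")] - payoffs[("D", "D")]
    print(f"{name}: gain from mutual cooperation over mutual defection = {gain_from_deal}")
```

In the first case coordination is hard but worth attempting; in the second there is nothing to coordinate on, which is the situation the quoted public statement describes.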