I gave it a good review on Goodreads haha.
The review
If you think it’s nonsense, please read it! Because logically:
1. It is currently #7 on the NYT bestseller list. Nearly every reviewer appears moderately convinced, and far more experts and individuals have endorsed it than have tried to debunk it.
2. It argues (with confidence!) that humanity will die unless WWII-level efforts are made against AI risk.
So even if you think it is nonsense, do you really want people in one echo chamber to think it is the simple truth acknowledged by experts, while people in another echo chamber think it is nonsense not even worth debunking? The only thing the two sides agree on is that the answer is so obvious it’s not even worth listening to the other side.
Do not let that happen to such an important question right under your nose. Make the effort to find out WHY you disagree so intensely with so many smart people!
Especially if you are a wonderful person who often tries to make the world a better place!
(PS: My personal take is that the book was preaching to the choir for me haha, but the historical stories were still very interesting to read, and the occasional humor was well done. I don’t fully agree with the solutions in the last chapters, but they feel saner than many of the other proposals I’ve read from people in the field.)
On LessWrong I didn’t nitpick this book in particular, but I’ve consistently disagreed with some MIRI positions (e.g. they think it’s futile to try to increase AI alignment spending beyond 0.1% of AI capabilities spending, because they consider the hope that alignment will be solved first completely negligible unless capabilities work is shut down).