I was pushing back on a similar attitude yesterday on twitter → LINK.
Basically, I’m in favor of people having nitpicky high-decoupling discussion on lesswrong, and meanwhile doing rah rah activism action PR stuff on twitter and bluesky and facebook and intelligence.org and pauseai.info and op-eds and basically the entire rest of the internet and world. Just one website of carve-out. I don’t think this is asking too much!
Yeah, I agree. The audience for this book isn’t LessWrong, but lots of people seem to be acting as if pushing back on LessWrong is a defection that will hurt the book’s prospects.
That’s fair!
I’m in the Twitter thread with Steve. I’ll just note that I don’t think it’s realistic to expect the world’s reaction to be more passionate and supportive than the LW community’s signaled reaction.
Why not? It seems extremely reasonable to have a place for persnickety internal-ish discussion, and other content somewhere else?
Are most of the persnickety internal-disagreers actually signalling that they intend to promote the book, or at least not downplay its thesis? I don’t think rationalists at large have a great track record of presenting a unified front to the outside world, or of leaving nuance aside when the nuance would stand in the way of the important parts of the communication. In other words, I don’t think the two types of content are on different platforms. I think it’s usually the same content on both.
In general, I’ve noticed that a lot of people think “scout mindset” means never having to pick up a (metaphorical) rifle. That’s a good way to have a precise model of how you’re going to die, without having any hand in preventing it. The most useful people in the world right now are scouts who are willing to act like soldiers from time to time.
One of the persnickety internal disagreers here. I have recommended IABIED to those of my acquaintances who I expect might read it. I don’t really have any other platform to shout about it from, but if I did, I would certainly have used it to promote the book, leaving all nitpicking out of it.
I, at least, do explicitly make a distinction between “a place for persnickety internal discussion” and “the public-facing platform”, and would behave differently between the two.
I gave it a good review on Goodreads haha.
The review
If you think it’s nonsense, please read it! Because logically:
1. It is currently #7 on the NYT bestseller list. Nearly every reviewer appears moderately convinced, and far more experts and individuals have endorsed it than have tried to debunk it.
2. It argues (with confidence!) that humanity will die unless WWII-level efforts are made against AI risk.
So even if you think it is nonsense, do you really want people in one echo chamber to think it is the simple truth acknowledged by experts, and people in another echo chamber to think it is nonsense not even worth debunking? The only thing the two sides agree on is that the answer is so obvious it’s not even worth listening to the other side.
Do not let that happen to such an important question right under your nose. Make the effort to find out WHY you disagree so intensely with so many smart people!
Especially if you are a wonderful person who often tries to make the world a better place!
(PS: My personal book review is: the book was preaching to the choir in my case, haha, but it was still very interesting to read the historical stories, and the occasional humor was well done. I don’t fully agree with the solutions in the last chapters, but they feel saner than many other solutions I’ve read about from others in the field.)
On LessWrong I didn’t nitpick this book in particular, but I’ve consistently disagreed with some MIRI positions (e.g., they think it’s futile to try to increase AI alignment spending beyond 0.1% of AI capabilities spending, since the hope that alignment will happen first is completely negligible unless we shut down capabilities).
In principle it makes sense. But in reality right now, the only place where there’s a sizable MIRI-aligned community is the community that’s entirely going the persnickety route. I’m open to different counterfactual comparisons; I’m just noting that compared to the world where there’s a sizable MIRI-aligned community that shows support for MIRI, this world is disappointing.
LessWrong is not an activist community, and should not become one. I think there are some promising arguments for trying to create activist spaces and communities (as well as some substantially valid warnings). I am currently kind of confused about how good it would be to create more of those spaces, but even if it is a good idea, people should not try to make LessWrong into one.
I don’t see “how you express yourself on a highly argumentative web forum” as limiting “how you express yourself at a launch party” or “how you express yourself on a popular podcast” or in other places.