What are we doing about the MIRI book inbound?
Claim: The MIRI book might be a very big deal, read by lots of people
Mostly this is on vibes: the MIRI team is trying hard, seems to be doing very well, and is getting a lot of buzz, great blurbs, some billboards, etc.
I saw this tweet:
“E.g., the book is likely to become a NYT bestseller. The exact position can be improved by more pre-orders. (The figure is currently at around 5k pre-orders, according to the Q&A; +20k more would make it a #1 bestseller.)”
Chat says about that:
“If preorders = 5k, you’re probably looking at 8k–15k total copies sold in week 1 (preorders + launch-week sales). Recently, nonfiction books debuting around 8k–12k week-1 copies often chart #8–#15 on the NYT list.
Lifetime sales ranges:
Conservative: 20k–30k copies total (good for a nonfiction debut with moderate buzz).
Optimistic: 40k–60k (if reviews, media, podcasts, or TikTok keep it alive).
Breakout: 100k+ (usually requires either a viral moment, institutional adoption, or the author becoming part of a big public debate).”
Is that a lot? I don’t actually know; my guess would be that it’s not that many, but it’s a decent number and might generate a lot of buzz, commentary, etc. This is a major crux, so I’d be interested in takes.
If true: there might be an influx of people into this space (or people hoping to get into it), AND the space could lose a lot of impact if it’s not ready to make use of this pipeline.
I think the arguments here are clear, but let me know if not.
Therefore, people/orgs should be thinking about how to make the best pipelines for the inflow.
e.g.
If you have next steps for people (BlueDot, CEA, MATS), be ready to retweet / restack MIRI’s materials and be like “if you care about this, here’s a way to get involved”
Similarly, maybe pitch MIRI on putting your org / next steps on their landing page for the book and see if they think that makes sense
Landing page / resource hub: “So you just read the MIRI book?” page that curates your content, fellow orgs’ resources, and next steps. Make it optimized for search and linkable.
Other?
Very interested in takes!
I also think this is likely to cause folks to look into the situation and ask, “is it really this bad?” I think it’s helpful to point them to the fact that yes, Yudkowsky and Soares are accurately reporting that the AI CEOs themselves think they’re gambling with the world at roughly Russian-roulette odds [1]. I also think it’s important to emphasize that a bunch of us have disagreements with them, whether nuanced or blunt, and are still worried.
Why? Because lots of folks live in denial that AI as smart as humans could ever exist, much less superintelligent AI soon. Often their defense mechanism is to pick at bits of the story. Reinforcing that even people who pick at bits of the story are still worried is a helpful thing.
[1] Not trying to start round ninety-zillion of the fight about whether this is a good or bad idea, etc.!
True, although I wish more people would engage with the common anti-AI-x-risk argument of “tech CEOs are exaggerating existential risk because they think it’ll make their products seem more important and potentially world changing, and so artificially boost hype”. Not saying I agree with this, but there’s at least some extent to which it’s true, and I think this community often fails to appropriately engage with and combat this argument.
In general, this is why “appeal to authority” arguments should be avoided when the authorities in question are widely seen as untrustworthy and as having ulterior motives. At most, people like Geoffrey Hinton, who are seen as reputable and not morally compromised, make better subjects for an appeal to authority; but mostly, rather than appealing to authority at all, we should try to bring things back to the object-level arguments.
“I think this community often fails to appropriately engage with and combat this argument.”
What do you think that looks like? To me, it looks like “give object-level arguments for AI x-risk that don’t depend on what AI company CEOs say.” And I think the community already does quite a lot of that, although giving really persuasive arguments is hard (I hope the MIRI book succeeds).
Here are some of my attempts at it, which I think stand out as unusual compared to how most people respond; there are subverbal insights in how I approached this that I haven’t yet nailed down, hence the link instead of an explanation.
I’d currently summarize the view not as “CEOs scare people” but as “any publicity seems to be good publicity, even when warning of extinction,” as if most people interpret the warnings of extinction as cynical lies even when they’re backed up by argumentation. I suspect that at least part of what’s going on is that when someone doesn’t comprehend the details of an argument, there’s some chance they interpret it as an intentional lie (or some other kind of falsehood, perhaps accidental on the author’s part and yet valuable to the egregore).
Yeah, that seems right and good to highlight!
“if you care about this, here’s a way to get involved”
My understanding is that MIRI expects alignment to be hard, expects that an international treaty will be needed, and believes that a considerable proportion of the work that gets branded as “AI safety” is either unproductive or counterproductive.
MIRI could of course be wrong, and it’s fine to have an ecosystem where people are pursuing different strategies or focusing on different threat models.
But I also think there’s some sort of missing mood here, insofar as the post is explicitly about the MIRI book. The ideal pipeline for people who resonate with the MIRI book may look very different from the typical pipelines for people who get interested in AI risk (and indeed, in many ways I suspect the MIRI book is intended to spawn a different kind of community and a different set of projects than the community/projects that dominated the 2020–2024 period, for example).
Relatedly, I think this is a good opportunity for orgs/people to reassess their culture, strategy, and theories of change. For example, I suspect many groups/individuals would not have predicted that a book making the AI extinction case so explicitly and unapologetically would have succeeded. To the extent that the book does succeed, it suggests that some common models of “how to communicate about risk” or “what solutions are acceptable/reasonable to pursue” may be worth re-examining.
But if what’s actually happening is that people interpret it as cynical dishonesty (the speakers don’t believe their own doom arguments, so the real motive must be whatever the next most likely reason is to make a doom argument), which seems to be a common reaction, then it may backfire. I find it very hard to tell whether this is happening, and I know of many people who think it’s the only thing that happens. I certainly think it’s something that happens at least sometimes.
I’ve been thinking this same thing for a while now, but coming at it from a different direction. I’m worried, and I’m not sure what to do about it. I’ve tried writing up some suggestions, but nothing has felt useful enough to post. To try and explain my position, I’ll give a vague ramble comment here instead.
--
Yeah, I think it’s possible the book will be a big deal. If it makes a significant splash, the Overton window might take a big knock, all at once. It’s possible that the collective eye of the world turns onto us. Onto LessWrong. How do we prep for that?
In a way that I adore, this community is a bunch of weirdos. We are not normal. We hold opinions that are vastly different from most of the world. If this book gets the reception it deserves, I think it’ll be pretty easy to spin up articles dunking on LW. I imagine something like “Eugenics loving, Polygamous, vegan, SBF funded, Shrimp obsessed, Harry Potter fanfic, doomsday, sex cult, warns end times are near, in NYTs best seller”.
I am afraid of the eye, looking down at us, calling us bad people, and I am afraid of the split. I do not want there to be the Blue tribe, the Red tribe, and the Grey tribe. I do not want this issue to become a culture war topic. How do we plan to avoid this outcome? If the book is successful, how do we steer the narrative away from “Group X wants to kill us all by doing Y!” and more into the realm of “Oh, this is a big deal, and we need to all work together to solve it”?
And how do we avoid being Carrie-ed in the cultural spotlight? How do we avoid people protesting in ways that are not beneficial to the cause? If we ‘win’ this thing, it seems to me, we need the support of the average person. But where is our relatable figure? Yudkowsky is a wonderful writer and a quick-thinking speaker. But he is not a relatable figurehead, and he is, unfortunately, somewhat easy to take jabs at.
Relevant fiction here is An Absolutely Remarkable Thing by Hank Green, in which the protagonist, April May, is thrown into the world’s spotlight after an encounter with a mysterious robot. I’d recommend the book any time, but it feels especially relevant now.
As stated, I am afraid, and it’s possible my anxieties are projections of my own feelings. I’d be thankful to someone who could calm my anxiety with some logical argument. But, as of now, I think this emotion is telling me something important.
Is anyone using the book as a funnel to LessWrong? I don’t think MIRI is (afaik). The only event going on in the UK (again, afaik) is being jointly hosted by Pause AI and Control AI, with some other local community members helping out, and it’s not going to be a funnel for LW at all. I assume Lighthaven is doing something (haven’t checked), but are they going to say “If you like this book, you’ll love our online forum”?
Moreover, is using LessWrong as the default funnel a good idea in the first place? I’d guess not. I know lots of people (notably Oliver Habryka) don’t approve of Pause AI or Control AI, but I assume there must be other directions for suddenly-invigorated normies to be pointed in (though I’ve not actually looked for them).