Malo, CEO at Machine Intelligence Research Institute (MIRI)
¯\_(ツ)_/¯
Obviously just one example, but Schneier has generally been quite skeptical, and he blurbed the book.
but subconsciously I notice that MIRI was cleaning house before the book launch (e.g. taking down EY’s light novel because it might look bad)
Do you have any other concrete example here besides the novel?
Maybe that’s wrong; maybe the issue was lack of reach rather than exhausting the persuadees’ supply, and the book-packaging + timing will succeed massively. We’ll see.
This is certainly the hope. Most people in the world have never read anything that anyone here has ever written on this subject.
FWIW, and obviously this is just one anecdote, but a member of Congress who read an early copy, and really enjoyed it, said that Chapter 2 was his favorite chapter.
The market has now resolved to yes, with Paul confirming.
Huh, I thought I fixed this. Thanks for flagging; will make sure it gets fixed now.
Also oddly, the US version is on many of Amazon’s international stores including the German store ¯\_(ツ)_/¯
Schneier is also quite skeptical of the risk of extinction from AI. Here’s a table o3 generated just now when I asked it for some examples.
| Date | Where he said it | What he said | Take-away |
|---|---|---|---|
| 1 June 2023 | Blog post “On the Catastrophic Risk of AI” (written two days after he signed the CAIS one-sentence “extinction risk” statement) | “I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war — a risk worth taking seriously, but not something to panic over.” (schneier.com) | Explicitly rejects the “extinction” scenario, placing AI in the same (still-serious) bucket as pandemics or nukes. |
| 1 June 2023 | Same post, quoting his 2018 book Click Here to Kill Everybody | “I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future.” (schneier.com) | Long-standing view: most dangers come from how humans use technology we already have. |
| 9 Oct 2023 | Essay “AI Risks” (New York Times, reposted on his blog) | Warns against “doomsayers” who promote “Hollywood nightmare scenarios” and urges that we “not let apocalyptic prognostications overwhelm us.” (schneier.com) | Skeptical of the extinction narrative; argues policy attention should stay on present-day harms and power imbalances. |
FWIW, I think Jack Shanahan definitely counts as a skeptic.
My favorite reaction to the Bernanke blurb. From a friend who works on AI policy in DC:
Agree. I think Google DeepMind might actually be the most forthcoming about this kind of thing, e.g., see their Evaluating Frontier Models for Dangerous Capabilities report.
Apple Music?
I’d certainly be interested in hearing about them, though it currently seems pretty unlikely to me that it would make sense for MIRI to pivot to working on such things directly, as opposed to encouraging others to do so (to the extent they agree with Nate/EY’s view here).
I think this is a great comment, and FWIW I agree with, or am at least sympathetic to, most of it.
If you are on an airplane or a train, and you can suddenly work or watch on a real theater screen, that would be a big game. Travel enough and it is well worth paying for that, or it could even enable more travel.
Ben Thompson agrees in a follow-up (paywalled):
Vision Pro on an Airplane
I tweeted about this, but I think it’s worth including in the Update as a follow-up to last week’s review of the Vision Pro: I used the Vision Pro on an airplane over the weekend, sitting in economy, and it was absolutely incredible. I called it “life-changing” on Twitter, and I don’t think I was being hyperbolic, at least for this specific scenario:
The movie watching experience was utterly immersive. When you go into the Apple TV+ or Disney+ theaters, with noise-canceling turned on, you really are transported to a different place entirely.
The Mac projection experience was an even bigger deal: my 16″ MacBook Pro is basically unusable in economy, and a 14″ requires being all scrunched up with bad posture to see anything. In this case, though, I could have the lid actually folded towards me (if, say, the person in front of me reclined), while still having a big 4K screen to work on. The Wifi on this flight was particularly good, so I had a basketball game streaming to the side while I worked on the Mac; it was really extraordinary.
I mentioned the privacy of using a headset in my review, and that really came through clearly in this use case. It was really freeing to basically be “spread out” as far as my computing and entertainment went and to feel good about the fact I wasn’t bothering anyone else and that no one could see my screen.
There is no sign that anyone plans to actually offer MLB or other games in this mode.
That may be right but then the claim is wrong. The true claim would be “RSPs seem like a robustly good compromise with people who are more optimistic than me”.
IDK man, this seems like nitpicking to me ¯\_(ツ)_/¯. Though I do agree that, on my read, it’s technically more accurate.
My sense here is that Holden is speaking from a place where he considers himself to be among the folks (like you and me) who put significant probability on AI posing a catastrophic/existential risk in the next few years, and “people who have different views from mine” is referring to folks who aren’t in that set.
(Of course, I don’t actually know what Holden meant. This is just what seemed like the natural interpretation to me.)
And then the claim becomes not really relevant?
Why?
Responsible scaling policies (RSPs) seem like a robustly good compromise with people who have different views from mine
2. It seems like it’s empirically wrong based on the strong pushback RSPs received, so at least you shouldn’t call it “robustly”, unless you mean a kind of modified version that would accommodate the most important parts of the pushback.
FWIW, my read here was that “people who have different views from mine” was in reference to these sets of people:
Some people think that the kinds of risks I’m worried about are far off, farfetched or ridiculous.
Some people think such risks might be real and soon, but that we’ll make enough progress on security, alignment, etc. to handle the risks—and indeed, that further scaling is an important enabler of this progress (e.g., a lot of alignment research will work better with more advanced systems).
Some people think the risks are real and soon, but might be relatively small, and that it’s therefore more important to focus on things like the U.S. staying ahead of other countries on AI progress.
LFG!
#7 Combined Print & E-Book Nonfiction
#8 Hardcover Nonfiction