Great, look forward to hearing what you think.
I can’t speak to exactly who IABIED was targeting, but I spent a lot of effort to make mine as accessible as possible to someone who (a) would read a non-fiction book about AI, but (b) has no background in science (this includes many people with influence). The logic being that one might lose non-science readers if the book were written more toward science people, but is unlikely to lose science readers if it’s written generally but engagingly.
Oh well, but you were correct.
You aren’t really my target audience but I’d be curious to hear what you think. I’m re-reading it myself.
While this has been a decent exchange, I’m not sure continuing it would be that useful to either of us?
Regarding anecdata, you also have to take into account Scott Alexander disliking the scenario, Will being disappointed, Shakeel thinking the writing was terrible, and Buck thinking that they didn’t sufficiently argue their case. And that’s not even including the people who overtly disagree with the main argument.
Anyway, we shall see how it turns out (and I sincerely hope it has a positive impact).
I chose ~20% for a reason, but we can be more precise and say 15% to still keep it under 300 pages.
Also, if you truly think space is at such a premium, then the scenario could be scaled back in favor of explaining how the policy proposals would work.
An excellent point, and it’s important to highlight the work that needs to be done.
(I think your stance is more obvious to people in the legal/regulatory/policy space; of course, specific definitions are required and it can’t work otherwise).
Yes, a test would be nice but impossible.
I’ll just say I strongly disagree that 20% more length means 20% fewer readers. I would think it wouldn’t change readership much at all. The people who would read such a book wouldn’t drop off quite so dramatically.
Yes, I wrote that in the review. I think that they made the wrong choice because that is not how most people will consume the content.
Do you happen to have a good argument why the book proper couldn’t be 20% longer to better make their case?
Oh yes, guesses all over the place. And very difficult to meaningfully arbitrate.
(FYI, my opinion is that mine hasn’t reached more people for several reasons, such as my not having name recognition, an existing larger following, or institutional support.
But whenever someone reads it, they seem to really like it.)
To both of you: has either of you read my book for comparison?
Alice,
Yes, we feel the same way on multiple fronts. I still don’t understand why certain decisions were made that reduced some easy wins. Oh well, we shall see.
yams,
It’s true that it’s better, but there is SO much further it could have gone.
Actually, I think 6,000-8,000 copies can be largely driven by the community (funding book groups), and there was enough of an institutional push that it should help as well.
The concern is that they get the sales but it’s the wrong book, so the thing one actually wants (the reader now being aware/engaged) happens less than it would with something else. Perhaps it will polarize in bad ways… or good ways. Experiment indeed.
I think the target audience includes those in various positions and with various backgrounds that would benefit from a more thorough presentation of the ideas, so it’s not just the style issue.
It might depend on what, exactly, rallying means and how you see the implications of that. I thought EY’s appearance on Hard Fork, for example, wasn’t good and the message of AI safety might have been better presented by someone else.
As you’ve read, I agree with Buck that the book doesn’t sufficiently argue its main points, and this makes it problematic.
We may just disagree on how difficult it would be to recommend my book (as an example) along with IABIED?
There is a wide range of options, and some require little effort and wouldn’t take away much from IABIED compared to the benefit (yes, I think the difference is that great).
Hi there, for your consideration, I share my review: IABIED Review—An Unfortunate Miss — LessWrong
Also, I just posted my review: IABIED Review—An Unfortunate Miss — LessWrong
Just posted my review: IABIED Review—An Unfortunate Miss — LessWrong
Thank you kindly, and I look forward to that comparison.
Good to know and I appreciate you sharing that exchange.
You are correct that such a thing is not in there… because (if you’re curious) I thought, strategically, it was better to argue for what is desirable (safe AI innovation) than to argue for a negative (stop it all). Of course, if one makes the requirements for safe AI innovation strong enough, it may result in a slowing or restricting of developments.
Cool. No expectations. Hope you find some value :)
Would you like to?
(I could send along an Audible credit or a physical copy.)
Do you happen to have read my beginner-friendly book about AI safety/risk, “Uncontrollable”?
I think a comparison/contrast by someone other than me would be beneficial (although I’ll do one soon).
Buck, did you read my book “Uncontrollable”?
Given your review, it’s possible my book is the response to what you’re alluding to here: “I don’t know of a resource for laypeople that’s half as good at explaining what AI is, describing superintelligence, and making the basic case for misalignment risk.”
I’m only 40 pages into the new book, and of course inherently conflicted, so it’s better to have the thoughts of someone who has read both and isn’t me. That said, people have said mine is the best introduction to AI risk for laypeople.
I had hoped EY’s book would clearly supplant mine, but the more reviews I read, the less that seems to be the case.
(happy to get you a copy, physical or audio, if desired).
“Knowledge is power. Knowledge about what might happen in your future is power, too. As such, it’s important that you have an informed opinion of the promises and perils of artificial superintelligence so you can take effective action. Otherwise, as is likely the case right now, there are small groups of people who largely decide what will happen in the development of AI. Their decisions already have a dramatic impact on our lives. The decisions they make in the next few years may be of far greater consequence. If you don’t engage with the issues surrounding AI, they will decide your future for you.”
- “Uncontrollable”, p. 15