Nonfiction Books Thread

I wrote a review of Superintelligence: Paths, Dangers, Strategies. It’s also an essay on how the halo effect shapes the way ideas are perceived.

This interlude is included despite the fact that Hanson’s proposed scenario contradicts the main thrust of Bostrom’s argument, namely, that the real threat is rapidly self-improving A.I.
I can’t say I agree with your reasoning about why Hanson’s ideas are in the book. I think the book’s content is written with accuracy in mind first and foremost, and I think Hanson’s ideas are there because Bostrom thinks they’re genuinely a plausible direction the future could take, especially in circumstances where recursively self-improving A.I. of the kind traditionally envisioned turns out to be unlikely, difficult, or impossible for whatever reason. I don’t think those ideas are there in an effort to mine the halo effect.
And really, the book’s main thrust is in the title: Paths, Dangers, Strategies. Even if these outcomes are not necessarily mutually exclusive (including the possibility of singletons forming out of initially multipolar outcomes, as discussed from p. 176 onwards), talking about potential pathways is very obviously relevant, I would have thought.
I think that we are both right.
Hypothetically, if there were some famous university professor who had written at length about the possibility of, I dunno, simulated superintelligent ant hives, then I think that Bostrom might have felt compelled to include a discussion of the “superintelligent ant hive hypothesis” in his book. He’s striving for completeness, at least in his coverage of the high-level aspects of the A.I. Risk landscape. It would also be a huge slight to the theory’s originator if he left out any reference to it. And finally, Bostrom probably doesn’t want to place himself in the position of arbiter of which ideas get taken seriously, when lots of people probably already think of large parts of A.I. Risk as loony.
So, I don’t think Bostrom was sitting in his office plotting how to make his book a weaponized credulity meme. But I also felt that the inclusion of the Hanson material was just a bit forced.
Yeah, I pretty much agree, but the important point is that any superintelligent ant hive hypothesis would have to be at least as plausible and relevant to the topic of the book as Hanson’s ems to make it in. Note that Bostrom dismisses brain–computer interfaces as a pathway to superintelligence fairly quickly.
Do No Harm, Marsh (elegantly written and moving neurosurgeon memoir on the theme of iatrogenics; I did disagree with his comments on the cost-benefit of operating in one case, though)
Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money, Popper (review)
The Party: The Secret World of China’s Communist Rulers, MacGregor
Drop Dead Healthy, Jacobs (review)
M. Atwater, The Avalanche Hunters. Philadelphia: Macrae Smith Co., 1968. (Russian translation: М. Отуотер, Охотники за лавинами, 2nd ed., Moscow: Mir, 1980.) A wonderful memoir; it reminds me a bit (in spirit, not style) of Kipling’s The Head of the District and The Bridge-Builders. Contains examples of real-life problems—risking many lives to save one—with a consequentialist moral.