Some thoughts on reading Superintelligence (2014). Overall it’s been quite good, and it’s nice to read through such a thorough overview even when little of it is new to me. Oddly, I’ve gotten comments that people often stop reading it partway through. This puts me in mind of a physics professor who remarked to me that they used to find textbooks impenetrable, but now find it quite fun to leaf through a new introductory textbook. And now my brain is relating this to the popularity of fanfiction that re-uses familiar characters and settings :P
By god, Nick Bostrom thinks in metaphors all the time. Not that this is bad; in fact, it’s very interesting.
The presentation of intelligence explosion kinetics really could stand to be less one-dimensional about intelligence. Or rather, perhaps it should ask us up front to accept that there is some one-dimensional measure of capability that is growing superlinearly, which can then be parlayed into all the other things we care about via the “superpower”-style arguments that appear two chapters later.
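If I’m remembering the kinetics chapter right, it models growth as roughly

$$\frac{dI}{dt} = \frac{\text{optimization power}}{\text{recalcitrance}},$$

where $I$ is a single scalar level of intelligence. That scalar $I$ is exactly the one-dimensionality I’m complaining about: the equation is only as meaningful as the claim that capability can be projected onto one axis.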
Has progress on AI seemed to outpace progress on augmenting human intelligence since 2014? I think so, and perhaps this explains why Bostrom_2014 puts more emphasis on whole brain emulation than a book written today would. But perhaps not; perhaps instead I’ve/we’ve been unduly neglecting thinking about alternate paths to superintelligence in the last few years.
Human imitations (machine learning systems trained to reproduce the behavior of a human within some domain) seem conspicuously absent from Bostrom_2014’s toolbox of parts to build an aligned AI out of. Is this a reflection of the times? My memory is blurry but… plausibly? If so, I think that’s a pretty big piece of conceptual progress we’ve made.
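For concreteness, here’s a minimal behavior-cloning sketch of what I mean by a human imitation; everything in it (the linear policy, the synthetic “human” data) is a toy assumption of mine, not anything from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are logged demonstrations of a human acting in some domain:
# 4-dimensional observations paired with the human's scalar actions.
states = rng.normal(size=(500, 4))
human_policy = np.array([0.5, -1.0, 0.25, 2.0])  # hypothetical human behavior
actions = states @ human_policy + rng.normal(scale=0.1, size=500)

# "Training" the imitator = least-squares fit to the human's behavior.
# The imitator has no reward to maximize; it only predicts the human.
imitator_weights, *_ = np.linalg.lstsq(states, actions, rcond=None)

def imitate(state):
    """Return the action the imitator predicts the human would take."""
    return state @ imitator_weights

print(imitate(np.array([1.0, 0.0, 0.0, 0.0])))  # close to 0.5
```

The hoped-for safety property lives in the objective: the system is optimized to match a human, not to optimize the world.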
When discussing what moral theory to give a superintelligence, Bostrom inadvertently makes a good case for another piece of conceptual progress since 2014: our job is not to find The One Right Moral Theory; it is both more complicated and easier than that (see Scott, Me). Hopefully this notion has percolated around enough that this chapter would get written differently today. Or is this still something we don’t have consensus on among Serious Types? Then again, it was already Bostrom who coined the term maxipok; maybe the functional difference isn’t too large.
I would have thought “maybe the coherent extrapolated volition (CEV) of humanity would just shut itself down” would be uttered with more alarm.
In conclusion, footnotes are superior to endnotes.