Overall, I got the strong impression that the book was trying to convince me of a worldview where it doesn’t matter how hard we try to come up with methods to control advanced AI systems, because at some point one of those systems will tip over into a level of intelligence where we just can’t compete.
FWIW, my sense is that Y&S do believe that alignment is possible in principle. (I do.)
I think the “eventually, we just can’t compete” point is correct. Suppose we have some gradualist chain of humans controlling models controlling model advancements, from here out to Dyson spheres. I think it’s extremely likely that the human control on top eventually gets phased out, as happened with chess, where centaurs (human-plus-AI teams) now play worse and make more mistakes than pure AI systems. Thinking otherwise feels like postulating that machines can never be superhuman at legitimacy.[1]
Chapter 10 of the book talks about the space probe / nuclear reactor / computer security angle, and I think a gradualist control approach that takes those three seriously will probably work. I think my core complaint is that I mostly see people using gradualism as an argument that they don’t need to face those engineering challenges, and I expect them to simply fail at difficult challenges they’re not attempting to succeed at.
Like, there’s this old idea of basins of reflective stability. It’s possible to imagine a system that looks at itself and says “I’m perfect, no notes”, and then the question is—how many such systems are there? Each is probably surrounded by other systems that look at themselves and say “actually I should change a bit, like so—” and become one of the stable systems, and systems even further out will change so that they have only one problem left, and so on. The choices we’re making now are probably not jumping straight to the end, but instead deciding which basin of reflective stability we’re in. I mostly don’t see people grappling with the endpoint, or trying to figure out the dynamics of the process, and instead just trusting it and hoping that local improvements will eventually translate to global improvements.
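As a toy sketch of that picture (my own illustration, not anything from the book): treat “reflect and revise” as a map on a space of systems. The reflectively stable systems are the fixed points of that map, and which one you converge to is determined entirely by your starting point, i.e. by which basin you began in.

```python
# Toy model (purely illustrative): self-reflection as a map on a 1-D space
# of systems. Fixed points are "reflectively stable" systems; every step is
# a local improvement, but the basin you start in determines which stable
# point you end up at.

def revise(x: float) -> float:
    """One round of reflection: move halfway toward the nearest integer,
    standing in for 'fix my most obvious remaining problem'."""
    return x + 0.5 * (round(x) - x)

def reflectively_stable_point(x: float, tol: float = 1e-9, max_steps: int = 1000) -> float:
    """Iterate self-revision until (approximately) nothing changes."""
    for _ in range(max_steps):
        nxt = revise(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

# Two starting systems that differ only slightly land in different basins:
print(reflectively_stable_point(1.49))  # ~1.0
print(reflectively_stable_point(1.51))  # ~2.0
```

The toy’s only point is that every individual step looks like an improvement, and yet the thing that actually matters is which fixed point your starting conditions commit you to.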
Incidentally, a somewhat formative experience for me was AAAI 2015, when a campaign to stop lethal autonomous weapons was getting off the ground, and at the ethics workshop a representative of that campaign wanted to establish the principle that computers should never make a life-or-death decision. One of the other attendees objected—he worked on software to allocate donor organs to people on the waitlist, and for that community it was a point of pride, and an important coordination tool, that decisions were being made by fair systems instead of by corruptible or biased humans.
Like, imagine someone saying that driving is a series of many life-or-death decisions, and so we shouldn’t let computers do it, even as the computers become demonstrably superior to humans. At some point people let the computers do it, and at a later point they tax or prevent the humans from doing it.