This is getting a bit too long for a point-by-point response, so I’ll pick what I think are the most productive points to make. Let me know if there’s anything in particular you’d like a response on.
It seems like you are relying on an assumption of a rapid transition from a world like ours to a world dominated by superhuman AI.
I try not to assume this, but quite possibly I’m being unconsciously biased in that direction. If you see any place where I seem to be implicitly assuming this, please point it out, but I think my argument applies even if the transition takes years instead of weeks.
If so, this seems unlikely given the great range of possible coordination mechanisms, many of which look like they could avert this problem, the robust historical trends of increasing coordination ability and scale of organization, etc.
Coordination ability may be increasing but is still very low on an absolute scale. (For example, we haven’t achieved nuclear disarmament, which seems like a vastly easier coordination problem.) I don’t see it increasing at a fast enough pace to solve the problem in time. I also think there are arguments in economics (asymmetric information, public choice theory, principal-agent problems) that suggest theoretical limits to how effective coordination mechanisms can be.
Indeed, I would even agree that any particular proposal is very unlikely to work, and any class of proposals is pretty unlikely to work, etc. (I would say the same thing about approaches to AI itself).
For each AI approach, there aren’t many classes of “AI control schemes” that are compatible with or applicable to it, so I don’t understand your relative optimism if you think any given class of proposals is pretty unlikely to work.
But the bigger problem for me is that even if one of these proposals “works”, I still don’t see how that helps towards the goal of ending up with a superintelligent singleton that shares our values and is capable of solving philosophical problems, which I think is necessary to get the best outcome in the long run. An AI that respects my intentions might be “safe” in the immediate sense, but if everyone else has got one, we now have less time to solve philosophy/metaphilosophy before the window of opportunity for building a singleton closes.
I agree that we have little idea what you would like the universe to look like. Presumably what you would want in the near term involves e.g. more robust solutions to the control problem and opportunities for further reflection, if not direct philosophical help.
(Quoting from a parallel email discussion, which we might as well continue here.) My point is that the development of such an AI leaves people like me in a worse position than before. Yes, I would ask for “more robust solutions to the control problem”, but unless the solutions are on the path to solving philosophy/metaphilosophy, they are only ameliorating the damage and not contributing to the ultimate goal, and while I do want “opportunities for further reflection”, the AI isn’t going to give me more than what I already had before. In the meantime, other people who are less reflective than me are using their AIs to develop nanotech and more powerful AIs, likely forcing me to do the same (before I’d otherwise prefer) in order to remain competitive.