There’s a crux here somewhere related to the idea that, with high probability, AI will be powerful enough and integrated into the world in such a way that it will be inevitable or desirable for normal human institutions to eventually lose control and for some small regime to take over the world. I don’t think this is very likely for reasons discussed in the post, and it’s also easy to use this kind of view to justify some pretty harmful types of actions.
I don’t think I understand. It’s not about human institutions losing control “to a small regime”. It’s just about most coordination problems being things you can solve by being smarter. You can do that in high-integrity ways, probably with much higher integrity and less harmful effects than how we’ve historically overcome coordination problems. I de facto don’t expect things to go this way, but my opinions here are not at all premised on it being desirable for humanity to lose control?
My bad. Didn’t mean to imply you thought it was desirable.
No worries!
You did say it would be premised on it being either “inevitable or desirable for normal human institutions to eventually lose control”. In some sense I do think this is “inevitable”, but only in the same sense in which past “normal human institutions” lost control.
We now have the internet and widespread democracy, so almost all governmental institutions have needed to change how they operate. Future technological change will force similar changes. But I don’t place any value on the literal existence of our current institutions; what I care about is whether they will make good governance decisions. I am saying that the development of systems much smarter than current humans will change those institutions, very likely within the next few decades, making most concerns about present institutional challenges obsolete.
Of course, something one might call “institutional challenges” will remain, but I do think a lot of buck-passing really will happen from the perspective of present-day humans. We really do have a crunch time of a few decades on our hands, after which we will no longer have much influence over the outcome.