Yes, I think you make some key points, and any plan that claims to be coherent but neglects these concerns is fatally flawed. That said, it could be useful to expand your conception of what a ‘pivotal act’ might consist of. What if the thing we really need the Aligned AI to engineer for us is… a better governance system?
What if we could come up with a system of voluntary contracts that enabled decentralized, human-flourishing-aligned governance while gradually eroding the power of centralized governments? Peace, freedom, maximum autonomy insofar as it doesn’t hurt others, avoidance of traps like arms races and the tragedy of the commons. Is such a thing even possible? Would we be able to successfully distinguish a good plan from a bad one? I don’t know, but I think it’s worth considering.
See my comment here for more about what I mean.
I’ve been arguing for the importance of having wise AI advisors. That isn’t quite the same thing as a “better governance system”, since they could advise us about all kinds of things, but it feels like it’s in the same direction.
Thanks so much for engaging, Nathan!
The term “pivotal act” was coined by Yudkowsky; I’m just borrowing his definition. The idea is that even after you’ve built a perfectly aligned superintelligent AI, you only have about six months before someone else builds an unaligned one. That’s probably not enough time to convince the entire world to adopt a better governance system before getting atomized by nanobots. So your aligned AI would have to take over the world and forcibly implement this better governance system within a span of a few months.
Yes, I’m hoping that the better governance system is something that can be accomplished prior to superintelligence. I do agree that the short time frame for implementation seems like the biggest obstacle to success.