When it comes to solutions, I think the humans-versus-AI axis doesn't make sense for the systems we're in; what matters is desirable system properties such as participation, exploration, and care for the participants in the system.
If we can foster a democratic, caring, open-ended decision-making process in which humans and AI converge towards optimal solutions, then I think our work is done.
Human disempowerment is okay as long as it is replaced by a better and smarter system, so while I think the proposed solutions point in the right direction, the main axis of validation should be system properties rather than power distribution.
Good summary, though; it is great that we finally have a solid paper to point to for these problems.
Also, the solution is obviously to "Friendship is Optimal" the system in which humans and AI coordinate: create an opt-in, secure system that grants more resources to those who cooperate, and you will be able to outperform those silly defectors.
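A minimal toy sketch of the dynamic I have in mind, purely my own illustration (the pool multiplier, endowment, and defectors' outside option are arbitrary assumptions, not anything from the paper): if the pool's surplus is secure, i.e. only redistributed among members, cooperators end up with a higher average payoff than defectors.

```python
# Toy opt-in cooperation pool (illustrative assumptions only).
import random

ROUNDS = 100
ENDOWMENT = 1.0          # resources each agent produces per round
POOL_MULTIPLIER = 1.5    # assumed coordination surplus, shared only inside the pool


def simulate(n_cooperators: int, n_defectors: int, seed: int = 0) -> tuple[float, float]:
    """Return average per-round payoff for cooperators and for defectors."""
    random.seed(seed)
    coop_total = 0.0
    defect_total = 0.0
    for _ in range(ROUNDS):
        # Cooperators pool their endowments; the secure pool multiplies the
        # total and redistributes it only among members, so defectors cannot
        # free-ride on the surplus.
        pool = n_cooperators * ENDOWMENT * POOL_MULTIPLIER
        coop_total += pool
        # Defectors keep their own endowment plus a small noisy bonus from
        # whatever exploitation remains possible outside the pool (assumed).
        defect_total += sum(
            ENDOWMENT + random.uniform(0.0, 0.2) for _ in range(n_defectors)
        )
    coop_avg = coop_total / (ROUNDS * max(n_cooperators, 1))
    defect_avg = defect_total / (ROUNDS * max(n_defectors, 1))
    return coop_avg, defect_avg


if __name__ == "__main__":
    coop_avg, defect_avg = simulate(n_cooperators=8, n_defectors=2)
    print(f"cooperator avg payoff: {coop_avg:.2f}")   # ~1.50
    print(f"defector avg payoff:   {defect_avg:.2f}")  # ~1.10
```

The whole argument obviously rests on the "secure" part: the moment defectors can extract the pooled surplus without opting in, the advantage disappears.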