> I feel like I already addressed this not in my previous comment, but the one before that. We might put a semi-corrigible weak AI in a box and try to extract work from it in the near future, but that’s clearly not the end goal.
Okay cool.
> I guess you now have a better understanding of why people are still interested in solving morality and politics and meaning, without delegating these problems to an ASI.
No, I don’t think so.