My question is, how do you make AI risk known while minimizing the risk of paradoxical impacts? “Never talk about it” is the wrong answer, but I expect there’s a way to do better than we’ve done so far. This seems like an important thing to try to understand.