Has anyone written anything about the costs of pausing early? If the AI safety position that superintelligence will eventually kill us all is correct, presumably some points on the path to it are better to pause at than others.
Is the best spot to pause in the past? If it’s in the future, what do we lose by stopping before we reach that point?
As I’ve written before, I think humans are on a glide path to extinction from non-AI causes. I think we are locked into a bunch of problems that require science and engineering solutions that are not currently available.
Pausing AI likely means pausing or rolling back technological development in general. I think the arguments for that leading to extinction in the long term are stronger than the arguments for a superintelligence coming into being and instantly destroying the universe.