Has anyone written anything about the costs of pausing early? If the AI safety position on superintelligence eventually killing us all is correct, presumably some points on the path to it are better places to pause than others.
Is the best spot to pause in the past? If it’s in the future, what do we lose by stopping before we reach that point?
As I’ve written before, I think humans are on a glide path to extinction from non-AI causes: we are locked into a set of problems that require science and engineering solutions that do not currently exist.
Pausing AI likely means pausing or rolling back technical development in general. I think the arguments for that leading to extinction in the long term are stronger than the arguments for a superintelligence coming into being and instantly destroying the universe.
When doing big data analysis on stuff like this, there’s a big difference between generating a story that seems to make sense and generating correct conclusions.
For these examples, how are you validating Claude’s conclusions? Are you certain enough to put warheads on foreheads? People at the time were manually analyzing these kinds of data, and they absolutely were making those decisions.
What is the readiness level of the tech for doing this kind of analysis?
1) Nobody can do it (you’re delusional if you believe we’re here).
2) Some large organizations can do it if they really want to.
3) Most places that want to do it can do it right now, slightly constrained by funding (I believe we are here).
4) Absolutely anyone can do it to anyone at any time.
I don’t think data availability shifts enough to move us from world 3 to world 4 when mythos hits. So... what are you worried about?