There is a lot wrong with this post:

1. The AI you want to assist you in research is misaligned and untrustworthy.
2. AI is becoming less corrigible as it becomes more powerful.
3. AI safety research almost certainly cannot outpace AI capabilities research from an equal start.
4. AI safety research is far behind capabilities research.
5. Solving the technical alignment problem does not, on its own, solve the AI doom crisis.
6. Short timelines very likely mean we are simply dead, so this is a conversation about what to do with the last years of your life, not about what stands a chance of being useful.
7. Overall, the argument in this post serves mainly to reinforce an existing belief and to make people feel better about what they are already doing. (In other words, it is just cope.)
Bonus:
1. AI governance is strictly necessary to prevent the world from being destroyed.
2. AI governance on its own is sufficient to prevent the world from being destroyed.
3. AI governance is evidently much more tractable than AI technical alignment.