[Question] What is a disagreement you have around AI safety?

I was once discussing Aumann's agreement theorem with a rationalist who wanted to use it to reduce disagreements about AI safety. We ended up puzzling over why it doesn't seem to work in that context, even though (I would argue) the theorem applies robustly in lots of other contexts.
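
For reference, a rough statement of the theorem (this formalization is my paraphrase, not something from the original discussion): if two Bayesian agents share a common prior $P$, and at some state $\omega$ their posteriors for an event $E$ are common knowledge, then those posteriors must coincide:

$$
P(E \mid \mathcal{I}_1(\omega)) = q_1 \ \text{and}\ P(E \mid \mathcal{I}_2(\omega)) = q_2 \ \text{common knowledge} \;\Longrightarrow\; q_1 = q_2,
$$

where $\mathcal{I}_i(\omega)$ is agent $i$'s information-partition cell at $\omega$. Informally: rational agents with a common prior cannot "agree to disagree."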

I think I can figure out (or maybe already know) why it fails in the context of AI safety, but to test that hypothesis I would benefit from having a collection of concrete disagreements about AI safety to investigate.

So if you have a disagreement with anyone about AI safety, feel encouraged to post it here.