Yeah, so thinking about this a little more, I'm not sure my original comment conveyed everything I was hoping it would. I'll add that even if you could get a side of A4 explaining AI x-risk in front of a capabilities researcher at <big_capabilities_lab>, I think they'd be much more likely to engage with it if <big_capabilities_lab> is mentioned.
I think arguments will probably be more salient if they include “and you personally, intentionally or not, are entangled with this.”
That said, I don't have any data to back this up. I'm keen to hear any personal experiences others might have in this area.