I think it’s good that someone is bringing this up. As a community, we want to be deliberate and thoughtful about this class of things.
That being said, my read is that the main failure mode with advocacy at the moment isn’t “capabilities researchers are having emotional responses to being called out which is making it hard for them to engage seriously with x-risk.”
It’s “they literally have no idea that anyone thinks what they are doing is bad.”
Consider FAIR trying their hardest to open-source capabilities work with OPT. The tone and content of the responses show overwhelming support for doing something that is, in my worldview, really, really bad.
I would feel much better if these people at least glanced their eyeballs over the arguments against open-sourcing capabilities work. Using the names of specific labs surely makes it more likely that the relevant writing ends up in front of them?
I’ve done quite a bit of thinking about this, and I’m pretty familiar with the area.
If a corporation has a brand, and you have no idea how powerful, aggressive, or exploitative that corporation is (e.g. Facebook, Disney, etc.), then it’s best not to write anything that calls out that brand. If you go on Reddit and write something publicly about how awful Dr. Pepper is, then you’re entangling yourself in the ongoing conflict between Coca-Cola and Pepsi, whether you know about it or not. And if you don’t know what you’re getting into, or even aren’t sure, then you certainly aren’t prepared to model the potential consequences.
Ok, not sure I understand this. Are you saying “Big corps are both powerful and complicated; trying to model their response is intractably difficult, so under that uncertainty you’re better off just steering clear”?
Yes, that’s a very good way of putting it. I will be more careful to think about inferential distance from now on.
Yeah, so thinking a little more, I’m not sure my original comment conveyed everything I was hoping to. I’ll add that even if you could get a side of A4 explaining AI x-risk in front of a capabilities researcher at <big_capabilities_lab>, I think they would be much more likely to engage with it if <big_capabilities_lab> is mentioned.
I think arguments will probably be more salient if they include “and you personally, intentionally or not, are entangled with this.”
That said, I don’t have any data about the above. I’m keen to hear any personal experiences anyone else might have in this area.