I have a feeling it’s not that simple. See the last part of “Generate evidence of difficulty” as a research purpose, on biases. For example, I know at least one person who quit an AI safety org (in part) because they became convinced that it’s too difficult to achieve safe, competitive AI (or at least that the approach pursued by the org wasn’t going to work). Another person privately told me they have little idea how their research will eventually contribute to safe, competitive AI, but they haven’t written anything like that publicly, AFAIK. (And note that I don’t actually have that many opportunities to speak privately with other AI safety researchers.) Another issue is that most AI safety researchers probably don’t think it’s part of their job to “generate evidence of difficulty,” so I’d have to convince them of that first.
Unless these problems are solved, I might be able to convince a few safety researchers to go to governance researchers and tell them they think it’s not possible to get safe, competitive AI, but their concerns will probably just be dismissed as outliers. I think a better step forward would be to build a private forum where these kinds of concerns can be discussed more frankly, along with a culture in which doing so is the norm. This addresses some of the possible biases; I’m still not sure about the others.
This is pretty strongly different from my impressions, but I don’t think we could resolve the disagreement without talking about specific examples of people, so I’m inclined to set this aside.