But this only works if those less worried about AI risks who join such a collaboration don’t use the knowledge they gain to cash in on the AI boom in an acceleratory way. Doing so undermines the very point of such a project, namely, to try to make AI go well. It is incredibly damaging to trust within the community.
...This is less about attacking those three folks and more just noting that we need to strive to avoid situations where things like this happen in the first place.
(note: I work at Epoch) This attitude feels like a recipe for creating an intellectual bubble. Of course people will use the knowledge they gain in collaboration with you for the purposes that they think are best. I think it would be pretty bad for the AI safety community if it just relied on forecasting work from card-carrying AI safety advocates.
This attitude feels like a recipe for creating an intellectual bubble
Oh, additional screening could very easily have unwanted side-effects. That’s why I wrote: “It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias” and why it would be better for this issue to never have arisen in the first place. Actions like this can create situations with no good trade-offs.
I think it would be pretty bad for the AI safety community if it just relied on forecasting work from card-carrying AI safety advocates.
I was definitely not suggesting that the AI safety community should decide which forecasts to listen to based on the views of the forecasters. That’s irrelevant; we should pay attention to the best forecasters.
I was talking about funding decisions. This is a separate matter.
If someone else decides to fund a forecaster even though we’re worried they’re net-negative, or if they do the work voluntarily, then we should still pay attention to their forecasts if they’re good at their job.
Of course people will use the knowledge they gain in collaboration with you for the purposes that they think are best
Seems like several professions have formal or informal restrictions on how their members can use information gained in a particular capacity to their advantage. People applying for a forecasting role are certainly entitled to say, “If I learn anything about AI capabilities here, I may use it to start an AI startup and I won’t actually feel bad about this.” That doesn’t mean you have to hire them.
Of course people will use the knowledge they gain in collaboration with you for the purposes that they think are best.
It is entirely normal for there to be widely accepted, clearly formalized, and meaningfully enforced restrictions on how people use knowledge they’ve gotten in this or that setting… regardless of what they think is best. It’s a commonplace of professional ethics.
Sure, there are restrictions in some very specific settings with long-held professional norms that people agree to (e.g. doctors and lawyers). I don’t think that applies in this case, though you could try to create such a norm for people to agree to.
I would like to see serious thought given to instituting such a norm. There are a lot of complexities here, and figuring out what is or isn’t kosher would be challenging, but it should be explored.
I largely agree with the underlying point here, but I don’t think it’s quite correct that something like this only applies in specific professions. For example, I think every major company is going to expect employees to be careful about revealing internal info, and there are norms that apply more broadly (trade secrets, insider trading, etc.).
As far as I can tell, though, those are all highly dissimilar to this scenario because they involve an existing, widespread expectation of not using information in a certain way. It’s not even clear to me in this case what information was used in what way that is allegedly bad.
I don’t think this is true. People can’t really restrict their use of knowledge, and subtle uses are pretty unenforceable. So it’s expected that knowledge will be used in whatever they do next. Patents and non-compete clauses are attempts to work around this. They work a little, for a little while.
Agreed. This is how these codes form. Someone does something like this and then people discuss and decide that there should be a rule against it or that it should at least be frowned upon.