This attitude feels like a recipe for creating an intellectual bubble.
Oh, additional screening could very easily have unwanted side-effects. That’s why I wrote: “It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias” and why it would be better for this issue to never have arisen in the first place. Actions like this can create situations with no good trade-offs.
I think it would be pretty bad for the AI safety community if it just relied on forecasting work from card-carrying AI safety advocates.
I was definitely not suggesting that the AI safety community should decide which forecasts to listen to based on the views of the forecasters. That's irrelevant; we should pay attention to the best forecasters.
I was talking about funding decisions, which are a separate matter.
If someone else decides to fund a forecaster we're worried is net-negative, or if that forecaster works voluntarily, then we should still pay attention to their forecasts if they're good at their job.
Of course people will use the knowledge they gain in collaboration with you for the purposes that they think are best.
Several professions have formal or informal restrictions on how members can use information gained in a particular capacity to their advantage. People applying for a forecasting role are certainly entitled to say, "If I learn anything about AI capabilities here, I may use it to start an AI startup, and I won't actually feel bad about this." But that doesn't mean you have to hire them.