Fair enough. There are some for-profits where profit and impact are more closely related than in others.
But it’s also quite likely your evals are not actually evaluating anything to do with x-risks or s-risks, so it only feels like you’re making progress when you aren’t.
I’m assuming here people are trying to prevent AI from killing everyone. If you have other goals, this doesn’t apply.
There are an extremely large number of NGOs with passionate people who do not remotely move the needle on whatever problem they are trying to solve. I think that’s the modal outcome for a new nonprofit.
I’d say the same holds for AI for-profits from the perspective of AI notkilleveryoneism. Probably, the modal outcome is slightly increasing the odds that AI kills everyone. For the non-profits, at least, the modal outcome is doing nothing rather than making things worse.