In my three calls with cG following my post, which was fairly critical of them (and of almost all the other grantmakers), I've updated to something like:
cG is institutionally capable of funding the kinds of things that people with strong technical models of the hard parts of alignment think might be helpful. They mostly don't because most of the cG grantmakers don't have those technical models (though some have a fair amount of the picture, including Jake, who is doing this hiring round).
My guess as to why they don't is partly normal organizational inertia, but plausibly mostly that the kinds of conversations that would be needed to change it don't happen very easily. Most of the people talking to them are trying to get money for specific things, so the conversation is not super clean for general-purpose information transfer, since one party has an extremely strong interest in the object-level outcome. Also, most of the people who have the kinds of technical models I think are needed to make good calls are not super good at passing the ITT of prosaic empirical work, so the cG grantmakers probably come away frustrated and don't rate the incoming models highly enough.
My guess is that getting a single cG grantmaker who deeply gets it, who has grounded confidence and a kind of truth-seeking that holds up even when the people around them disagree, and who can engage flexibly and with good humor to convey the models that a bunch of the most experienced people around here hold, would not just do something like double the amount of really well-directed dollars, but might also shift other things in cG for the better.
I've sent cG a list of my top ~10 picks and reached out to those people. Many don't want to drop out of research or other roles entirely, but would be interested in a re-granting program, which seems like the best of both worlds.