Relatedly, I think Buck far overestimates the influence and resources of safety-concerned staff in a ‘rushed unreasonable developer’.
As in, you don’t expect they’ll be able to implement stuff even if it doesn’t make anyone’s workflow harder, or you don’t expect they’ll be able to get that much compute?
Naively, we might expect ~1% of compute: the developer might have around 1,000 researchers, and 10/1000 is 1%. Buck said 3% because I argued for increasing this number. My case would be that there will be a bunch of cases where the thing they want to do is obviously reasonable and potentially justifiable from multiple perspectives (do some monitoring of internal usage, fine-tune a model for forecasting/advice, use models to do safety research), such that they can pull somewhat more compute than the headcount alone would suggest.
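For concreteness, here’s the back-of-envelope version of that estimate as a quick sketch; the headcount figures and the 3% adjustment are the assumptions under discussion, not published numbers from any lab.

```python
# Back-of-envelope compute share for a 10-person safety team.
# All figures are illustrative assumptions from the discussion above.
safety_headcount = 10
total_researchers = 1000  # assumed researcher headcount at the developer

naive_share = safety_headcount / total_researchers  # 0.01, i.e. ~1%

# Argued-for bump: some safety projects (monitoring internal usage,
# fine-tuning a forecasting/advice model, automated safety research) are
# justifiable from multiple perspectives, so the team may pull somewhat
# more compute than headcount alone suggests.
argued_share = 0.03

print(f"naive: {naive_share:.1%}, argued-for: {argued_share:.1%}")
```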
I also agree with Zac: maybe if you had a really well-selected group of 10 people you could do something, but 10 randomly selected AGI safety researchers probably wouldn’t accomplish much.
By far my biggest objection is that there are approximately zero useful things that “[don’t] make anyone’s workflow harder”. I expect you’re vastly underestimating the complexity of production systems and of the companies that build them, and the number of constraints they are under. (You are assuming a do-ocracy, though; depending on how much of a do-ocracy it is (e.g. whether it’s willing to ignore laws), I could imagine changing my mind here.)
EDIT: I could imagine doing asynchronous monitoring of internal deployments. This is still going to make some workflows harder, but probably not a ton, so it seems surmountable, especially since you could combine it with async analyses that the unreasonable developer actually finds useful.
EDIT 2 (Feb 7): To be clear, I also disagree with the compute number. I’m on board with starting with 1% since they are 1% of the headcount. But then I would decrease it first because they’re not a high-compute capabilities team, and second because whatever they are doing should be less useful to the company than whatever the other researchers are doing (otherwise why weren’t the other researchers doing it?), so maybe I’d estimate 0.3%. But this isn’t super cruxy because I think you can do useful safety work with just 0.3% of the compute.
Again, I could imagine getting more compute with a well-selected group of 10 people (though even then 3% seems unlikely; I’m imagining more like 1%), but I don’t see why in this scenario you should assume you get a well-selected group, as opposed to 10 random AGI safety researchers.
Yep, I think that at least some of the 10 would have to have some serious hustle and political savvy that is atypical (but not totally absent) among AI safety people.
What laws are you imagining making it harder to deploy stuff? Notably, I’m imagining these people mostly doing stuff with internal deployments.
I think you’re overfixating on the experience of Google, which has more complicated production systems than most.
I agree with Rohin that there are approximately zero useful things that don’t make anyone’s workflow harder. The default state is “only just working means working, so I’ve moved on to the next thing”, and if you want to change something, there’d better be a benefit to balance the risk of breaking it.
Also, 3% of compute is so much compute; probably more than the “20% to date over four years” that OpenAI promised and then yanked from the Superalignment team. Take your preferred estimate of lab compute spending, multiply it by 3%, and ask yourself whether a rushed, unreasonable lab would grant that much money to people working on a topic it didn’t care about, at the expense of those it did.
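To put a rough number on that, here’s the multiplication being suggested, as a quick sketch; the annual compute spend below is a placeholder assumption for illustration, not an actual lab budget.

```python
# Worked example of "take your preferred estimate and multiply by 3%".
# The spending figure is a placeholder assumption, not a real lab number.
annual_compute_spend_usd = 5e9  # assumed: $5B/year spent on compute
safety_share = 0.03             # the 3% figure under discussion

safety_compute_budget = annual_compute_spend_usd * safety_share
print(f"~${safety_compute_budget / 1e6:,.0f}M/year of compute")  # ~$150M/year
```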