Yep, I think that at least some of the 10 would have to have some serious hustle and political savvy that is atypical (but not totally absent) among AI safety people.
What laws are you imagining that would make it harder to deploy stuff? Notably, I’m imagining these people mostly doing stuff with internal deployments.
I think you’re overfixating on the experience of Google, which has more complicated production systems than most.
I agree with Rohin that there are approximately zero useful things that don’t make anyone’s workflow harder. The default state is “only just working means working, so I’ve moved on to the next thing” and if you want to change something there’d better be a benefit to balance the risk of breaking it.
Also 3% of compute is so much compute; probably more than the “20% to date over four years” that OpenAI promised and then yanked from superalignment. Take your preferred estimate of lab compute spending, multiply by 3%, and ask yourself whether a rushed unreasonable lab would grant that much money to people working on a topic it didn’t care for, at the expense of those it did.
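To make the “3% is so much compute” point concrete, here is a back-of-envelope sketch. The dollar figure is a purely hypothetical placeholder, not a sourced estimate of any lab’s actual spending:

```python
# Hypothetical back-of-envelope: assume a frontier lab spends $10B/year
# on compute (placeholder number, not a real estimate).
annual_compute_spend = 10e9

# 3% of that, per year, earmarked for safety work.
safety_allocation = 0.03 * annual_compute_spend

print(f"3% of ${annual_compute_spend / 1e9:.0f}B/year "
      f"= ${safety_allocation / 1e9:.1f}B/year")
```

Whatever number you plug in, 3% of it per year is a large recurring line item; comparing it to the “20% of compute secured to date” pledge would additionally require an estimate of compute secured at the time of that pledge, which the sketch deliberately does not invent.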