DeepMind’s safety team actually seems like a pretty good place to work. They don’t have a history of contributing much to commercialization, and the people working there seem to have quite a bit of freedom in what they choose to work on, while also having access to DeepMind resources.
The biggest risk from working there is just that making the safety team bigger makes more people think that DeepMind’s AI development will be safe, which seems very far from the truth, but I don’t think this effect is that large.
> The biggest risk from working there is just that making the safety team bigger makes more people think that Deepmind’s AI development will be safe, which seems really very far from the truth
It is the capability researchers in particular, along with their managers and funders, whom I worry will be lulled into a false sense of security by the presence of the safety team, not onlookers in general. When you make driving safer, e.g. by putting guardrails on a road, or merely make driving appear safer to the driver, drivers react by taking more risks.