Wow we have a lot of the same thinking!
I’ve also felt like people who think we’re doomed are basically spending a lot of their effort on sabotaging one of our best bets in the case that we are not doomed, with no clear path to victory in the case where they are correct (how would Anthropic slowing down lead to a global stop?)
And yeah I’m also concerned about competition between DeepMind/Anthropic/SSI/OpenAI—in theory they should all be aligned with each other but as far as I can see they aren’t acting like it.
As an aside, I think the extreme pro-slowdown view is something of a vocal minority. I met some Pause AI organizers IRL and raised the points I made in my original comment, expecting pushback, but they agreed, saying they were focused on neutrally enforced slowdowns, e.g. government action.
Yeah, I think arguably the biggest thing to judge AI labs on is whether they are pushing the government in favour of regulation or against it. With businesses in general, the only way for a business in a misregulated industry to do good is to lobby for better regulation (rather than against it).
It’s inefficient and outright futile for activists to demand that individual businesses unilaterally do the right thing, get outcompeted, go out of business, and have to fire all their employees; it’s so much better if activists focus on the government instead. Not only is it extraordinarily hard for one business to make this self-sacrifice, but even if one does, the problem remains almost as bad. This applies to every misregulated industry, but for AI in particular, “doing the right thing” seems the most antithetical to commercial viability.
It’s disappointing that I don’t see Anthropic pushing the government on AI x-risk with real urgency, whether through regulation or even x-risk spending. I think at one point they even emphasized the importance of the US winning the AI race against China. But at least they’re not against more regulation, and they seem more in favour of it than other AI labs? At least they’re not openly downplaying the risk? It’s hard to say.