I am fundamentally suspicious of any plan to solve AI risk where everyone is better off at the end. Unless you can pinpoint who is suffering as a result of your plan succeeding, I am unlikely to take your plan too seriously.
Why?
The situation is fundamentally adversarial. People want different things and are willing to go to extreme lengths to get them.
I think my statement is true of basically every major political or economic change in human history.
Fair enough. But the changes that have created the most wealth have tended to benefit a larger fraction of people (though definitely not benefit them equally). The more wealth is generated the cheaper it is to pay off losers.
Positive sum games still involve a lot of zero sum moves! Just because the pie is growing doesn’t mean it doesn’t matter who gets more of the pie. If you are a company CEO in a growing industry, you will end up taking adversarial moves against lots of people. You will sue people, you will fire your employees, you will take away profit from your competitors if you succeed, and so on.
I agree. But that doesn’t necessitate that any particular person is going to lose in absolute terms.
In some hypothetical game-theory puzzle, sure. In the real world it does necessitate it, with like >95% probability.
And here we are talking about positive sum stuff like growing a business.
The Pause AI movement is explicitly a zero-sum political battle.
Would you ever be willing to support or advocate a plan you were suspicious of?
It’s kinda complicated; I can’t answer a blanket yes or no. There are hypothetical situations where I might advocate such a plan, yes.
Also, I want more info on how this connects to my comment.
It’s important to separate the plan from the public advocacy of the plan. A person might internally be fully aware of the tradeoffs of a plan while being unable to publicly acknowledge them, because coming out and publicly saying “<powerful group> wouldn’t do as well under our plan as they would under other plans, but we think it’s worth the cost to them for the greater good” will generally lead to righteous failure. Do you want to fail righteously? To lose the political game, but be content knowing that you were right and they were wrong, and that you lost for ostensibly virtuous reasons?
Can you give an example in the real world? (Prefer historical examples if you don’t wanna be too controversial.) Both your comments are abstract, so I’m unclear what you have in mind.