I’m missing some context here. Is this not obvious, and well-supported by the vast majority of “treaties” between Europeans and Natives in the 16th through 19th centuries? For legal settlements, the outcome generally falls between the extremes that each party would prefer, but that range can still include “quite bad”, even if it isn’t completely arbitrary.
“We’ll kill you quickly and painlessly” isn’t actually arbitrarily bad, it’s only quite bad. There are possibly worse outcomes on the table if no agreement is reached.
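To make this concrete, here’s a minimal sketch (made-up numbers, assuming transferable utility and the standard Nash bargaining solution; none of this comes from the thread above) of how a negotiated settlement can still be quite bad when one side’s no-deal payoff is catastrophic:

```python
def nash_bargain(total, d_weak, d_strong):
    """Nash bargaining over a transferable surplus `total`.
    With a linear frontier, maximizing the Nash product
    (u_weak - d_weak) * (u_strong - d_strong) subject to
    u_weak + u_strong == total gives each side its disagreement
    payoff d_i plus an equal share of the gains from agreement."""
    gains = total - d_weak - d_strong
    assert gains >= 0, "no mutually beneficial deal exists"
    return d_weak + gains / 2, d_strong + gains / 2

# Symmetric outside options: the familiar 50/50 split.
print(nash_bargain(100, d_weak=0, d_strong=0))       # (50.0, 50.0)

# Severe power disparity: if talks fail, the strong side keeps 90 of
# the surplus anyway, while the weak side suffers -1000 (a hypothetical
# stand-in for annihilation).
print(nash_bargain(100, d_weak=-1000, d_strong=90))  # (-495.0, 595.0)
```

The weak party’s “fair” negotiated payoff is −495: better than −1000, but hardly reassuring. The solution is only as kind as your disagreement point.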
The two guys from Epoch on the recent Dwarkesh Patel podcast repeatedly made the argument that we shouldn’t fear AI catastrophe, because even if our successor AIs wanted to pave our cities with datacenters, they would negotiate a treaty with us instead of killing us. It’s a ridiculous argument for many reasons, but one of them is that they use abstract game-theoretic and economic terms to hide nasty implementation details.
Ah, yes—bargaining solutions that ignore or hide a significant underlying power disparity are rampant in wishful-thinking academic circles, and irrelevant in real life. That’s the context I was missing; my confusion is resolved. Thanks!
I wonder how many treaties we signed with the countless animal species we destroyed or decided to torture on a mass scale over the course of our history? Guess those poor animals were bad negotiators and didn’t read the fine print. /s
Yeah I said that to Matt Barnett 4 months ago here. For example, one man’s “avoiding conflict by reaching negotiated settlement” may be another man’s “acceding to extortion”. Evidently I did not convince him. Shrug.