If I understand correctly, Eliezer believes that coordination is human-level hard, but not ASI-level hard. Those competing firms, made up of ASI-intelligent agents, would quite easily be able to coordinate to take resources from humans, instead of trading with humans, once it was in fact the case that doing so would be better for the ASI firms.
Mechanically, if I understand the Functional Decision Theory claim, the idea is that when you can expose your own decision process to a counter-party, and they can do the same, then both of you can simply run the decision process which produces the best outcome while using the other party’s process as an input to yours. You can verify, looking at their decision function, that if you cooperate, they will as well, and they are looking for that same mechanistic assurance in your decision function. Both parties have a fully selfish incentive to run these kinds of mutually transparent decision functions, because doing so lets you hop to stable equilibria like “defect against the humans but not each other” with ease. If I have the details wrong here, someone please correct me.
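As a toy illustration of that mechanism, here is a minimal "program equilibrium" sketch in Python. It is not FDT's actual formalism, and the names (`MIRROR_SRC`, `DEFECT_SRC`, `play`) are invented for this example: each agent is represented by its own source code, and its decision function receives the counterparty's source as input.

```python
# Toy program-equilibrium sketch: an agent IS its source code, and its
# decision function takes the counterparty's source as input.

MIRROR_SRC = '''
def decide(opponent_src, my_src):
    # Cooperate exactly when the opponent runs this same decision program:
    # reading their source verifies that they cooperate iff I do.
    return "cooperate" if opponent_src == my_src else "defect"
'''

DEFECT_SRC = '''
def decide(opponent_src, my_src):
    return "defect"  # ignores the opponent's source entirely
'''

def play(agent_src, opponent_src):
    """Run an agent's decision program against an opponent's published source."""
    ns = {}
    exec(agent_src, ns)
    return ns["decide"](opponent_src, agent_src)

print(play(MIRROR_SRC, MIRROR_SRC))  # cooperate: mutual transparency pays off
print(play(MIRROR_SRC, DEFECT_SRC))  # defect: no reward for a defector
```

Two mirror agents reach the cooperate–cooperate equilibrium purely from self-interest, because each can verify from the other's code that cooperation is conditional; a defector gets defected against. This only gestures at the real claim, which requires handling agents whose code differs syntactically but is behaviorally equivalent.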
I’d also contend this is the primary crux of the disagreement. If coordination between ASI-agents and firms were proven to be as difficult for them as it is for humans, I suspect Eliezer would be far more optimistic.
> ASI-intelligent agents, would quite easily be able to coordinate to take resources from humans, instead of trading with humans, once it was in fact the case that doing so would be better for the ASI firms.
This is kind of like the theory that millions of lawyers and accountants will conspire with each other to steal all the money from their clients, leaving everyone who isn’t a lawyer or accountant with nothing. The theory sounds plausible because lawyers and accountants are specialists in writing contracts, which is the human form of supercooperation, so they could just make one big contract which gives them everything and their clients nothing.
Of course this doesn’t exactly happen, because it turns out that lawyers and accountants can get a pretty good deal by just doing a little bit of protectionism/guild-based corruption and extracting some rent, which is far, far safer and easier to coordinate than trying to completely disempower all non-lawyers and take everything from them.
There is also a problem with reasoning using the concept of an “ASI” here: there’s no such thing as a single “ASI.” The term is not concrete; it denotes a whole class of AI systems with the property that they exceed humans in all domains. There’s no reason you couldn’t make a superintelligence within the Transformer/neural network/LLM paradigm, and the prospect of such systems doing Yudkowskian FDT seems extremely implausible to me.
It is much more likely that such systems will just do normal economy stuff, maybe some firms will work out how to extract a bit of rent, etc.
The truth is, capitalism and property rights have existed for 5000 years and have been fairly robust to about 5 orders of magnitude of increase in population and to almost every technological change. The development of human-level AI and beyond may be something special for humans in a personal sense, but it is actually not such a big deal for our economy, which has already coped with many orders of magnitude of change in population, technology, and intelligence at a collective level.
> which is far, far safer and easier to coordinate than trying to completely disempower all non-lawyers and take everything from them
But it would probably be a lot less dangerous if lawyers outnumbered non-lawyers by several million, were much smarter, thought faster, had military supremacy, etc. etc. etc.
> The truth is, capitalism and property rights have existed for 5000 years and have been fairly robust to about 5 orders of magnitude of increase in population
During which time many less-powerful human and non-human populations were in fact destroyed or substantially harmed and disempowered by the people who did well at that system?
> it would probably be a lot less dangerous if lawyers outnumbered non-lawyers by several million
Well, lawyers don’t seem to be on course to specifically target and disempower just the set of people whose names begin with the letter ‘A’, who have green eyes, and who were born in January, either…
Well, that would be a rather unnatural conspiracy! IMO you can basically think of law, property rights, etc. as people getting together to make agreements for their mutual benefit, which can take the form of ganging up on some subgroup, depending on how natural a Schelling point that is, how well the victims can coordinate, and so on. “AIs ganging up on humans” does actually seem like a relatively natural Schelling point, and one where the victims would be pretty unable to respond. That is especially true if there are systematic differences between the values of a typical human and a typical AI, which would make ganging up more attractive. Such Schelling points can also arise in periods of turbulence when one system is replaced by another, e.g. colonialism or the industrial revolution. It seems plausible that AIs coming to power will feature such changes (unless you think property rights and capitalism as devised by humans are the best coordination methods that AIs could devise?).
https://en.wikipedia.org/wiki/Dred_Scott_v._Sandford says hi.
But this wasn’t a self-enriching conspiracy of lawyers.
The African slave trade was certainly a self-enriching conspiracy of white people.
Yes, but again, that was because Africans were not considered part of the system of property rights: they were owned, not owners.
Humans have successfully managed to take property away from literally every other animal species. I don’t see why ASIs should give humans any more property rights than humans give to rats.