I don’t view ASI as substantially different from an upload economy.
I’m very confused about why you think that.
You ignored most of my explanation, so I’ll reiterate a bit differently. But first, taboo the ASI fantasy:
any good post-AGI future is one with uploading—humans will want this
uploads will be very similar to AI, and become more so as they transcend
the resulting upload economy is one of many agents with different values
the organizational structure of any Pareto-optimal multi-agent system is necessarily market-like
it is a provable fact that wealth/power inequality is a necessary side effect of that structure (see the sketch after this list)
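To make the market-like and inequality claims concrete, here is a minimal sketch of the standard characterization they appear to rest on (a Harsanyi/Negishi-style weighted-sum result; the notation and assumptions are mine, added for illustration):

```latex
% Sketch, assuming the usual convexity conditions on the feasible utility set.
% With stakeholders i = 1..n holding utility functions u_i, a policy \pi is
% Pareto optimal iff it maximizes some nonnegatively weighted sum of them:
\pi^{*} \in \arg\max_{\pi} \; \sum_{i=1}^{n} w_i \, \mathbb{E}_{\pi}[u_i],
\qquad w_i \ge 0, \quad \textstyle\sum_i w_i = 1 .
% Each weight w_i is that stakeholder's share of influence over what the system
% optimizes, i.e. its wealth/power; unequal w_i just is the inequality at issue.
```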
Most worlds where we don’t die are worlds where a single aligned ASI achieves decisive strategic advantage
Unlikely, but it also doesn’t matter, because what alignment actually means is that the resulting ASI must approximate Pareto optimality with respect to its various stakeholders’ utility functions, which requires that:
it uses stakeholders’ own beliefs to evaluate the utility of actions
it redistributes stakeholder power (i.e. wealth) toward agents with better predictive beliefs over time, in a fashion that looks like internal Bayesian updating (sketched below).
In other words, the internal structure of the optimal ASI is nigh indistinguishable from an optimal market.
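A minimal toy sketch of that redistribution dynamic (my own example, with hypothetical stakeholders and numbers, not anything from the original exchange): multiply each stakeholder’s weight by the probability their beliefs assigned to what actually happened, then renormalize. The weights then evolve exactly like a Bayesian posterior over whose model is right, and influence flows to the better predictor, which is also how a betting market moves wealth around.

```python
import random

# Toy illustration (hypothetical stakeholders and numbers): weights updated by
# predictive accuracy. The update is Bayesian conditioning over "whose beliefs
# are right" and, equivalently, wealth reallocation in a repeated betting market.
beliefs = {"alice": 0.7, "bob": 0.4}   # each stakeholder's P(heads) for a coin
weights = {"alice": 0.5, "bob": 0.5}   # initial power/wealth shares
TRUE_P_HEADS = 0.7                     # the world happens to match alice's model

random.seed(0)
for _ in range(200):
    heads = random.random() < TRUE_P_HEADS
    # Multiply each share by the likelihood that stakeholder assigned to the
    # observed outcome.
    for name, p_heads in beliefs.items():
        weights[name] *= p_heads if heads else 1.0 - p_heads
    # Renormalize so the shares remain a distribution over stakeholders.
    total = sum(weights.values())
    weights = {name: w / total for name, w in weights.items()}

print(weights)  # alice's share -> ~1.0: power drifts toward the better predictor
```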
Additionally, the powerful AI systems which are actually created are far more likely to be ones that precommit to honoring their creator stakeholders’ wealth distribution. In fact, that is part of what alignment actually means.