I suspect you’re missing the point. The superintelligence would likely undermine humans’ free will in a different manner, by virtue of humans deferring to the AI instead of developing and keeping skills. If some aspects of free will, such as the ability to control one’s short-term urges, are skills, then they will end up underdeveloped.
I’m attempting to reply to the claim that it’s natural for humans to become unnecessary (for arranging their own influence) in a world that keeps them around. The free will analogy between physics and superintelligence illustrates that human decisions can still be formulated and expressed, and the collection-of-hypotheticals construction shows that such decisions are by themselves sufficient to uplift humans towards a greater ability to wield their extrapolated volition (taking the place of more value-centric CEV-like things in this role), with superintelligence not even being in the way of this process by default. See also the previous post on this topic; the collection-of-legitimate-hypotheticals construction here addresses a convergent misunderstanding from its comments.
I’m not sure why this is falling flat; for some reason this post is even more ignored than the previous one. Possibly the inferential distance is too long and it just sounds like random words, or the construction seems arbitrary/unmotivated, like giant cheesecakes the size of cities: something a superintelligence would have the power to build, but where the motivation to build that in particular isn’t being argued. Perhaps the opaque designs of a superintelligence are seen as obviously omnipotent, even in the face of philosophical conundrums like free will, so that if it wants something to go well, then it obviously will.
But then there are worries in the vicinity of Bostrom’s Deep Utopia about how specifically the loss of necessity of human agency plays out. The collection-of-hypotheticals construction is one answer to that: the necessity of human agency just doesn’t get lost by default (if humanity ends up centrally non-extinct; perhaps in a world of permanent disempowerment). This answer might be too unapologetically transhumanist for most readers (here superintelligent imagination is the substrate for humanity’s existence, without necessarily any concrete existence at all). It also somewhat relies on grokking a kind of computational compatibilism relevant to decision theory around embedded agency, where decisions develop over logical time, with people/agents that could exist primarily as abstract computations expressed in their acausal influence on whatever substrate would listen to their developing hypothetical decisions (so the substrate doesn’t even necessarily have access to the exact algorithms; it just needs to follow some of the behaviors of the computations, like an LLM that understands computers in the usual way LLMs understand things).