From my perspective, the major issue remains Phase 1. It seems to me that most of the concerns mentioned in the article stem from the idea that an ASI could ultimately find itself more aligned with the interests of socio-political-economic systems or leaders that are themselves poorly aligned with the general interest. Essentially, this brings us back to a discussion about alignment. What exactly do we mean by “aligned”? Aligned with what? With whom? Back to Phase 1.
But assuming an ASI truly aligned with humanity, in a very inclusive sense and with high moral standards, Phase 2 seems less frightening to me.
Indeed, we must not forget:
that human brains are highly energy-efficient;
that there are roughly 8 billion human brains, which together represent considerable computing power.
Assuming we reach the ASI stage with a system possessing computational power equivalent to a few million human brains, but consuming energy equivalent to a few billion human brains, the ASI will still have a lot of work to do (self-improvement cycles) before it can surpass humanity in both computational capacity and energy efficiency.
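To make that gap concrete, here is a rough back-of-envelope sketch, assuming ~20 W per human brain, ~8 billion brains, and reading “a few million” and “a few billion” as 3 million and 3 billion for illustration (all of these numbers are assumptions, not measurements):

```python
# Rough back-of-envelope sketch (illustrative assumptions, not measured data):
# - ~20 W per human brain (a commonly cited estimate)
# - ~8 billion human brains in total
# - hypothetical ASI: compute worth a few million brains,
#   power draw worth a few billion brains (values from the paragraph above)

BRAIN_POWER_W = 20          # approximate power draw of one human brain, in watts
HUMAN_BRAINS = 8e9          # roughly 8 billion people

asi_compute_brains = 3e6    # assumed "few million" brain-equivalents of compute
asi_energy_brains = 3e9     # assumed "few billion" brain-equivalents of energy use

asi_power_gw = asi_energy_brains * BRAIN_POWER_W / 1e9
humanity_power_gw = HUMAN_BRAINS * BRAIN_POWER_W / 1e9
efficiency_gap = asi_energy_brains / asi_compute_brains

print(f"ASI power draw: ~{asi_power_gw:.0f} GW")                  # ~60 GW
print(f"All human brains combined: ~{humanity_power_gw:.0f} GW")  # ~160 GW
print(f"Energy per brain-equivalent of compute: ~{efficiency_gap:,.0f}x a human brain")
```

Under these toy numbers, the ASI would burn on the order of a thousand times more energy per brain-equivalent of compute than a human does, which is the gap the self-improvement cycles would have to close.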
Initially, it will not have the capability to replace all humans at once.
It will need to allocate part of its resources to continue improving itself, both in absolute capacity and in energy efficiency. Additionally, since we are considering the hypothesis of an aligned ASI, a significant portion of its resources would be dedicated to fulfilling human requests.
The more the AI is perceived as supremely intelligent, the more we will tend to entrust it with complex problems that humans cannot solve, or can solve only with great difficulty; these will seem more urgent than the simpler tasks that humans can still handle themselves.
I won’t compile a list of problems that could be assigned to an ASI, but one could think, for example, of institutional and legal solutions for a more stable and harmonious social, economic, and political organization on a global scale (would even an ASI be capable of this?), of solutions to problems in physics and mathematics, and, of course, of advances in medicine and biology.
It is possible that part of the ASI would also be assigned to performing less demanding tasks that humans could handle, thus replacing certain human activities. However, given that its resources are not unlimited and its energy cost is significant, one could indeed expect a “slow takeover.”
More specifically, in the fields of medicine and biology, the solutions provided by an ASI could focus on eradicating diseases, increasing life expectancy, and even enhancing human capabilities, particularly cognitive ones (with great caution, in my opinion). And even though humans hold a significant advantage in energy efficiency, that advantage is not fixed: it too could be improved further.
Thus, we could envision a symbiotic co-evolution between ASI and humanity. As long as the ASI weighs human interests at least as heavily as its own and continues to respond to human demands, disempowerment is not necessarily inevitable; we could imagine a very gradual human-machine coalescence. (CPUs and GPUs have coevolved for some time, and GPUs still have not entirely replaced CPUs; quantum processors will likely coevolve alongside classical processors as well. Even in the world of computation, diversity can be an advantage.)