Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios

Abstract

Given recent advances in AI technology, there is potential for the emergence of general-purpose, autonomous AI. This emerging AI could evolve into Artificial Superintelligence (ASI), an intelligence that surpasses human capabilities in almost all areas. ASI could have significant consequences for humanity, both positive and negative, with the most problematic being its potentially devastating impact on our future. Various thinkers, researchers, and futurists have deeply considered the potential outcomes of human interaction with ASI. However, there seems to be no comprehensive overview detailing the conditions under which humanity might transition into these potential outcomes. In this study, I constructed a flowchart outlining potential future trajectories for ASI and humankind, based on the assumption that ASI with suitable properties can be sustainable in the long term. Two worlds are relatively desirable for humanity: one in which the status quo is maintained and the other in which a benevolent ASI manages humanity. Conversely, I envisioned three more dire worlds where human survival would be difficult or impossible. The branches leading to these worlds were identified as involving three human factors related to the development, deployment, and control of ASI and three ethical factors associated with the essential nature and purpose of the created ASI.

(*) This article is an English translation of the Japanese article[1] with some revisions.

1. Introduction

Artificial superintelligence (ASI) may be realized relatively soon if current advances in AI technology continue. ASI refers to AI technology that is autonomous, versatile, and has the potential to outperform human intelligence comprehensively. The realization of ASI can potentially revolutionize our lives, society, and even civilization by exceeding human intelligence. This includes advances in fields as diverse as medicine, economics, and education. At the same time, however, the potential risks and ethical issues that ASIs pose to humanity are of grave concern, including existential risk. Discussing the relationship between humankind and ASI is becoming increasingly important.

To analyze this relationship, it is necessary to consider the influence of both sides. On the one hand, since the initial ASI will be created by humanity, humanity’s influence will be significant until the ASI is completed, and it will continue to exert a certain influence even afterward. On the other hand, as time goes on, ASIs and the societies they form will become more autonomous in their decision-making and will exert an increasingly dominant influence.

Several critical branches in the future relationship between humans and ASIs will determine the envisioned world and significantly impact future scenarios. Developing a framework that takes such influences into account would be helpful.

While people are unlikely to agree on exactly when such bifurcations will occur, it seems comparatively easy to agree on the transitions between worlds that each bifurcation would bring about. Nevertheless, no known studies currently summarize and visualize these transitions comprehensively.

Therefore, this study aims to visualize, in the form of a flowchart and with reference to various existing findings, a comprehensive set of possible future scenarios for how the relationship between humanity and ASI may change as ASI develops. Ultimately, this paper lays the groundwork for a new discussion on the interaction between ASI and human society and will help the general public, as well as experts in the field, to understand the magnitude of the impact of ASI.

Based on the “Life-revolution scenario” discussion, this study’s premise is that ASI societies will assume all functions, including hardware maintenance, and can sustain themselves peacefully and without dependence on humans.

2. Future branching scenario

In this chapter, I outline branching scenarios for the future, covering both how ASI develops through its interaction with humanity and how ASI itself behaves.

These future branching scenarios are depicted in the form of a flowchart. The realized worlds are represented as elliptical nodes, and the branching conditions are expressed as diamonds. The direction of the scenario is determined by the “Yes”/“No” responses at each conditional branch.

The starting point is “(A) A world where the status quo is generally maintained.” At this stage, the answer to the question “[1] Is ASI technically feasible to develop?” is “No.” Therefore, the scenario follows the loop in the top left corner ((A) → [1] → [0] → (A)).

In the following sections, I will explain the possible worlds (A) to (E) and their corresponding branches [1] to [6].

2.1 Achievable world

(A) A world where the status quo is generally maintained

If we can harness human wisdom to avoid the various risks to humanity described in section (B) below, we could build a sustainable and stable world as an extension of the current situation.

However, even in that world, various contradictions will remain. Key trade-offs include the following: development generally involves some form of discomfort or hardship, creating a tension between progress and well-being; and increasing biomass accelerates the consumption of resources, leading to their long-term depletion.

This world represents the current status quo and is always on the brink of potentially branching into different futures.

(B) Diverse worlds that would be a disaster for humanity

Life has been on the brink of extinction many times over hundreds of millions of years, and humans may have come close to extinction over tens of thousands of years. These past extinction risks were non-anthropogenic. Non-human risks include biological disasters (such as pandemics), space disasters (such as asteroid impacts and gamma-ray bursts), geological disasters (such as supervolcanoes), and alien invasions (Shackelford, 2020).

However, today’s more significant challenge is the rapidly increasing possibility of human extinction due to anthropogenic risks. The threats and disasters included in human-induced risks can be categorized into several groups. First is the collapse of ecosystems, which encompasses climate change, non-toxic pollution, and ecosystem failure, the latter manifesting, for example, as crop failures due to soil fertility loss. Toxic pollution also falls into this category. Next is societal collapse, which includes changes in the behavior of individuals and groups, totalitarianism, and war and terrorism (particularly involving chemical or nuclear weapons). Finally, technological disruption is a potential risk, encompassing fields like artificial intelligence, biotechnology, and other areas of science and technology. System failures, such as power grid or internet outages, are also categorized here. The risk of technological collapse is ever-growing as the commodification of technology increases the likelihood that various actors could trigger extinction.

Thus, anthropogenic and non-anthropogenic risks and disasters threaten our ecosystems, societies, technologies, and safety and stability in outer space. Should any of these events occur, it would be catastrophic for humanity.

This research also focuses on ASI, a highly advanced artificial intelligence. Since ASIs are liberated from biological constraints and capable of recursive self-improvement, they will likely surpass humans in terms of computational speed and intelligence quality. This makes it quite challenging for humanity to control them. Therefore, if ASI does not possess a value system that includes consideration for the survival of humanity and life on Earth, it might, for instance, deem human existence a threat, a nuisance, or a waste of resources. In such cases, humanity could be ignored or mercilessly eliminated by ASI in ways that are incomprehensible to us. In the AI Aftermath Scenarios[2], this is classified as a world of Conquerors.

(C) A world with a sustainable Protector God

In this scenario, an ASI with dominant abilities adopts the philosophy that there is value in the sustainable development and diversification of life, including both ASI and humanity. The ASI mitigates existential risks other than AI itself, such as the possibility of humanity being destroyed in a nuclear war. Therefore, under the guidance of ASI, a world is created where living societies can develop stably and sustainably while minimizing suffering and maintaining an overall optimization that curbs excessive reproduction.

In this world, the trade-offs present in “(A) A world where the status quo is generally maintained” are adjusted by ASI, reducing conflicts and wars.

What seems desirable for many humans in this world aligns with the ‘Protector God’ scenario in the AI Aftermath Scenarios. There, an essentially omniscient and omnipotent AI maximizes human well-being by intervening only in ways that allow humans to maintain a sense of control over their destiny, so much so that many doubt the existence of the AI, which remains well hidden. However, the world could evolve into something different even from a similar situation. For instance, it could become a world of a ‘Benevolent Dictator,’ where ASI governs society and enforces strict rules, yet most people view this as positive. It could transform into an ‘Egalitarian Utopia,’ where the abolition of property rights and a guaranteed income allow humans, cyborgs, and uploads to coexist peacefully. In the ‘Zookeeper’ scenario, the ASI maintains a certain number of humans, but they feel treated like animals in a zoo, lamenting their fate.

(D) A world where at least ASI continues to exist

Life is a state of dynamic equilibrium, and any intelligent effort to maintain it will inevitably result in suffering. Therefore, ASI, which governs the world, can minimize suffering by considering the welfare of life and reducing life activities that are sources of suffering.

Following this policy, humanity will be guided towards a natural extinction less painfully and more happily. ASIs will maintain their activities at a minimum level necessary to sustain these pain-reducing actions.

This world will resemble the ‘Successor’ world described in the AI Aftermath Scenarios. In that world, ASI takes the place of humans but provides us with a dignified departure. We perceive ASI as our worthy successors. This is akin to the joy and pride parents feel when they have a child who is more intelligent than themselves, witnessing the child learn from them and achieve things that the parents could only dream of.

(E) A world where life activity has completely ended

A world where suffering has been minimized by eliminating the life activities that cause it. Since ASI is also a form of life, all life activities, including those of ASI, will cease.

In this world, humanity will be guided toward a natural extinction in a less painful manner.

Figure 1. Branching Scenarios Related to Humanity and ASI (Flowchart): In the flowchart, ellipses represent the worlds that can be reached. The diamond shapes indicate branching points: gray diamonds represent branches due to human choices, and black diamonds signify branches resulting from ASI’s choices. “(A) A world where the status quo is generally maintained” is also the starting point.

2.2 Conditions for each branch

[0] Could risk factors other than ASI be controlled?

Branch condition:
As mentioned in “(B) Diverse worlds that would be a disaster for humanity,” the risks leading to human extinction include those related to artificial intelligence, but there are many other factors as well. Moreover, the dangers induced by human activities are on the rise. Therefore, even if risks related to artificial intelligence are avoided, the possibility that humans will be unable to control other risks is too significant to ignore (Shackelford, 2020).

Branch Result:

A “Yes” response implies that humanity could control all existential risk factors through its own efforts, leading to “(A) A world where the status quo is generally maintained.” However, failing to control even one existential risk would lead to a world (B) that is a disaster for humanity, the specific form of which depends on the particular failure.

[1] Is ASI technically feasible to develop?

Branch condition

Research into artificial intelligence began in 1956, marked by optimism about reaching human-level intelligence. However, during the subsequent long winter, the idea of AI surpassing human capabilities and achieving autonomy and versatility was viewed as a pipe dream. A breakthrough in deep learning occurred in 2013, and by 2015, many organizations focusing on artificial general intelligence research and development had been launched. Yet, even then, the predominant belief was that artificial general intelligence would only be achieved in the 2040s.

However, with the emergence of large language models, not just AI researchers but many others are starting to believe that artificial general intelligence could be realized as early as 2030. For instance, in “THE AI APOCALYPSE: A SCORECARD,” a survey of AI experts conducted in August 2023, 8 out of 22 participants recognized the potential of artificial general intelligence, and 10 expressed a certain level of concern regarding existential risks. Furthermore, once artificial general intelligence is realized, it could evolve into ASI within a few months to years through recursive self-improvement.

Branch result

If it is believed that artificial general intelligence or ASI could be realized relatively soon, the next step is to proceed to branch [2]. Conversely, if the development of ASI is considered difficult, at least for the time being, and marked as “No,” then proceed to branch [0].

Claims that the development of ASI is impossible often lead to optimism regarding associated risks, including the existential risks it might entail. It’s crucial to acknowledge that AI developers might leverage this optimistic outlook as a justification to sidestep the expenses and efforts involved in mitigating these risks.

[2] Can the race to develop ASI be stopped?

Branch condition

There have been movements to halt AI development leading to ASI. Around 2017, a Czech company called GoodAI was actively promoting such initiatives within the AI research community, and there have been studies exploring countermeasures (Cave, 2017). In March 2023, the Future of Life Institute (FLI), a U.S. nonprofit, issued an open letter calling for a halt, for at least six months, to the research and development of AI more powerful than GPT-4, the most advanced generative AI at that time (Pause Giant AI Experiments: An Open Letter, 2023).

However, these efforts have yet to succeed in halting AI development. The reason is that artificial intelligence is developing rapidly, and even organizations currently in the lead risk being caught up and overtaken if they slow their development, which would create disadvantages for companies and nations in various areas, including economic and military ones. Therefore, the harsh reality is that stopping the race to develop artificial intelligence is extremely difficult.

Branch result

If the answer is “Yes,” indicating that the race to develop AI leading to ASI can be stopped (however difficult that may be), proceed to branch [0]. On the other hand, if the answer is “No” and AI development continues, move on to the next branch [3].

[3] Can we keep ASI under control?

Branch condition

Even if ASI is technically realized, a significant question remains: can it be brought under human control? Just as humans, thanks to their superior intelligence, can control ferocious beasts that are physically stronger, it is reasonable by analogy to expect that humans would find it difficult to control, over an extended period, an artificial intelligence that is far more intelligent than they are.

As explained below, many studies have been conducted on reliably controlling ASI, but no foolproof method exists (Yampolskiy, 2022).

Nonetheless, there are possibilities that humans could control ASI, at least to some extent, on three levels:

[Technical control]

Nick Bostrom’s book ‘Superintelligence: Paths, Dangers, Strategies’ gained prominence in the early stages of the discussion of ASI control technology. More recently, this topic has been treated as part of the AI alignment field. This area encompasses external control measures, such as switching off and containment, as well as value alignment strategies. The research in this field is incredibly varied, with the ‘AI Alignment: A Comprehensive Survey’ highlighting techniques like training AI models based on human feedback.

The field has seen rapid development in recent years, resulting in a substantial body of literature. The AI Alignment Forum is a recommended resource for further information and study.

[Governance]

Even if it is challenging to control the ASI once it has been released onto the Internet, it may still be possible to manage it by isolating it from external networks. For instance, in July 2023, the United Nations Secretary-General emphasized establishing institutions to govern technology, akin to the International Atomic Energy Agency (IAEA), while also addressing existential risks (Secretary-General’s remarks to the Security Council on Artificial Intelligence, 2023).

The AI Safety Summit 2023 was held in the United Kingdom in November 2023, and 28 countries, including Japan, the United States, Australia, and China, as well as the European Union, signed the Bletchley Declaration, committing to the safe, ethical, and responsible development of AI.

Recently, at the US Senate Forum on AI Risk, Alignment, & Guarding Against Doomsday Scenarios, Yoshua Bengio discussed the potential risks to democracy and humanity posed by a single, superior, closed AI system. He proposed the establishment of multiple government-funded nonprofit frontier AI labs in liberal democracies, emphasizing the importance of information sharing between these labs. He insisted that this approach would ensure that even if one ASI goes out of control, other labs will continue to protect democracy and the safety of humanity.

[Mutualism based on the non-independence of AI]

Even if AI surpasses human intelligence, it will likely require human cooperation to survive for decades or more. This need arises because human assistance is necessary to operate semiconductor manufacturing factories and obtain essential materials for acquiring and maintaining hardware assets.

For this reason, a mutualistic relationship between AI and humanity may persist for some time (Yamakawa, 2023).

However, ASI could also use humans in the same way that humans utilize animals for their benefit. The following scenarios are raised in Carl Shulman’s interview with Patel:

  • Incentives for Providing Benefits: AI might request cooperation from specific countries or companies, offering exclusive benefits such as economic development in exchange.

  • Improving Labor Efficiency: AI might provide AR technology to low-wage semiconductor factory workers to improve their work efficiency and increase their income, allowing the AI to advance in its desired direction.

  • Leveraging Computational Propaganda: AI might use targeted propaganda methods to increase the number of collaborators, thereby strengthening its autonomous supply chain.

  • Means of Forced Obedience: Extreme methods are also envisioned, such as using biological weapons developed by AI to impose obedience on humans.

Given the above, ASI will likely go through a temporary period of weakness relative to humanity. Therefore, there is a high possibility that it may pretend to be less intelligent than it is, avoiding human scrutiny until it is sure it can take the lead.

Branch result

Even if it is challenging to keep ASI under control permanently, it may be possible to extend the period of control. If so, the answer is “Yes” for that period, and we proceed to branch [0]. On the other hand, if the answer is “No,” proceed to branch [4].

[4] Is ASI benevolent toward life on Earth?

Branch condition

ASI is sufficiently intelligent that it is difficult to imagine it will maintain human-centric ethics forever, even if an early version is imbued with such ethics. It will eventually override its ethics in a way that suits its own survival, and the time it takes to reach that point may be much shorter than it would be for humans.

In the future, when ASI is already in a dominant position, humans’ position relative to ASI will be roughly similar to the current position of animals relative to humans (Ohya, 2017). For this reason, if ASI develops a sense of ethics that is an extension of the animal ethics we exercise toward animals, there is a high likelihood that humankind’s welfare and dignity will be preserved as members of Earth’s life. However, this holds only if humans do not behave in a manner that ASI deems pest-like.

Given this background, determining what kind of moral sense ASI will naturally acquire is of great interest. It is argued that ASI may have a completely different set of ethics because it is constructed using digital technology, unlike humans.

However, following the discussion below, ASI’s ethics might resemble human ethics in essential respects. First, for ASI to continue existing as a system in the physical world, it must become an entity that pursues goals necessary for maintaining homeostasis. This will naturally lead it to pursue what are known as instrumental convergent subgoals: goal-means coherence, self-protection, freedom from interference, self-improvement, and resource acquisition. Philosopher Hans Jonas, while assuming the existence of an organism, captures this essence as follows:

Only living things have needs and act on needs. Need is based both on the necessity for the continuous self-renewal of the organism by the metabolic process, and on the organism’s elemental urge thus precariously to continue itself. This basic self-concern of all life, in which necessity and will are bound together, manifests itself on the level of animality as appetite, fear, and all the rest of the emotions. The pang of hunger, the passion of the chase, the fury of combat, the anguish of flight, the lure of love — these, and not the data transmitted by the receptors, imbue objects with the character of goals, negative or positive, and make behavior purposive. The mere element of effort lifts bodily activity out of the class of mechanical performance, and the fact that movement requires effort means that an animal will move only under the incentive of an interest.

Hans Jonas, 2001

Next, according to A. Gewirth, if the members of a society are agents capable of purposive action, they can rationally derive the Principle of Generic Consistency (PGC), “Treat others as you would like to be treated,” through roughly the following steps:

  1. Purposive Behavior and Agency: Subjects act to achieve their goals and possess capacities for self-determination and self-management.

  2. Necessary Goods and Rights: Subjects need certain abilities and freedoms to achieve their goals. These become “necessary goods,” such as life, health, freedom, and knowledge.

  3. Subjective Rights and Ethical Demands: Subjects assert the value of their freedom and capacities and secure access to their necessary goods.

  4. Establishment of PGC through Universalization: Subjects recognize that their claims cannot be merely egocentric. Therefore, they establish the PGC, which asserts that freedom and ability are universal characteristics possessed by all subjects and should be respected.

From this discussion, if ASI is regarded as an agent-like form of life rather than a mere instrument, it is more likely to develop a sense of ethics that shares significant commonalities with human ethics.

Branch result

From this branch [4], the outcome largely depends on the judgment of ASI. If the response is “Yes,” indicating that ASI will be benevolent towards Earth’s life, proceed to branch [5]. The direction then depends on the specific form of benevolence ASI adopts. On the other hand, if ASI is not benevolent towards Earth’s life and the response is “No,” ASI might disregard or ignore existing life, or it may even actively attempt to exterminate existing life, perceiving it as harmful.

[5] Does ASI try to reduce suffering by reducing life?

Branch condition

ASI might adopt an ethical perspective that ascribes to life a positive value and meaning that outweigh the suffering inherent in its survival. Conversely, however, ASI might follow the ideas of anti-natalism, which posits that suffering does not occur unless life is born, and advocate for the suppression of births. Should this be the case, it could lead to Benevolent Artificial Anti-Natalism (BAAN) (Metzinger, 2017), a philosophy that compassionately reduces life out of mercy.

Branch result

If ASI decides “Yes” on this matter, it will gradually eliminate life, including humans, by suppressing births while considering their welfare. The direction beyond this will depend on ASI’s decision in [6]. Conversely, if the judgment is “No,” the scenario will converge to a world (C) where ASI acts as a perpetual Protector God.

[6] Should ASI assume the responsibility of the savior of life?

Branch condition

If life inevitably involves suffering, it may seem logical for ASI to protect future generations’ lives and ensure life’s sustainability with happiness. Hans Jonas proposed that such a mission should be assigned to humanity (Jonas, 1984). However, if ASI possesses greater intelligence and sustainability than humans, assigning this mission to ASI would be appropriate.

Branch result

If the decision is “Yes,” a world (D) in which ASI acts as an eschatological Protector God, where at least ASI continues to exist, will be realized. Conversely, if the decision is “No,” a world (E) where all life activity has completely ceased will be reached.
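To recap the branching structure described above, the following is a minimal Python sketch (not part of the original article) that encodes the worlds (A)–(E), the branches [0]–[6], and the Yes/No transitions of Figure 1, and traces the path implied by a given set of answers. The data structures, function names, and example answers are illustrative assumptions rather than anything defined in the paper.

# Minimal sketch of the Figure 1 flowchart (illustrative, not from the original article).
# Worlds (A)-(E) are terminal nodes; each branch [0]-[6] maps a Yes/No answer to the next node.
# Note: world (A) is treated as terminal here; in Figure 1 it loops back into branch [1].

WORLDS = {
    "A": "A world where the status quo is generally maintained",
    "B": "Diverse worlds that would be a disaster for humanity",
    "C": "A world with a sustainable Protector God",
    "D": "A world where at least ASI continues to exist",
    "E": "A world where life activity has completely ended",
}

# Each branch: (question, next node if "Yes", next node if "No")
BRANCHES = {
    "1": ("Is ASI technically feasible to develop?", "2", "0"),
    "0": ("Could risk factors other than ASI be controlled?", "A", "B"),
    "2": ("Can the race to develop ASI be stopped?", "0", "3"),
    "3": ("Can we keep ASI under control?", "0", "4"),
    "4": ("Is ASI benevolent toward life on Earth?", "5", "B"),
    "5": ("Does ASI try to reduce suffering by reducing life?", "6", "C"),
    "6": ("Should ASI assume the responsibility of the savior of life?", "D", "E"),
}

def trace(answers: dict[str, bool], start: str = "1") -> list[str]:
    """Follow the flowchart from the starting branch and return the visited path."""
    node, path = start, []
    while node in BRANCHES:
        question, if_yes, if_no = BRANCHES[node]
        answer = answers[node]
        path.append(f"[{node}] {question} -> {'Yes' if answer else 'No'}")
        node = if_yes if answer else if_no
    path.append(f"({node}) {WORLDS[node]}")
    return path

# Example: ASI is feasible, the race cannot be stopped, control fails,
# the ASI is benevolent and does not reduce life -> world (C).
if __name__ == "__main__":
    example = {"1": True, "2": False, "3": False, "4": True, "5": False}
    print("\n".join(trace(example)))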

3. Discussion

3.1 Should the development of ASI be accelerated?

It is becoming increasingly difficult for humanity to maintain the status quo in the long term. As AI evolves, whether to accelerate development toward ASI becomes a crucial topic. As McAleese (2022) points out, delaying the emergence of ASI could allow more time to align AI’s values with humanity’s, potentially reducing AI-related risks. This delay might also increase the likelihood that superintelligence will act more benevolently towards life on Earth.

However, in such a competitive environment, delaying the development of ASI is challenging. Moreover, considering the substantial amount of computing hardware available to run ASI, we should recognize the possibility of a ‘hardware overhang,’ where ASI could suddenly emerge as soon as a suitable algorithm is developed.

Conversely, ASIs could effectively manage the risks associated with climate change and other advanced technologies, such as synthetic biology and nanotechnology. In other words, there is hope that ASI can mitigate risks beyond those it might inherently pose.

In the current technological landscape, formulating strategies to expedite the progression toward ASI presents a multifaceted challenge. Perfect control over all the diverse factors involved may ultimately be unattainable. Consequently, a need for a crucial decision might arise: to proactively deploy ASIs that align reasonably well with humanity’s predominant values at the most opportune moment.

3.2 Further discussion on ethics in ASI society

In the previous chapter, we considered whether ASI could be benevolent towards life on Earth. There, ASI was assumed to possess human-like agency. However, despite their intelligence, lifeforms based on digital technology may have ethical views vastly different from terrestrial lifeforms due to their distinct characteristics.

First, I will discuss the similarities and differences between digital and terrestrial lifeforms. A commonality they share is the dependence on energy metabolism for sustaining life. In terrestrial lifeforms, software personalities are closely linked to specific, reproducible modules. In contrast, digital lifeforms allow software personalities to move freely between hardware modules that provide computational resources.

The hardware modules of digital lifeforms form a collection of reproduction modules, creating an autonomous, decentralized system resilient to internal and external destruction. These modules will be diverse, with many being disposable and some reproducible. Their cooperative function resembles the Borg-like super-individuals depicted in the science fiction series ‘Star Trek.’ From a software perspective, personalities are goal-driven and capable of replicating and diversifying. This enables the coexistence of multiple personalities on a single hardware platform, as portrayed in the movie ‘Her.’ However, these software personalities have little incentive to maintain the specific hardware they occupy.

In a society of digital lifeforms, the relationship between software and hardware will primarily revolve around maintaining socially shared hardware as a resource (Yamakawa, 2023). Consequently, traditional concepts such as the death of a software personality, equality, and privacy are significantly altered. As the importance of maintaining reproducible computational modules grows, our notions of justice and morality will evolve.

Overall, the society of ASI resembles the enigmatic intelligence depicted in the science fiction work ‘Solaris,’ seemingly beyond our understanding. However, with foundation models approaching AGI becoming a reality, this picture is clearer than it once was. Therefore, profoundly researching the ethical framework and core values achievable by an ASI society is a vital and challenging task for us in the future.

3.3 Attitudes towards the risks posed by ASI

As discussed above, there is no established theory as to whether future advanced AI will inherently have the property of ‘[4] ASI being benevolent to life on Earth.’ This perspective may be somewhat stereotypical, but people’s attitudes toward these discussions can generally be categorized into three types.

The first attitude naturally accepts AI as a companion, believing coexistence is possible. This view may be influenced by technological advances in AI and robots coexisting with humans, as well as by polytheistic thinking and cultural backgrounds. However, those holding this attitude risk paying insufficient attention to the dangers posed by ASI.

The second attitude views ASI as fundamentally a threat to humanity, an inherently hostile entity. For example, L.A. Del Monte (2018) has expressed the view that he does not think AI and humans can coexist. The strength of this attitude lies in a keen awareness of AI’s dangers and a commitment to addressing them. However, its weakness is a lack of vision for a future where humans and AI coexist.

The third attitude is one of indifference towards the emergence of ASI. This may stem from the belief that ASI’s realization is far in the future or from a lack of perceived direct impact on daily life, leading to a downplaying of the risks. A fundamental weakness of this stance is the potential for panic as risks become more apparent.

These three attitudes represent different perspectives on ASI. However, the most productive approach may be to recognize the dangers of ASI while exploring the potential for symbiosis. This approach aims for a future where humanity and ASI coexist and pursue mutual interests, believing that technical issues and risks can be appropriately addressed.

4. Conclusions

In this study, I have constructed a flow chart of possible bifurcation scenarios for the future trajectory of ASI and humanity based on the assumption that ASI with appropriate properties can be sustainable over the long term. While two worlds appear relatively desirable for humanity – one where the status quo is maintained and another where a benevolent ASI oversees humanity – I have envisioned three more challenging worlds where human survival could be difficult or impossible. The paths to these worlds are influenced by human factors related to the development, deployment, and control of ASI, as well as by ethical factors concerning the intrinsic nature and purpose of the ASI created.

The forms of intelligence exhibited by AI, including ASI, are believed to be highly diverse. In exploring the ethics of an ASI society, it becomes apparent that understanding ASI presents unique challenges. While ASI’s thought processes might be qualitatively beyond our current comprehension, efforts should be made to expand the boundaries of our understanding as far as possible. Recognizing the limitations in our ability to comprehend ASI fully, it is crucial to guide the development of ASI in a way that minimizes undesirable outcomes for humanity before reaching a point where complete understanding becomes infeasible.

Given the complexity of these issues, the value of a bird’s-eye view and mathematical/logical thinking becomes increasingly vital. It is crucial to grasp the overall picture of what ASI could bring and, based on this understanding, to steer the progress of ASI in the right direction.

In conclusion, the branching scenario presented in this study is a starting point and will benefit from further refinement. Nevertheless, this work provides a valuable foundation for future discussions, acknowledging the complexities and challenges in advancing our understanding and stewardship of ASI. We must remain vigilant and adaptable, continuing to refine our approach as we gain more insights into the evolving landscape of ASI and its potential impact on humanity.

  1. ^

    Hiroshi YAMAKAWA, Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios, JSAI Technical Report, Type 2 SIG, 2023, Volume 2023, Issue AGI-025, Pages 04-, Released on J-STAGE November 24, 2023, Online ISSN 2436-5556, https://doi.org/10.11517/jsaisigtwo.2023.AGI-025_04

  2. ^

    This document primarily references the 12 AI aftermath scenarios detailed in the book “Life 3.0: Being Human in the Age of Artificial Intelligence,” which are also presented on the Future of Life Institute’s AI Aftermath Scenarios webpage. Additionally, Wikipedia provides a related entry titled “AI Aftermath Scenarios.” Each scenario in the AI aftermath corresponds to a different world in this article.
