On urgency, priority and collective reaction to AI-Risks: Part I

If you were convinced, right now, that the survival and future of everyone you know, including your own, depended on your immediate actions after reading this, you would not hesitate to put your daily tasks on hold to study and meditate on this text as if it were a live message broadcast from an alien civilization. Afterwards you would not hesitate to drastically alter your life and begin doing whatever is in your power to steer away from the disaster. That is, IF you knew of such a circumstance with any significant degree of certainty. Simply being told of it is not enough, because the world is full of uncertainties, people are mistaken all the time, and we are not wired to make radical life-altering decisions unless we perceive certain and undeniable danger to our survival or well-being. An individual's decisions are limited by their degree of certainty about the circumstances. Human priorities depend on a sense of urgency formed by our understanding of the situation, and some situations are both difficult to understand and take time to even explain. If you do not understand how critical this situation is, then it becomes acceptable to postpone all this “super important sounding stuff” for later, perhaps to see if it still seems urgent some other time around. Surely it can’t be that important if it isn’t headline news discussed on every channel, or directed at you in a live conversation demanding an immediate response? Maybe if it’s important enough it will find some other channel through which to hold your attention? Just like that, you stop paying attention here and get everyone killed by misjudging the criticality of an urgent threat.

Before I get to the what’s and why’s, it needs to be stated that when a threat is complex, perceiving it and talking about it becomes difficult, which lowers certainty about the threat’s criticality. The more certain an individual is of some circumstance, the more ready they are to take sizable and appropriate action regarding it. The more complex and abstract the danger, the less likely we are to perceive it clearly without putting a significant amount of time and effort into grasping the situation. The more complex the danger, the fewer people are going to perceive it at all; and the easier a threat is to ignore, and the harder it is to understand, the fewer people will end up reacting to it in any way. To make this easier to understand I’ve made a simplified visualization graph (Fig.1)[1] below, with some assumptions described in the footnotes:

Example Fig.1[1]: a person is likely to understand the immediate threat to their life if they come across a roaring bear on the road, but they may not understand the extreme peril if a child shows them a strange, metallic, slightly glowing capsule they found on the side of the road: a radioactive capsule is a more complex threat to perceive than a bear. More complex threats require more foreknowledge to be perceived. The more elaborate and indirect a threat, the more likely it is to go unnoticed. In some situations the threat is complex enough that only a few individuals can clearly perceive its danger. In such situations conformity bias and normalcy bias are also in full effect, and many will ignore or even ridicule all warnings and discussion. That is why it is important to first reliably grasp the situation for yourself.
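
To make the shape of Fig.1 concrete, here is a minimal plotting sketch based on the assumptions stated in footnote 1 (clear perception assumed to follow a bell curve while complexity increases linearly); the curve parameters and example annotations are illustrative placeholders, not measured data.

```python
# Rough sketch of the Fig.1 shape described in footnote 1 (illustrative only):
# clear perception is assumed bell-shaped while threat complexity grows linearly.
import numpy as np
import matplotlib.pyplot as plt

complexity = np.linspace(0, 10, 200)                   # threat complexity (linear axis)
perception = np.exp(-((complexity - 2.0) ** 2) / 2.0)  # assumed bell-shaped perception

plt.plot(complexity, perception)
plt.xlabel("Threat complexity")
plt.ylabel("Clear threat perception")
plt.title("Fig.1 (sketch): perception vs. complexity")
plt.annotate("roaring bear", xy=(1.0, 0.6))            # simple, visceral threat
plt.annotate("radioactive capsule", xy=(6.0, 0.05))    # complex, needs foreknowledge
plt.show()
```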

What Fig.1 means is that any existential threat beyond a certain level of complexity inevitably becomes a matter of debate, doubt and politics, which is, for example, what we have been seeing with the climate crisis, biodiversity loss, the pandemic and nuclear weapons. The current situation with AI safety is that the threat is too complex for most people to understand: for most people it’s an out-there threat on par with an alien invasion, rather than something comparable to the emerging danger of an international conflict escalating into a nuclear war. At this stage, regardless of the degree of actual threat that artificial intelligence poses, the vast majority of people will ignore it due to the complexity factor alone. Think about that! REGARDLESS OF THE DEGREE OF ACTUAL THREAT THAT ARTIFICIAL INTELLIGENCE POSES, THE VAST MAJORITY OF PEOPLE WILL IGNORE IT DUE TO THE COMPLEXITY FACTOR ALONE!

The first issue is that we need to reliably determine the actual degree of AI threat we are facing and gain certainty for ourselves. The second issue is how to convey that certainty to others. I propose an approach that tackles both issues at once. Assume that you are an individual who knows nothing about deep learning, machine learning, data science or artificial intelligence, and then some group of experts comes forward and proclaims that we are facing a very real existential threat which needs immediate international action from governments worldwide. If you were a scientist not far removed from these experts, you could read the details and study to catch up on the issue personally, but in our example you are not an expert and it could take years to catch up. That is time you do not have and effort you cannot make. Furthermore, some other clever people disagree with these experts. What is your optimal course of action?

Luckily for us all there is another way, and the same methodology should be used by everyone, whether or not they are already experts in the matter in question. The short, simplified answer is “applied, diversified epistemic reasoning”: find highly reliable experts in the field with good track records who are not too closely related to one another; get their views and opinions directly, or as closely and recently as possible, and only via reliable sources; then cross-check what all of these experts are saying, trust their expertise, and form or correct your own view based on what they think at the moment. Even better if you can engage with experts directly and take into account the pressures they might face when speaking publicly. This way you should be able to get a reasonably good estimate of the situation without needing to be an expert yourself. Reasonable people would not argue against something that most experts clearly agree upon, which is why it becomes doubly important to produce reliable documentation of what the recognized experts really think at the moment. Making such documentation is a lot of work, however, so it is important that we make a collective effort here and publish our findings in a way that is simple and easy for everyone to understand. I have been working on one such effort, which I intend to release in part II of this post (sometime this month, I assume). I hope that you will consider joining or contributing to it.
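
As a rough illustration of the cross-checking step (a sketch only, not a substitute for actually reading the sources), the snippet below records each expert’s most recent public stance together with its source and then tallies agreement; all names, stances, dates and URLs in it are placeholders.

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class ExpertView:
    name: str        # expert (placeholder name)
    stance: str      # e.g. "serious risk", "skeptical", "unclear"
    source: str      # URL or citation for the statement
    date: str        # when it was said; recency matters
    close_ties: set = field(default_factory=set)  # closely connected experts, to keep the sample diverse

views = [
    ExpertView("Expert A", "serious risk", "https://example.org/a", "2023-04"),
    ExpertView("Expert B", "serious risk", "https://example.org/b", "2023-05", {"Expert A"}),
    ExpertView("Expert C", "skeptical",    "https://example.org/c", "2023-03"),
]

# Cross-check: how much do the recent, sourced views agree with one another?
tally = Counter(v.stance for v in views)
print(tally.most_common())  # e.g. [('serious risk', 2), ('skeptical', 1)]
```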

My methodology is qualitative and quite straightforward: I asked ChatGPT-4 for a list of 30 AI experts active in the field (as of September 2021) and went looking on the Internet for what they are saying and doing in 2023. I cross off experts who have not said anything on the matter or are no longer active, and add a new expert to the list in their stead. Essentially, I go through a list of experts and try to find out what they think and agree upon, assess their reliability, write notes and save the sources for later reference. The results, findings and outcome will be released in part II of this post. Meanwhile, I’ve included other similarly focused and significant existing publications below, and I will add more as part of the coordinated effort around this post.
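
For concreteness, here is a hypothetical sketch of how such a tracking sheet could be structured and filtered; the column names and the experts.csv file are illustrative assumptions, not my actual working files.

```python
import csv

# One row per expert: activity status, latest statement, source, petition status, notes.
FIELDS = ["name", "active_2023", "latest_statement", "source_url", "signed_pause_letter", "notes"]

def load_experts(path):
    """Split the tracking sheet into still-active experts and those to cross off and replace."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    active = [r for r in rows if r.get("active_2023") == "yes"]
    to_replace = [r for r in rows if r.get("active_2023") != "yes"]
    return active, to_replace

# Usage (assuming an experts.csv with the columns above):
# active, to_replace = load_experts("experts.csv")
# print(len(active), "still active;", len(to_replace), "to replace with new experts")
```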


The AI Impacts survey of AI experts from 2022, according to which:

  • The median respondent assigned a 10% chance that human inability to control future advanced AI systems would cause extinction or similarly permanent and severe disempowerment of the human species, and 48% of those who answered assigned at least a 10% chance[2] to an extremely bad outcome such as human extinction.

  • 69% of respondents think society should prioritize AI safety research more or much more.

The list of people who signed the petition to pause development of “AI systems more powerful than GPT-4”. I’ve also checked each expert on my list and marked whether or not they signed.


This is all I have for now. Any suggestions, critique and help will be appreciated! It took me a significant amount of time to write and edit this post, partly because I discarded about 250% of what ended up in the final text: I started over at least three times after writing thousands of words and ended up summarizing many paragraphs into just a few sentences. That is to say, there are many important things I want to and will write about in the near future, but this one post in its current form took priority over everything else. Thank you for taking the time to read through!

  1. ^

    Relationship between Threat Complexity and Clear Threat Perception graph: I assume that, generally, human ability to clearly perceive a threat follows a bell curve while the threat’s complexity increases linearly. In reality complexity likely follows an exponential curve and there are other considerable factors, but for the sake of everyone’s clarity I’ve decided to leave the graph demonstratively as it is.

  2. ^

    Added: As Vladimir_Nesov pointed out: “A survey was conducted in the summer of 2022 of approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021, and received 738 responses, some partial, for a 17% response rate. When asked about impact of high-level machine intelligence in the long run, 48% of respondents gave at least 10% chance of an extremely bad outcome (e.g. human extinction).”