[Question] Why Do AI Researchers Rate the Probability of Doom So Low?

I recently read "What do ML researchers think about AI in 2022".


The aggregate probability of Doom there is sub-10%, which is high, but as I understand it, in the minds of people like Eliezer Yudkowsky, we’re more likely doomed than not.


I personally lean towards Yudkowsky’s views, because:
- I don’t believe human/evolution-selected minds have thinking power that a machine could not have
- I believe in the Orthogonality Thesis
(I think that those two claims can be defended empirically)
- I think it is easier to make a non-aligned machine than an aligned one
(I believe that current research strongly hints that this is true)
- I believe that more people are working on non-aligned AI than on aligned AI
- I think it would be politically very hard to stop all AI research and successfully prevent anyone from pursuing it, i.e. to implement a worldwide ban on AI R&D.


Given all this (and probably other observations that I made), I think we’re doomed.
I feel my heart beating hard when I tell myself I have to give a number.
I imagine I’m bad at it, that it’ll be wrong, that it’s more uncomfortable and inconvenient than just saying “we’re fucked” without any number, but here goes anyway-
I’d say that we’re
(my brain KEEPS on flinching away from coming up with a number, I don’t WANT to actually follow through on all my thoughts and observations about the state of AI and what it means for the Future)-
(I think of all the possible Deus-Ex-Machina scenarios that could happen)-
(I imagine how terrible it is if I’m WRONG)-
(Visualizing my probabilities for the AI-doom scenario in hypothetical worlds where I don’t live makes it easier, I think)
My probability of doom from AI is around 80% in the next 50 years.
(And my probability of Doom conditional on AI continuing to get better is 95%; one reason it might not keep getting better, I imagine, is that another X-risk strikes before AI does.)
I would be surprised if more than 1 world out of 5, in our current situation, made it out alive from developing AI.
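(A rough consistency check on those two numbers, under the simplifying assumption, which I haven’t argued for above, that P(Doom | AI stops improving) is roughly 0: together they imply I’m putting about an 84% chance on AI continuing to improve.)

\[
P(\text{Doom}) = P(\text{Doom} \mid \text{AI improves}) \cdot P(\text{AI improves}) + P(\text{Doom} \mid \text{AI stalls}) \cdot P(\text{AI stalls})
\]
\[
0.80 \approx 0.95 \cdot P(\text{AI improves}) \;\Rightarrow\; P(\text{AI improves}) \approx 0.84
\]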

Edit, a week after the post:
I’d say my P(Doom) in the next 50 years is now between 20% and 40%.
It’s not that I suddenly think AI is less dangerous; rather, I think I put my P(Doom) at 80% before because I lumped all of my fears together, as if P(Doom) = P(outcome I really don’t like).
But those two things are different.
For me, P(Doom) = P(humanity is wiped out). That is different from a bad outcome like [a few people own all of the AI and everybody else has a terrible life with zero chance of overthrowing the system].
To be clear, that situation is terrible and I wouldn’t want to live in it, but it’s not Doom.


So, my question:

What do AI researchers know, or think they know, such that their aggregate P(Doom) is only 5-10%?

I can see how many of them just flinch away from the current evidence or thought processes, and think the nicer thoughts instead.
But so many of them, such that their aggregate P(Doom) is sub-10%?

What do they know that I don’t?
- We’ll need more computing power to run Doom-AI than we will ever have
(but human minds run on brains?)
- We don’t need to worry about it now
(which is a Bad Argument: what matters is not how far away it is, but how many resources (including time) we’ll need, unless the plan is that a not-yet-built AI will help us build Aligned AI… dubious.)
- Another AI Winter is likely
- …

I THINK I know that AI researchers mostly believe horribly wrong arguments for why AI won’t happen soon, why it won’t wipe us out or deceive us, why alignment is easy, etc. Mostly: it’s uncomfortable to think about, it would hurt their ability to provide for their families as comfortably as they currently do, and it would put them at odds with other researchers and future employers.
But in case I’m wrong, I want to ask: what do they know that I don’t, that makes them feel so safe about the future of a world with SAI in it?

***


I’m finishing reading Harry Potter and the Methods of Rationality for the third time.
(Because Mad Investor Chaos hasn’t been updated since Sept 2 Mhhhhhhh?!)
I’m having somewhat of an existential crisis. Harry gives himself ultimate responsibility. He won’t act the ROLE of caring for his goals; he will actually put all of his efforts and ingenuity into their pursuit.
It’s clear to me that I’m NOT doing that. I don’t have a plan. I’m doing too many things at the same time. And maybe there’s something I can do to help, given my set of skills, intelligence, motivation, hero-narrative, etc.
I’m no Eliezer; I’m a terrible programmer with zero ML knowledge and a slow-at-maths brain, but I reckon even I, if I trained myself at rationality, persuasion, influence, etc., could further the AI-Alignment agenda, possibly through people/political/influence means (over several years) rather than by doing anything technical.

At the moment I’m trying (and failing) to Hold Off on Proposing Solutions™️: survey what I know and how I think I know it, look at my options, and decide how to move my life in a direction that could also lower P(Doom).
I think I would like to be in a world where P(Doom) is 5%. Then, I’d probably think I’m not responsible for it. But I don’t think I’m in that world. Just making sure though.


EDIT a few days after the post:
I found an interesting article that seems well regarded in the field; the Future Fund quotes it as “[a] significant original analysis which we consider the new canonical reference on [P(misalignment x-risk|AGI)]”.
The article makes the same claims I did to begin with (see its abstract)… but puts AI risk at >10%: Joseph Carlsmith, “Is Power-Seeking AI an Existential Risk?”