Ordinary claims require ordinary evidence

(this is a post version of a YouTube video I made)

A common argument holds that extraordinary claims require extraordinary evidence, and that the claim that AI could be harmful is just such an extraordinary claim. However, I believe that asserting AI’s potential to be harmful is not an extraordinary claim at all. Rather, it’s grounded in a few key axioms that, when examined, are hard to refute.

Why It’s Not an Extraordinary Claim

I think the AI Optimist imagines a particular scenario or set of scenarios (perhaps “Terminator” or [insert fictional franchise here]) and says “that seems improbable”. Perhaps Eliezer comes along and posits one additional scenario, and the Optimist says “all of those combined are improbable”. “Do you have any proof that this [particular tiny set of scenarios] will happen!?” But the space of AI ruin is vast: any one failure scenario coming to pass would be enough to ruin everything.

To me, AI ruin seems to be a natural consequence of five simple processes and conditions:

The Five Core Axioms

1. AI gets better, never worse: AI’s intelligence, however you define it, is increasing. As new research emerges, its findings become a permanent part of the record. Like other technological advances, we build on it rather than regress. People constantly throw more resources at AI, training bigger and bigger models without regard for safety.

2. Intelligence always helps: Being more intelligent always aids success in the real world. A slight edge in intelligence has allowed humans to dominate the Earth. There is no reason to expect a different outcome with an entity more intelligent than humans.

3. No one knows how to align AI: No one can precisely instruct an AI to align with complex human values or happiness. We can optimize a model to predict the next data point, but no one has written a Python function that ranks outcomes by how positive they are for humanity (see the sketch after this list).

4. Resources are finite: Any AI acting in the real world will inevitably compete with humans for resources. These resources, once consumed by AI, won’t be available for human use, leading to potential conflicts.

5. AI cannot be stopped: Once an AI becomes more intelligent than us and possibly harmful, it’s impossible to halt. Stopping an unaligned AI requires human decision-making to defeat more-intelligent-than-human decision-making, which isn’t possible.
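
To make axiom 3 concrete, here is a minimal Python sketch (the function names are mine and purely illustrative, not any real library’s API). The first objective is the kind we can write down, and in essence do optimize, today; the second is the one alignment would actually need, and nobody knows how to fill in its body:

```python
import math

def next_token_loss(predicted_probs: list[float], true_token_id: int) -> float:
    """Cross-entropy: the kind of objective we CAN write down.
    Penalizes a model for assigning low probability to the token
    that actually came next in the training data."""
    return -math.log(predicted_probs[true_token_id])

def human_value_of_outcome(world_state) -> float:
    """Rank an arbitrary outcome by how positive it is for humanity.
    This is the function axiom 3 says no one knows how to write."""
    raise NotImplementedError("the open problem of alignment")

# The first objective is computable today:
print(next_token_loss([0.1, 0.7, 0.2], 1))  # ~0.357
# The second one just raises.
```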

Combined, these axioms point towards an AI that becomes smarter, outcompetes humans, is unaligned with human interests, takes resources from humanity, and cannot be stopped. Each of these seems pretty straightforwardly true (at least to me).

The Ultimate Challenge for Humanity

In my opinion, these axioms point towards a simple conclusion: AI risk is the ordinary claim, and the claim that AI is “safe” is the extraordinary one, for which no extraordinary evidence exists.

Please let me know what mistakes I’ve made here, or where my arguments are wrong.

Self-promo

I’m working on a little project for like-minded people to hang out and chat. It’s at together.lol; please drop by and let me know what you think.