Superintelligence 27: Pathways and enablers

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome. This week we discuss the twenty-seventh section in the reading guide: Pathways and enablers.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Pathways and enablers” from Chapter 14


Summary

  1. Is hardware progress good?

    1. Hardware progress means machine intelligence will arrive sooner, which is probably bad.

    2. More hardware at a given point means less understanding is likely to be needed to build machine intelligence, and brute-force techniques are more likely to be used. These probably increase danger.

    3. More hardware progress suggests there will be more hardware overhang when machine intelligence is developed, and thus a faster intelligence explosion. This seems good inasmuch as it brings a higher chance of a singleton, but bad in other ways:

      1. Less opportunity to respond during the transition

      2. Less possibility of constraining how much hardware an AI can reach

      3. Flattens the playing field, allowing small projects a better chance. These are less likely to be safety-conscious.

    4. Hardware has other indirect effects, e.g. it allowed the internet, which contributes substantially to work like this. But perhaps we have enough hardware now for such things.

    5. On balance, more hardware seems bad, from the impersonal perspective.

  2. Would brain emulation be a good thing to happen?

    1. Brain emulation is coupled with ‘neuromorphic’ AI: if we try to build the former, we may get the latter. This is probably bad.

    2. If we achieved brain emulations, would this be safer than AI? Three putative benefits:

      1. “The performance of brain emulations is better understood”

        1. However we have less idea how modified emulations would behave

        2. Also, AI can be carefully designed to be understood

      2. “Emulations would inherit human values”

        1. This might require higher fidelity than is needed to make an economically functional agent

        2. Humans are often not that nice. It’s not clear that human nature is a desirable template.

      3. “Emulations might produce a slower take-off”

        1. It isn’t clear why it would be slower. Perhaps emulations would be less efficient, so there would be less hardware overhang. Or perhaps emulations would not be qualitatively much better than humans, just faster and more numerous.

        2. A slower takeoff may lead to better control

        3. However it also means more chance of a multipolar outcome, and that seems bad.

    3. If brain emulations are developed before AI, there may be a second transition to AI later.

      1. A second transition should be less explosive, because emulations would already be numerous and fast relative to the new AI.

      2. The control problem is probably easier if the cognitive differences between the controlling entities and the AI are smaller.

      3. If emulations are smarter than humans, this would have some of the same benefits as cognitive enhancement, in the second transition.

      4. Emulations would extend the lead of the frontrunner in developing emulation technology, potentially allowing that group to develop AI with little disturbance from others.

      5. On balance, brain emulation probably reduces the risk from the first transition, but once the risk from a second transition is included, the net effect is unclear.

    4. Promoting brain emulation is better if:

      1. You are pessimistic about humans solving the control problem

      2. You are less concerned about neuromorphic AI, a second transition, and multipolar outcomes

      3. You expect the timing of brain emulations and AI development to be close

      4. You prefer superintelligence to arrive neither very early nor very late

  3. The person-affecting perspective favors speed: present people are at risk of dying in the next century, and may be saved by advanced technology

Another view

I talked to Kenzi Amodei about her thoughts on this section. Here is a summary of her disagreements:

Bostrom argues that we probably shouldn’t celebrate advances in computer hardware. This seems probably right, but here are counter-considerations to a couple of his arguments.

The great filter

A big reason Bostrom finds fast hardware progress to be broadly undesirable is that he judges the state risks from sitting around in our pre-AI situation to be low, relative to the step risk from AI. But the so-called ‘Great Filter’ gives us reason to question this assessment.

The argument goes like this. Observe that there are a lot of stars (we can detect roughly 10^22 of them). Next, note that we have never seen any alien civilizations, or distant suggestions of them. There might be aliens out there somewhere, but they certainly haven’t gone out and colonized the universe enough that we would notice them (see ‘The Eerie Silence’ for further discussion of how we might observe aliens).

This implies that somewhere on the path between a star existing, and it being home to a civilization that ventures out and colonizes much of space, there is a ‘Great Filter’: at least one step that is hard to get past. Roughly 1-in-10^22 hard to get past. We know of somewhat hard steps at the start: a star might not have planets, or the planets may not be suitable for life. We don’t know how hard it is for life to start: this step could be most of the filter for all we know.

If the filter is a step we have passed, there is nothing to worry about. But if it is a step in our future, then probably we will fail at it, like everyone else. And things that stop us from visibly colonizing the stars may well be existential risks.

At least one way of understanding anthropic reasoning suggests the filter is much more likely to be at a step in our future. Put simply, one is much more likely to find oneself in our current situation if being killed off on the way here is unlikely.
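
To make that update concrete, here is a toy calculation of the kind of anthropic reasoning being gestured at, using roughly the Self-Indication Assumption; the hypotheses and numbers are mine, purely for illustration, and not from the post or the book:

```python
# Toy illustration (made-up numbers) of an SIA-style anthropic update:
# weight each hypothesis by how many observers in our situation it
# predicts, then renormalize.

prior = {"filter_behind_us": 0.5, "filter_ahead_of_us": 0.5}

# Hypothetical expected number of civilizations that reach our current
# stage, per large batch of stars, under each hypothesis.
observers_at_our_stage = {"filter_behind_us": 1.0, "filter_ahead_of_us": 1e6}

unnormalized = {h: prior[h] * observers_at_our_stage[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: w / total for h, w in unnormalized.items()}

print(posterior)  # 'filter_ahead_of_us' ends up with almost all the weight
```

With these made-up numbers, nearly all the posterior lands on the filter being ahead of us: if most of the filtering happened before our stage, observers like us would be rare, so finding ourselves here shifts weight toward a late filter. This does depend on the choice of anthropic rule; other rules give weaker or different updates.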

So what could this filter be? One thing we know is that it probably isn’t AI risk, at least not of the powerful, tile-the-universe-with-optimal-computations sort that Bostrom describes. A rogue singleton colonizing the universe would be just as visible as its alien forebears colonizing the universe. From the perspective of the Great Filter, either one would be a ‘success’. But there are no successes that we can see.

What’s more, if we expect to be fairly safe once we have a successful superintelligent singleton, then this points at risks arising before AI.

So overall this argument suggests that AI is less concerning than we think and that other risks (especially early ones) are more concerning than we think. It also suggests that AI is harder than we think.

Which means that if we buy this argument, we should put a lot more weight on the category of ‘everything else’, and especially the bits of it that come before AI. To the extent that known risks like biotechnology and ecological destruction don’t seem plausible, we should worry all the more about unknown unknowns that we aren’t even preparing for.

How much progress is enough?

Bostrom points to positive changes hardware has made to society so far. For instance, hardware allowed personal computers, bringing the internet, and with it the accretion of an AI risk community, producing the ideas in Superintelligence. But then he says probably we have enough: “hardware is already good enough for a great many applications that could facilitate human communication and deliberation, and it is not clear that the pace of progress in these areas is strongly bottlenecked by the rate of hardware improvement.”

This seems intuitively plausible. However, one could probably have made such assessments, erroneously, about all kinds of progress throughout history. Accepting them all would lead to madness, and we have no obvious way of telling them apart.

In the 1800s it probably seemed like we had enough machines to be getting on with, perhaps too many, and people probably felt overwhelmingly rich. In the sixties too, it probably seemed like we had plenty of computation, and that hardware wasn’t a great bottleneck to social progress.

If a trend has brought progress so far, and the progress would have been hard to predict in advance, then it seems hard to conclude from one’s present vantage point that progress is basically done.

Notes

1. How is hardware progressing?

I’ve been looking into this lately, at AI Impacts. Here’s a figure of MIPS/$ growing, from Muehlhauser and Rieber.

(Note: I edited the vertical axis, to remove a typo)

2. Hardware-software indifference curves

It was brought up in this chapter that hardware and software can substitute for each other: if there is endless hardware, you can run worse algorithms, and vice versa. I find it useful to picture this as indifference curves, something like this:

(Image: Hypothetical curves of hardware-software combinations producing the same performance at Go (source).)

I wrote about predicting AI given this kind of model here.
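
For concreteness, here is a minimal sketch of what such curves look like under a toy assumption that performance is simply the product of a hardware factor and a software-efficiency factor; the functional form and the numbers are mine, not from the chapter or the figure:

```python
# Minimal sketch of hardware-software indifference curves under the toy
# assumption performance = hardware * software_efficiency, so each
# constant-performance curve is software_efficiency = performance / hardware.
import numpy as np
import matplotlib.pyplot as plt

hardware = np.logspace(0, 6, 200)      # arbitrary hardware units
performance_levels = [1e2, 1e4, 1e6]   # hypothetical fixed performance levels

for p in performance_levels:
    software = p / hardware            # efficiency needed to hit level p
    plt.loglog(hardware, software, label=f"performance = {p:.0e}")

plt.xlabel("hardware (arbitrary units)")
plt.ylabel("software efficiency (arbitrary units)")
plt.title("Hypothetical hardware-software indifference curves")
plt.legend()
plt.show()
```

The real curves for something like Go presumably have a different shape, but the basic substitution idea is the same: moving along a curve trades hardware for algorithmic progress at a fixed level of performance.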

3. The potential for discontinuous AI progress

While we are on the topic of relevant stuff at AI Impacts, I’ve been investigating and quantifying the claim that AI might undergo large amounts of abrupt progress (unlike brain emulations, according to Bostrom). As a step, we are finding other things that have undergone abrupt progress, such as nuclear weapons and high-temperature superconductors:

(Figure originally from here)

4. The person-affecting perspective favors speed less as other prospects improve

I agree with Bostrom that the person-affecting perspective probably favors speeding many technologies, in the status quo. However I think it’s worth noting that people with the person-affecting view should be scared of existential risk again as soon as society has achieved some modest chance of greatly extending life via specific technologies. So if you take the person-affecting view, and think there’s a reasonable chance of very long life extension within the lifetimes of many existing humans, you should be careful about trading off speed and risk of catastrophe.
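
As a toy illustration of that trade-off, here is a back-of-the-envelope comparison in terms of expected remaining life-years for currently existing people; all the probabilities and lifespans are made up, purely to show the shape of the argument:

```python
# Toy person-affecting comparison: does speeding up advanced technology
# help currently existing people, given it also adds catastrophe risk?

def expected_life_years(p_catastrophe, p_life_extension,
                        long_life=1000, normal_life=40):
    """Expected remaining life-years per existing person (toy model)."""
    return (1 - p_catastrophe) * (
        p_life_extension * long_life + (1 - p_life_extension) * normal_life
    )

# Case 1: life extension only plausible via the sped-up technology itself.
no_speed   = expected_life_years(p_catastrophe=0.02, p_life_extension=0.00)
with_speed = expected_life_years(p_catastrophe=0.10, p_life_extension=0.05)
print("no other prospects:", round(no_speed), "vs", round(with_speed))
# ~39 vs ~79: speeding up looks good despite the extra risk.

# Case 2: other technologies already give a modest chance of life
# extension; speeding up adds only a little extra chance, but the same risk.
no_speed   = expected_life_years(p_catastrophe=0.02, p_life_extension=0.30)
with_speed = expected_life_years(p_catastrophe=0.10, p_life_extension=0.32)
print("modest other prospects:", round(no_speed), "vs", round(with_speed))
# ~321 vs ~312: now the extra catastrophe risk outweighs the speed gains.
```

The crossover point depends entirely on the made-up numbers, but it shows why the person-affecting case for speed weakens as independent prospects for life extension improve.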

5. It seems unclear that an emulation transition would be slower than an AI transition.

One reason to expect an emulation transition to proceed faster is that there is an unusual case for abrupt progress in emulation technology in particular.

6. Beware of brittle arguments

This chapter presented a large number of detailed lines of reasoning for evaluating hardware and brain emulations. This kind of concern might apply.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Investigate in more depth how hardware progress affects factors of interest

  2. Assess in more depth the likely implications of whole brain emulation

  3. Better measure the hardware and software progress that we see (e.g. some efforts at AI Impacts and MIRI)

  4. Investigate the extent to which hardware and software can substitute (I describe more projects here)

  5. Investigate the likely timing of whole brain emulation (the Whole Brain Emulation Roadmap is the main work on this)

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group, though, is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about how collaboration and competition affect the strategic picture. To prepare, read “Collaboration” from Chapter 14. The discussion will go live at 6pm Pacific time next Monday, 23 March. Sign up to be notified here.