[Linkpost] GatesNotes: The Age of AI has begun

This is a linkpost for https://www.gatesnotes.com/The-Age-of-AI-Has-Begun#ALChapter6

The Age of AI has begun
Artificial intelligence is as revolutionary as mobile phones and the Internet.

30-Second Update on Bill Gates’ Views on Alignment:

  • Gates cites Bostrom’s and Tegmark’s books as having shaped his thinking, but thinks that the AI developments of the past few months don’t make the control problem more urgent.

  • Gates asks whether we should try to prevent strong AI from ever being developed, and what happens if strong AI’s goals conflict with humanity’s interests; he says these questions will get more pressing with time.

Quotations that Convey Key Views

From the section “Risks and problems with AI”:

  • “Three books have shaped my own thinking on this subject: Superintelligence, by Nick Bostrom; Life 3.0 by Max Tegmark; and A Thousand Brains, by Jeff Hawkins.”

    • “I don’t agree with everything the authors say, and they don’t agree with each other either. But all three books are well written and thought-provoking.”

  • “There’s the possibility that AIs will run out of control. Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us?”

    • “Possibly, but this problem is no more urgent today than it was before the AI developments of the past few months.”

      • “[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI.”

  • “Superintelligent AIs are in our future.”

    • “Once developers can generalize a learning algorithm and run it at the speed of a computer—an accomplishment that could be a decade away or a century away—we’ll have an incredibly powerful AGI.”

      • “It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. This will be a profound change.”

  • “These ‘strong’ AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.”