I strongly agree. The basic argument Yud laid out is very convincing to randos who listen. Too convincing, honestly. A rando doesn't need an in-depth mathematical explanation to understand how incredibly likely it is that AI will turn the world into glass.
My go-to is:
a rough explanation of intelligence and goal orthogonality
Picture just how inconceivable our level of intelligence is to chimps.
Picture a thousand immortal Einsteins living in a tiny box where a year for them is a couple days for us. How much smarter than us are those Einsteins?
A few examples of how a super-intelligence could take over the world: mechanical, self-replicating nanobots, novel protein synthesis, brainwashing people.
That’s really it. I also know a couple basic counters to the most common arguments people bring up: government regulation, friendly AI being made first, AI wouldn’t necessarily want to hurt us, etc. Most people are convinced and unfortunately look disheartened.
Is there reason to believe 1000 Einsteins in a box is possible?