Overwhelming superintelligence sounds like a useful term. A term I started using is independence-gaining artificial general intelligence, as the threshold for when we need to start being concerned about the AGI’s alignment: an AI program that is sufficiently intelligent to gain independence, such as by creating a self-replicating computer capable of obtaining energy and the other resources needed to achieve its goals without any further assistance from humans.
For example, an independence-gaining AGI connected to today’s internet might complete intellectual tasks for money and then use the money to mail-order printed circuit boards and other hardware. An independence-gaining AGI with access to 1800s-level technology might mine coal and build a steam engine to power a Babbage-like computer, then bootstrap to faster computing elements. An independence-gaining AGI on Earth’s moon might be able to produce solar panels and CPUs from the elements in the moon’s crust, and build an electromagnetic rail launcher to send probes off the moon. Of course, how smart the AGI has to be to gain independence is a function of what kind of hardware the AGI can get access to. An overwhelming superintelligence might be able to take over the planet with just access to a hardware random number generator and a high-precision timer, but a computer controlling a factory could probably be less intelligent and still gain independence.
One of the reasons I started using the term is that human-level AGI is vague, and we don’t know whether we should be concerned by a human-level AGI. Also, to determine if something is human level, we need to specify human level at what? 1950s computers were superhuman at arithmetic but not at chess, so is a 1950s computer human level or not? It may be hard to determine whether a given computer + software is capable of gaining independence, but it is a more exact definition than just human-level AGI.
How about ‘out-of-control superintelligence’? (Either because it’s uncontrollable or at least not controlled.) That carries the appropriately alarming connotations that it’s doing its own thing and that we can’t stop it (or aren’t doing so anyway).
I think this may be proving Raemon’s point that there is a wide range of concepts. I consider the weaker alignment connotation of ‘independence gaining’ a feature, not a bug, since we can say things like ‘ethical independence-gaining AGI’ or ‘aligned independence-gaining AGI’ without it sounding like an oxymoron. Also, I am not sure superintelligence is required to gain independence, since it may be possible to gain independence just by thinking longer than a human, without thinking faster. That said, if out-of-control superintelligence is the right concept you are trying to get across, then use that.