Just to make sure I understand you: if A is a program that has full access to its source code and the specifications of the hardware it’s running on, and A designs a new machine infrastructure and applies pressure to the world (e.g., through money or blackmail or whatever works) to induce humans to build an instance of that machine, B, such that B allows software-mediated hardware modification (for example, by having an automated chip-manufacturing plant attached to it), you would say that B is an “incorrectly-designed” CPU that might allow for a positive feedback loop.
Is that right?
Put differently: this argument assumes that the existence of intelligent software doesn’t alter our prediction that CPUs will all be “correctly designed.” That might be true, or it might not.
No, this is not a case of an incorrectly designed CPU. This is a case where there’s a human in the loop and where the process of evolution will therefore be slow. It’s not a FOOM if it takes years between improvements, during which time the rest of the world is also improving.
We are very far from having a wholly automated CPU-builder-plus-machine-assembly-and-install system. This is not a process that I expect a mildly-superhuman intelligence to be able to short-circuit.
Agreed that IF it turns out that existing hardware is incapable of supporting software capable of designing a wholly automated chip factory, THEN humans are a necessary part of the self-improvement cycle for as many iterations as it takes to come up with hardware that is capable of that (plus one final iteration).
I’m not as confident of that premise as you sound, but it’s certainly possible.
Existing hardware might be capable of supporting software capable of designing an automated chip factory. But the assumption required for the FOOM scenario is much stronger than that.
To get an automated self-improving system, it’s not enough to design—you have to actually build. And the necessary factory has to build a lot more than chips. I’m certain that the existing hardware attached to general-purpose computers is insufficient to build much of anything. And the sort of robotic actuators required to build a wholly automated factory are pretty far from what’s available today. There’s really a lot of manufacturing required to get from clever software to a flexible robotic factory.
I am skeptical that these steps can be done all that quickly or that a merely superhuman AI won’t make costly mistakes along the way. There are lots and lots of details to get right and the AI won’t typically have access to all the relevant facts.
To get an automated self-improving system, it’s not enough to design—you have to actually build. And the necessary factory has to build a lot more than chips.
You only need to build eventually, after you’ve harvested whatever resources you can from the internet (which is a lot). That is, all the early iterations would probably just be software improvements. Hardware improvements can wait until the self-improving system is already smart enough to make such tasks simple.
How do you know how much scope there is for software-only optimization? If I understand right, you are assuming that an AGI is able to reliably write the code for a much more capable AGI.
I’m sure this isn’t true in general. At some point you max out the hardware. Before you get to that point, I’d expect the amount of cleverness needed to find more improvements exceeds the ability of the machine. Intractable problems stay intractable no matter how smart you are.
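The diminishing-returns argument can be made concrete with a toy model. This is purely illustrative, with numbers and dynamics of my own choosing (the constants `HARDWARE_CEILING` and `EFFICIENCY` are assumptions, not anything from the discussion): suppose each software-only rewrite captures a fixed fraction of the remaining headroom below a hardware-imposed ceiling. Then per-step gains shrink geometrically and capability converges to the ceiling rather than diverging.

```python
# Toy model of software-only recursive self-improvement on fixed hardware.
# Assumptions (mine, for illustration only): capability is a single number,
# the hardware imposes a hard ceiling, and each rewrite captures a fixed
# fraction of the remaining headroom.

HARDWARE_CEILING = 100.0   # maximum capability the fixed hardware can support
EFFICIENCY = 0.5           # fraction of remaining headroom captured per rewrite

def run(iterations: int, capability: float = 1.0) -> list[float]:
    """Simulate successive software-only self-rewrites."""
    history = [capability]
    for _ in range(iterations):
        headroom = HARDWARE_CEILING - capability
        capability += EFFICIENCY * headroom
        history.append(capability)
    return history

trajectory = run(20)
# The per-step gains halve each iteration (49.5, 24.75, 12.375, ...), so
# capability approaches the ceiling but never crosses it: no software-only
# FOOM under these assumptions. A FOOM argument has to claim either that
# the ceiling is astronomically high or that gains do not diminish this way.
```

This doesn’t settle the question, of course; it just shows what the “max out the hardware” intuition looks like when written down, and which assumption a FOOM proponent has to reject.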
Just how much room do you think there is for iterative software-only reengineering of an AGI, and why?