I just noted the obvious fact that it would enable a civilization of billions of John von Neumanns, who could instantly clone themselves, or summon any number of copies of any other particular human mind in history (with their own particular talents), and also think much faster than normal humans.
I think this makes many oversimplifying assumptions that strongly weaken the conclusion. It ignores:
The economic costs of:
Running a single John von Neumann
The cost of duplication/copying
The cost of retraining a copy to do some other task
Groups of humans are more capable than individual humans due to specialisation and division of labour.
A group of identical von Neumanns is not that useful; the von Neumanns will need to retrain/reskill to be very useful
That the first human-level AGIs will not be human analogues, but superhuman in some aspects and subhuman in others
Basically, I think such arguments ignore the actual economics, and so seem quite fantastical.
The economic costs of running a single John von Neumann
We can estimate that from how much computation a brain does, how expensive chips are, etc. When I do those calculations (related post), I wind up strongly believing that using an AGI would be cheaper than hiring a human (very shortly after we can do it at all), and eventually much cheaper.
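For concreteness, here is a minimal back-of-the-envelope version of that calculation. Every number in it (brain FLOP/s, accelerator throughput, rental price, salary) is an illustrative assumption of mine, not a figure from the linked post:

```python
# Back-of-the-envelope: cost of one brain-equivalent of compute vs. one human.
# All numbers are rough illustrative assumptions.

brain_flops = 1e15           # assumed brain compute, FLOP/s (published estimates span ~1e13 to ~1e17)
gpu_flops = 1e15             # assumed throughput of one modern accelerator, FLOP/s
gpu_rent_per_hour = 2.00     # assumed cloud rental price, $/hour per accelerator

human_salary_per_year = 100_000  # assumed fully-loaded cost of a skilled human, $/year
human_hours_per_year = 2_000

gpus_needed = brain_flops / gpu_flops
agi_cost_per_hour = gpus_needed * gpu_rent_per_hour
human_cost_per_hour = human_salary_per_year / human_hours_per_year

print(f"AGI:   ~${agi_cost_per_hour:.2f}/hour")    # ~$2/hour under these assumptions
print(f"Human: ~${human_cost_per_hour:.2f}/hour")  # ~$50/hour
print(f"Ratio: ~{human_cost_per_hour / agi_cost_per_hour:.0f}x cheaper")
```

Under these assumptions the conclusion is fairly robust: even at 10x the assumed brain compute, the AGI still comes out cheaper per hour.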
The cost of duplication/copying
I’m not sure what you’re getting at. Sending a file over the internet costs a few cents at most. Can you explain?
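To put rough numbers on it (all illustrative assumptions on my part, since no file size is specified anywhere in this thread): even pessimistically assuming a terabyte-scale snapshot copied at retail cloud-egress prices, the one-time copy cost is trivial next to what the copy is worth:

```python
# One-time cost of copying an AGI snapshot between datacenters.
# Both numbers are illustrative assumptions.

snapshot_size_gb = 1_000     # pessimistically assume a terabyte-scale snapshot
egress_price_per_gb = 0.05   # assumed retail cloud-egress price, $/GB
                             # (copies within a single datacenter are effectively free)

copy_cost = snapshot_size_gb * egress_price_per_gb
print(f"One copy: ~${copy_cost:.0f}")  # ~$50, once, for a worker's entire skill set
```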
The cost of retraining a copy to do some other task
Yes it’s true that adult John von Neumann was good at physics and math and economics and so on, but bad at SQL, because after all SQL hadn’t been invented yet when he was alive. So it would take some time for John von Neumann to learn SQL. But c’mon. It wouldn’t take much time, right? And he would probably get insanely good at it, quite quickly. This happens in the human world all the time. Somebody needs to learn something new to accomplish their task, so they go right ahead and learn it.
I think this points to a major mistake that I see over and over when people are thinking about AGI—it’s a big theme of that post I linked above. The mistake (I claim) is thinking of the core ingredient of intelligence as already knowing how to do something. But I think the core ingredient of intelligence is not knowing how to do something, and then figuring it out. In some cases that involves building a tool or system to do the thing, and in other cases it entails building a new set of mental models that enable you to do the thing.
For example, if I want an AGI to invent a new technology, it can’t go in already understanding that technology, obviously.
Anyway, hopefully we can both agree that (1) other things equal, I’d rather have an AGI that needs less on-the-job learning / figuring-things-out, rather than more, because they’ll get off the ground faster, (2) but this isn’t the end of the world, and indeed it happens for humans all the time, (3) and it’s just a matter of degree, because all hard-to-automate jobs necessarily involve learning new things as you go. (And for easy-to-automate jobs, we can task an AGI with figuring out how to automate it.)
Groups of humans are more capable than individual humans due to specialisation and division of labour.
Doesn’t that also apply to groups of AGIs? If a clone of 25-year-old John von Neumann wants to build a chemical factory, then he could spin off a dozen more clones, and one could go off and spend a few months becoming an expert in chemical engineering, and a second one could go off and learn accounting, and a third could go off and learn industrial automation, and so on, and then they could all work together to plan the chemical factory, gaining the benefits of specialization and division of labor.
(Realistically, there would already be, for sale, a snapshot of an AGI that has already learned accounting, because that’s a common enough thing.)
(I do actually believe that one AGI that spent more time learning all those different domains one after another would do a better job than that team, for reasons in the last few sentences of this comment. But I could be wrong. Anyway, it doesn’t matter; if assembling groups of AGIs with different knowledge and experience is the best approach, then that’s what’s gonna happen.)
If you think that the original John von Neumann just has some things that he’s constitutionally bad at, then we can (and people certainly will) train new AGIs from scratch with slightly different hyperparameters / neural architectures / whatever, which would be constitutionally good / bad at a different set of things. I don’t expect it to be exorbitantly expensive or time-consuming—again see related post. Again, on my personal account I tend to think that this kind of “genetic diversity” is not so important, and that lots of 25-year-old John von Neumanns (plus on-the-job learning) would be sufficient for complete automation of human labor with no more from-scratch trainings, but that’s pretty much pointless speculation.
Doesn’t that also apply to groups of AGIs? If a clone of 25-year-old John von Neumann wants to build a chemical factory, then he could spin off a dozen more clones, and one could go off and spend a few months becoming an expert in chemical engineering, and a second one could go off and learn accounting, and a third could go off and learn industrial automation, and so on, and then they could all work together to plan the chemical factory, gaining the benefits of specialization and division of labor.
It’s also likely that they will be able to coordinate better than humans, because they will spend less energy fighting over status and individual interests. AGIs are less likely to suffer from the moral-maze problems that hold our large institutions back.
I’m not sure what you’re getting at. Sending a file over the internet costs a few cents at most. Can you explain?
I don’t think that was a relevant objection (at least not one that wasn’t already raised elsewhere). You can ignore it.
Yes it’s true that adult John von Neumann was good at physics and math and economics and so on, but bad at SQL, because after all SQL hadn’t been invented yet when he was alive. So it would take some time for John von Neumann to learn SQL.
I meant more that the optimal allocation of 1,000 John von Neumann-level agents isn’t for all of them to get up to speed with modern maths, physics, and economics. I think the returns from intellectual diversity/cognitive specialisation and division of labour would be greater. Probably we’d have most JVNs specialise in different subfields/branches, as opposed to trying to polymath all of STEM.
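To make that intuition concrete, here is a toy model; the functional forms and every number in it are illustrative assumptions of mine, not anything argued in this thread. If piling more agents into the same field has diminishing returns, splitting the 1,000 agents across subfields beats having every agent learn everything:

```python
# Toy model of returns to specialisation among identical agents.
# Setup: N agents, K subfields, T months of study time each.
# Assumption 1: an agent's depth in a field is proportional to study time spent on it.
# Assumption 2: a field's output with n agents at depth d is n**0.7 * d,
#               i.e. diminishing returns to adding agents to one field.

N, K, T = 1_000, 10, 24   # 1,000 agents, 10 subfields, 24 months of study

# Everyone learns everything: each agent splits T evenly across all K fields.
generalists = K * (N ** 0.7) * (T / K)

# Division of labour: N/K specialists per field, each at full depth T.
specialists = K * ((N / K) ** 0.7) * T

print(f"generalists: {generalists:,.0f}")
print(f"specialists: {specialists:,.0f}")
print(f"ratio: {specialists / generalists:.2f}x")  # equals K**0.3, about 2x for K = 10
```

Under these assumptions the specialists’ advantage grows as K**0.3: real, but not overwhelming.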
Doesn’t that also apply to groups of AGIs? If a clone of 25-year-old John von Neumann wants to build a chemical factory, then he could spin off a dozen more clones, and one could go off and spend a few months becoming an expert in chemical engineering, and a second one could go off and learn accounting, and a third could go off and learn industrial automation, and so on, and then they could all work together to plan the chemical factory, gaining the benefits of specialization and division of labor.
I agree. I was saying that this need for specialisation/division of labour would introduce extra economic costs via retraining/reskilling and such. So I don’t think the society of AIs would instantly transform human society; there will be a significant lead-up time.
Probably we’d have most JVNs specialise in different subfields/branches, as opposed to trying to polymath all of STEM.
I think that’s entirely possible; I don’t feel strongly either way. Did you think that was the crux of a disagreement? I might have lost track of the conversation.
I don’t think the society of AIs would instantly transform human society; there will be a significant lead-up time.
I agree about “instant”. However:
If “significant lead-up time” means “not seconds or days, but rather more like a few years”, well, I don’t think a few years is enough to really make much difference.
If “significant lead-up time” means “not years or decades, but rather more than a few centuries”, OK, that definitely moves the needle and would make me go back and rethink things.
And I’m at the first bullet point.
I notice that I wrote “in short order” in my parent comment. Sorry for not being clear. I was imagining a couple years, not instant.