I feel like we’re failing to communicate. Let me recapitulate.
Remember, if the theories were correct and complete, the corresponding simulations would be able to do all the things that the real human cortex can do[5]—vision, language, motor control, reasoning, inventing new scientific paradigms from scratch, founding and running billion-dollar companies, and so on.
So, your argument here is modus tollens:
1. If we had a “correct and complete” version of the algorithm running in the human cortex (and elsewhere), then the simulations would be able to do all that a human can do.
2. The simulations cannot do all that a human can do.
3. Therefore we do not, etc.
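In symbols, writing $C$ for “the theory of the cortical algorithm is correct and complete” and $S$ for “the simulations can do all that a human can do”, the schema is:

$$C \to S,\quad \neg S\ \vdash\ \neg C$$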
I’m questioning premise 1, by claiming that you need a good training environment plus imitation of other entities in order for even the correct algorithm for the human brain to produce interesting behavior.
You respond to this by pointing out that bright, intelligent, curious children do not need school to solve problems. And this is assuredly true. Yet bright, intelligent, curious children still learn language and an enormous host of high-level behaviors by imitating adults; they exist in a world full of books and artifacts created by other people, from which they can learn; and so on. I’m aware of several brilliant people with relatively minimal conventional schooling; I’m aware of no brilliant people who were feral children. The claim that humans turn into problem-solving entities without plentiful examples to imitate seems simply untrue, so I remain confident that premise 1 is false, and the observation that bright people exist without school is entirely compatible with this.
I am extremely confident that there is no possible training environment that would lead a collaborative group of these crappy toy models into inventing language, science, and technology from scratch, as humans were able to do historically.
Maybe so, but that’s a confidence you hold without ever having given these crappy toy models an actual opportunity to do so. You might be right, but your argument here is still wrong.
Humans did not, really, “invent” language in the way that Dijkstra invented an algorithm. The origin of language is subject to dispute, but it was probably something that happened over centuries or millennia, rather than all at once. So, if you had an algorithm that could invent language from scratch, I don’t think it’s reasonable to expect it to do so unless you give it centuries or millennia of compute, in a richly textured environment where it’s advantageous to invent language. Which, of course, we have come absolutely nowhere close to doing.
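To put toy numbers on that (every figure below is an assumption for illustration, not a measurement), even a large parallel population with a generous simulation speedup still implies a substantial wall-clock cost:

```python
# Toy back-of-envelope for "centuries or millennia of compute."
# Every number here is an illustrative assumption, not a measurement.

agents = 1_000    # assumed size of the collaborating population
years_span = 500  # assumed centuries-scale span for language to emerge
speedup = 100     # assumed simulation speed relative to real time

agent_years = agents * years_span        # total simulated experience
wall_clock_years = years_span / speedup  # elapsed real time, with all agents in parallel

print(f"{agent_years:,} agent-years of simulated experience")             # 500,000
print(f"{wall_clock_years:.1f} wall-clock years at {speedup}x real time")  # 5.0
```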
From my perspective you’re being kinda nitpicky, but OK sure, I have now reworded from:
“Remember, if the theories were correct and complete, the corresponding simulations would be able to do all the things that the real human cortex can do…”, to:
“Remember, if the theories were correct and complete, then they could be turned into simulations able to do all the things that the real human cortex can do…”
…and the “could” captures the fact that a simulation can also fail in other ways; e.g., you need to ensure an adequate training environment, bug-free code, adequate speed, good hyperparameters, and everything else.
Again, I don’t think “setting up an adequate training environment for ASI capabilities” will be a hard thing for a future programmer to do, but I agree that it’s a thing for a future programmer to do. Some programmer needs to actually do it. It doesn’t just happen automatically. We are in agreement at least about that. :)
When I say “not hard”, what do I have in mind? Well, off the top of my head, I’d guess that a minimal-effort training environment that would probably be adequate for ASI capabilities (though not for safety or alignment, and given the right learning algorithm and reward function) would involve an interface to existing RL training environments where the baby-AGI can move around and stack blocks and so on, plus free two-way access to the whole internet, especially YouTube. Something with roughly the shape of the sketch below.
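Concretely, here is a minimal sketch of what I mean, assuming a Gymnasium-style step/reset interface; the class name and the `web_client` object are hypothetical placeholders, not a real library:

```python
import gymnasium as gym

class InternetAugmentedEnv(gym.Wrapper):
    """Hypothetical sketch: an ordinary embodied RL environment (move
    around, stack blocks, ...) whose action space is extended with
    free-form web requests. Names here are illustrative."""

    def __init__(self, env, web_client):
        super().__init__(env)
        self.web = web_client  # assumed to expose a fetch(url) method

    def step(self, action):
        if action.get("type") == "web":
            # Two-way internet access: the agent issues a request and
            # receives the page (or video frames) as its observation.
            obs = self.web.fetch(action["url"])
            return obs, 0.0, False, False, {}
        # Otherwise pass the motor command through to the embodied env.
        return self.env.step(action["motor"])

# Usage sketch (the env id and web_client are placeholders):
# env = InternetAugmentedEnv(gym.make("SomeBlockStackingEnv-v0"), web_client)
```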
if you had an algorithm that could invent language from scratch, I don’t think it’s reasonable to expect it to do so unless you give it centuries or millennia of compute
I disagree—as I mentioned in the article, a group of kids growing up with no exposure whatsoever to grammatical language will simply create a new grammatical language from scratch, as in Nicaraguan Sign Language and creoles.
I think that’s a characteristic of people talking about different things from within different basins of Traditions of Thought. The points one side makes seem either kinda obvious or weirdly nitpicky, in a confusing and irritating way, to people on the other side. To me, what I’m saying seems obviously central to the whole issue of high p-dooms genealogically descended from Yudkowsky, and confusions around this seem central to stories about high p-doom, rather than nitpicky and stupid.
Thanks for amending though, I appreciate it. :) The point about Nicaraguan Sign Language is cool as well.