The argument from the small size of the genome is more plausible, especially if Eliezer is thinking in terms of Kolmogorov complexity, which measures the size of the smallest computer program needed to build something. However, it does not follow that if the genome is not very complex, the brain must not be very complex: the brain is built not just from the genome, but also from information drawn from the outside environment.
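As a back-of-the-envelope check on the genome-size premise (the base-pair count is a rough public figure, not an exact one):

```python
# Rough upper bound on the raw information content of the human genome.
base_pairs = 3.2e9      # approximate length of the human genome, in base pairs
bits_per_base = 2       # 4 possible bases -> log2(4) = 2 bits each

raw_bytes = base_pairs * bits_per_base / 8
print(f"Raw genome upper bound: ~{raw_bytes / 1e6:.0f} MB")  # ~800 MB

# The Kolmogorov complexity is presumably lower still: much of the genome
# is repetitive, and only part of it is specifically about the brain.
```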
I think the question in the original debate could be formulated as something like: how large a solution, in the amount of program code we would need to write, do we need to find in order to implement a self-improving artificial intelligence that, given sensory input and opportunities to interact with its environment comparable to those of a human growing up, grows up to human-level cognition?
I don’t see how the other sources of information needed for brain development are a counterargument here. Once you have a machine learning system that will bootstrap itself to sentience given a few years of video feed, you’ve done pretty well indeed.
I also don’t see how the compressibility argument is supposed to work without further qualifiers. You might drop the size of Windows Vista to a third or less if you compressed aggressively at every possible opportunity, but would you get out of the order-of-magnitude ballpark, which is about the level of detail the argument operates on?
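As a rough illustration of the constant-factor point, here is a minimal sketch using a general-purpose compressor; the `/bin/bash` path is just a stand-in for any large local binary:

```python
import zlib

# Rough illustration: general-purpose compression buys a constant factor,
# not an order of magnitude, on typical executable-like data.
with open("/bin/bash", "rb") as f:  # assumption: any large binary will do
    data = f.read()

compressed = zlib.compress(data, level=9)
print(f"original:   {len(data):>10} bytes")
print(f"compressed: {len(compressed):>10} bytes")
print(f"ratio:      {len(data) / len(compressed):.2f}x")
# Typical code compresses ~2-4x -- the "a third or less" shrinkage
# mentioned above, well short of 10x or 100x.
```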
(I wanted to say here that you’d never get Vista to fit on a 360 kB floppy disk, but on second thought that would also require a qualifier like “with any kind of sane human engineering effort you could spend a millennium or so on”. If you have an arbitrary amount of cleverness for composing the program and an arbitrary amount of time running it, you get into Busy Beaver Land, and a floppy disk suddenly has the potential to be an extremely scary artifact. On the other hand, compared to eldritch computational horrors from Busy Beaver Land, the human genome has had a very limited amount of evolutionary design gone into it, and has a very limited time to execute itself.
(Well, you would probably run into the pigeonhole principle if you wanted to get the actual, bit-exact Vista out of the ultra-compressed thing, but for the purposes of the argument it’s enough to get something that does all the sorts of things Windows Vista does, no matter what the exact bit pattern.))
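For concreteness, the pigeonhole point is just a counting argument. There are at most

$$\sum_{k=0}^{n} 2^k = 2^{n+1} - 1 < 2^{n+1}$$

distinct programs of length at most $n$ bits, hence at most $2^{n+1}$ distinct outputs from an $n$-bit floppy. With $n \approx 360 \times 1024 \times 8 \approx 2.9 \times 10^6$ bits that is astronomically many, but still vanishingly few compared to the $2^{8m}$ possible bit patterns of an $m$-byte, multi-gigabyte Vista image, so almost every specific image of that size is unreachable; a merely Vista-like program needn’t be.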
Of course, to load the data from the floppy you need a bare minimum of firmware. The disk by itself doesn’t do anything.
By the same token, the human genome doesn’t do anything on its own. It requires a human(?*) egg to develop into anything significant.
*I’m not aware of any experiments where an animal cell was cloned with the DNA from a different species. What happens if you put a sheep nucleus into a horse egg and implant it in a horse? Is it like trying to load ProDOS directly onto modern hardware?
There was/is a plan to resurrect the woolly mammoth by putting mammoth DNA into an elephant egg.
The Wikipedia page on interspecific pregnancy links to an example of a giant panda genome put in a rabbit egg and brought to term in a cat. http://en.wikipedia.org/wiki/Interspecific_pregnancy http://www.ncbi.nlm.nih.gov/pubmed/12135908
Also, I think Venter denucleates the cell of something to serve as the host for his synthetic DNA.
It looks like transfer of embryos between species has been successful, but not transfer of clones. I wouldn’t call the panda/rabbit clone in a cat “brought to term”; it’s more like “had promising results”.
One of those two animals seems far easier to bring to term in a cat. The other I am imagining bursting out of the stomach “Aliens” style because there just isn’t any room left!
Does this have any relevance for estimating things at an order-of-magnitude level? To run Windows Vista on some arbitrary future hardware you’d need an x86 emulator, but an x86 emulator should take much less work and far fewer lines of code than Windows Vista itself, so you’d still want to eyeball the amount of complexity mostly by the amount of stuff in the Vista part.
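For a sense of scale, some commonly cited (and very rough) figures, which should be treated as assumptions rather than measurements:

```python
# Order-of-magnitude sanity check with commonly cited, rough figures:
vista_loc    = 50e6  # assumption: Windows Vista reportedly ~50M lines of code
emulator_loc = 2e6   # assumption: a full-system x86 emulator, ballpark ~1-2M lines

print(f"emulator / Vista: {emulator_loc / vista_loc:.0%}")  # a few percent
# The emulation layer is a rounding error next to the OS itself, so the
# order-of-magnitude estimate is dominated by the Vista part.
```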
An x86 emulator needs all of the same logic as the x86 hardware has, plus some. It needs some amount of firmware to load it as well. It’s not hard to emulate hardware given a robust operating system, but to emulate x86 hardware and run Vista you need the hardware of the future, the firmware of that hardware, an operating system of the future, the emulator for x86 hardware, and Vista. I’m saying that all of that is what needs to be included in the 360 kB of data for the data to be deterministic. I can easily create a series of bits that will produce wildly different outputs when used as the input to different machines, or an infinite series of machines that take the same bits and produce every possible output.
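A toy illustration of that last point, with two made-up mini-machines interpreting the same bytes:

```python
# The same byte string means different things to different machines.
# Both "machines" below are invented for the example.
program = bytes([2, 3, 5, 7])

def machine_a(code: bytes) -> int:
    # Machine A treats the bytes as numbers to sum.
    return sum(code)

def machine_b(code: bytes) -> int:
    # Machine B treats the bytes as digits of a base-256 integer.
    return int.from_bytes(code, byteorder="big")

print(machine_a(program))  # 17
print(machine_b(program))  # 33752327
```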
And if you are estimating things at an order-of-magnitude level, how many bits is a Golgi apparatus worth? What about mitochondria? How much is a protein worth if it isn’t coded for at all in the included DNA (vitamin B12, for example)?
I’m not a biologist or a chemist; you tell me. I’d start by asking at what point in the evolutionary history of life on Earth those things first showed up, to get a rough estimate of the evolutionary search-work needed to come up with something similar.
Also, I’m still talking about estimating the amount of implementation work here, not the full stack of semantics in the general case down to atoms. Yes, you do need to know the computer type, and yes, I was implicitly assuming that the 360 kB floppy was written to be run on some specific hardware much like a barebones modern PC (one that can stay running for an extremely long time and has a mass memory with extremely large capacity). The point of the future computing hardware being arbitrary was important for the x86 estimation question. Barring the odd hardware with a RUN_WINDOWS_VISTA processor opcode, if I had to plan on being dropped in front of an alien computer, having to learn how to use it from a manual, and then implementing an x86 emulator and a Vista-equivalent OS on it, I’d plan for some time to learn how to use the thing, a bit longer to code the x86 emulator now that I have an idea how to do it, and so much more time (RUN_WINDOWS_VISTA opcodes having too low a Solomonoff prior to be worth factoring into plans) doing the Vista-alike that I might as well not even include the first two parts in my estimate.
(Again, the argument as I interpret it is specifically about eyeballing the implementation complexity of “software” on one specific platform and using that to estimate the implementation complexity of software of similar complexity for another specific platform. The semantics of bitstrings in the case where the evaluation context can be any general thing whatsoever shouldn’t be an issue here.)
I think that they are features of eukaryotic cells, and I can’t find an example of eukaryotic life that doesn’t have them. Animals, plants, and fungi all have both, while bacteria and archaea have neither. In general, mitochondria are required for metabolism in cellular creatures, while the Golgi apparatus is required to create many complex organic structures.
Forced to create a comp-sci comparison, I would compare them to the clock and the bus driver. If you had to plan on being dropped in front of an alien computer that didn’t use a clock, execute one instruction at a time, or provide an interrupt mechanism for input devices or peripherals, could you still create an emulator that simulated those hardware pieces? We’ll set a moderate goal: your task is to run DX7 well enough for it to detect an emulated video card. (That being one of the few things which are specifically Windows, rather than generally “OS”.)
For fun, we can assume that the alien computer accepts inputs and then returns, in constant time, either the output expected of a particular Turing machine or a “non-halting” code. From this alone it can be proven that it is not a Turing machine; however, you don’t have access to what it uses instead of source code, so you cannot feed it a program which loops if it exits and exits if it loops, but you can program any Turing machine you can describe, along with any data you can provide.
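That’s the standard diagonal argument; here is a sketch of it, with `halts` as a hypothetical oracle that, per the argument, no real program can implement:

```python
# Why the alien box can't be a Turing machine: the classic diagonal
# argument, with a hypothetical oracle `halts` (it cannot exist --
# that's the point of the proof).

def halts(program, data) -> bool:
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no Turing machine can implement this")

def diagonal(program):
    # Loop forever exactly when `program`, run on its own source, would halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# diagonal(diagonal) halts iff it doesn't halt -- a contradiction either way,
# so `halts` can't be an ordinary program. The loophole noted above: since
# you can't feed the alien box its own "source", the contradiction never
# gets off the ground, and the box can exist without being a Turing machine.
```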
I’m not sure where you’re going with this. I’m not seeing anything here that would argue that the complexity of the actual Vista implementation would increase ten- or a hundredfold, just some added constant difficulties in the beginning. Hypercomputation devices might mess up the nice and simple theory, but I’m waiting for one to show up in the real world before worrying much about those. I’m also still pretty confident that human cells can’t act as Turing oracles, even if they might squeeze a bit of extra computational juice out of weird quantum-mechanical tricks.
Mechanisms that showed up so early in evolution that all eukaryotes have them took a lot less evolutionary search than the features of human general intelligence, so I wouldn’t rank them anywhere close to the difficulty of human general intelligence in design discoverability.
Organelles might not be Turing oracles, but they can compute folding problems in constant time. And I was trying to point out that you can’t implement Vista plus an x86 emulator on any other hardware for less than you can implement Vista directly on x86 hardware.
EDIT: Considering that the evolutionary search took longer to find mitochondria starting from the beginning of life than it took to find intelligence starting from mitochondria, I think that mitochondria are harder to make from primordial soup than intelligence is to make from phytoplankton.
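A quick arithmetic check, using coarse and debated dates (all three figures are assumptions):

```python
# Rough timeline, in years before present; all dates are coarse estimates.
origin_of_life = 3.8e9  # assumption: estimates run ~3.5-4.0 Gya
mitochondria   = 1.8e9  # assumption: endosymbiosis ~1.5-2.0 Gya
human_minds    = 3e5    # anatomically modern humans, ~300 kya

print(f"life -> mitochondria:         {(origin_of_life - mitochondria)/1e9:.1f} Gyr")
print(f"mitochondria -> intelligence: {(mitochondria - human_minds)/1e9:.1f} Gyr")
# With these numbers the first search took ~2.0 Gyr and the second ~1.8 Gyr --
# the same ballpark, so the claim is sensitive to which estimates you pick.
```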
I think he’s saying that the brain is not just the genome. What you see as an adult brain also represents a host of environmental factors. Since these environmental factors are complex, so too is the brain.
Yes, you could probably use some machine learning algorithm to build a brain with the input of a video feed. But this says relatively little about how the brain actually develops in nature.
That’s just the thing. It makes a big difference whether we’re talking about a (not necessarily human) brain in general, or a specific, particular brain. Artificial intelligence research is concerned with being able to find any brain design it can understand and work with, while neuroscience is concerned with the particulars of human brain anatomy and often the specific brains of specific people.
Also, I’d be kinda hesitant to dismiss anything that involves being able to build a brain as “saying relatively little” about anything brain-related.
Thanks for the clarification. You’re right that artificial intelligence and neuroscience are two different fields.