An x86 emulator needs to implement all of the same behavior as the x86 hardware, plus some. It needs some amount of firmware to load it as well. It's not hard to emulate hardware, given a robust operating system, but to emulate x86 hardware and run Vista you need the hardware of the future, the firmware of that hardware, an operating system of the future, the emulator for x86 hardware, and Vista. I'm saying that -all of that- is what needs to be included in the 360kB of data for the data to be deterministic: I can easily create a series of bits that will produce wildly different outputs when used as the input to different machines, or an infinite series of machines that take the same bits and produce every possible output.
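That last claim is easy to demonstrate concretely. A minimal sketch, with two made-up toy "machines" (not real hardware): the same sixteen bits produce entirely different outputs depending on which machine interprets them.

```python
# The same bit string, fed to two different (invented) machines.
bits = "0110100001101001"  # 16 bits

def machine_sum(b):
    # Machine A: read 8-bit chunks as unsigned integers and sum them.
    return sum(int(b[i:i + 8], 2) for i in range(0, len(b), 8))

def machine_text(b):
    # Machine B: decode the same 8-bit chunks as ASCII characters.
    return "".join(chr(int(b[i:i + 8], 2)) for i in range(0, len(b), 8))

print(machine_sum(bits))   # 209
print(machine_text(bits))  # hi
```

Without fixing the machine, the bits alone don't determine anything.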
And if you are estimating things at an order-of-magnitude level, how many bits is a Golgi Apparatus worth? What about mitochondria? How much is a protein worth, if it isn’t coded for at all in the included DNA (vitamin B12, for example)?
I’m not a biologist or a chemist; you tell me. I’d start by asking at what point in the evolutionary history of life on Earth those things first showed up, as a rough estimate of the evolutionary search-work needed to come up with something similar.
Also, still talking about estimating the amount of implementation work here, not the full stack of semantics in the general case down to atoms. Yes, you do need to know the computer type, and yes, I was implicitly assuming that the 360kB floppy was written to be run on some specific hardware much like a barebones modern PC (that can stay running for an extremely long time and has a mass memory with extremely large capacity). The point of the future computing hardware being arbitrary was important for the estimation question of the x86. Barring the odd hardware with a RUN_WINDOWS_VISTA processor opcode, if I had to plan being dropped in front of an alien computer, having to learn how to use it from a manual, and then implementing an x86 emulator and a Vista-equivalent OS on it, I’d plan for some time to learn how to use the thing, a bit longer to code the x86 emulator now that I have an idea how to do it, and so much more time (RUN_WINDOWS_VISTA opcodes having too low a Solomonoff prior to be worth factoring into plans) doing the Vista-alike that I might as well not even include the first two parts in my estimation.
(Again, the argument as I interpret it is specifically about eyeballing the implementation complexity of “software” on one specific platform and using that to estimate the implementation complexity of software of similar complexity for another specific platform. The semantics of bitstrings in the case where the evaluation context can be any general thing whatsoever shouldn’t be an issue here.)
I think that they are features of eukaryotic cells, and I can’t find an example of eukaryotic life that doesn’t have them. Animals, plants, and fungi all have both, while bacteria and archaea have neither. In general, mitochondria are required for metabolism in cellular creatures, while the Golgi apparatus is required to create many complex organic structures.
Forced to create a comp sci comparison, I would compare them to the clock and the bus driver. If you had to plan on being dropped in front of an alien computer that didn’t use a clock, didn’t execute one instruction at a time, and didn’t provide an interrupt mechanism for input devices or peripherals, could you still create an emulator that simulated those hardware pieces? We’ll set a moderate goal: your task is to run DX7 well enough for it to detect an emulated video card. (That being one of the few things which are specifically Windows, rather than generally ‘OS’.)
For fun, we can assume that the alien computer accepts inputs and then returns, in constant time, either the output expected of a particular Turing machine or a ‘non-halting’ code. From this alone, it can be proven that it is not a Turing machine; however, you don’t have access to what it uses instead of source code, so you cannot feed it a program which loops if it exits and exits if it loops, but you can program any Turing machine you can describe, along with any data you can provide.
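The proof being gestured at is the classic halting-problem diagonalization. A sketch (the `halts` function below is the hypothetical oracle, not anything implementable):

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) halts.
    The construction below shows no Turing machine can implement it."""
    raise NotImplementedError("no Turing machine can do this")

def contrarian(program):
    # Halts exactly when `program` run on its own description does not halt.
    if halts(program, program):
        while True:  # loop forever
            pass
    return "halted"

# contrarian(contrarian) would halt iff it does not halt -- a contradiction,
# so halts() cannot be a Turing machine. The alien computer escapes the trap
# only because, as noted above, you can't feed it its own "source code".
```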
I’m not sure where you’re going with this. I’m not seeing anything here that would argue that the complexity of the actual Vista implementation would increase ten or a hundred-fold, just some added constant difficulties in the beginning. Hypercomputation devices might mess up the nice and simple theory, but I’m waiting for one to show up in the real world before worrying much about those. I’m also still pretty confident that human cells can’t act as Turing oracles, even if they might squeeze a bit of extra computation juice from weird quantum mechanical tricks.
Mechanisms that showed up so early in evolution that all eukaryotes have them took a lot less evolutionary search than the features of human general intelligence, so I wouldn’t rank them anywhere close to the difficulty of human general intelligence in design discoverability.
Organelles might not be Turing oracles, but they can compute folding problems in constant time or less. And I was trying to point out that you can’t implement Vista and an x86 emulator on any other hardware for less than you can implement Vista directly on x86 hardware.
EDIT: Considering that the evolutionary search took longer to find mitochondria from the beginning of life than it took to find intelligence from mitochondria, I think that mitochondria are harder to make from primordial soup than intelligence is to make from phytoplankton.
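For what it's worth, the rough arithmetic behind that comparison (all dates are my own approximate figures, in billions of years ago, and the error bars are large):

```python
# Ballpark dates in billions of years ago (Gya); estimates vary widely.
origin_of_life = 3.8   # earliest evidence of life
mitochondria = 1.8     # endosymbiosis / first eukaryotes (~1.5-2.2 Gya)
intelligence = 0.0003  # anatomically modern humans, ~300,000 years ago

life_to_mitochondria = origin_of_life - mitochondria  # ~2.0 Gy of search
mitochondria_to_mind = mitochondria - intelligence    # ~1.8 Gy of search
print(life_to_mitochondria > mitochondria_to_mind)    # True
```

On these figures the first interval is indeed longer, though only by a couple of hundred million years, which is well within the uncertainty of both dates.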