It seems like you could justify Occam’s Razor by looking at the past history of discarded explanations. An explanation that is ridiculously complex, yet fits all the observations so far, will probably be broken by the next observation; a simple explanation is less likely to fail in the future. A hypothesis that says “Occam’s Razor will work until October 8th, 2007” falls into the general category of “hypotheses with seemingly random exceptions”, which should have a history of lesser accuracy than hypotheses with justified exceptions or no exceptions. To quote Virtues: “Simplicity is virtuous in belief, design, planning, and justification. When you profess a huge belief with many details, each additional detail is another chance for the belief to be wrong. Each specification adds to your burden; if you can lighten your burden you must do so. There is no straw that lacks the power to break your back. Of artifacts it is said: The most reliable gear is the one that is designed out of the machine. Of plans: A tangled web breaks. A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere.”
“We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn’t let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they always go together in the actual world.”
Look at airplanes: they all have a bunch of common characteristics like an engine, wings, rudders, etc. If you argued that an airplane was not really “identical” to the pile of parts, but that they just “always went together”, people would look at you like you had three heads. Yet, when applied to brains, people think this argument makes sense. A brain is made up of the frontal cortex, visual cortex, auditory cortex, amygdala, pituitary gland, cerebellum, etc.; that’s just what it is.
“I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.
I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”
Think of Kolmogorov complexity: the most parsimonious hypothesis is the one that can generate the data using the fewest bits when fed into a Turing machine.
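A crude way to see the intuition (my own sketch, nothing from the original comment): use program length as a rough stand-in for description length. Kolmogorov complexity proper is uncomputable, so this only captures the flavor of the idea.

```python
# Score each hypothesis by the length of a program that reproduces the data.
# Shorter program = more parsimonious hypothesis.
data = "0101010101010101"

hyp_rule = 'print("01" * 8)'                 # short program: the data has a simple rule
hyp_memorize = 'print("0101010101010101")'   # longer program: just memorizes the data

print(len(hyp_rule), len(hyp_memorize))      # 15 25: prefer the shorter description
```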
“One way is to appeal to Occam’s Razor. Let us prefer the simpler hypothesis that increases to the minimum wage are random. That is bogus.”
Why is it bogus? An ideal stock market, operating over a fixed resource base, must necessarily be random (or at least pseudorandom). If it had any patterns distinguishable by investors, people would exploit those patterns to make money, and in the process eliminate them. The same principle could apply here: the minute a politician discovers a pattern in the economy, he begins exploiting it to get votes, and so erases the pattern by selectively hacking off the parts of it the voters consider bad.
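A toy version of that self-erasing dynamic (my own sketch; the numbers and the exploitation rate are made up for illustration): traders who front-run a predictable price move remove that predictability from future rounds.

```python
import random

pattern = 1.0        # size of the predictable price move (arbitrary units)
exploitation = 0.5   # fraction of the pattern traders capture each round

for t in range(10):
    observed_move = pattern + random.gauss(0, 0.1)  # pattern plus noise
    # Traders bet on the predictable part; whatever they capture is
    # traded away and no longer predictable in future rounds.
    pattern *= 1 - exploitation
    print(f"round {t}: observed move {observed_move:+.3f}, remaining pattern {pattern:.3f}")
```

After a few rounds the predictable component is gone and only noise remains.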
Why would not giving him $5 make it more likely that people would die, as opposed to less likely? The two would seem to cancel out. It’s the same old “what if we are living in a simulation?” argument: it is, at least, possible that me hitting the sequence of letters “QWERTYUIOP” leads to a near-infinity of death and suffering in the “real world”, due to AGI overlords with wacky programming. Yet I do not refrain from hitting those letters, because there’s no entanglement which drives the probabilities in that direction as opposed to some other random direction; my actions do not alter the expected future state of the universe. You could just as easily wind up saving lives as killing people.
“Let the differential be negative. Same problem. If the differential is not zero, the AI will exhibit unreasonable behavior. If the AI literally thinks in Solomonoff induction (as I have described), it won’t want the differential to be zero, it will just compute it.”
How can a computation arrive at a nonzero differential, starting with zero data? If I ask a rational AI to calculate the probability of me typing “QWERTYUIOP” saving 3^^^^3 human lives, it knows literally nothing about the causal interactions between me and those lives, because they are totally unobservable.
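As a toy expected-value check (my own sketch; the probabilities are placeholders, and the utility is a small stand-in since 3^^^^3 cannot be represented directly): with zero entanglement, both actions carry the same probability of catastrophe, so the differential is exactly zero.

```python
# With no observational evidence linking the keystrokes to the outcome,
# the probability of disaster is the same whether you type or not.
p_disaster_if_typed = 1e-20      # assumed prior, identical in both branches
p_disaster_if_not_typed = 1e-20
disaster_utility = -(3 ** 27)    # placeholder for an astronomically bad outcome

eu_typed = p_disaster_if_typed * disaster_utility
eu_not_typed = p_disaster_if_not_typed * disaster_utility
print(eu_typed - eu_not_typed)   # 0.0: no reason to prefer either action
```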
“Odd, I’ve been reading moral paradoxes for many years and my brain never crashed once, nor have I turned evil.”
Even if it hasn’t happened to you, it’s quite common: think about how many people under Stalin had their brains programmed to murder and torture. Looking back and seeing how your brain could have crashed is scary, because it isn’t particularly improbable; it almost happened to me, more than once.
“Would any commenters care to mug Tiiba? I can’t quite bring myself to do it, but it needs doing.”
If you don’t donate $5 to SIAI, some random guy in China will die of a heart attack because we couldn’t build FAI fast enough. Please donate today.
In this sense, God has screwed over each and every one of us: in three billion bases of DNA, there are bound to be alleles which we really don’t like.
Maybe you could exploit this, if the question you’re gathering evidence for is important enough to warrant all that costly searching. Spending hours digging through obscure journals is not something most people do for fun, but if you can come up with a pet theory which needs reinforcing, most people would rather do the evidence-gathering than be forced to give it up.
Does this analysis focus on pure, monotone utility, or does it include the huge ripple effect putting dust specks into so many people’s eyes would have? Are these people with normal lives, or created specifically for this one experience?
Shameless nitpick: There’s nothing wrong with the logic that “radiation causes mutations, so more powerful radiation causes more powerful mutations.” If you expose yourself to a thousand rads, you will get a heck of a lot of mutations. The logic breaks down when you expect these mutations to give you super powers, rather than a big mess. It sounds like you’ve got the superhero logic backwards: people did not look at evolutionary theory, understand it incorrectly, and then hypothesize that superheroes should be an expected outcome. They first made up the superheroes, and then looked for anything which might plausibly explain them.
“Because my human-built computer is inferior in virtually every way to the one evolution produced.”
From LOGI:
“Current computer programs definitely possess these mutually synergetic advantages relative to humans:
Computer programs can perform highly repetitive tasks without boredom.
Computer programs can execute complex extended tasks without making that class of human errors caused by distraction or short-term memory overflow in abstract deliberation.
Computer hardware can perform extended sequences of simple steps at much greater serial speeds than human abstract deliberation or even human 200Hz neurons.
Computer programs are fully configurable by the general intelligences called humans. (Evolution, the designer of humans, cannot invoke general intelligence.)”
“Its stupidity is still smarter than the most brilliant human.”
Taking the earlier example of the eye, we’ve already surpassed it in just about every way. We have cameras which can see in much dimmer light, and cameras which can look directly at the Sun without getting fried. We have cameras that can see in radio and gamma rays and everything in between. We have cameras with higher resolution and better-quality optics. We have cameras that can actually detect the wavelength of every incoming photon, rather than being limited to the three-axis human color system. And so on and so forth.
“If you’ve ever dealt with fitting of really complex data, a random walk is often surprisingly more efficient than any of the refined fitting algorithms.”
See http://sl4.org/wiki/KnowabilityOfFAI. Only in AI would people design algorithms that are literally stupider than a bag of bricks, boost the results back towards maximum entropy, and then argue for the healing power of noise.
“Surely the cumulative power of natural selection is beyond human intelligence?”
Even if it was, why would you want to use it? Evolution has thoroughly screwed over more human beings than every brutal dictator who ever lived, and that’s just humans, never mind the several billion extinct species which litter our planet’s history.
“So the meaningful DNA specifying a human must fit into at most 25 megabytes.”
These are bits of entropy, not bits on a hard drive. It’s mathematically impossible to compress bits of entropy.
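A quick sanity check of that claim (my sketch, using zlib as a stand-in for any compressor): patterned data shrinks, but data that is pure entropy does not; on random input the output is typically slightly larger than the input because of header overhead.

```python
import os
import zlib

random_bytes = os.urandom(1000)   # 1000 bytes of entropy
patterned = b"ACGT" * 250         # 1000 bytes with an obvious pattern

print(len(zlib.compress(random_bytes)))  # ~1000 or slightly more
print(len(zlib.compress(patterned)))     # far smaller
```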
“To sum up: The mathematician’s bits here are very close to bits on a hard drive, because every DNA base that matters has to be supported by “one mutation, one death” to overcome per-base copying errors.”
There are only twenty amino acids plus a stop code for each codon, so the theoretical information bound is 4.4 bits/codon, not 6 bits, even for coding DNA. A common amino acid, such as leucine, often requires only its first two bases to specify; the third base can mutate freely without any phenotypic effect at all.
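The arithmetic, for the record (a sketch; the `factor` line anticipates the redundancy figure discussed a few comments down):

```python
import math

bits_per_codon = math.log2(21)      # ~4.39 bits: one of 20 amino acids plus stop
bits_per_base = bits_per_codon / 3  # ~1.46 bits/base instead of the naive 2.0
factor = 2.0 / bits_per_base        # ~1.37: the redundancy factor

print(f"{bits_per_codon:.2f} bits/codon, {bits_per_base:.2f} bits/base, factor {factor:.2f}")
```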
“Can you provide an argument as to why none of this affects the “speed limit” (not even by a constant factor?)”
For a full explanation, see an evolutionary biology textbook. But basically, the 1 bit/generation bound is information-theoretic; it applies not just to particular species but to any self-reproducing organism, even one based on RNA or silicon. The specifics of how information is utilized, in our case DNA → mRNA → protein, don’t matter.
“But, more than half of mammals (in many, perhaps most, species) die without reproducing. Wouldn’t this result in a higher rate of selection and, therefore, more functional DNA?”
“Yes, many mammals give birth to more than 4 children, but selection also doesn’t perfectly eliminate all but the most fit organisms. The speed limit on evolution is an upper bound, not an average.”
“But mammals have many ways of weeding out harmful variations, from antler fights to spermatozoa competition. And that’s just if they have the four children. The provided 1 bit/generation figure isn’t an upper bound, either.”
Read a biology textbook, darn it. The DNA contents of a sperm have negligible impact on the sperm’s ability to penetrate the egg. As for antler fights, it doesn’t matter how individuals are removed from the gene pool: they can only be removed at a certain rate, or else the species population goes to zero. Note that nonreproduction = death as far as evolution is concerned.
“Life spends a lot of time in non-equilibrium states as well, and those are the states in which evolution can operate most quickly.”
Yes, but they must be balanced by states where it operates more slowly. You can certainly have a situation where 1.5 bits are added in odd years and 0.5 bits in even years, but it’s a wash: you still get 1 bit/year in the long term.
“1. A lot—I mean a lot—of crazy assumptions are made without any hard evidence to back them up. (E.g., the “mammals produce on average ~4 offspring, and when they produce more, it’s compensated for by selection’s inefficiencies.”)”
The bit rate is O(log(offspring)), not O(offspring), so even if you produced 16 offspring, that would only be three bits/generation. How many offspring do you think we have? 8,589,934,592 (= 32 bits/generation)? Selection will have inefficiencies, so these are upper bounds.
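The arithmetic behind those figures (a sketch; it assumes, as the 4-offspring/1-bit figure implies, that two offspring survive to replace the two parents):

```python
import math

# Selection that keeps 2 of N offspring can transmit at most log2(N/2)
# bits per generation.
for n in (4, 16, 8_589_934_592):
    print(f"{n} offspring -> {math.log2(n / 2):.0f} bits/generation")
# 4 -> 1, 16 -> 3, 8,589,934,592 -> 32
```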
“This kind of redundancy, along with many other factors, makes me wonder if we need to change the 1 bit by some scaling factor...”
The factor due to redundant coding sequences is about 1.37 (roughly 1.46 bits/base instead of 2.0). This does increase the amount of storable information, because it makes the degenerative pressure (mutation) work less efficiently. Then again, it’s only a factor of about 37%, so the conclusion is still basically the same.
“Defective sperm—which are more-than-normally likely to be carry screwed-up DNA—is far less likely to reach the egg,”
Then the DNA never gets collected by researchers and included in the 10^-8 mutations/generation/base pair figure. If the actual rate of mutations is higher, but the non-detected mutations are weeded out, you still get exactly the same result as if the rate of mutations were lower with no weeding-out.
“Of course it does! Just not to the maximum-bit-rate argument.”
True.
“No, they mustn’t. They can theoretically be kept in a constant non-equilibrium.”
Yes, they can be; it doesn’t change the bit rate. Non-equilibria where the population is shrinking must be balanced by non-equilibria where the population is growing, or the population will go to zero or infinity.
This level confusion also seems to show up whenever people talk about “free will”: a computer was programmed by us, but its code can still do things that we never designed it for. Evolution sure as heck never designed people to make condoms and birth control pills, so why can’t a computer do things we never designed it to do?
“Only a forward-flowing algorithm will make the entanglements match up.”
To try and rephrase this in simpler language: You do not know the truth. You want to discover the truth. The only thing you get scored on is how close you are to the truth. If you decide “XYZ is a great guy” because XYZ is writing your paycheck, writing down lots of elaborate arguments will not improve your score, because the only thing you get scored on was already written, before you started writing the arguments. If you start writing arguments and then conclude that XYZ is a great guy, you may improve on your score, because you get a chance to change your mind. Changing your mind becomes mandatory as the truth becomes more complex; if you decide on the truth for personal convenience out of 2^100 or 2^200 possible options, you’re never going to hit it if you don’t work to improve your accuracy.