One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they’re being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:
1. reduce their baby-eating activities, and/or
2. send a ship of their own to blow up the star (since the fictional characters are probably barred by the author from defusing the dilemma by blowing up Huygens or sending a probe ship), so that the humans don’t have to sacrifice themselves.
@Wei: p(n) will approach arbitrarily close to 0 as you increase n.
This doesn’t seem right. A sequence that requires knowledge of BB(k) has probability on the order of 2^-k according to our Solomonoff Inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, then the BB(k+1)-based model will on average be about half as probable as the BB(k)-based one.
In other words, P(a particular model of K-complexity k is correct) goes to 0 as k goes to infinity, but the conditional probability, P(a particular model of K-complexity k is correct | a sub-model of that particular model with K-complexity k-1 is correct), does not go to 0 as k goes to infinity.
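To put rough numbers on it (a back-of-the-envelope sketch using the usual 2^(-length) prior; M_k is just my shorthand for a particular k-bit model whose first k-1 bits form the sub-model M_{k-1}):

\[
P(M_k) \;\propto\; 2^{-k} \;\longrightarrow\; 0 \text{ as } k \to \infty,
\qquad
P(M_k \mid M_{k-1}) \;=\; \frac{P(M_k)}{P(M_{k-1})} \;\approx\; \frac{2^{-k}}{2^{-(k-1)}} \;=\; \frac{1}{2},
\]

since M_k entails M_{k-1}. Each extra bit of complexity costs roughly a factor of 2, but the conditional probability stays bounded away from 0.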
If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?
Suppose they run a BB evaluator for all of time. They would, indeed, have no way at any point of being certain that the current champion 100-bit program is the actual champion that produces BB(100). However, if they decide to anthropically reason that “for any time t, I am probably alive after time t, even though I have no direct evidence one way or the other once t becomes too large”, then they will believe (with arbitrarily high probability) that the current champion program is the actual champion program, and an arbitrarily high percentage of them will be correct in their belief.
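For concreteness, here is a minimal sketch of the champion-tracking loop I have in mind (a toy search over 2-state, 2-symbol Turing machines in Python, not anything a galactic civilization would literally run): at every finite step budget there is a current champion, but no finite budget ever certifies it as the true champion, because some still-running machine might yet halt and overtake it.

```python
from itertools import product

def machines(n_states):
    """Yield every n-state, 2-symbol Turing machine transition table.
    An entry maps (state, symbol) -> (write, move, next_state);
    next_state == n_states means 'halt'."""
    entries = [(write, move, nxt)
               for write in (0, 1)
               for move in (-1, 1)
               for nxt in range(n_states + 1)]
    keys = [(q, b) for q in range(n_states) for b in (0, 1)]
    for table in product(entries, repeat=len(keys)):
        yield dict(zip(keys, table))

def run(machine, n_states, max_steps):
    """Simulate a machine on a blank tape; return the number of steps
    until it halts, or None if it is still running at max_steps."""
    tape, head, state = {}, 0, 0
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == n_states:   # entered the halt state
            return step
    return None                 # no verdict yet -- maybe it never halts

n = 2                           # 2-state machines; the true champion runs for 6 steps
champion = 0
for budget in (10, 100, 1000):  # ever-growing step budgets (dovetailing)
    for m in machines(n):
        steps = run(m, n, budget)
        if steps is not None and steps > champion:
            champion = steps
    print(f"after budget {budget}: current champion runs {champion} steps")
```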
One difference between optimization power and the folk notion of “intelligence”: Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power is not based on social status nor raw might, but rather on the actions that the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don’t occur to Einstein. However, we wouldn’t label the Village Idiot as more intelligent than Einstein.
Is the Principle of Least Action infinitely “intelligent” by your definition? The PLA consistently picks a physical solution to the n-body problem that surprises me in the same way Kasparov’s brilliant moves surprise me: I can’t come up with the exact path the n objects will take, but after I see the path that the PLA chose, I find (for each object) the PLA’s path has a smaller action integral than the best path I could have come up with.
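For reference, the quantity being compared (the standard classical action; nothing here is specific to the n-body problem):

\[
S[q] \;=\; \int_{t_0}^{t_1} L\bigl(q(t), \dot q(t), t\bigr)\, dt ,
\]

and the trajectory nature actually follows is the one that makes S stationary (a minimum, at least over sufficiently short segments), which is the sense in which the PLA “outplays” any path I could have proposed.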
An AI whose only goal is to make sure such-and-such coin will not, the next time it’s flipped, turn up heads, can apply only (slightly less than) 1 bit of optimization pressure by your definition, even if it vaporizes the coin and then builds a Dyson sphere to provide infrastructure and resources for its ongoing efforts to probe the Universe to ensure that it wasn’t tricked and that the coin actually was vaporized as it appeared to be.
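On my reading of the optimization-power definition (negative log of the fraction of outcomes at least as preferred as the one actually achieved), the arithmetic is just:

\[
\text{OP} \;=\; -\log_2 \frac{\bigl|\{\text{outcomes at least as preferred}\}\bigr|}{\bigl|\{\text{possible outcomes}\}\bigr|}
\;=\; -\log_2 \frac{1}{2} \;=\; 1 \text{ bit},
\]

minus a sliver for whatever residual probability of “heads” the AI can never quite eliminate, no matter how much cosmic engineering goes into driving that residual down.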
Count me in.
Chip, I don’t know what you mean by “The AI Institute”, but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.
The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you’re correct, compared with meta-reasoning position B, is often a difficult one.
When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic “toxic”. When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say “there is no safe harbor for a rationalist” or “such a person is biased, stupid, and beyond help; he would have gotten to the wrong conclusion anyway, no matter what his meta-reasoning position was. The idiot reasoner, rather than my beautiful heuristic, has to be discarded.” In the absence of hard data, consensus seems difficult; the problem is exacerbated when a novel meta-reasoning argument is brought up in the middle of a debate on a separate disagreement, in which case the opposing sides have even more temptation to “dig in” to separate meta-reasoning positions.
CERN on its LHC:
Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC… CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions
Things that CERN is doing right:
1. The safety reviews were done by people who do not work at the LHC.
2. There were multiple reviews by independent teams.
3. There is a group continuing to monitor the situation.
Wilczek was asked to serve on the committee “to pay the wages of his sin, since he’s the one that started all this with his letter.”
Moral: if you’re a practicing scientist, don’t admit the possibility of risk, or you will be punished. (No, this isn’t something I’ve drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)
@Vladimir: We can’t bother to investigate every crazy doomsday scenario suggested
This is a strawman; nobody is suggesting investigating “every crazy doomsday scenario suggested”. A strangelet catastrophe is qualitatively possible according to accepted physical theories, and was proposed by a practicing physicist; it’s only after doing the quantitative calculations that it can be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.
if you manage to get yourself stuck in an advanced rut, dutifully playing Devil’s Advocate won’t get you out of it.
It’s not a binary either/or proposition, but a spectrum; you can be in a sufficiently shallow rut that a mechanical rule of “when reasoning, search for evidence against the proposition you’re currently leaning towards” might rescue you in a situation where you would otherwise fail to come to the correct conclusion. That said, yes, it would indeed be preferable to conduct the search because you actually have “true doubt” and lack overconfidence, rather than by rote, and rather than for the odd reasons that Michael Rose gives.
Dad was an avid skeptic and Martin Gardner / James Randi fan, as well as being an Orthodox Jew. Let that be a lesson on the anti-healing power of compartmentalization
Why do you think that, if he had not compartmentalized, he would have rejected Orthodox Judaism, rather than rejecting skepticism?
“Oh, look, Eliezer is overconfident because he believes in many-worlds.”
I can agree that this is absolutely nonsensical reasoning. The correct reason to believe Eliezer is overconfident is that he’s a human being, and the prior probability that any given human is overconfident is extremely high.
One might propose heuristics to determine whether person X is more or less overconfident, but “X disagrees strongly with me personally on this controversial issue, therefore he is overconfident” (or stupid or ignorant) is the exact type of flawed reasoning that comes from self-serving biases.
Some physicists speak of “elegance” rather than “simplicity”. This seems to me a bad idea; your judgments of elegance are going to be marred by evolved aesthetic criteria that exist only in your head, rather than in the exterior world, and should only be trusted inasmuch as they point towards smaller, rather than larger, Kolmogorov complexity.
In theory A, the ratio of tiny dimension #1 to tiny dimension #2 is finely-tuned to support life.
In theory B, the ratio of the mass of the electron to the mass of the neutrino is finely-tuned to support life.
An “elegance” advocate might favor A over B, whereas a “simplicity” advocate might be neutral between them.
can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?
Po’mi runs a trillion experiments, each of which has a one-trillionth 4D-thickness of saying B but is otherwise A. In his “mainline probability”, he sees all trillion experiments coming up A. (If he ran a sextillion experiments he’d see about 1 come up B.)
Presumably an external four-dimensional observer sees it differently: he sees only one-trillionth of Po’mi coming up all-A, and the rest of Po’mi saw about 1 B and are huddled in a corner crying that the universe has no order. (Maybe the 4D observer would be unable to see Po’mi at all, because Po’mi and all the other inhabitants of the lawful “mainline probability” we’re talking about have almost infinitesimal thickness from the 4D observer’s point of view.)
If I were Po’mi, I would start looking for a fifth dimension.
It seems worthwhile to also keep in mind other quantum mechanical degrees of freedom, such as spin
Only if the spin’s basis turns out to be relevant in the final ToEILEL (Theory of Everything Including Laboratory Experimental Results) that gives a mechanical algorithm for what probabilities I anticipate.
In contrast, if someone had a demonstrably-correct theory that could tell you the macroscopic position of everything I see, but didn’t tell you the spin or (directly) the spatial or angular momentum, then the QM Measurement Problem would still be marked “completely solved”. In such a position-basis theory, the answer to any question about spin would be “Mu; it only matters if it affects the position of my macroscopic readout.”
Robin: is there a paper somewhere that elaborates this argument from mixed-state ambiguity?
Scott should add his own recommendations, but I would say here is a good starting introduction.
To my mind, the fact that two different situations of uncertainty over true states lead to the same physical predictions isn’t obviously a reason to reject that type of view regarding what is real.
The anti-MWI position here is that MWI produces different predictions depending on what basis is arbitrarily picked by the predictor, and that the various MWI efforts to “patch” this problem without postulating a new law of physics are like squaring the circle. I think the anti-MWIers’ math is correct, but I’m not expert enough to be 100% sure; what really makes me think MWI is wrong is the inability of the MWIers, after many decades, to produce an algorithm that you can “turn the crank” on to get the correct probabilities that we see in experiments. They tend to patch this “basis problem” by producing a new framework which itself contains an arbitrary choice that’s just as bad as the arbitrary choice of basis.
More succinctly, in vanilla MWI you have to pick the correct basis to get the correct experimental results, and you have to peek at the results to get the correct basis.
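To spell out what “pick the correct basis” means here (just the textbook Born rule; nothing MWI-specific is being assumed):

\[
|\psi\rangle \;=\; \sum_i c_i\,|i\rangle ,
\qquad
P(i) \;=\; |c_i|^2 \;=\; |\langle i|\psi\rangle|^2 ,
\]

so the decomposition into branches, and therefore the set of “worlds” you read probabilities off of, depends on which orthonormal basis \{|i\rangle\} the predictor chooses.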
In many of your prior posts where you bring up MWI, your interpretation doesn’t fundamentally matter to the overall point you’re trying to make in that post; that is, your overall conclusion for that post holds or fails regardless of which interpretation is correct, possibly to a greater degree than you tend to realize.
For example: “We used a true randomness source—a quantum device.” The philosophers’ point could equally have been made by choosing the first 2^N digits of pi and finding they correspond by chance to someone’s GLUT.
the colony is in the future light cone of your current self, but no future version of you is in its future light cone.
Right, and if anyone’s still confused how this is possible: wikipedia and a longer explanation
* That-which-we-name “consciousness” happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
not yet understood? Is your position that there’s a mathematical or physical discovery waiting out there that will cause you, me, Chalmers, and everyone else to slap our heads and say, “of course, that’s what the answer is! We should have realized it all along!”
Question for all: How do you apply Occam’s Razor to cases where there are two competing hypotheses:
1. A and B are independently true.
2. A is true, and implies B, but in some mysterious way we haven’t yet determined. (For example, “heat is caused by molecular motion” or “quarks are caused by gravitation”, to pick two inferences at opposite ends of the plausibility spectrum.)
I don’t know what the best answer is. Maybe the practical answer is a variant of Solomonoff induction: somehow compare “P(A) P(B)” with “P(A) P(B follows logically from A, and we were too dumb to realize that)”, where the P’s are some type of Solomonoff-ish a-priori “2^(-shortest program length)” probabilities. But the best answer certainly isn’t, “A is simpler than A + B, so we know hypothesis 2 is correct, without even having to glance at the likelihood that B follows from A.” Otherwise, you would have to conclude that, logically, quarks are caused by gravitation, in some currently-mysterious way that future mathematicians will be certain to discover.
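A rough way to write down that comparison (my own shorthand, not standard notation; K(·) is Kolmogorov complexity):

\[
P(\text{hypothesis 1}) \;\sim\; 2^{-K(A)}\,2^{-K(B)} ,
\qquad
P(\text{hypothesis 2}) \;\sim\; 2^{-K(A)} \cdot P(\text{A implies B and we missed the derivation}) ,
\]

so hypothesis 2 only wins when that last factor exceeds 2^{-K(B)}, which you can’t judge without actually weighing how plausible the hidden derivation is.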
For the record, my belief is that many of the debaters have beliefs that are isomorphic to their opponents’ beliefs. When I hear things like, “You said this is a physical law without material consequences, but I define physical laws as things that have material consequences, so you’re wrong, QED!” then that’s a sign that we’re in “does a tree falling in the forest make a noise” territory. Does a consciousness mapping rule “actually exist”? Does the real world “actually exist”? Does pi “actually exist”? Why should I care?
In the end, I care about actions and outcomes, and the algorithms that produce those actions. I don’t care whether you label consciousness as “part of reality” (because it’s something you observe), or “part of your utility function” (because it’s not derivable by an intelligence-in-general), or “part of this complete nutritious breakfast” (because, technically, anything that’s not poisonous can be combined with separate unrelated nutritious items to form a complete nutritious breakfast).
No, this hasn’t been “argued out”, and even if it had been in the past, the “single best answer” would differ from person to person and from year to year. I would suggest starting a thread on SL4 or on SIAI’s Singularity Discussion list.