I think you might have failed to realize what will determine which cult people will choose. When the Media cult makes their presentation, they’ll be reduced to showing a movie (or equivalent, maybe a lo-fi virtual reality) and saying “look at this fancy media we can create, wouldn’t you like to be able to do that?” But then the Physics and Mathematics cult (I really do fail to see how they could be separated successfully) presents a light bulb, a tesla coil, and possibly a miniature sun and gets to say “this isn’t even the half of what we could do if we wanted to. If you want to know how to do it, you’ll have to deal with us.”
jschulter
The sacrifice put me off a bit too. Maybe a barbecue instead? The ox is still dead and exposed to fire, but we don't waste the utility of tasty, tasty animal flesh.
And it definitely would be nice if all prominent scientists/mathematicians got the same responses we see for celebrities, instead of just the select few who become household names.
You’re using different definitions of doubt here, and that is the issue. EY uses “doubt” in the sense of a suspicion that one currently lacks the knowledge needed to evaluate a specific claim, while you are using it as the opposite of “certainty” (though not consistently, somehow). In saying that doubt should not be lived with, he was referencing his previously posted explanation of how these specific suspicions are by nature meant to annihilate themselves: either you find the evidence you thought was missing, or you conclude after some searching that finding it would be a waste of energy and make your judgment based on the evidence you already have. Either way, the doubt is gone.
If you still harbor doubts, in his sense, that Christianity may be true, you should search for that missing evidence immediately, or conclude that the effort to find it isn’t worth it and assign the claim the ridiculously small probability it deserves. Notice that I did not say you should claim with certainty that Christianity is false; predicting anything with true 100% certainty is, for a Bayesian, truly stupid, because on the absurdly small chance that you’re wrong, you lose the game, having just conceded that you assigned your life a likelihood of 0%.
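The cost of literal 100% certainty can be made concrete with Bayes’ rule: a prior of exactly 0 (or 1) can never be moved by any amount of evidence, which is why the ridiculously small probability still has to be nonzero. A minimal sketch (the specific likelihood numbers are illustrative, not from the comment):

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes' rule, for a binary hypothesis H."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# A tiny-but-nonzero prior can still be moved by strong evidence:
posterior = bayes_update(1e-9, 0.99, 0.01)
print(posterior)  # roughly 1e-7: a ~100x update

# But a prior of exactly 0 stays at 0, no matter what the evidence says:
print(bayes_update(0.0, 0.99, 0.01))  # 0.0, forever
```

The asymmetry is the point: assigning 10^-9 keeps you in the game if startling evidence arrives; assigning 0 does not.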
It was intended to be clear that all operations are performed and propagated throughout the entire system, I think.
Why not accept that the modulus-squared law is real and fundamental, too?
Reading through this, and Hanson’s quick overview page of mangled worlds, I was wondering the same thing myself. For some reason though, seeing you ask the question I hadn’t quite verbalized put the answer right on the tip of my tongue: for the same reason Einstein was so sure of General Relativity. The modulus squared law conflicts with a regularity in the form that the fundamental laws seem to take, specifically their linear evolution, and Eliezer puts stock in that regularity. In fact, he does so sufficiently to let him elevate any theory which accounts for the data while holding the regularity far above those that don’t, similar to how Einstein picked GR out of hypothesis space.
The benefit of the mangled worlds interpretation is that while the universe-amplitude-blobs do have measure (a non-linear element), it is irrelevant to what actually happens. It really only comes into play when trying to understand the interaction between the universe-amplitude-blobs, but it doesn’t play a part in actually describing that interaction. For example, the possible mangling of a world of small measure would in principle be described by normal linear quantum evolution, but since those calculations are not very tractable, we instead use the measure to determine whether the world would be mangled. Thus we are using the measure as a mathematical shortcut to determine generalized behavior, but all evolution is linear, and observations can be explained without the extra hypothesis that “measure is probability”.
Okay, given one sub-decoherence event per Planck time, somewhere in the universe, propagating throughout it at some rate less than or equal to the speed of light, we either have constant (one per Planck time or less) full decoherence events after some fixed delay, as each finishes propagating sufficiently, or we have no full decoherence events at all, as the sub-decoherences fail to decohere the whole sufficiently.
The latter seems more realistic, especially given the light-speed limit, as the expansion of space can completely causally isolate two parts of the universe, preventing the propagation of the decoherence.
So, with this understood, we’re left to determine how large a portion of the universe has to be decohered to qualify as a “decoherence event” in terms of the many-worlds theories which rely on the term. I honestly doubt that, once a suitable determination has been made, the events will be infrequent in almost any sense of the word. It really does seem, given the massive quantity of interactions in our universe (even just the causally linked subspace of it we inhabit), that the frequency of decoherence events should be ridiculously high. And given some basic uniformity assumptions, the rate should be quite regular too.
Splits happen forward in time for the same reason a glass which has fallen and smashed on the floor doesn’t spring back up and spontaneously reassemble itself. And these “universes” are really just isolated amplitude blobs in the total, timeless wavefunction: they aren’t created; rather, any amplitude blob roughly factorizing as a “world” will eventually decohere into several smaller amplitude blobs also factorizing as “worlds”, which, as the wavefunction further evolves with time, do not interact (i.e. they interact about as often as that glass reassembles).
Hello! I’m currently doing a depth-first read through the sequences, and I’ve been enjoying all of it so far. I’m another one drawn in by HP:MOR, but I found even more here than I could have hoped for.
> if someone snapped their fingers and instantly moved all objects to their 100 years hence positions, it would not be the future
I beg to differ. Everybody would remember the motion having taken place; the history of that 100 years would be recorded. There is no way in principle to experimentally distinguish this occurrence from the normal progression of time by 100 years, so I claim they are the same.
Even including Harry Potter and his sudden ability to move particular objects discontinuously 100 years into the future by snapping his fingers, my claim stands. The point is regarding the instantaneous movement of every part of the universe to its future position, in which case inhabitants of the universe will see the signal (fingers snapping) and see nothing out of the ordinary happen. These observers will even continue to observe what happens throughout the next 100 years, or at least it will be indicated as such, with complete consistency, in any and all records present at the end of those 100 years, including the memories of every living being. The only difference when including Harry in the picture is that our fundamental description of the physical laws changes; when the whole universe is moved, not a single one of their consequences is distinguishable from time progressing normally, thus they are still equivalent statements. By introducing unphysical Harry, we develop a way to distinguish the two explanations, but this is irrelevant to our reality.
I approached it similarly (as part of a more general attempt, since this is a minor use of the word), positing that “I could lift that box over there” was a comparison of the physical prowess necessary to complete the task with the amount I currently possess. In Eliezer’s formulation, this is equivalent to determining reachability with constraints, but it’s more of an example of the general procedure than an explanation of it, unfortunately. I’m glad to see that someone else was thinking similarly though.
They surprised me too. (I actually felt the urge to use an unnecessary exclamation point there, the priming’s made me so enthusiastic...)
And I think that the status gained from the fact that you noticed being primed probably outweighed any lost due to us being told it happened. Though now that we’re noticing it, we need to decide which frequency of upvoting we should be using so we can avoid the effect.
This is even easier to game: assuming the school has any merit, any individual you ask has a good incentive to simply say “50%”, guaranteeing a perfect score. The very first time you used the test it might be okay, but only if nobody knew that the school’s reputation was at stake.
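The “50%” exploit works because a 50% prediction on X is simultaneously a 50% prediction on not-X, so the 50% calibration bucket is guaranteed to come out exactly half right no matter what actually happens. A toy simulation of this (the 0.8 base rate is an arbitrary choice for illustration):

```python
import random

random.seed(0)
# Events with some arbitrary true rate; the students don't need to know it.
outcomes = [random.random() < 0.8 for _ in range(1000)]

# Each student predicts "X with 50%", which is equivalent to also
# predicting "not-X with 50%"; both land in the same calibration bucket.
bucket = []
for happened in outcomes:
    bucket.append(happened)        # "X will happen" (50%)
    bucket.append(not happened)    # "X will not happen" (50%)

frequency = sum(bucket) / len(bucket)
print(frequency)  # exactly 0.5, regardless of the true outcome rate
```

Perfect calibration at zero effort, which is why the test only discriminates the first time, before anyone knows the scoring rule is being used.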
From what I saw, it seems they figured out that that was their best bet (somehow) fairly quickly. Once Watson lost control, the other two lost very little time in going for the big points.
Posting this before reading the comments, to give a summary/response based on my own internal experiences. Quick note: I’m extremely good at internalizing/manipulating information, and roughly proficient at “reacting”. It might also be worth noting sex (I’m male), since I could definitely see these kinds of thought processes being different on the two standard systems.
This analysis is definitely subject to the “generalizing from one example” problem, considering some large differences between the thought mechanisms you mention and my own. One telling example is the programming/reacting analogy: when programming (and writing, after the first stage of composition) I have this tendency to “hold the whole program in my head”, as I’ve heard it called, and in doing so I don’t use an internal monologue at all. In fact, when I’m solving most problems (math, spatial manipulations, logic puzzles) in my mind, my internal monologue is silent; instead I’m working silently in my headspace: my reasoning methods feel spatial, rather than verbal.

When working in a group (cooking is the closest example of “reacting” that I can relate to in terms of necessitated efficiency/urgency), the monologue is still silent and I’m solving problems through pseudospatial manipulation; the significantly smaller amount of problem solving necessary does tend to let the problem/solution just sit static in my head most of the time while I engage in physical tasks, rather than being actively solved. This, for me, leads to a sense that very little focus is used while reacting; some tasks (mincing garlic, dicing onions (crying makes it harder), &c.) may however require close attention, if physically complicated, and this might be the other kind of focus you mention. Overall, I can add another confirming data point to the “silencing your internal monologue is helpful/necessary for reacting properly” hypothesis.
I also have some possible suggestions, though mileage will likely vary:
- Silencing one’s internal monologue can be aided by meditation (in fact, they are practically equivalent), so the initial meditation exercises, meant to “clear one’s mind”, may prove useful in getting used to doing this, and possibly make it easier.
- There’s no need to practice silencing your internal monologue only while “reacting”: try doing it during everyday tasks where intense thought isn’t necessary (e.g. brushing your teeth), and it might become that much easier.
- If your brain works like mine, you may be able to delegate certain tasks to parts of your mind not directly linked to what you consider “you” (one notably common example is suddenly realizing the solution to a problem you were working on a while ago but not actively thinking about), and if you can get good at this, it works better (for me) than memorizing responses: just let yourself respond on automatic.
> The odds of winning the lottery are ordinarily a billion to one. But now the branch in which you win has your “measure”, your “amount of experience”, temporarily multiplied by a trillion. So with the brief expenditure of a little extra computing power, you can subjectively win the lottery—be reasonably sure that when next you open your eyes, you will see a computer screen flashing “You won!”
As I see it, the odds of being any one of those trillion “me”s in 5 seconds are 10^21 to one (one trillion times one billion). Since there are a trillion ways for me to be one of those, the total probability of experiencing winning is still a billion to one. To be more formal:
P(“experiencing winning”) = sum_n P(“winning” | “being me #n”) P(“being me #n”) = sum_n P(“winning” and “being me #n”) = 10^12 × 10^-21 = 10^-9,

since “being me #n” partitions the space.
Overall this means I:

- anticipate not winning at 5 sec.
- anticipate not winning at 15 sec.
- don’t have super-psychic-anthropic powers
- don’t see why anyone has an issue with this
Checking consistency just in case:
p(“experience win after 15s”) = p(“experience win after 15s” | “experience win after 5s”) p(“experience win after 5s”) + p(“experience win after 15s” | “experience not-win after 5s”) p(“experience not-win after 5s”)

p(“experience win after 15s”) = (~1)(10^-9) + (~0)(1 − 10^-9) ≈ 10^-9 ≈ p(“experience win after 5s”)
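The partition argument and the consistency check can both be run numerically. A minimal Python sketch, using the same numbers as the comment (a billion-to-one lottery, a trillion temporary copies):

```python
n_copies = 10**12        # one trillion "me"s in the winning branch
p_win_branch = 10**-9    # a billion-to-one lottery

# P("being me #n") for any single n: the winning branch's probability
# is split evenly across its trillion copies, giving 10^-21.
p_being_me_n = p_win_branch / n_copies

# Summing over the partition: P("experiencing winning") at 5 seconds.
p_experience_win_5s = n_copies * p_being_me_n
print(p_experience_win_5s)  # back to 10^-9; the copies change nothing

# Consistency at 15 seconds: winners stay winners (~1), losers can't win (~0).
p_experience_win_15s = 1.0 * p_experience_win_5s + 0.0 * (1 - p_experience_win_5s)
assert p_experience_win_15s == p_experience_win_5s
```

The multiplication by a trillion and the division by a trillion cancel exactly, which is the whole point: duplicating yourself inside a branch doesn’t move probability between branches.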
Additionally, I should note that the total amount of “people who are me who experience winning” will be 1 trillion at 5 sec. and exactly 1 at 15 sec. This is because those trillion “me”s must all have identical experiences for merging to work, meaning the merged copy only has one set of consistent memories of having won the lottery. I don’t see this as a problem, honestly.
I saw it more as opposing restrictions on one’s ability to hit oneself in the head with a baseball bat every week. I’m not saying anyone should do it, but if they really want to I don’t feel I have the right to stop them.
I think one of the things that makes learning hard, given this interpretation, would be difficulty in actually updating the model. It may be that large amounts of surprise, being related to large differences in the model produced by updating, make it hard to update, and this is certainly one level of hardness felt when learning. But additionally, there is also likely to be some variance in the general ability to update certain models: some people with limited kinesthetic senses would not only be operating with less data to update on, but may also have a more rigid model.
Model rigidity seems to me like a good candidate for the variance between students’ subjective experience of the hardness of learning certain things. It also seems like it would be strongly correlated with the appropriate types of intelligence: kinesthetic intelligence relates to a more easily changed model of physical syntax, procedural intelligence relates to a more easily changed model of procedural syntax, &c.
This also seems to correspond well to my own personal experience of what is hard and easy to learn: my understanding of how the different elements of a problem can interact changes with speed proportional to how easy the subject seems. E.g., I can change my understanding of how abstract quantities/qualities interact fairly quickly, making math easy to learn, while my understanding of systems of social interaction changes very slowly (due in part to difficulty collecting evidence), and thus I was socially awkward for a long time, and it took a lot of effort to overcome.
Well, I have encountered people being (or claiming to be) offended by what by all rights would be an assault on someone else’s status. This could be a form of empathy, or in many cases an attempt to gain status themselves through a show of sympathy. This does seem like a potential occurrence of legitimate offense not caused by a perceived direct or indirect threat to the status of the person being offended, iff the offense is genuine: something which I cannot personally attest to, never having experienced this myself.
The link to that thesis doesn’t seem to work for me.
A quick Google search turned up one that does.