jchan
Reminder to complete this survey by the end of today.
Reminder: The Austin Far-comers Meetup is tomorrow! Here’s the announcement on our mailing list: https://groups.google.com/forum/#!topic/austin-less-wrong/fG6anRooLY0
What chat/meeting tool will be used for this event?
For an outdoor ceremony, you’ll want to avoid open flames because (a) the wind might blow them out, and (b) they’ll attract bugs that die in the flame. Instead you can use lanterns like these. (Peel off the branding sticker for a cleaner look.) The aesthetic ends up being more rugged/industrial than fancy/refined.
Practical considerations when using these lanterns:
- The glass window and the upper surface of the lantern get extremely hot (enough to boil water, at least). Use an oven mitt to manipulate these parts.
- For this reason, opening and closing the window is cumbersome. To light the lantern or transfer the flame, use a thin bamboo skewer that you can insert through the gap in the top of the lantern. When you’re done with the skewer, douse it in a jar of sand (not water, so you can reuse it).
  - This method also loses the “Candle #1 [being] the one lighting Candle #2, rather than vice-versa” distinction.
  - What does the skewer itself symbolize? Perhaps “the generations who died carrying #1 forward to #2 without ever seeing the result” (I dunno, I just made that up now; maybe it doesn’t need to symbolize anything.)
- The flame can be extinguished by pushing down the top of the lantern (using an oven mitt) into its “collapsed” position, and then placing an inverted glass bowl on top of it for 3-5 seconds to choke off its oxygen supply. (Glass, rather than ceramic or metal, so that you can see when the flame has gone out.) Then un-collapse the lantern, again using the oven mitt. (See the video on the Amazon page for a demo of collapsing/uncollapsing.)
  - Or, you can blow sharply through the top of the lantern, but this is difficult if you’re wearing a mask.
- If you’ve opened the window in order to pour wax from the candle, collapsing+uncollapsing is the easiest way to re-close the window.
I’d suggest that even losing a counterfactual $100 donation to charity would feel more significant than the frontpage going down for a day.
This suggests an interesting idea: A charity drive for the week leading up to Petrov Day, on condition that the funds will be publicly wasted if anyone pushes the button (e.g. by sending bitcoin to a dead-end address, or donating to two opposing politicians’ campaigns).
I’m trying to wrap my head around this. Would the following be an accurate restatement of the argument?
1. Start with the Dr. Evil thought experiment, which shows that it’s possible to be coerced into doing something by an agent who has no physical access to you, other than communication.
2. We can extend this to the case where the agents are in two separate universes, if we suppose that (a) the communication can be replaced with an acausal negotiation, with each agent deducing the existence and motives of the other; and that (b) the Earthlings (the ones coercing Dr. Evil) care about what goes on in Dr. Evil’s universe.
    - Argument for (a): With sufficient computing power, one can run simulations of another universe to figure out what agents live within that universe.
    - Argument for (b): For example, the Earthlings might want Dr. Evil to create embodied replicas of them in his own universe, thus increasing the measure of their own consciousness. This is not different in kind from you wanting to increase the probability of your own survival—in both cases, the goal is to increase the measure of worlds in which you live.
3. To promote their goal, when the Earthlings run their simulation of Dr. Evil, they will intervene in the simulation to punish/reward the simulated Dr. Evil depending on whether he does what they (the Earthlings) want.
4. For his own part, Dr. Evil, if he is using the Solomonoff prior to predict what happens next in his universe, must give some probability to the hypothesis that his being in such a simulation is in fact what explains all of his experiences up to that point (rather than his being a ground-level being). And if that hypothesis is true, then Dr. Evil will expect to be rewarded/punished based on whether he carries out the wishes of the Earthlings. So, he will modify his actions accordingly.
5. The probability of the simulation hypothesis may be non-negligible, because the Solomonoff prior considers only the complexity of the hypothesis and not that of the computation unfolding from it. In fact, the hypothesis “There is a universe with laws A+B+C, which produces Earthlings who run a simulation with laws X+Y+Z which produces Dr. Evil, but then intervene in the simulation as described in #3” may actually be simpler (and thus more probable) than “There is a universe with laws X+Y+Z which produces Dr. Evil, and those laws hold forever”.
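The weighting this argument appeals to can be written out explicitly (this is the standard formulation of the Solomonoff prior, not something from the original comment):

```latex
P(H) \;\propto\; 2^{-K(H)}
```

where $K(H)$ is the length in bits of the shortest program that generates the observations predicted by hypothesis $H$. Runtime never appears in the exponent, so a hypothesis that implies an astronomically expensive computation (a parent universe simulating ours, with interventions) pays only for the length of its description.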
You may be right… I just need a rough headcount now, so if you want to take time to ponder the team name feel free to leave it blank now and then submit the form again later with your suggestion. (Edited the form to say so.)
Same here.
Good to know that this was useful. I hadn’t thought of this meetup as “journalism,” but I suppose it was in a sense.
I think the “normal items that helped” category is especially important, because it’s costly in terms of money, time, and space to get prepper gear specifically for the whole long tail of possible disasters. If resources are limited, it’s best to focus on buying things that are useful in everyday life and are also the general kind of thing that’s useful in disaster scenarios, even if you can’t anticipate exactly how.
Here’s the way I understand it: A low-entropy state takes fewer bits to describe, and a high-entropy state takes more. Therefore, a high-entropy state can contain a description of a low-entropy state, but not vice-versa. This means that memories of the state of the universe can only point in the direction of decreasing entropy, i.e. into the past.
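As a loose illustration of the bits-to-describe point, here is a sketch using zlib compressed size as a crude stand-in for description length (compression is only a rough proxy for Kolmogorov complexity, and the data here is made up for the demo):

```python
import random
import zlib

# Low-entropy (highly ordered) state: a short description suffices.
low_entropy = b"0" * 10_000

# High-entropy (pseudorandom) state: nearly incompressible.
rng = random.Random(0)
high_entropy = bytes(rng.getrandbits(8) for _ in range(10_000))

print(len(zlib.compress(low_entropy)))   # small (tens of bytes)
print(len(zlib.compress(high_entropy)))  # close to the full 10,000 bytes
```

The ordered state compresses to a tiny description while the random one barely compresses at all, mirroring the asymmetry above: the long description can embed the short one, but not vice-versa.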
Maybe we are anthropically more likely to find ourselves in places with low-Kolmogorov-complexity descriptions. (“All possible bitstrings, in order” is not a good law of physics just because it contains us somewhere.)
Another way of thinking about this, which amounts to the same thing: Holding the laws of physics constant, the Solomonoff prior will assign much more probability to a universe that evolves from a minimal-entropy initial state, than to one that starts off in thermal equilibrium. In other words:
Description 1: The laws of physics + The Big Bang
Description 2: The laws of physics + some arbitrary configuration of particles
Description 1 is much shorter than Description 2, because the Big Bang is much simpler to describe than some arbitrary configuration of particles. Even after the heat-death of the universe, it’s still simpler to describe it as “the Big Bang, 10^zillion years on” rather than by exhaustive enumeration of all the particles.
This dispenses with the “paradox” of Boltzmann Brains, and Roger Penrose’s puzzle about why the Big Bang had such low entropy despite its overwhelming improbability.
I’ve played a variant like this before, except that only one clue would be active at once—if the clue is neither defeated nor contacted within some amount of time, then we’d move on to another clue, but the first clue can be re-asked later. The amount of state seemed manageable for roadtrips/hikes/etc.
Thinking more about this:
Is it possible to get good at this game?
Does this game teach any useful skills?
I don’t think there’s a generalized skill of being good at this game as such, but you can get good at it when playing with a particular group, as you become more familiar with their thought processes. Playing the game might not develop any individual’s skills, but it can help the group as a whole develop camaraderie by encouraging people to make mental models of each other.
To make it slightly more concrete, we could say: one copy is put in a red room, and the other in a green room; but at first the lights are off, so both rooms are pitch black. I wake up in the darkness and ask myself: when I turn on the light, will I see red or green?
There’s something odd about this question. “Standard LessWrong Reductionism” must regard it as meaningless, because otherwise it would be a question about the scenario that remains unanswered even after all physical facts about it are known, thus refuting reductionism. But from the perspective of the test subject, it certainly seems like a real question.
Can we bite this bullet? I think so. The key is the word “I”—when the question is asked, the asker doesn’t know which physical entity “I” refers to, so it’s unsurprising that the question seems open even though all the physical facts are known. By analogy, if you were given detailed physical data of the two moons of Mars, and then you were asked “Which one is Phobos and which one is Deimos?”, you might not know the answer, but not because there’s some mysterious extra-physical fact about them.
So far so good, but now we face an even tougher bullet: If we accept quantum many-worlds and/or modal realism (as many LWers do), then we must accept that all probability questions are of this same kind, because there are versions of me elsewhere in the multiverse that experience all possible outcomes.
Unless we want to throw out the notion of probabilities altogether, we’ll need some way of understanding self-location problems besides dismissing them as meaningless. But I think the key is in recognizing that probability is ultimately in the map, not the territory, however real it may seem to us—i.e. it is a tool for a rational agent to achieve its goals, and nothing more.
This is interesting, because it seems that you’ve proved the validity of the “Strong Adversarial Argument”, at least in a situation where we can say:
This event is incompatible with XYZ, since Y should have been called.
In other words, we can use the Adversarial Argument (in a normal Bayesian way, not as an acausal negotiation tactic) when we’re in a setting where the rule against hearsay is enforced. But what reason could we have had for adopting that rule in the first place? It could not have been because of the reasoning you’ve laid out here, which presupposes that the rule is already in force! The rule is epistemically self-fulfilling, but its initial justification would have seemed epistemically “irrational”.
So, why do we apply it in a courtroom setting but not in ordinary conversation? In short, because the stakes are higher and there’s a strong positive incentive to deceive.
You mention “Infra-Bayesianism” in that Twitter thread—do you think that’s related to what I’m talking about here?
If the cryptography example is too distracting, we could instead imagine a non-cryptographic means to the same end, e.g. printing the surveys on leaflets which the employees stuff into envelopes and drop into a raffle tumbler.
The point remains, however, because (just as with the blinded signatures) this method of conducting a survey is very much outside-the-norm, and it would be a drastic world-modeling failure to assume that the HR department actually considered the raffle-tumbler method but decided against it because they secretly do want to deanonymize the surveys. Much more likely is that they simply never considered the option.
But if employees did start adopting the rule “don’t trust the anonymity of surveys that aren’t conducted via raffle tumbler”, even though this is epistemically irrational at first, it would eventually compel HR departments to start using the tumbler method, whereupon the odd surveys that still are being conducted by email will stick out, and it would now be rational to mistrust them. In short, the Adversarial Argument is “irrational” but creates the conditions for its own rationality, which is why I describe it as an “acausal negotiation tactic”.
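For reference, the “blinded signatures” mentioned above can be sketched as follows. This assumes the classic Chaum RSA blinding construction, which may not be exactly the scheme the original example used; the key sizes and names are illustrative only:

```python
# Toy sketch of a Chaum-style RSA blind signature (my assumption about the
# scheme alluded to above). The key is absurdly small and there is no
# padding -- for illustration only, never for real anonymity.
import hashlib
from math import gcd

# Tiny RSA key pair held by "HR" (the signer)
p, q = 10007, 10009                 # small primes, demo only
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def h(msg: bytes) -> int:
    """Hash a message into Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

m = h(b"survey response")

# Employee blinds the hash with a random factor r before submitting it
r = 12345
assert gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n

# HR signs the blinded value without ever seeing m
blind_sig = pow(blinded, d, n)

# Employee unblinds, obtaining a valid signature on m
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m          # verifies, yet the signer never saw m
```

Because the signer sees only the blinded value, the final (message, signature) pair is unlinkable to any particular submission, which is what would let each response carry a valid signature while remaining anonymous.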
Well done! This was solved faster than I expected.
After thinking about this a bit, I’m not sure I agree. First, gathering everyone together puts all the eggs in one basket, leaving them vulnerable to external disruption (e.g. the Nazis taking over Budapest). Second, a brain-drain of intellectuals into one central city deprives up-and-coming students (if they can’t afford to relocate) of teachers and mentors.