I don’t recall seeing this issue brought up before, but it seems that given the sort of medical advances required for cryonics to work, it’s more likely that you would be revived in a “good” future than a “bad” one. This intuition is just based on the general trend that societies with higher levels of technology tend to be better ones to live in.
Robin Hanson’s endorsement of EY is the best credential he has for me.
Similarly, the fact that Scott Aaronson and John Baez seem to take him seriously is a significant credential for me.
Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.
So the idea is that Boltzmann brains would form in smaller fluctuations, while a larger fluctuation would be required to account for us. Since smaller fluctuations are more common, it’s more likely that a given brain is a Boltzmann one.
But does this take into account the fact that one large fluctuation can give rise to trillions of brains? Enough that a given observer would more likely find itself in one of these larger fluctuations?
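(A rough back-of-envelope version of the worry, with every quantity purely illustrative: the probability of a spontaneous fluctuation that lowers entropy by ΔS scales roughly as P ∝ exp(−ΔS/k). A lone brain needs a far smaller ΔS than a whole low-entropy region, so isolated brains are exponentially favored. If the large fluctuation contains N observers, the comparison becomes N·exp(−ΔS_big/k) versus exp(−ΔS_brain/k), and the usual reply, as I understand it, is that no remotely plausible N can close an exponential gap in ΔS.)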
Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs.
Under this idea, wouldn’t the Boltzmann brain have an equivalent belief about “Barack Obama”, which would correspond to some isomorphic thing in its environment? And then, wouldn’t this be extremely unlikely, since by definition, the Boltzmann brain is in a higher entropy place (as it would observe a world isomorphic to ours, which has relatively low entropy)?
Oh okay then.
The belief state in the Boltzmann brain wouldn’t be caused by some external stable macroscopic object.
I don’t think it matters what caused the belief. Just that if it had the same state as your brain, that state would correspond to a brain that observed a place with low entropy.
Oh, okay.
Is there a good introduction to Boltzmann brains somewhere? I don’t seem to understand it very well.
Beliefs, as we understand them…
But wouldn’t a Boltzmann brain understand its “beliefs” the same way, despite them not corresponding to reality?
Could you explain further? Why “ought” it assign it such a probability? As stated, this seems more convincing as an argument that it “ought not” to assign a probability > 1/3^^^3, despite the short “description”.
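For reference, since the hugeness of 3^^^3 is doing all the work here: Knuth’s up-arrow notation can be sketched in a few lines of Python (the function below is just my own illustration of the notation):

    def up_arrow(a, n, b):
        # Knuth's up-arrow with n arrows. One arrow (n = 1) is plain
        # exponentiation; each extra arrow iterates the previous operation.
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3 = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
    # up_arrow(3, 3, 3) is 3^^^3: a tower of 3s of height 7625597484987.

The description stays a few characters long while the value explodes, which is exactly what makes the short-description worry bite.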
Hi, I’m new here, so I’m not quite familiar with all the ideas here. However, I am a young mathematician who has some familiarity with how mathematical theories are developed.
Highly intelligent, and especially skilled in maths, probably at the IMO medal-winning level. (FAI team members will need to create lots of new math during the course of the FAI research initiative.)
It might be much cheaper to accept more average mathematicians who meet the other criteria. Generally, to build a new theory, you need a few people who can come up with lots of creative ideas, and many people who are capable of understanding those ideas and building them into a fleshed-out theory. Many mathematicians accept that they are of the second type, and work on developing a theory to the point where a creative type can clearly see what new ideas are needed.
Trustworthy. (Most FAI work is not “Friendliness theory” but instead AI architectures work that could be made more dangerous if released to a wider community that is less concerned with AI safety.)
Shouldn’t this just be a subset of number 5? I’m sure you would rather have someone who would lie to keep AI risk low than someone who would tell the truth no matter what the cost.
Yeah, that was the analogy I had in mind. I wasn’t sure if people here would be familiar with it though.
And yeah, I agree that math competition winners wouldn’t have the ideal distribution, although it probably wouldn’t hurt to recruit from them as well. Also, I may have some bias here, since I never liked competitions and avoided participating in them. But I agree with the points made in that article.
That seems like a good idea, although it’s hard to know what the problem space looks like without going there. My intuition says that it would be a good idea to try to have a good amount of diversity in whatever team is chosen.
I’m not seeing how this is coming close to rediscovering TDT. Could you explain?
Doesn’t regular money have this same problem?
Rock climbing might be another good and fun option, and it can be done by yourself to some extent (bouldering, or borrowing gym staff). For me, it has an added bonus, since lots of mathematicians socialize by rock climbing together.
I agree that it does seem likely that humans (would) value certain classes of computations.
Someone comes along and suggests turning off this person’s emulation on the grounds that no one will know the difference, and we can use the hardware for something else.
You could consider the rest of humanity to be its own computation, and then it also seems obvious that it would be wrong for this lone emulation to shut down the rest of humanity.
It also seems unlikely that a full emulation of a human is the only thing that’s valuable. Perhaps there are simpler patterns we could emulate that would be much better in terms of value per dollar?
The first things that come to mind are babies and pets. However, I don’t think it is as valuable to keep a computation at baby level as it would be to allow it to extend to normal human level (by growing up into an adult, essentially). And for pets, I think at least part of the value comes from the interaction with a human-level computation (since people don’t seem to value arbitrary animals used for meat nearly as much as pets are valued). So I don’t think that either of these cases could be used as a substitute; at least, I wouldn’t find it very valuable if we tiled the universe with baby or cat emulations.
It also seems unlikely that a full emulation of a human is the only thing that’s valuable.
I actually find this moderately likely, at least in the sense that I think most people would consider it very undesirable not to have a “complete” life experience. What constitutes a “complete” life might vary with culture (is death required for a complete life?), but I think there would be some sort of minimum valuable computation.
I’ve been trying to wrap my head around the SPECKS vs TORTURE argument, and I still haven’t been able to convince myself that TORTURE is the right answer.
One idea I had was to apply the whole thing to myself. Suppose Omega comes to me and offers two choices:
1. I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first, though with no lasting harm.
2. I can have a satisfying and fulfilling life for 3^^^3 days, but I’ll wake up with a speck in my eye every day.
I have to say that I would still pick choice 2 for myself. I know that if I add up the utilities in any standard way, option 2 comes out far lower, but I still can’t get myself to choose 1. Even if you move the torture so that it happens at a random time or at the end (to get rid of near-mode thinking), I still intuitively prefer 2 quite strongly.
Even though I can’t formalize why I think option 2 is better, feeling that it is the right choice for myself makes me a bit more confident that SPECKS would be the right choice as well. Also, this thought experiment makes me think the intuitive preference for SPECKS is less about fairness than I thought.
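Here is the kind of toy sum I mean (all magnitudes invented purely for illustration, with 10^100 standing in for 3^^^3, which won’t fit in any computation):

    # Invented disutility magnitudes -- purely illustrative assumptions.
    SPECK = 1e-9              # one barely-noticed dust speck
    TORTURE_PER_DAY = 1e3     # one day of continuous torture

    days = 10 ** 100          # stand-in for 3^^^3 days (which is vastly larger)

    specks_total = days * SPECK                 # ~1e91
    torture_total = 50 * 365 * TORTURE_PER_DAY  # ~1.8e7

    print(specks_total > torture_total)  # True

So under any linear aggregation the specks swamp the torture by dozens of orders of magnitude, yet my preference doesn’t budge, which suggests that whatever my intuition is doing, it isn’t summation.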
If anyone has any more insight about this, that would be helpful.
But you would also punish accurate guessers if the average guess is too low.
So once your misery goes below one unit, you get insane gains in utility for small reductions in misery?
I’ve donated $100.
I’m curious to see how well that works for different types of people; please let me know what it was like for you, if you end up doing it.
Yes, I have been lurking for a similar amount of time, but I am still very reluctant to make comments or posts. I think the reason is that I am unsure of my rationality skills, and I don’t like the feeling of lowered status that would come from comments criticizing or correcting me.
Yes, this is a problem on my end, but more friendliness would indeed make it easier for me to comment.
I think there is some use for devil’s advocacy, at least in the sense that Michael Ruse meant when he said it is used “partly to see how far a position can be pushed before it collapses (and why the collapse).”
I’m a mathematician, so I use proof by contradiction fairly often. A typical application goes like this: I come across a statement which I believe is false, and wish to prove is false. Then I use deductive reasoning on the assumption that it is true. If I can prove something that contradicts what I already know, then I am successful. Otherwise, I update my belief about its truth value (it becomes slightly more likely that it actually is true).
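A standard worked instance, for concreteness: to show that √2 is irrational, assume the opposite, that √2 = p/q in lowest terms. Then p² = 2q², so p is even; writing p = 2r and substituting gives q² = 2r², so q is even too, contradicting “lowest terms.” Here the deduction succeeds and the assumption is refuted; when no contradiction falls out after honest effort, that failure is the weak evidence of truth I described above.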
What I do not do is try to make a plausible argument for why the statement should be true; I only do that for statements I believe might be true. So in the end, the main point of the sequence agrees with my intuition about how to make effective arguments.