With regard to your SIA objection, I think it is important to clarify exactly what we mean by evidence conservation here. The usual formulation is something like “If I expect to assign credence X to proposition P at future time T, then I should assign credence X to P right now, unless by time T I expect to have lost information in a predictable way.” Now if you are going to be duplicated, it is not exactly clear what “I expect to assign … at future time T” means, since there will be multiple copies of you at time T. Maybe you want to get around this by saying that you are referring to the “original” version of you at time T, rather than any duplicate. But then the problem is that by waiting, you actually do lose information in a predictable way! Right now you know that you are not a duplicate, but the future version of you will not know that it is not a duplicate. Since you are losing information, it is not surprising that your probability predictably changes. So I don’t think SIA violates evidence conservation.
Incidentally, here is an intuition pump that I think supports SIA: suppose I flip a coin and if it is heads then I kill you, tails I keep you alive. Then if you are alive at the end of the experiment, surely you should assign 100% probability to tails (discounting model uncertainty of course). But you could easily reason that this violates evidence conservation: you predictably know that all future agents descended from you will assign 100% probability to tails, while you currently only assign 50% to tails. This points to the importance of precisely defining and analyzing evidence conservation as I have done in the previous paragraph. Additionally, if we generalize to the setting where I make/keep X copies of you if the coin lands heads and Y copies if tails, then SIA gives the elegant formula X/(X+Y) as the probability for heads after the experiment, and it is nice that our straightforward intuitions about the cases X=0 and Y=0 provide a double-check for this formula.
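In case it is useful, here is a minimal Monte Carlo sketch of that generalized setup (the function name and parameters are mine, purely for illustration): flip a fair coin many times, create X observers on heads and Y on tails, and ask what fraction of all resulting observers live in a heads-world. Under SIA’s observer-weighting, this fraction should approach X/(X+Y).

```python
import random

def sia_heads_probability(x, y, trials=100_000, seed=0):
    """Estimate the probability that a randomly sampled observer is in a
    heads-world, when heads produces x copies and tails produces y copies
    of the subject.  SIA predicts x / (x + y) for a fair coin."""
    rng = random.Random(seed)
    heads_observers = 0
    total_observers = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: x observers exist
            heads_observers += x
            total_observers += x
        else:                    # tails: y observers exist
            total_observers += y
    return heads_observers / total_observers

print(sia_heads_probability(1, 1))  # ≈ 0.5
print(sia_heads_probability(0, 1))  # 0.0: no survivor is ever in a heads-world
```

The edge cases X=0 and Y=0 reproduce the certainties from the coin-killing example: with X=0, every surviving observer knows the coin landed tails.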
I don’t know what a “power spectrum” is, but the second-to-last graph looks pretty obviously like Brownian motion. This makes sense because the differences between consecutive points in the third graph will be approximately Poisson-distributed and independent, so if you renormalize so that the expected value of each difference is zero, the central limit theorem gives you Brownian motion in the limit.
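To illustrate (a sketch under the assumptions above, with made-up parameters, not a reconstruction of the actual data): take i.i.d. Poisson increments, subtract their mean so the walk has no drift, and look at the partial sums. By the CLT, the walk at time t is approximately N(0, λ·t), which is exactly the Brownian-motion scaling.

```python
import math
import random

def centered_poisson_walk(lam, n, seed=0):
    """Partial sums of i.i.d. Poisson(lam) increments with the mean
    subtracted, so the walk has zero drift.  By the central limit
    theorem, the value at step t is approximately N(0, lam * t),
    i.e. the walk looks like Brownian motion at large scales."""
    rng = random.Random(seed)

    def poisson_sample():
        # Knuth's multiplication method for Poisson(lam); fine for moderate lam.
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    walk, total = [], 0.0
    for _ in range(n):
        total += poisson_sample() - lam  # center each increment at zero
        walk.append(total)
    return walk
```

Plotting `centered_poisson_walk(2.0, 10_000)` should look qualitatively like the graph in question, and across many independent runs the variance at step n comes out close to λ·n.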
Anyway regarding the relation of your post to Tegmark’s theory, a random sequence can be a perfectly well-defined mathematical object (well maybe you need to consider pseudo-randomness, but that’s not the point) so you are not getting patterns out of something non-mathematical (whatever that would mean) but out of a particular type of mathematical object.
The validity of the author’s point seems to depend on how the phrase “losses hurt more than equivalent gains” is best interpreted. Here are two readings under which it would be a consequence of loss aversion but not of DMU:
“Having your wealth decrease from X to Y decreases your satisfaction more than having your wealth increase from Y to X increases it.”
“The pain of a small loss is significantly more than the pleasure of a small gain.”
It seems to me that most of the quotes at the end, if you interpret them charitably, mean something like the above. So the post seems like a nitpick to me. It’s great to explain the difference between loss aversion and DMU for people who don’t necessarily know about them, but it’s not clear to me that it means that the quoted people were actually wrong about something.
I would also disagree with point #3, e.g. the last sentence of the Economist quote seems valid as an intuitive explanation of loss aversion but not of DMU.
Sure, probably some of them mean that, but you can’t assume that they all do.
“Exists” is one of the words I tend to taboo. People usually just use it to mean “is part of the Everett branch that I am currently in” but there are also some usages that seem to derive their meaning by analogy, like the existence of mathematical objects. I’m not sure if there is a principled distinction being drawn by those kinds of usages.
Instead I would talk about whether we can sensibly talk about something. And I can imagine people trying to talk about something, and not making any sense, but it doesn’t seem to mean that there is a “thing” they are talking about that “doesn’t exist”.
When people say that a morality is “objectively correct”, they generally don’t mean to imply that it is supported by “universally compelling arguments”. What they do mean might be a little hard to parse, and I’m not a moral realist and don’t claim to be able to pass their ITT, but in any case it seems to me that the burden of proof is on the one who claims that their position does imply heterogonality.
I see that someone posted in the other thread that they thought the most obvious answer is 1⁄2, but why is this the case? I don’t see any obvious intuitive argument for why 1⁄2 is a reasonable answer.
Edit: I guess the idea is to just not perform any update on the statement the guard makes but just use it to infer that “Vulcan Mountain” is equivalent to “Vulcan”, and then answer based on the fact that the latter probability is 1⁄2.
Moral realism plus moral internalism does not imply heterogonality. Just because there is an objectively correct morality, does not mean that any sufficiently powerful optimization process would believe that that morality is correct.
If you assume that the guard’s probability of making this statement (and only this statement) is the same in all circumstances where the statement is true, then the answer is 1⁄3. Otherwise, it depends on what you know about the psychology of the guard.
On the Chatham House website I see
Q. Can participants in a meeting be named as long as what is said is not attributed?
A. It is important to think about the spirit of the Rule. For example, sometimes speakers need to be named when publicizing the meeting. The Rule is more about the dissemination of the information after the event—nothing should be done to identify, either explicitly or implicitly, who said what.
which seems reasonable. The comment about not circulating the attendee list beyond the participants is a response to the question “Can a list of attendees at the meeting be published?”, and my impression is that it is only meant as an answer to this question: i.e. such a list should not be published outside of the meeting, but it is OK if some people happen to come across it incidentally. So I think you are just taking the Chatham House Rule much more literally than it is intended.
I moved to Berkeley last week and have been coming to coworking sessions and several events at REACH. It is certainly nice to have a place to hang out with rationalist people and start to feel integrated into the community. On my first night here I already got to experience some of the local rationalist culture: a doom circle. I don’t think that kind of experience would have been possible without REACH.
I seem to recall people saying of the old meetups that they mostly only allowed new people and transients to interact with each other, not with established community members. I think there is an element of this at REACH, but I have certainly seen a few established people here in the short time I’ve been here. Some people even bring their kids sometimes, so if you like playing with kids (which I do) that is a rewarding experience.
I have been experiencing pretty serious mental health and other problems recently, and I think the community here has been pretty supportive. In particular, I told Sarah/Stardust that I’d probably go crazy if I couldn’t find some people to have one-on-one conversations with, and she was able to help me out by finding someone to put me in contact with.
All in all I think this is a great community and a great community center, keep up the good work!
Ah I see. How could fire be breathed into equations? That concept doesn’t make sense to me.
Yeah, there are definitely both upsides and downsides. It certainly makes me feel more welcome, though I can see that many people would have the opposite experience. Maybe the important thing is that people know what they are getting into.
Update: I moved to Berkeley last week and noticed a huge difference in how the rationalist/EA community deals with these sorts of conversations and how the rest of the world does. Yesterday I was talking to someone I had barely met and they asked “how are you doing?” I said “you just opened a whole can of worms” and we ended up having an interesting discussion, including about how the conversational norms are different here from elsewhere. In general, I think people in this community are both more likely to give an honest answer to such questions, and less likely to ask them if they aren’t interested in an honest answer.
I am not sure why you seem to think I reject MUH?
So, I don’t think that I would have the same kind of intuition about diseases and curses as I do about mathematical objects and existence, even if I didn’t know any possible cause of disease except for curse. But of course my introspection about that could be wrong.
I don’t think that I am separating objects into “sorts of things”. It is more like I am asking the question “what does it mean to be a thing?” and answering it “to be a thing is to be a mathematical object”.
This is discussed in Appendix A of Tegmark’s paper (I guess I am using “mathematical object” synonymously with “mathematical structure”).
What sort of thing is the universe? If it is a mathematical object, then at least we have an answer to the question, and it is not clear how to answer it otherwise. This seems to me to be strong evidence that the universe is a mathematical object.
I don’t have a strong opinion about that. But I don’t think it’s the same as the version with different Everett branches, because different Everett branches can’t interact with each other (and different galaxies can and will, regardless of how much AIs try to stop it).
OK, that makes sense.