Eliezer never wrote an epilogue, and probably isn’t going to, since Alexander Wales already wrote [a better one](https://fanfiction.net/s/11293489/1/A-Crack-Slash-Epilogue).
Very excited to see a meetup discussing LW articles. Hope it goes well!
I agree with the matching of the concepts, but I don’t think it implies a clear difference between instrumental and terminal values.
This sounds like fairness as an instrumental value vs. fairness as a terminal value.
Oh yeah, thanks for linking that! Looking over it now, I got some of my ideas from this post when I read it quite a few years ago, and forgot to link it in my main post.
Yeah, it should be noted that anyone who knows me cannot be my client, though I can take on friends of friends as clients. Regarding Reflect specifically, you can select how many matches they give you and/or contact Reflect directly if you are matched with people you know, to help mitigate this issue.
Fair enough, maybe I don’t have enough familiarity with non-MIRI frameworks to make an evaluation of that yet.
The assignment of probabilities to actions doesn’t influence the final decision here. We just need to assign probabilities to everything. They could be anything, and the decision would come out the same.
Aren’t there meaningful constraints here? If I think it’s equally likely that I’m in L-world and R-world and that this is independent of my action, then I have the constraint that P(Left, L-world)=P(Left, R-world) and another constraint that P(Right, L-world)=P(Right, R-world), and if I haven’t decided yet then I have a constraint that P>0 (since at my present state of knowledge I could take any of the actions). But beyond that, positive linear scalings are irrelevant.
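For concreteness, here is a hypothetical two-action, two-world sketch of that invariance (the action names, world names, and utilities are all made up for illustration): any joint distribution satisfying the constraints above picks out the same action, because a positive scaling of P(a, ·) cancels out of the expected utility conditional on a.

```python
# Hypothetical two-action, two-world example (all numbers made up).
# Constraints from the comment: P(L-world) = P(R-world) = 1/2, and the
# world is independent of the action, so P(a, L) = P(a, R) for each a.

def best_action(joint, utility, actions, worlds):
    """Pick the action maximizing expected utility conditional on taking it."""
    def cond_eu(a):
        p_a = sum(joint[(a, w)] for w in worlds)  # marginal P(a), must be > 0
        return sum(joint[(a, w)] * utility[(a, w)] for w in worlds) / p_a
    return max(actions, key=cond_eu)

actions, worlds = ["Left", "Right"], ["L", "R"]
utility = {("Left", "L"): 1, ("Left", "R"): 0,
           ("Right", "L"): 0, ("Right", "R"): 2}

# Two different assignments of probability to one's own actions,
# both satisfying the constraints above:
joint1 = {("Left", "L"): 0.3,  ("Left", "R"): 0.3,
          ("Right", "L"): 0.2, ("Right", "R"): 0.2}
joint2 = {("Left", "L"): 0.05, ("Left", "R"): 0.05,
          ("Right", "L"): 0.45, ("Right", "R"): 0.45}

print(best_action(joint1, utility, actions, worlds))  # Right
print(best_action(joint2, utility, actions, worlds))  # Right
```

Both joints agree the worlds are equally likely and independent of the action, yet assign very different probabilities to the actions themselves; the recommended action is the same either way.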
Yes, I was only talking about alignmentforum, naturally.
Huh, I never ran into that problem. This might turn out to not be super easy to fix since we are using an external LaTeX library, but we can give it a try.
Unsure whether a header is a good idea, since the vast majority of posts on LW don’t have LaTeX, and so for them the header field would just be distracting, but we could add something like that only to agentfoundations, which would be fine. I can look into it. Also curious whether other people have similar problems.
You can always move posts back to drafts. We have a plan to add a delete button, but want to make sure there is no way to click it accidentally. If you ping us on Intercom we are also happy to delete posts.
Not deleting comments is intentional, because completely deleting them would make it hard to display the children. You can just edit the content out of them. We are planning to make it so that you can delete your comments that don’t have children, but haven’t gotten around to it.
Another summary: light posts are not universal; if you are the only one looking under yours, odds are you will find something no one else would notice.
Yes, plenty! My point was that meeting that person belongs to a reference class of situations you had encountered before and will encounter again.
I’ll admit that I’m skeptical. It’s a cool mathematical trick, but why should we think it is anything more than that?
No, in God’s coin toss the outcome is random; at least that’s how I took it, since it’s described as a coin toss. The reason the answer is 1⁄2 there is just that the number of observations of being in rooms 1–10 is equal in the heads-case and the tails-case (10 in both). This is the image of the experiment I made in the post. If it were 2000 people in the tails-case, 2 in every room, then the answer would be 1⁄3 for heads.
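The counting in that last step can be sketched as follows (a minimal illustration of the observation-counting rule described above; the function name and fair-coin prior are my own framing): each world is weighted by the prior times the number of observers in it whose observations match yours, i.e. who find themselves in rooms 1–10.

```python
def p_heads(obs_heads, obs_tails, prior_heads=0.5):
    """Posterior for heads, weighting each world by
    prior x (number of observers matching the evidence)."""
    w_heads = prior_heads * obs_heads
    w_tails = (1 - prior_heads) * obs_tails
    return w_heads / (w_heads + w_tails)

print(p_heads(10, 10))  # 0.5 -- original setup: 10 matching observers either way
print(p_heads(10, 20))  # ~0.333 -- two people per room in the tails-case
```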
In other words, it is similar to the God’s coin toss: you can’t update logical uncertainty based on your location?
Another issue: it seems impossible to delete anything, whether a comment or a draft? (And I guess that goes for posts too?)
Not sure whether this is the right place to voice technical complaints, but: I am unhappy about the handling of LaTeX macros (on which I rely heavily).

Currently it seems like you can add macros either as inline equations or as block equations, and these macros are indeed available in the following equations. However, if an equation object contains only macros, it is invisible and seems to be impossible to edit after creation. As a workaround, I can add some text + macros into the same equation object, but this is very hacky.

It would be nice if either equation objects with macros would remain visible, or (probably better) there would be a special “header” in each post where I can put the macros. It would be even more amazing if you could load LaTeX packages in that header, but that’s supererogatory.
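For reference, the hacky workaround described above might look something like this inside a single equation object (the macro name here is just an example):

```latex
% An equation object containing only macro definitions renders as invisible,
% so pad it with some visible text:
\newcommand{\calA}{\mathcal{A}} \text{Notation: } \calA
```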
That should be deducible from this post, so everyone who has already read it can try to predict it before reading further.
So you’d have to input your probability distribution over possible universes, which in this case just has to specify where the filter is for different species.

If you think the filter is at the same place for all species, then your distribution should look something like 1⁄3 * filter always late, 1⁄3 * filter always middle, 1⁄3 * filter always early, and the SIA doomsday doesn’t apply (you’d have 3 trivial experiments).

If you think for some species it’s early, for some middle, and for some late, then your distribution would just be 1 * filter varies for different species. You’d then have just one experiment, which rolls a die in the beginning to decide where it puts the filter for us, and the argument works.

You could also mix those, if you think maybe it’s the same for everyone and maybe not. Then the argument kinda works.
But plausibly the filter, if it exists, is at the same place for everyone. So my theory mostly rejects the argument.