Thanks, that’s helpful. Actually, now that you’ve put it that way, I recall having known this fact at some point in the past.
This result seems strange to me, even though the maths seems to check out. Is there a conceptual explanation of why this should be the case?
It isn’t clear to me how this resolves the problem of Megaprojects. If the shares fall, then perhaps we can tell that the project is likely to fall behind and be assessed a penalty, and knowing that will allow some mitigation, but that’s a pretty minor fix.
pR(Ui) already contained a factor of R(Ui); you then divided by R(Ui), so the two should have cancelled. Instead, the original factor disappears and you are left with a division by R(Ui). I don’t see where that original factor of R(Ui) went.
“It took someone deciding to make it their full-time project and getting thousands of dollars in funding, which is roughly what such things normally take”—lots of open source projects get off the ground without any money being involved.
When you calculate pR(Ui|sub), you perform the following transformation: pR(Ui) → pR(Ui) × R0(Ui)/R(Ui). But an R(Ui) seems to go missing. Can anyone explain?
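To spell out where I expect the cancellation, assuming pR(Ui) is defined as proportional to p(Ui) × R(Ui) (I may be misreading the definition, so treat this as a sketch):

$$p_R(U_i) \times \frac{R_0(U_i)}{R(U_i)} \;\propto\; p(U_i)\,R(U_i) \times \frac{R_0(U_i)}{R(U_i)} \;=\; p(U_i)\,R_0(U_i)$$

On that reading the two factors of R(Ui) cancel cleanly and no bare division by R(Ui) should survive, which is why the missing factor confuses me.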
I just thought I’d add a note in case anyone stumbles upon this thread: Stuart has actually now changed his views on anthropic probabilities as detailed here.
Full non-indexical conditioning is broken in other ways too. As I argued before, the core of this idea is essentially a cute trick: by precommitting to guess only on a certain sequence, you can manipulate both the chance that at least one copy of you guesses and the chance that your copies’ guesses are correct. Except full non-indexical conditioning doesn’t actually precommit, so the probabilities it calculates are for a completely different situation. Hopefully the demonstration of time inconsistency will make it clearer that this approach is incorrect.
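To make the trick concrete, here’s a toy Monte Carlo sketch in Python (the setup, copy counts, and string length are all mine, purely for illustration): a coin flip creates either 1 copy (heads) or 1000 copies (tails), each copy observes a random 10-bit string, and the precommitment is to guess “tails” only upon seeing one pre-chosen target string.

```python
import random

TRIALS = 100_000
BITS = 10              # each copy observes a random 10-bit string
COPIES_TAILS = 1000    # tails creates many copies; heads creates one
TARGET = 0             # the precommitted target string (arbitrary choice)

guessed = 0   # trials in which at least one copy saw TARGET and so guessed
correct = 0   # of those, trials where the guess ("tails") was right

for _ in range(TRIALS):
    tails = random.random() < 0.5
    n_copies = COPIES_TAILS if tails else 1
    # Does at least one copy see the target string and therefore guess?
    someone_guesses = any(
        random.getrandbits(BITS) == TARGET for _ in range(n_copies)
    )
    if someone_guesses:
        guessed += 1
        correct += tails  # the precommitted guess is always "tails"

print(f"P(at least one copy guesses) ~ {guessed / TRIALS:.3f}")
print(f"P(guess correct | someone guesses) ~ {correct / guessed:.3f}")
```

A match is roughly 600 times more likely in the many-copies world (about 62% versus about 0.1%), so nearly every guess that actually happens is correct. That accuracy is bought by the precommitment; full non-indexical conditioning reports the same conditional probability without ever making the commitment, which is why I say it describes a different situation.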
What do you consider to be his core insights? Would you consider writing a post on this?
Group rationality is a big one. It wouldn’t surprise me if rationalists are less good on average at co-ordinating than other groups, because rationalists tend to be more individualistic and to have their own opinions about what needs to be done. As an example, how long did it take for us to produce a new LW forum, despite half of the people here being programmers? And rationality still doesn’t have its own version of CEA.
I don’t suppose you could explain how it uses P and V? Does it use P to decide which path to go down and V to avoid fully playing it out?
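To make my guess concrete, here’s the kind of loop I have in mind, sketched in Python with made-up names (a guess at the structure, not the actual implementation):

```python
import math

C_PUCT = 1.5  # exploration constant; the value here is illustrative

class Node:
    """A search-tree node; all names in this sketch are made up."""
    def __init__(self, state, prior=1.0, parent=None):
        self.state, self.prior, self.parent = state, prior, parent
        self.children = []
        self.visits, self.value_sum = 0, 0.0

    @property
    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_score(parent, child):
    # P enters here: moves the policy network favours get explored sooner.
    u = C_PUCT * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
    return child.mean_value + u

def simulate(root, net, next_states):
    """One search pass: select with P, evaluate the leaf with V, back up."""
    node = root
    while node.children:  # selection: follow the best PUCT score downwards
        node = max(node.children, key=lambda c, p=node: puct_score(p, c))
    # Evaluation: V replaces a full playout; the network scores the leaf
    # directly instead of the game being played out to the end.
    priors, value = net(node.state)
    node.children = [Node(s, prior=p, parent=node)
                     for s, p in zip(next_states(node.state), priors)]
    while node is not None:  # backup: propagate V toward the root
        node.visits += 1     # (sign flips for two-player games omitted)
        node.value_sum += value
        node = node.parent
```

Is that roughly it: P steering which branch the search descends via the prior term, and V scoring the leaf so that the playout never has to be completed?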
This question is specifically about building it, but that’s a worthwhile clarification.
I think it might be interesting to discuss how story analysis differs from signalling analysis, since I expect most people on Less Wrong to be extremely familiar with the latter. One difference is that people are happy to be given a story about you, even an imperfect one, so that they can slot you into a box. Another is that signalling analysis focuses on whether something makes you look good or bad, while story analysis focuses on how engaging a narrative is. Story analysis also focuses more on how cultural tropes shape perspectives, e.g. the romanticisation of bank robbers.
Seems possible, though malaria nets seem like such a niche industry that it wouldn’t result in much additional human or infrastructural capital.
You missed conversational and social signalling value. Travel is an excellent conversation topic, as almost everyone has some memories that they’d love to share. Or at least I find it more interesting than most other small-talk topics, since you’re at least learning about other parts of the world. And people who have travelled a lot are seen as more adventurous.
Maybe we should differentiate holding off losing consciousness from holding off dying? I know that I can definitely hold off falling asleep, and maybe holding onto consciousness works the same way?
If EA focused more on feedback loops, then there’d be less focus on donating money to charity. How would you like these resources to be deployed instead?
See also: If a tree falls on Sleeping Beauty.
I suppose that makes sense if you’re a moral non-realist.
Also, you may care about other people for reasons of morality, or simply because you like them. Ultimately, why you care doesn’t matter; only the fact that you have a preference does. The morality aspect is inessential.
“So you can’t talk about anthropic “probabilities” without including how much you care about the cost to your copies”—Yeah, but that isn’t anything to do with morality, just individual preferences. And instead of using just a single probability, you can specify both the probability of the event and the number of copies it creates (e.g. in Sleeping Beauty: a 1/2 chance of tails plus two awakenings if tails, rather than a single “2/3” figure).