I used to want to take over the world once I grew up. Now I realize I was aiming too small.
This is hilarious, but how do we access these “exclusive posts” without paying thousands of dollars?
I understand your point, but I’m not sure your position is consistent. From a consequentialist standpoint, valuing the creation of new life is often problematic. The mere addition paradox is a thought experiment showing that if you’re willing to make even the tiniest sacrifice to create a new, slightly happy person, then repeating that step implies it is moral to replace a small society of joyful people with a vast society whose average happiness is barely above zero. Because of this, many ethicists would prefer not to create new people whenever doing so consumes resources and thereby decreases the happiness of those who already exist. Would you be willing to create slightly happy people if it sacrificed utility in the lives of those who are already here?
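A toy total-utility calculation (with entirely made-up numbers, chosen only to show the direction of the effect) illustrates the arithmetic behind the paradox: each step trades average happiness for population size, yet total utility keeps rising.

```python
# Made-up figures purely for illustration; only the direction matters.
populations = [
    (100, 100.0),      # small society of very happy people
    (10_000, 5.0),     # larger society, modestly happy
    (1_000_000, 0.2),  # vast society, lives barely worth living
]

# Total utility grows at every step even as average happiness collapses.
for size, avg_happiness in populations:
    total = size * avg_happiness
    print(f"{size:>9} people x {avg_happiness:>5} happiness = {total}")
```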
Interesting, I didn’t realize there were this many people on the site. How many users have written posts?
I’m noticing that you’re making a lot of posts that are very off-topic. What does an elephant seal, or opening a thesaurus, or finding out your ancestry, have to do with rationality? These would probably fit better as shortform posts, or on a personal blog outside of LessWrong.
Could you explain which research you’re referring to?
It implies the writing is bad; GPT-3 isn’t exactly the best author.
Normally I would say you were being rude, but the last time I saw someone call a post GPT-written, they were absolutely right, so I’m going to withhold judgement unless lsusr verifies who wrote it.
This has been a really interesting game! It’s good to see I survived, although I barely made it to the end.
I assume that when you run the ‘official’ timeline, the arrival of AbstractSpyTreeBot will push MeasureBot higher than it placed here, so MeasureBot will probably reach second place. But even with randomness as a factor, I doubt such a small change would disrupt EarlyBirdMimicBot’s serious lead. I think we can safely say Multicore is the winner.
I’d be interested in watching you continue the game past round 500. EarlyBirdMimicBot would most likely remain in first, and me in fourth, but I would want to see whether things change between MeasureBot and BendBot. However, it might be a while before those rounds could be run; by this point we’re seeing more alternate timelines than Doctor Strange. (Or the Guardians of the Universe, if you’re more of a DC fan.)
LiamGoddard is an EquityBot. It plays 3232 on the first four rounds, then chooses a fixed sequence for the rest of the game based on the opponent’s first four moves: if they played 2323, continue playing 3232…; if they played 3232, switch to 2323…; if they played 3333, play a pattern of 3s and 2s that keeps them from outperforming cooperation while maximizing my score; if they played anything else, just keep trying to cooperate. Whatever they played, the selected pattern continues unchanged for the rest of the game.
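A minimal sketch of this strategy in Python, assuming the game engine passes in a zero-indexed round number and the opponent’s move history (the function name and signature are hypothetical, not the tournament’s actual interface):

```python
def equity_bot_move(round_num, opp_history):
    """Sketch of the EquityBot strategy described above (assumed interface)."""
    opening = [3, 2, 3, 2]
    if round_num < 4:
        return opening[round_num]  # play 3232 on the first four rounds
    first_four = tuple(opp_history[:4])
    if first_four == (2, 3, 2, 3):
        pattern = [3, 2]  # keep playing 3232...
    elif first_four == (3, 2, 3, 2):
        pattern = [2, 3]  # switch to 2323...
    elif first_four == (3, 3, 3, 3):
        pattern = [3, 2]  # alternate 3s and 2s so they can't beat cooperation
    else:
        pattern = [2, 3]  # anything else: keep trying to cooperate
    # Whatever the opponent played, the chosen pattern repeats forever.
    return pattern[round_num % 2]
```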
It is really simple, but I don’t know how to code myself so I wanted to be sure that it was specified carefully. I also didn’t realize at the time that simulators would be allowed. Nevertheless, it’s reached fourth place, which is better than I had expected. Long live the Dark Lord Liam!
What’s this about Inadequate Equilibria’s publication?
I notice that there are a few other posts from the early days of LW that are duplicated; maybe we should look through them.
This is a repeat post, and the links aren’t working.
This might apply if you swapped dollars for utility (i.e. Pascal’s Mugging), but in this case, the diminishing marginal value of dollars affects the deal. A 1⁄1,000,000 chance at 1,000,000 dollars isn’t as valuable as a sure dollar, because a million dollars, while valuable, isn’t a million times better than one dollar.
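A quick sketch of the point, assuming a logarithmic utility curve and a hypothetical starting wealth (both are illustrative assumptions; any concave curve gives the same direction):

```python
import math

def utility(wealth):
    # Illustrative concave (logarithmic) utility of wealth.
    return math.log(wealth)

base_wealth = 50_000  # hypothetical starting wealth

# Option A: a sure dollar.
sure = utility(base_wealth + 1)

# Option B: a 1/1,000,000 chance at $1,000,000.
p = 1 / 1_000_000
gamble = p * utility(base_wealth + 1_000_000) + (1 - p) * utility(base_wealth)

# The two options have equal expected dollars, but the sure dollar has
# higher expected utility because dollars diminish in value.
print(sure > gamble)  # prints True
```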
Could you put all of these COVID posts into a single sequence? I’d be interested in looking back at your analyses over time.
When does this start?
I think an essential part of why people make such an irrational decision is that they think of the probabilities as frequencies. In problem one, 33 out of 34 possible versions of you will receive money, and you’re willing to pay $3,000 to make sure that the 34th does as well. But in problem two, 33 out of 100 will receive money, and yet you’re not willing to pay $3,000 to make sure that a 34th does. The bias here is essentially that people value certainty itself more than the underlying probabilities.
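The inconsistency can be made concrete with a quick expected-value calculation. The payoffs below ($24,000 and $27,000) are the standard Allais-paradox figures and are an assumption about the two problems referenced above:

```python
# Assumed payoffs from the standard Allais setup.
ev_1a = 1.00 * 24_000       # problem 1: certain payout
ev_1b = (33 / 34) * 27_000  # problem 1: 33/34 chance of a larger payout
ev_2a = 0.34 * 24_000       # problem 2: the same options with every
ev_2b = 0.33 * 27_000       #   probability multiplied by 0.34

gap_1 = ev_1b - ev_1a
gap_2 = ev_2b - ev_2a

# Since problem 2 is just problem 1 scaled by 0.34, a consistent
# preference should point the same way in both; the second gap is
# exactly 0.34 times the first.
print(gap_1, gap_2)
```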
Thanks, this is great.