I’m a longtime lurker who has taken interest in the new tagging system.
I think you may have misread the “Unvaccinated and undeterred” graph (which is terrible and misleading).
All the numbers in each section add up to 100%, so it’s saying “53% of people who dined in restaurants were unvaccinated”, not “53% of unvaccinated people dined in restaurants”. So you have to consider base rates: the numbers for half-vaccinated people were lower mostly because there are fewer half-vaccinated people than people in the other groups.
(Saw this on Twitter, but I don’t remember from whom.)
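The flip between the two conditionals can be sketched with made-up numbers (none of these are from the actual graph; they just show the base-rate effect):

```python
# Hypothetical group sizes -- NOT the real data, just to show base rates.
population = {"unvaccinated": 500, "half_vaccinated": 100, "fully_vaccinated": 400}
dine_rate = 0.2  # suppose the same fraction of every group dined out

diners = {group: size * dine_rate for group, size in population.items()}
total_diners = sum(diners.values())

# Share of diners who are unvaccinated -- driven purely by group sizes here:
p_unvax_given_dined = diners["unvaccinated"] / total_diners
# Share of unvaccinated people who dined -- a very different number:
p_dined_given_unvax = diners["unvaccinated"] / population["unvaccinated"]

print(p_unvax_given_dined)  # 0.5: half of all diners are unvaccinated
print(p_dined_given_unvax)  # 0.2: only a fifth of unvaccinated people dined
```

Even with identical behavior across groups, the "share of diners" number tracks group size, which is exactly why the half-vaccinated numbers came out low.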
You’ve mentioned Pasek’s Doom a few times before, but I’m still not quite sure what it means. Something about taking your own headspace drama too seriously in self-destructive ways?
Level one: A job title that straightforwardly describes what the person actually does in the job. For example, “Seventh-grade math teacher” or “Data Analyst”
Level two: A job title which claims to describe what the person actually does, but is misleading. For example, if someone was hired as an Electrical Engineer but actually spends their time doing IT work.
You move into level three a bit when the job title is partly about your status in a hierarchy: “Senior Software Engineer” or “Junior Software Engineer”.
As the original post suggests, job titles for high level management are mostly about social status and hierarchy, and therefore are never below level three.
Do you have a special case for someone who already has a job and is searching for a better-paying one? That person’s opportunity cost would not be
(future pay per week x number of additional weeks spent searching)
but rather
((future pay per week - current pay per week) x number of additional weeks spent searching)
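With hypothetical figures, the gap between the two formulas:

```python
# Hypothetical figures for someone who keeps earning while they search.
current_pay_per_week = 1000
future_pay_per_week = 1200
extra_weeks_searching = 4

# Formula that ignores the income from the current job:
naive_cost = future_pay_per_week * extra_weeks_searching
# Opportunity cost counting only the pay *difference* forgone:
true_cost = (future_pay_per_week - current_pay_per_week) * extra_weeks_searching

print(naive_cost)  # 4800
print(true_cost)   # 800
```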
It seems reasonably possible to be confident that a string is human-generated, but if anyone did their job well in round 1, it probably won’t be possible to be confident that a string is computer-generated.
Maybe some of the ones left over will seem slightly more or less random, but probably at some point I’ll just have n strings left over and assign them all probability 62/n, adjusted for whatever uncertainty I had about the ones that seemed human-generated.
Blockbuster failed to invest in internet tech for their movie rental business and was outcompeted by smaller, more savvy startups.
Seconded, DeepMind seems like a natural tag to have given that we have tags for OpenAI, Ought, MIRI, etc.
Last-minute nomination: This is high-quality and timeless, and would look nice in a book.
A: EarlyBirdMimicBot is extremely restrictive about what it simulates, because I was worried about malware. MeasureBot confirmed this fear, though I could have been less restrictive and still avoided it. Therefore, PasswordBot cannot look at its opponent’s source code if it wants EarlyBirdMimicBot to simulate it.
B: EarlyBirdMimicBot’s simulation strategy is brute force, looking at the result of every possible sequence of the next N moves. lsusr required bots to make their moves quickly, so to save on time I only considered the moves 2 and 3 when simulating.
I could have addressed this by simply having a special case behavior against PasswordBots instead of simulating them, but I didn’t think of that.
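A minimal sketch of that brute-force loop (the scoring interface is a hypothetical stand-in for actually simulating the opponent, not the real bot's code); restricting the move alphabet to {2, 3} keeps the enumeration down to 2^N sequences:

```python
from itertools import product

def best_first_move(score_fn, depth=4):
    """Enumerate every length-`depth` sequence of moves drawn from {2, 3},
    score each one, and return the first move of the best-scoring sequence.
    `score_fn(seq) -> total` is a hypothetical stand-in for running the
    opponent in simulation against that sequence."""
    best_seq = max(product((2, 3), repeat=depth), key=score_fn)
    return best_seq[0]

# Toy scorer: a pair of moves summing to at most 5 pays you your own
# number, so against an opponent who always plays 3, only your 2s score.
def score_vs_always_three(seq):
    return sum(move for move in seq if move + 3 <= 5)

print(best_first_move(score_vs_always_three))  # 2
```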
C: I was actually planning to do this, but I screwed it up and did not check it properly before uploading. It would have been tit-for-tat against the field if I had done it right.
What happens if EarlyBirdMimicBot is less scared to simulate? How much faster does it win?
I actually win less in that case, even if I get there faster. I get perfect cooperation with the deterministic cooperators written in Python, so one or two of them stick around forever if they last long enough. It can be two if one of them starts 2 and the other starts 3 so they cooperate with each other, though I’m not sure if there’s a deterministic Python bot that starts 3.
The clones do not fold; in the early game they play an EquityBot-ish strategy that gives attackers less than cooperation would have gotten them. Only a couple of players were willing to fold in the early game, and usually only after ten or more turns of attack. Attacking for tens of turns to find out whether your opponent is a FoldBot will destroy you in a pool made up mostly of non-FoldBots.
Simulation would be able to tell you who to bully without having to go through that—run the opponent for 100 turns and see if they eventually fold against all 3s. But as always, simulation runs the risk of MeasureBot-style malware.
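That probe could look something like this (the opponent interface and both toy bots are hypothetical stand-ins, and a real version would also need MeasureBot-style sandboxing):

```python
def folds_against_threes(opponent_move, turns=100, tail=10):
    """Run an opponent against a constant stream of 3s and report whether
    its last `tail` moves are all 2s, i.e. whether it eventually folds.
    `opponent_move(history) -> move`, where `history` is the list of our
    moves so far, is a hypothetical interface, not the real tournament API."""
    history, moves = [], []
    for _ in range(turns):
        moves.append(opponent_move(history))
        history.append(3)  # we attack with 3 every turn
    return all(m == 2 for m in moves[-tail:])

# Toy FoldBot: plays 3 until attacked twice in a row, then gives in.
def toy_foldbot(history):
    return 2 if history[-2:] == [3, 3] else 3

# Toy non-folder: never backs down.
def stubborn_bot(history):
    return 3

print(folds_against_threes(toy_foldbot))   # True
print(folds_against_threes(stubborn_bot))  # False
```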
I thought we were already calling it Sneer Culture.
Looking through the code: yep, my simulation criteria were so conservative that I only simulated the PasswordBots. OscillatingTwoThreeBot was oh so close to only having two open parentheses, but it used a decorative set of parentheses in the class definition (as did many others). Looks like I didn’t need simulation anyway.
I am somewhat interested in using the code to explore alternate timelines. Who wins without me? Who wins the clone showdown if it’s allowed to happen? What happens if you start the game at round 90 and make the smart bots use their endgame strategies in a pool full of silly bots? What happens if you remove npc bots and have a pool of only players? Does anything interesting happen if the number of turns per round is 101 rather than 100? I’m probably not interested enough to commit to doing this in a timely manner though.
What marginal submission would win in this pool? Probably just a MimicBot with Measure’s opening game. Using simulation, especially Hy-compatible simulation, could make you win more as long as you didn’t simulate MeasureBot, or only simulated it in a separate thread.
It’s been a great ride. Thanks for running the game, lsusr.
Nomination for 2019 review:
I originally tried to read Self-Therapy, but bounced off of it because it was aimed too much at people with major life-impacting traumas. This post was much more approachable, and I liked the robot metaphor. Since reading it, I started to notice the ways in which my own mind is behaving like a manager or firefighter with respect to embarrassing incidents in the past.
In the Mutant Game, the PasswordBots aren’t really on my team. Since it appears to them that the opponent started with 3, they will play 3 on every turn against everybody.
If the round index is always 0, that means the clone truce never ends and you have population dynamics based on which clones are getting 300-200 against which other clones. Not sure if it will be stable or not.
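Here's a rough sketch of those population dynamics, assuming next-round shares are proportional to average score (the 300/200/250 payoffs are illustrative guesses standing in for one clone opening 3 against another's 2, not numbers from the actual game):

```python
# Two clone variants locked at round index 0: "A" wins its cross-matches
# 300-200, while same-variant matches split evenly. Illustrative payoffs.
payoff = {("A", "A"): 250, ("A", "B"): 300,
          ("B", "A"): 200, ("B", "B"): 250}

def next_generation(pop):
    """One step of replicator-style dynamics: each variant's fitness is its
    population-weighted average payoff; new shares are proportional to it."""
    fitness = {x: sum(pop[y] * payoff[x, y] for y in pop) for x in pop}
    total = sum(pop[x] * fitness[x] for x in pop)
    return {x: pop[x] * fitness[x] / total for x in pop}

pop = {"A": 0.5, "B": 0.5}
for _ in range(50):
    pop = next_generation(pop)

print(pop)  # under these payoffs the mix is unstable: A's share climbs toward 1
```

Under these particular payoffs the answer is "not stable": whichever variant wins the cross-matches takes over. A stable mix would need payoffs where each variant does better against the other than against itself.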
I suspected that my simulation strategy didn’t end up being all that useful, but I’m still curious what bots I managed to simulate at all. Presumably at least the PasswordBots. I guess I can find out when the code is released.
It also significantly affected people’s expectations of the metagame and made them prepare for simulators. BendBot and CloneBot were made deterministic so that simulators could efficiently cooperate with them.
Wow, that was not what I would have predicted from the last set of results. It looks like the trend in my population from when the clones were a little nasty accelerated when the clones got really nasty.
In the true game with AbstractSpyTreeBot, MeasureBot is going to be eating it at the same time I’m eating the clones, but is it going to be a boost as extreme as this?
In the endgame everyone cooperates with everyone else, and it seems to be down to openings and match-breaking strategies. LiamGoddard and MeasureBot always start 3, BendBot alternates between starting 2 and starting 3 on a per-round basis, and EarlyBirdMimicBot randomly starts 2 or 3 using the Python random number generator. BendBot uses a fixed pseudorandom sequence to try to break matches, EarlyBirdMimicBot picks 2 or 3 with equal probability, MeasureBot plays 2 with probability 0.69 possibly with some special cases, and I’m not sure what LiamGoddard does.
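To see why the opening randomization matters, here's a toy model (grossly simplified: each bot just repeats its opening all match, and a pair of moves pays your own number when it sums to at most 5, else zero):

```python
import random

random.seed(0)  # deterministic for reproducibility

def match_payoff(my_open, their_open, turns=100):
    """Toy model: both sides repeat their opening move for the whole match."""
    if my_open + their_open <= 5:
        return my_open * turns
    return 0  # 3 vs 3 deadlocks at zero in this simplification

always_three = lambda: 3
coin_flip = lambda: random.choice((2, 3))

trials = 10_000
total = sum(match_payoff(coin_flip(), always_three()) for _ in range(trials))
print(total / trials)  # about 100: half the matches pay 200, half deadlock
```

Against an always-3 opener, a 50/50 opener averages about 100 per match in this toy model, whereas two always-3 openers would score nothing until something breaks the tie.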
Silly 2 bot lasted longer than anyone could have reasonably expected. The Clone Wars presumably gave it a second wind, as folding was a sound strategy against the clones’ aggressive plays.
BeauBot in sixth place is the best scoring bot we don’t have a clear explanation of yet.
This was a neat feature on Arbital, nice to see it here as well.
If OscillatingTwoThreeBot (sixth place) is exactly what it says on the tin and always plays 23232323..., you get perfect cooperation with it 100% of the time. Could be a nice minor advantage.
I notice I was slightly declining for a bit until round 10, where I started shooting up again. I’m not sure if it’s because I changed my strategy at that point and scored more or because a bunch of other people changed their strategy at that point and scored less. I think it’s more the latter, particularly increasing clone hostility.
Clones have slightly lost ground since last time. Without critical mass, their increasing hostility will hurt them more than it hurts their opponents. It looks like we’re heading for a repeat of history, with Measure as the Zvi to my David. Because MeasureBot always starts 3 in the endgame and I randomize 50/50, I think I slowly lose if its starting population is bigger than mine and it’s just us. If there are multiple endgame bots, my more cooperative nature could be an advantage.
Actual Zvi’s BendBot has gained significant ground after being in the middle of the pack in earlier rounds. Maybe it handles the middle game especially well. LiamGoddard in fourth place is the highest ranked bot we haven’t gotten any explanation of.
The “true timeline” with AbstractSpyTreeBot is probably going to be this but more extreme, since ASTB feeds Measure even more.
Larks, excellent name choice for your AttackBot.