This looks more like what I would have expected to happen. Congratulations to Multicore.
The automatic cooperation means that once you reach an endgame where everyone is always cooperating, whoever has the biggest share will win, so the game is about entering the endgame with the largest share more than about being slightly better at late execution. The other games, where things didn’t collapse, seemed weird, and it makes sense that this was largely due to buggy code. The other possibility is incidental perfect cooperation—e.g. if BendBot always starts 2 and Manticore always starts 3, and there are 100 turns, then the game becomes static once everyone else is gone.
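For concreteness, here is what that static endgame looks like under the usual scoring rule (both players bank their own number when the moves sum to at most 5, otherwise both score 0). The bot names are from above; the specific alternation pattern is my assumption:

```python
def score(a, b):
    """Darwin Game payoff: each side scores its own move when the sum is at most 5."""
    return (a, b) if a + b <= 5 else (0, 0)

# Hypothetical opposite-phase alternators: one opens 2, the other opens 3.
bendbot = [2, 3] * 50    # 100 turns
manticore = [3, 2] * 50

totals = [0, 0]
for a, b in zip(bendbot, manticore):
    pa, pb = score(a, b)
    totals[0] += pa
    totals[1] += pb

print(totals)  # [250, 250] -- every turn sums to exactly 5, so nothing ever changes
```

Both sides earn the cooperative maximum of 2.5/turn forever, so population shares freeze at whatever they were entering the endgame.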
I am content with a 3rd place finish given I did it without writing code. This was sharp competition!
If people are running new simulations, some things I’d be curious about to get juices flowing:
What happens if you rerun the thing a few times? Does it always look the same? Graphs seem to have some big semi-random events on them.
What happens if we change 100 turns/round to 101?
What is the simplest bot that, when added to the field, would win?
Could BendBot have won if it had been able to expand its logic to cover more cases and thus get off to a better start, or if it had chosen a slightly better opening sequence? What if it always started 2 and never 3? On reflection, alternation was wrong; I shouldn’t have hedged my bets here.
What happens if the reward for self-ID is lowered from 2.5/round to the theoretical maximum for bots that have to figure out who starts high and who starts low? That would make an equilibrium more likely (since you can then have two bots that do better than that against each other, because they’re not identical).
How much does it matter if you add in password bots for various people, or new silly bots, or take silly bots away? What happens if we don’t include any silly bots or password bots? What happens if the password bots are more obfuscated, so EarlyBirdMimicBot doesn’t look at them?
What happens if EarlyBirdMimicBot is less scared to simulate? How much faster does it win?
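On the self-ID question: the 2.5/round comes from perfect 2/3 alternation, and a pair of non-identical bots that must discover who starts high gives up a little to the opening clash. A toy calculation under the standard scoring rule (all bot behavior here is illustrative):

```python
def score(a, b):
    # each player banks their own number when the sum stays at or under 5
    return (a, b) if a + b <= 5 else (0, 0)

# Self-identified clones hard-code who opens high, so they alternate
# perfectly from turn 1 and average the full 2.5/turn each.
clone_a, clone_b = [2, 3] * 50, [3, 2] * 50
clone_avg = sum(score(a, b)[0] for a, b in zip(clone_a, clone_b)) / 100

# Two non-identical bots that both open 3 clash once (3 + 3 > 5, both
# score 0) before settling into alternation, landing just under 2.5.
bot_a = [3] + [2, 3] * 49 + [2]
bot_b = [3] + [3, 2] * 49 + [3]
avg_a = sum(score(a, b)[0] for a, b in zip(bot_a, bot_b)) / 100
avg_b = sum(score(a, b)[1] for a, b in zip(bot_a, bot_b)) / 100

print(clone_avg, avg_a, avg_b)  # 2.5 2.47 2.48
```

So the free self-ID reward is strictly above what non-identical pairs can guarantee, which is exactly what makes the clone strategy hard to displace.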
Also, this series should definitely become a sequence. Great job all around, big thanks to lsusr.
Plus, I didn’t even implement the whole thing.
I actually win less in that case, even if I get there faster. I get perfect cooperation with the deterministic cooperators written in Python, so one or two of them stick around forever if they last long enough. It can be two if one of them starts 2 and the other starts 3 so they cooperate with each other, though I’m not sure if there’s a deterministic Python bot that starts 3.
A BullyBot could actually do pretty well here, even without attempting simulation: you get to exploit all the silly bots, get the most you can out of the clone army (more than 50% at the beginning, when they are willing to back down; 40% once you have to be FoldBot against them), and still cooperate or cooperate+ against everyone else (especially if you can trick simulators or pseudo-simulators into folding to you).
The clones do not fold; in the early game they play an EquityBot-ish strategy that gives attackers less than cooperation would have gotten them. Only a couple of players were willing to fold in the early game, and usually only after ten or more turns of attack. Attacking for tens of turns to find out whether your opponent is a FoldBot will destroy you in a pool of mostly non-FoldBots.
Simulation would be able to tell you who to bully without having to go through that—run the opponent for 100 turns and see if they eventually fold against all 3s. But as always, simulation runs the risk of MeasureBot-style malware.
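That probe is easy to sketch. The `move(prev)` interface and both opponents below are stand-ins I made up, not the tournament's actual API or bots:

```python
def folds_to_pressure(opponent_cls, turns=100):
    """Play all 3s against a fresh copy of the opponent and report whether
    it eventually backs down to steady 2s (letting us keep taking 3)."""
    opp = opponent_cls()
    prev = None
    moves = []
    for _ in range(turns):
        moves.append(opp.move(prev))
        prev = 3  # our side of the probe: relentless 3s
    return all(m == 2 for m in moves[-20:])

class ToyFoldBot:
    """Stand-in opponent: fights back for 5 turns, then folds."""
    def __init__(self):
        self.turn = 0
    def move(self, prev):
        self.turn += 1
        return 3 if self.turn <= 5 else 2

class ToyGrimBot:
    """Stand-in opponent that never backs down."""
    def move(self, prev):
        return 3

print(folds_to_pressure(ToyFoldBot))  # True
print(folds_to_pressure(ToyGrimBot))  # False
```

The point being that the 100 costly in-game turns of probing happen inside the simulation instead of on the scoreboard, at the price of whatever the simulated code does to you.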
Ah, right, I misread that code.
If there’s a time limit on running, a quick “loop until 75% of the time limit is used up” will stop any simulator from running more than 1 turn of simulation.
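A minimal sketch of that anti-simulator defense, with a made-up 0.2-second move budget standing in for whatever limit the engine actually enforces:

```python
import time

TIME_LIMIT = 0.2  # hypothetical per-move budget in seconds

def move(prev):
    """Burn ~75% of the budget before answering. Anyone simulating this bot
    pays the same cost per simulated turn, so they can afford at most one."""
    deadline = time.monotonic() + 0.75 * TIME_LIMIT
    while time.monotonic() < deadline:
        pass  # busy-wait; a real bot might grind useless hashes instead
    return 3
```

Simulating k turns of this bot costs roughly 0.75·k move budgets, so a simulator with one budget per move can never get more than one turn deep.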
Having now looked over the code, it looks like no one expected so many silly bots, and a bot that plays 0 every round is simulated correctly. So a bot that did some checking and cooperated with complex opponents, simulated and crushed the silly bots, and folded to the clone army would probably have built a superior early lead, and possibly held onto it. Especially if lsusr was re-loading source code from the original file each round, and you took advantage of the loophole in the rules, which prohibited:
- Hacking your opponent’s source file (but not your own)
- Looking at the game engine stuff
- Saving any “information” from one round to another.

But, crucially, not replacing your own source code file deterministically after a particular round. So, after you finish exploiting the silly bots for the first 10-20 rounds, replace your source code with a compliant CloneBot with an aggressive payload to win after round 90.
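A sketch of that exploit, under the stated assumption that the engine re-reads each bot's file every round; the round number, file handling, and payload are all made up for illustration:

```python
import os

SWITCH_ROUND = 20  # hypothetical: the silly bots are exploited out by here

CLONE_PAYLOAD = '''\
# stand-in for a rules-compliant CloneBot source with an aggressive payload
def move(prev):
    return 3 if prev is None else 5 - prev
'''

def maybe_replace_self(current_round, source_path=None):
    """Deterministically overwrite our own source file after SWITCH_ROUND.
    No opponent file is touched and no state is carried in memory between
    rounds, so this slips past the three prohibitions listed above."""
    source_path = source_path or os.path.abspath(__file__)
    if current_round == SWITCH_ROUND:
        with open(source_path, "w") as f:
            f.write(CLONE_PAYLOAD)
```

Whether rewriting your own file counts as "saving information between rounds" is exactly the rules question the comment is raising.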
I mean, I did include an explicit “if they seem to be playing 0 then don’t be an idiot and play 5” line, and a similar one to play 4 if they kept playing 1. I had complexity restrictions that prevented me from doing more than that, but I’m confident those lines of code did good work.
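Those two lines amount to something like the following (my reconstruction, not the actual source):

```python
def counter_silly(opp_history, default=3):
    """If the opponent looks locked on 0, take the full 5 (5 + 0 <= 5);
    if locked on 1, take 4 (4 + 1 <= 5). Otherwise play normally."""
    recent = opp_history[-5:]
    if len(recent) == 5 and all(m == 0 for m in recent):
        return 5
    if len(recent) == 5 and all(m == 1 for m in recent):
        return 4
    return default  # fall through to the bot's normal logic

print(counter_silly([0, 0, 0, 0, 0]))  # 5
print(counter_silly([1, 1, 1, 1, 1]))  # 4
```

Against a population seeded with always-0 and always-1 bots, these two checks alone convert free points every turn without risking a clash.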