(Mostly pretty brainless exploration here. I have not particularly tried to work out the actual game rules.)
It looks as if CFHJMPR all do pretty well against individual other cards (just looking at the fraction of cases with, say, C on one side versus, say, G on the other where the side with C wins), but the triples that perform best mostly seem to be two of these plus L or S. Of course there may be weird selection effects that make this sort of statistical estimation misleading.
On the other hand, DLSBGW seem individually to do pretty poorly.
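The head-to-head statistic above is easy to compute directly. Here's a minimal sketch; the match-log format (two team strings plus a flag for whether side A won) is an assumption, and the toy matches at the bottom are made up for illustration, not real data.

```python
def head_to_head(matches, x, y):
    """Fraction of matches with card x on one side and card y on the
    other (and neither card appearing on both sides) won by x's side."""
    wins = total = 0
    for team_a, team_b, a_won in matches:
        if x in team_a and y in team_b and x not in team_b and y not in team_a:
            total += 1
            wins += a_won
        elif x in team_b and y in team_a and x not in team_a and y not in team_b:
            total += 1
            wins += 1 - a_won
    return wins / total if total else None

# made-up matches purely to show the call shape
matches = [("CAB", "GDE", 1), ("GXY", "CQZ", 0), ("CGH", "GQC", 1)]
print(head_to_head(matches, "C", "G"))  # C's side won both eligible matches -> 1.0
```

Note that the third toy match is excluded because both C and G appear on both sides, which would otherwise muddy the comparison.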
These kinda line up with some naive classification by weapon-type: thrown weapons (chakram, hammer, javelin) seem to do well; fire/firearms (flamethrower, matchlock, pyro) do too; implicit or explicit bladed weapons (duelist, lamella, samurai) don’t do so well, and nor do blunt-force weapons (bludgeon, golem). It seems like the wizard kinda belongs in that last set. This is all very handwavy. I vaguely imagine a ruleset where ranged attacks happen first and then short-range ones, or something, but as mentioned above I haven’t so far particularly tried to work out the rules.
I built some brute-force models using a gradient-boosting tree classifier from scikit-learn in Python. (I also tried some other things that performed worse.) I found that I got a small improvement in fit by including not just the presence/absence of each card from each team but also the count of cards from each of the five sets of three implied by the previous paragraph.
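For concreteness, here's a sketch of that feature encoding (not my actual model code). The match format and the card letters are assumptions; the groups follow the weapon-type guesses above, and since I can only reconstruct four of the five sets of three from the text, the fifth is omitted here.

```python
CARDS = "BCDFGHJLMPRSW"  # the thirteen cards named above; there may be more
GROUPS = {
    "thrown": {"C", "H", "J"},  # chakram, hammer, javelin
    "fire":   {"F", "M", "P"},  # flamethrower, matchlock, pyro
    "bladed": {"D", "L", "S"},  # duelist, lamella, samurai
    "blunt":  {"B", "G", "W"},  # bludgeon, golem, wizard (guessed grouping)
}

def featurize(team_a, team_b):
    """Per-side card presence indicators plus per-side group counts."""
    feats = []
    for team in (team_a, team_b):
        feats += [int(c in team) for c in CARDS]
        feats += [sum(c in g for c in team) for g in GROUPS.values()]
    return feats

# Rows of featurize(...) plus win/loss labels can then be fed to
# sklearn.ensemble.GradientBoostingClassifier().fit(X, y).
```

The group counts are redundant given the presence indicators, but tree ensembles often pick up such aggregate features more easily than they rediscover them from individual indicators.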
Asking the models for the best-performing triples against FRS suggested several with a ≥80% win-rate (though I wouldn’t trust the actual numbers much). Different randomized runs surface different triples, but FGP is almost always near the top, and brute-force counting shows FGP doing well against all pairs of cards from FRS and winning 2 out of 2 against FRS itself.
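The brute-force counting part (not the model-based search) can be sketched like this, again assuming the same hypothetical match-log format; the toy matches below are invented to show the shape of the output.

```python
from itertools import combinations

def matchup_record(matches, mine, theirs):
    """(wins, total) for exact matchups between two specific triples,
    counted in either orientation."""
    mine, theirs = set(mine), set(theirs)
    wins = total = 0
    for a, b, a_won in matches:
        if set(a) == mine and set(b) == theirs:
            total += 1
            wins += a_won
        elif set(a) == theirs and set(b) == mine:
            total += 1
            wins += 1 - a_won
    return wins, total

def best_triples(matches, opponent, cards="BCDFGHJLMPRSW"):
    """All candidate triples with at least one recorded match against
    the opponent, sorted by empirical win-rate."""
    scored = []
    for trio in combinations(cards, 3):
        w, t = matchup_record(matches, trio, opponent)
        if t:
            scored.append((w / t, t, "".join(trio)))
    return sorted(scored, reverse=True)

# invented example: two recorded FGP-vs-FRS matches, both won by FGP
matches = [("FGP", "FRS", 1), ("FRS", "FGP", 0)]
print(best_triples(matches, "FRS"))  # [(1.0, 2, 'FGP')]
```

Exact-matchup counts will be sparse for most triples, which is why the model-based estimate is the primary tool and this only a sanity check.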
I also looked for pairs of cards that seem to show notable interactions (e.g., having both in your hand does notably better or worse than you’d predict from the stats from having each one in your hand separately) and apparently found some, but adding these pairs as features before doing model-fitting apparently made the models worse so I haven’t used them.
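One crude way to screen for such interactions: compare the win rate of hands holding both cards against a naive baseline built from the two single-card win rates. The mean-of-marginals baseline here is a simplification (a logit-scale combination would be a bit more principled), and the data format is again assumed.

```python
def side_win_rate(matches, pred):
    """Win rate of sides whose team string satisfies pred,
    counting only matches where exactly one side satisfies it."""
    wins = total = 0
    for a, b, a_won in matches:
        if pred(a) and not pred(b):
            total += 1
            wins += a_won
        elif pred(b) and not pred(a):
            total += 1
            wins += 1 - a_won
    return wins / total if total else None

def interaction(matches, x, y):
    """Observed win rate holding both x and y, minus a naive baseline
    (mean of the two single-card win rates). Positive = synergy."""
    both = side_win_rate(matches, lambda t: x in t and y in t)
    rx = side_win_rate(matches, lambda t: x in t)
    ry = side_win_rate(matches, lambda t: y in t)
    if None in (both, rx, ry):
        return None
    return both - (rx + ry) / 2
```

Pairs with large positive or negative values would be candidates for explicit interaction features, though as noted above, adding them didn't help the models here.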
My currently-proposed team to play against FRS is FGP.
I haven’t looked at PVP at all so far.
Avid readers of the D&D.Sci series will remember that in the previous “League of …” episode I initially pessimized instead of optimizing. I did that here too, but I think I fixed it before posting anything here. (I haven’t re-checked all the heuristic handwaving bits, though, so there may be debris from my screwup in there.)
At the time of writing this I have not looked at anyone else’s comments.