Reflections on my performance:
I failed to stick the landing for PVE; looking at gjm’s work, it seems like what I was most missing was feature-engineering while/before building ML models. I’ll know better next time.
For PVP, I did much better. My strategy was guessing (correctly, as it turned out) that everyone else would include a Professor, noticing that Professors are weak to Javelineers, and making sure to include one as my backmidline.

Reflections on the challenge:
I really appreciated this challenge, largely because I got to use it as an excuse to teach myself to build Neural Nets, and try out an Interpretability idea I had (this went nowhere, but at least failed definitively/interestingly).
I have no criticisms, or at least none which don’t double as compliments. The ruleset was complicated and unwieldy, increasing the rarity of “aha!” moments and natural stopping points during analysis, and making it hard to get an intuitive sense of how a given matchup would shake out (even after the rules were revealed) . . . but that’s exactly what made it such a useful testing ground, and such valuable preparation for real-world problems.
I think calling anything I did “feature engineering” is pretty generous :-). (I haven’t checked whether the model still likes FGP without the unprincipled feature-tweaking I did. It might.)