Cheers to simon, ericf, and myself for offering an optimal solution! And cheers to abstractapplic for organizing the challenge.
The leaderboard (if you're not listed, either I couldn't figure out what your final decision was, or you added more than 10 points):
simon, ericf 0.9375
[('CHA', 8), ('CON', 15), ('DEX', 13), ('INT', 13), ('STR', 8), ('WIS', 15)]
seed 0.9375
[('CHA', 8), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 8), ('WIS', 16)]
Samuel Clamons 0.8095
[('CHA', 8), ('CON', 17), ('DEX', 13), ('INT', 13), ('STR', 7), ('WIS', 14)]
Asgard 0.7857
[('CHA', 9), ('CON', 16), ('DEX', 14), ('INT', 13), ('STR', 8), ('WIS', 12)]
Measure 0.7308
[('CHA', 8), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 18)]
kiwiakos 0.6774
[('CHA', 7), ('CON', 15), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 18)]
Alexey 0.6500
[('CHA', 11), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 15)]
newcom 0.6471
[('CHA', 11), ('CON', 16), ('DEX', 13), ('INT', 13), ('STR', 7), ('WIS', 12)]
AABoyles, Pongo, GuySrinivasan 0.6389
[('CHA', 6), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 20)]
Yongee 0.6364
[('CHA', 5), ('CON', 14), ('DEX', 13), ('INT', 20), ('STR', 8), ('WIS', 12)]
Deccludor 0.6098
[('CHA', 5), ('CON', 20), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 15)]
Randomini 0.4688
[('CHA', 4), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 16), ('WIS', 12)]
From plotting the data, I saw that:
It looked like the stats were independently randomly generated, with draws rejected unless the sum was >= 60.
Dexterity was useless.
It looked like there was a big advantage to having all stats >= 8. For Strength, strength=8 was almost as good as strength=20.
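The first observation above amounts to rejection sampling, which is easy to sketch. This is a minimal illustration, assuming each stat is drawn uniformly from 1 to 20 (the challenge's true per-stat distribution is unknown to me):

```python
import random

def roll_character(min_total=60):
    """Rejection-sample a character: draw six stats independently and
    keep only draws whose total is at least min_total. The uniform
    1..20 per-stat distribution is an assumption for illustration."""
    while True:
        stats = [random.randint(1, 20) for _ in range(6)]
        if sum(stats) >= min_total:
            return stats

# Every accepted character's stats sum to at least 60.
chars = [roll_character() for _ in range(1000)]
```

Under this mechanism each stat is marginally biased upward relative to its unconditional distribution, which is consistent with the cutoff showing up only in the sum.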
I fit a regularized logistic regression and a neural net, but couldn't get validation accuracy above 70%, which was only a little better than the 65% baseline of always guessing the more common outcome. Since the data is not very informative and I don't know how results are calculated, I decided it was better to stick with conservative models and try several of them. I fit a k-nearest-neighbors classifier, gradient boosting on decision trees, and regularized logistic regression (all with ~70% validation accuracy), and chose a point that scored near the top for all three classifiers. (It had all stats >= 8, too.)
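The "near the top for all three classifiers" selection can be sketched as picking the candidate that maximizes the worst-case predicted probability across the ensemble. This is a minimal sketch using scikit-learn on synthetic stand-in data; the model hyperparameters and the data generator here are illustrative assumptions, not the exact setup from the challenge:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in with six "stat" features; the real challenge
# data is not reproduced here.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

models = [
    KNeighborsClassifier(n_neighbors=15),
    GradientBoostingClassifier(random_state=0),
    LogisticRegression(C=0.1, max_iter=1000),  # regularized
]
for m in models:
    m.fit(X, y)

def min_score(x):
    """Worst-case predicted success probability across the ensemble.
    Maximizing this favors points that all three models agree on."""
    x = np.asarray(x).reshape(1, -1)
    return min(m.predict_proba(x)[0, 1] for m in models)

# Pick the candidate with the best worst-case score.
candidates = X[:25]
best = max(candidates, key=min_score)
```

Maximizing the minimum score is a deliberately conservative criterion: a point that only one model likes gets no credit, which matches the goal of hedging against any single model being wrong.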