Well, I’m getting a reasonably exciting calibration curve with lots of ups and downs. Cool!
Bug: when I click “Display Calibration Curve” for a second time, the graph is displayed in a larger size. (Doing this sufficiently many times crashed Chrome.) Refreshing the page fixes this behavior.
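One plausible cause (an assumption, not confirmed from the source): the click handler appends a freshly rendered chart each time instead of replacing the old one, which would also explain the memory growth and the eventual crash. A minimal sketch of the usual fix, with a stand-in draw function where the real code would call the charts API:

```javascript
// Hypothetical redraw guard: empty the container before drawing, so
// repeated clicks replace the chart instead of stacking new copies.
function redrawInto(container, draw) {
  container.innerHTML = ''; // drop any previously rendered chart
  draw(container);          // render a fresh chart into the now-empty node
}

// Demo with a fake container object; in the browser this would be the
// real DOM element the chart is drawn into.
const fakeContainer = { innerHTML: '<div>old chart</div>' };
redrawInto(fakeContainer, (c) => { c.innerHTML = '<div>new chart</div>'; });
console.log(fakeContainer.innerHTML); // '<div>new chart</div>'
```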
Feature request: I would like to be able to see whether my 50% correctness at 30% confidence comes from getting 1 out of 2 questions right or 5 out of 10. (Error bars of some sort would also work.)
Good idea. I don’t think the charts API I’m using supports error bars, but a good alternative would be a secondary chart: a bar graph of right vs. total questions for each bucket. That would also give a good visual representation of how frequently you use each confidence level.
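The per-bucket tally that bar graph would plot can be sketched as follows. The answer-record shape (`{ confidence, correct }`), the function name, and the 10-point bucket width are all assumptions for illustration, not the app's actual code:

```javascript
// Count right vs. total answers per confidence bucket, so a 30%-confidence
// answer can be reported as "1 out of 2" rather than just "50%".
function tallyBuckets(answers, bucketSize = 10) {
  const buckets = new Map();
  for (const { confidence, correct } of answers) {
    // Label each bucket by its lower edge: 30% and 35% both land in "30".
    const key = Math.floor(confidence / bucketSize) * bucketSize;
    const b = buckets.get(key) || { right: 0, total: 0 };
    b.total += 1;
    if (correct) b.right += 1;
    buckets.set(key, b);
  }
  return buckets;
}

const sample = [
  { confidence: 30, correct: true },
  { confidence: 35, correct: false },
  { confidence: 90, correct: true },
];
const tally = tallyBuckets(sample);
console.log(tally.get(30)); // { right: 1, total: 2 } — i.e. "1 out of 2"
```

Plotting `right` and `total` as paired bars per bucket gives both the accuracy and the usage frequency of each confidence level in one view.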