Thanks for your alternative explanation, Gordon.

There are a few points I would make.

First, it explicitly contradicts the stated goal of maximising points.

Base models don't do it, so it's not simply mimicking human behaviour.

Giving the questions one at a time lowers confidence, and lifeline use scales with this. It's hard to see how that could be over-eagerness to help: using more lifelines and scoring fewer points seems contrary to helpfulness.