[Question] Does human choice have to be transitive in order to be rational/consistent?

I was struck by that question while reading one of the responses to the post polling the merits of several AI alignment research ideas.

I have not really thought this through, but it seems the requirement that a preference ordering satisfy transitivity must also assume the alternatives being ranked can be distilled to some common denominator (economics would probably suggest utility per unit, or more accurately MU/$, marginal utility per dollar).
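To spell that parenthetical out (this is just my gloss using the standard consumer-optimum condition, not anything from the polling post), the common denominator economics has in mind is marginal utility per dollar being equalized across alternatives:

$$\frac{MU_A}{p_A} = \frac{MU_B}{p_B} = \frac{MU_C}{p_C}$$

Once every alternative is collapsed onto a single scale like that, transitivity of the ranking is automatic, since real numbers are totally ordered; my worry is about choices where no such collapse is available.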

I’m not sure that really covers all cases, and perhaps not even the majority of them.

If we’re really comparing different sets of attributes that we label A, B and C, transitive preferences might well be the exception rather than the rule.

The rule A>B and B>C, therefore A>C, is often violated when considering group choices; in political science that produces a voting cycle.
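As a concrete sketch of that (the voters and rankings below are invented purely for illustration; the pattern is the classic Condorcet paradox), three voters with perfectly transitive individual rankings can still produce a cyclic group preference under pairwise majority vote:

```python
from itertools import combinations

# Three voters, each with a fully transitive individual ranking (best to worst).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes_for_x = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return votes_for_x > len(voters) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"{winner} beats {loser}")
# Output: A beats B, C beats A, B beats C -- a cycle at the group level,
# even though every individual voter's own ranking is transitive.
```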

I just wonder whether it is really correct to insist on that kind of transitivity within one person’s head, given that we’re comparing different things, and so likely their use/consumption in slightly different contexts as well.
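The same structure seems able to show up inside one head if each alternative is a bundle of attributes and I lean toward whichever option wins on more of them (the attributes and scores here are invented purely for illustration):

```python
# One person's ratings of three options on three attributes (higher is better).
# The attributes play the role the voters played in the group example.
options = {
    "A": {"price": 3, "quality": 1, "convenience": 2},
    "B": {"price": 2, "quality": 3, "convenience": 1},
    "C": {"price": 1, "quality": 2, "convenience": 3},
}

def prefer(x, y):
    """Prefer x over y if x beats y on a majority of attributes."""
    wins = sum(1 for attr in options[x] if options[x][attr] > options[y][attr])
    return wins > len(options[x]) / 2

print(prefer("A", "B"), prefer("B", "C"), prefer("C", "A"))
# Prints: True True True -- A > B, B > C, and C > A, all inside a single chooser.
```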

Could such an internal voting cycle be a source of indecision (which is a bit different from indifference), and be why we will often avoid a pair-wise decision process and instead put all the alternatives up against one another to pick the preferred one?
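One way to see why pair-wise whittling feels untrustworthy when such a cycle exists: with a cyclic preference, the "winner" of a sequential pair-wise elimination depends entirely on the order in which the pairs happen to be considered (a hypothetical sketch, reusing the A/B/C cycle above):

```python
from itertools import permutations

# A cyclic pairwise preference: A beats B, B beats C, C beats A.
beats = {("A", "B"), ("B", "C"), ("C", "A")}

def pairwise_elimination(agenda):
    """Compare alternatives two at a time, always keeping the pairwise winner."""
    current = agenda[0]
    for challenger in agenda[1:]:
        if (challenger, current) in beats:
            current = challenger
    return current

for agenda in permutations("ABC"):
    print(agenda, "->", pairwise_elimination(agenda))
# With a cycle, the final 'winner' is always whichever alternative entered last,
# so every option wins under some agenda order.
```

Putting all the alternatives on the table at once at least avoids that kind of agenda-dependence, which seems consistent with why we instinctively do it.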

If so, would that be something an AGI also finds occurring naturally, and not an error to be corrected but rather a situation where applying a pair-wise choice or some transitivity check would itself be the error?