[Question] Does human choice have to be transitive in order to be rational/consistent?

I was struck by that question while reading one of the responses to the post polling the merits of several AI alignment research ideas.

I have not really thought this through, but it seems the requirement that a preference ordering satisfy transitivity must also assume the alternatives being ranked can be distilled to some common denominator (economics would probably suggest utility per unit, or more accurately MU/$).
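A minimal sketch of that assumption (the scores and the `utility_per_dollar` name are invented for illustration): once every alternative is reduced to a single scalar, transitivity follows for free from the ordering of the real numbers.

```python
# Made-up scores standing in for a single common denominator (MU/$).
utility_per_dollar = {"A": 3.2, "B": 2.7, "C": 1.9}

def prefers(x, y):
    """x is preferred to y iff its scalar score is higher."""
    return utility_per_dollar[x] > utility_per_dollar[y]

assert prefers("A", "B") and prefers("B", "C")
assert prefers("A", "C")  # transitivity comes for free on a single scale
```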

I’m not sure that really covers all cases, and perhaps not even the majority of them.

If we’re really comparing different sets of attributes we label A, B, and C, transitive preferences might well be the exception rather than the rule.

The inference A>B, B>C, therefore A>C is often violated when considering group choices; in political science that produces a voting cycle.
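A minimal sketch of that cycle, the classic Condorcet paradox, with invented voter rankings: every individual ranking is transitive, yet pairwise majority vote over the group is not.

```python
# Invented rankings, best to worst: each voter is individually transitive.
voters = [["A", "B", "C"],
          ["B", "C", "A"],
          ["C", "A", "B"]]

def majority_prefers(x, y):
    """True iff a majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in voters)
    return wins > len(voters) / 2

assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")  # the cycle: no transitive group ranking
```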

I just wonder if it is really correct to claim such results within one person’s head, given we’re comparing different things, and so likely their use/consumption in slightly different contexts as well.
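As one hypothetical illustration of how the same structure could appear inside a single head, suppose a person compares options attribute by attribute and prefers whichever option wins on more attributes (the ranks below are invented):

```python
# Invented attribute ranks (1 = best) for three options on three attributes,
# e.g. price, quality, convenience.
options = {"A": (1, 2, 3),
           "B": (2, 3, 1),
           "C": (3, 1, 2)}

def prefers(x, y):
    """x beats y iff x is better (lower rank) on a majority of attributes."""
    wins = sum(a < b for a, b in zip(options[x], options[y]))
    return wins > len(options[x]) / 2

# Pair-wise comparison cycles even though each attribute ranking is transitive.
assert prefers("A", "B") and prefers("B", "C") and prefers("C", "A")
```

The cycle here has exactly the voting-cycle structure, with the attributes playing the role of the voters.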

Could that internal voting cycle be a source of indecision (which is a bit different from indifference), and why we will often avoid a pair-wise decision process and opt for putting all the alternatives up against one another to pick the preferred alternative?

If so, would that be something an AGI also finds naturally occurring, and not an error to be corrected, but rather a situation where applying a pair-wise choice or some transitivity check would actually be the error?