I’m not sure even an AI could have a cardinal utility function, since to be a meaningful ‘value’ it must motivate action; its theoretical mathematical relationship to other utility gains and losses is irrelevant to this actual action.
The point isn’t whether robots (or people) could be affected by cardinal magnitudes. The point is that only the valuation which causes action in the subjective present (the Bergsonian present) actually motivates people. How they come to have that value might be predictable by direct cardinal ratios (indeed, if you’re a mental determinist and materialist this is true of humans). The point is that this psychological or physiological fact is the origin of a particular motivation. Teleological entities, however, only feel and act right now, and only on their most highly ranked values—however those values came to be most highly ranked. And there is no assessable ‘ratio’ of satisfaction gained or lost, only expectations of better or worse.
Watch out for the Mind Projection Fallacy; the fact that the relative magnitudes of consciously considered numbers don’t motivate human beings accordingly has little to do with how an AI could be programmed.
I mean, “if X>2Y then do Z” is a really easy sort of rule to program.
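A minimal sketch of that kind of rule, with hypothetical names and utilities chosen purely for illustration; the point is only that conditioning an action on a cardinal comparison of magnitudes is trivial to write down:

```python
def choose_action(utility_x: float, utility_y: float) -> str:
    """Perform action 'Z' exactly when X's utility exceeds twice Y's."""
    if utility_x > 2 * utility_y:
        return "Z"
    return "noop"

# The agent's behavior is driven directly by the cardinal ratio:
print(choose_action(5.0, 2.0))  # 5 > 2*2, so the agent does Z
print(choose_action(3.0, 2.0))  # 3 <= 2*2, so it does nothing
```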