“Von Neumann-Morgenstern axioms”
Isn’t it interesting that Morgenstern was influenced by the second- and third-generation ‘Austrian’ economists (including Mises directly), and then became associated with AI through his work with Von Neumann, a field which is itself frequently associated with laissez-faire, libertarian, and techno-commercialist thinking?
To actually address the post: I do agree that utility functions are ordinal but not cardinal. I’m not sure even an AI could have a cardinal utility function, since to be a meaningful ‘value’ it must motivate action; its theoretical mathematical relationship to other utility gains and losses is irrelevant to that actual action. Likewise, a ‘mathematical’ or cardinal utility function can just as easily be described as a psychological or functional ‘system’ being utilized, which itself would seem irrelevant to the ranking of values actually involved in purposeful action.
I’m not sure even an AI could have a cardinal utility function, since to be a meaningful ‘value’ it must motivate action; its theoretical mathematical relationship to other utility gains and losses is irrelevant to that actual action.
Watch out for the Mind Projection Fallacy; the fact that the relative magnitudes of consciously considered numbers don’t motivate human beings accordingly has little to do with how an AI could be programmed.
I mean, “if X>2Y then do Z” is a really easy sort of rule to program.
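A minimal sketch of that sort of rule (the function name and values are hypothetical, just for illustration): an agent whose choice hinges on the cardinal ratio between two utilities, not merely on which one ranks higher.

```python
def choose_action(utility_x: float, utility_y: float) -> str:
    """Return 'Z' when X's utility exceeds twice Y's, otherwise 'W'."""
    if utility_x > 2 * utility_y:
        return "Z"  # the cardinal ratio itself drives the decision
    return "W"

# The same ordinal ranking (X above Y) yields different actions
# depending on the magnitudes involved:
print(choose_action(3.0, 1.0))  # X > 2Y, so 'Z'
print(choose_action(3.0, 2.0))  # X > Y but X <= 2Y, so 'W'
```

Here both calls rank X above Y, yet the action differs, which is exactly what a purely ordinal preference ordering could not express.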
The point isn’t whether robots (or people) could be affected by cardinal magnitudes. The point is that only the valuation which causes action in the subjective present (the Bergsonian present) actually motivates anyone. How an agent comes to hold that value might well be predictable from direct cardinal ratios (indeed, if you’re a mental determinist and materialist, this is true of humans). But that psychological or physiological fact is merely the origin of a particular motivation. Teleological entities only feel and act right now, and only on their most highly ranked values, however those values came to be most highly ranked. And there is no assessable ‘ratio’ of satisfaction gained or lost, only expectations of better or worse.