You either entirely misunderstood what I’m saying, or stopped reading before you got to the thesis statement of this article. You also appear to be using a different definition of the word “instability” than I am.
This wasn’t meant to be a defense of HapMax. I used it as an example only because it’s familiar and simple enough not to pull focus from the main point, which was about utility functions in general (including ones that are close enough to valid for you to care about them, and algorithmically constructed ones as in CEV), not about HapMax in particular. I realize that there are many other things wrong with HapMax and that it is not salvageable.
When I say that HapMax is unstable, I mean that a bug in one subdivision drastically alters the output of the whole. Even if there were no utility monster, one might imagine a bug or a cosmic-ray hit causing an ordinary person to be treated as one.
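To make that failure mode concrete, here is a toy sketch. It assumes, purely for illustration, that HapMax is a weighted sum of per-person happiness terms; the hapmax_utility function and the numbers are hypothetical, not anything from the article.

```python
# Hypothetical toy model: HapMax as a weighted sum of per-person happiness.
def hapmax_utility(happiness, weights):
    """Aggregate utility: sum over persons of weight_i * happiness_i."""
    return sum(w * h for w, h in zip(weights, happiness))

happiness = [0.6, 0.5, 0.7]   # ordinary people, ordinary happiness levels
weights = [1.0, 1.0, 1.0]     # intended: everyone counts equally

print(hapmax_utility(happiness, weights))   # ~1.8, behaves as designed

# A single localized qualitative error, e.g. a flipped exponent bit in one weight:
weights[1] = 2.0 ** 40

print(hapmax_utility(happiness, weights))   # ~5.5e11: person 1 now swamps
                                            # everyone else's terms entirely
```

One corrupted value, and the optimizer now treats an ordinary person exactly as it would treat a utility monster; that is the sense in which the error is local but the effect is global.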
You seem to be thinking of stability under self-modification, as opposed to what I’m talking about, which is stability under the introduction of localized qualitative errors.
A better word for your concept, then, might be “robustness”, rather than “stability”.
No—there’s nothing unstable or buggy about HapMax. The utility monster is a large change in the input that causes a large change in the output. Instability is when a small change in the input causes a large change in the output. HapMax is stable by any measure I can think of. You just don’t like HapMax because you don’t think you implement it.
(If you could really perceive the vast orgasmic pleasure of the monster, rather than just reading a text description of it, you might find that you do implement HapMax.)
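A toy numeric sketch of the distinction, again treating HapMax (hypothetically) as a weighted sum of per-person happiness: a small perturbation of one input moves the output by a proportionally small amount, whereas the utility monster is itself an enormous input.

```python
# Same hypothetical weighted-sum form of HapMax, for illustration only.
def hapmax_utility(happiness, weights):
    return sum(w * h for w, h in zip(weights, happiness))

weights = [1.0, 1.0, 1.0]

baseline  = hapmax_utility([0.6, 0.5, 0.7], weights)
perturbed = hapmax_utility([0.6, 0.5001, 0.7], weights)   # small change in the input
print(perturbed - baseline)                                # ~0.0001: small change in the output

# The utility monster is a large change in the input, not a small perturbation:
with_monster = hapmax_utility([0.6, 0.5, 0.7, 1e12], weights + [1.0])
print(with_monster - baseline)                             # ~1e12: large change in the output
```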