That definition feels too broad to me. Typically, akrasia has two further attributes:
Improper time discounting: we don’t spend an hour a day exercising even though we believe it would make us lose weight, with a huge hedonic payoff if we maximize hedons over a time horizon of a year.
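A toy sketch of that first attribute, using hyperbolic discounting (a standard model of improper time discounting; the specific numbers and the discount rate `k` here are hypothetical, not from the comment above):

```python
def discounted(value, delay_days, k=0.1):
    """Perceived present value of a delayed reward under hyperbolic
    discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay_days)

# Exercising costs, say, 5 hedons of effort right now; the weight-loss
# payoff (say 500 hedons) arrives roughly 180 days out.
cost_now = 5.0
mild = discounted(500.0, delay_days=180, k=0.1)   # 500/19  ~ 26.3: still worth it
steep = discounted(500.0, delay_days=180, k=1.0)  # 500/181 ~  2.8: skip the workout

# Even a huge payoff shrinks toward the immediate cost once it is months
# away, and a steep enough discounter will never get off the couch.
```

The point of the sketch: over a one-year horizon the undiscounted sum clearly favors exercising, but the felt, discounted value can fall below the immediate cost.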
Feeling so bad about not doing the necessary task that we don’t really enjoy ourselves no matter what we do instead (frequently leading to long stretches of doing nothing at all). Hedonically, even doing the homework usually feels a lot better (after the first ten minutes) than putting it off, and we know this from experience, yet we just can’t get started!
Which is why it’s pretty blatantly obvious that humans aren’t utility maximizers on our native hardware. We’re not even contextual utility maximizers; we’re state-dependent error minimizers, where the errors we’re trying to minimize depend heavily on short-term priming and on longer-term, time-decayed perceptual averages like “how much relaxation time I’ve had” or “how much I’ve gotten done lately”.
Consciously and rationally, we can argue that we ought to maximize utility, but our behavior and emotions are still controlled by the error-minimizing hardware, so much so that it motivates all sorts of bizarre rationalizations about utility: we contort the consciously appealing idea of utility maximization until it no longer violates our error-minimizing intuitions too badly. (That is, if we weren’t error minimizers, we wouldn’t feel the need to reduce the difference between our intuitive notions of morality and our more “logical” inclinations.)
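A minimal sketch of the error-minimizer picture, in my own framing rather than the commenter’s: behavior is driven by the gap between a time-decayed perceptual average and a set point, not by any summed utility. The set point and decay rate below are hypothetical parameters.

```python
def decayed_average(prev_avg, observation, decay=0.9):
    """Exponentially weighted average: recent experience dominates,
    older experience fades geometrically."""
    return decay * prev_avg + (1 - decay) * observation

relaxation_avg = 0.5   # start at the reference level
set_point = 0.5        # hypothetical "enough relaxation" level

# A run of five all-work, no-relaxation days drags the average down:
for _ in range(5):
    relaxation_avg = decayed_average(relaxation_avg, 0.0)

error = set_point - relaxation_avg
# error is now positive (~0.2), producing an urge to relax, regardless
# of what a long-horizon utility calculation would recommend.
```

Note the state dependence: the same evening of work produces no urge at all if the decayed average is already at or above the set point, which matches the “how much relaxation time I’ve had” framing above.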
Then can you tell me what utility is? What is it that I ought to maximize? (As I expanded on in my top-level comment.)
Something that people argue they ought to maximize, but have trouble precisely defining. ;-)