Given the imprecise nature of the question, the moment mathematical precision was introduced I became extremely skeptical that this would be productive, and I was not disappointed. I understand the math well enough; my issue is not with your formulae but with their relevance.
The two biggest problems in analyzing the value of self-improvement are that we don’t know what it’s worth and, worse, that it’s endogenous: improving ourselves yields direct utility (if we value our “character,” “virtue,” or what-have-you), yields indirect utility (by improving our ability to obtain other goals), and may itself change our utility function (e.g., self-modifying into a person who cares more about physical fitness alters the coefficients on many junk foods in my utility function).
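To put the endogeneity point in symbols (a sketch with made-up notation, not anything from the post): write utility as $U_\theta(x)$, where $x$ is the bundle of outcomes you obtain and $\theta$ parameterizes what you care about. Ordinary investments move only $x$; self-improvement can move both:

\[
(x, \theta) \;\mapsto\; (x', \theta'), \qquad U_\theta(x') \neq U_{\theta'}(x') \text{ in general.}
\]

So the “value of self-improvement” computed against the pre-modification preferences $U_\theta$ and against the post-modification preferences $U_{\theta'}$ will generally disagree, and there is no single well-posed number to optimize.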
It’s not wholly irrelevant, but the inputs are so ill-defined as to render formalizing it of no practical value.
I’m thinking of writing a sequel called “When to Self-Improve in Practice.”
If this is an accurate description, I’d be very much interested in reading it.
The context was optimizing job earnings, not transhumanist brain modifications. I think the model is reasonable in that context, if a bit hard to apply.
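For concreteness, here is a minimal sketch of a model in that spirit (illustrative notation, not necessarily the post’s exact formulation): let $w(t)$ be your earnings rate, $s(t) \in [0, 1]$ the fraction of time spent on self-improvement, $k$ the rate at which invested time raises earnings, and $r$ the discount rate. Then

\[
\dot{w}(t) = k\, s(t), \qquad V = \int_0^{T} e^{-r t}\,\bigl(1 - s(t)\bigr)\, w(t)\, dt,
\]

and “when to self-improve” becomes the choice of $s(t)$ that maximizes $V$: investment sacrifices current income $(1 - s)\,w$ for a permanently higher $w$, so it pays early, when the gain has a long discounted horizon ahead of it, and stops paying as the horizon shrinks or $r$ grows.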
When I read “self-improvement” I don’t immediately think “investment to enhance one’s future earnings,” though I admit the reading makes some sense. The endogeneity problem largely disappears if you define your utility in monetary terms, but uncertainty still abounds, and real problems may remain. Most notably, self-investment that fails to pay off may lower your utility, since you feel you’re worth more than you get; that effect matters, but a purely monetary model doesn’t capture it. Since your actual concern is probably utility rather than money, the issue is significant.
Also, “transhumanist brain modifications” are hardly necessary to change one’s utility function. Most forms of self-improvement in the personal (as opposed to professional) sense are likely either to require or to result in such changes.
I don’t think we disagree on anything substantive. You might find the post’s title misleading for a limited model like this, but I prefer it to something more disclaimer-heavy. For instance:
“A Toy Model Of Optimizing A Scalar-Valued Function Given Some Predictable Ability To Spend Time On Increasing The Rate Of Change, But With A Discount Rate Included; Which Model May Be Of Some Analogous Application To Simple Work-Related Self-Optimization (Not Counting Self-Optimization Of Types That May Substantively Change One’s Goals And Valuations)”.
I agree on the first part. The rephrasing is perhaps a straw man. “The Math of When to Invest in Oneself” would get the exact point across without the ambiguity of “self-improvement.”
Fair enough; it was just too fun not to post.
(Of course, they actually did titles like that in the 17th century.)
I think I have a simpler utility function than you do :)