This can make sense particularly in cases where we have already invested a lot of effort into something. But if we haven’t – as is the case to varying degrees in these examples – then it would typically be quite surprising if we just ended up close to the optimum by default.
Who is “we”? You, personally? All of society? Your ancestral lineage going back to LUCA? Selection effects, cultural transmission of knowledge, and instinct all provide ways activities can be optimized without conscious personal effort. In many domains, approximate optimality should absolutely be your baseline assumption. And then there’s the meta level to consider, on which your default assumptions about approximate optimality in any given domain are themselves optimized by default. Perhaps your prior should be that your optimality assumptions are roughly optimal, then reason from that starting point! If not, why not?
Who is “we”? You, personally? All of society? Your ancestral lineage going back to LUCA?
Well, it depends on the case. When speaking of a person’s productivity or sleep, it’s primarily the person. When speaking of information flow within a company, it’s the company. When speaking of the education system within a country (or whatever the most suitable legislative level is), it’s those who have built the education system into its current form.
But cultural and evolutionary influences are indeed an important point. It may well be that sleep tends to be close to optimal for most people for such reasons. But even then: if there are easy ways to make it worse, it may at the very least be worth checking whether you aren’t accidentally doing one of these preventable things (such as exposing yourself to bright displays in the evening, or consuming caffeine in the afternoon or evening).
Perhaps your prior should be that your optimality assumptions are roughly optimal, then reason from that starting point! If not, why not?
I agree I haven’t really argued in the post for why and when this shouldn’t be the case. A slightly weaker form of what I’m claiming in the post may just be: it’s worth checking whether optimality is actually plausible in any given case. And then it doesn’t matter that much which prior you start from. Maybe you assume your intuition about optimality is usually right, but it can still be worth checking individual cases rather than following the gut instinct of “this thing is probably optimal because that’s what my intuition says and hence I won’t bother trying to improve it”.
The question of how many things are optimal, and how well calibrated your intuition is, really comes down to the underlying distributions, and, in this context, to what types of things any given person typically has (and might notice having) futility assumptions about. What I was getting at in the post is basically some form of “instead of dismissing something as futile-to-improve outright, maybe catch yourself and occasionally spend a few seconds thinking about whether this is really plausible”. I think the cost of that action is really low[1], even if it turns out that 90% of the things of this type you encounter happen to be optimal already (and I don’t think that’s what people will find!).
[1] The cost may end up being higher if this causes you to waste time trying to improve things that turn out to be futile or already optimal. But that’s IMHO beyond the scope of this post. I’m not talking about how to accurately evaluate these things, just that our snap judgments are not perfect, and that we should catch ourselves when applying them carelessly.
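For what it’s worth, here is a rough back-of-envelope sketch of the “checking is cheap” claim. All the numbers below are made-up assumptions purely for illustration, and it deliberately ignores the follow-up cost mentioned in the footnote (time wasted on improvement attempts that don’t pan out):

```python
# Back-of-envelope sketch of the "checking is cheap" claim.
# Every number here is an illustrative assumption, not a measurement.

snap_judgments_per_year = 100       # times you catch a "this is futile to improve" thought
seconds_per_check = 10              # brief plausibility check per judgment
share_actually_improvable = 0.10    # i.e. 90% turn out to be optimal/futile after all
hours_saved_per_improvement = 1.0   # average payoff when a check uncovers a real improvement

cost_hours = snap_judgments_per_year * seconds_per_check / 3600
benefit_hours = snap_judgments_per_year * share_actually_improvable * hours_saved_per_improvement

print(f"Yearly cost of checking: {cost_hours:.2f} h")   # ~0.28 h
print(f"Expected yearly benefit: {benefit_hours:.1f} h") # ~10 h
```

Even with 90% of checks coming up empty, a few seconds per check comes out far ahead of an hour saved per genuine improvement; where the argument can break down is in the unmodeled follow-up attempts, not in the checks themselves.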