Who is “we”? You, personally? All of society? Your ancestral lineage going back to LUCA?
Well, depends on the case. When speaking of a person’s productivity or sleep, it’s primarily the person. When speaking of information flow within a company, it’s the company. When speaking of the education system within a country (or whatever the most suitable legislative level is), it’s those who have built the education system in its current form.
But the point about cultural and evolutionary influences is indeed an important one. It may well be that sleep tends to be close to optimal for most people for such reasons. But even then: if there are easy ways to make it worse, it may at the very least be worth checking that you aren’t accidentally doing these preventable things (such as exposing yourself to bright displays in the evening, or consuming caffeine in the afternoon/evening).
Perhaps your prior should be that your optimality assumptions are roughly optimal, then reason from that starting point! If not, why not?
I agree that I haven’t really argued in the post for why and when this shouldn’t be the case. A slightly weaker form of what I’m claiming in the post may just be: it’s worth checking whether optimality is actually plausible in any given case. Then it doesn’t matter that much which prior you start from. Maybe you assume your intuition about optimality is usually right, but it can still be worth checking individual cases rather than following the gut instinct of “this thing is probably optimal because that’s what my intuition says, and hence I won’t bother trying to improve it”.
The question of how many things are optimal, and how well calibrated your intuition is, really comes down to the underlying distributions, and, in this context, to the types of things about which any given person typically makes (and might notice making) futility assumptions. What I was getting at in the post is basically some form of “instead of directly dismissing something as futile to improve, maybe catch yourself and occasionally spend a few seconds thinking about whether that’s really plausible”. I think the cost of that action is really low[1], even if it turns out that 90% of the things of this type you encounter happen to be optimal already (and I don’t think that’s what people will find!).
The cost may end up being higher if this causes you to waste time trying to improve things that turn out to be futile or already optimal. But that’s imho beyond the scope of this post. I’m not talking about how to accurately evaluate these things, just that our snap judgments are not perfect, and we should catch ourselves when applying them carelessly.