As a possibility, buying current beach-front property is consistent with believing in global warming if you also believe that predicting where the new beach-front will be is hard enough that it is cheaper (say, per future-discounted year of residence) to buy property on the current beach now and again at the beach's new location later than it is to buy any combination of properties today.
The inheritance question is actually rather different, as it is about buying beach-front-property-futures in the present.
I suspect that, while it is a legitimate distinction, dividing these skill-rankings into life domains:
A) Confuses what I feel to be your main (or at least, old) idea of agency, which focuses on the habit of intentionally improving situations, with the domain-specific knowledge required to be successful in improving a situation.
Mostly, I don’t like the idea of redefining the word agency to be the product of domain skills, generic rationality skills, and the habit of using rationality in that domain… because that’s the same thing as succeeding in that domain (what we call winning), minus situation effects anyway. It seems far better to me to use “agency” to refer only to the habitual application of rationality.
You still find that agency is domain specific, but now it is separate from domain skills; give someone who is an agent in a domain some knowledge about the operative principles of that domain, and they start improving their situation; give a non-agent better information and you have the average Lifehacker reader: they read all this advice and don’t implement any of it.
B) Isn’t nearly fine-grained enough.
Besides the usual Psych 100 stuff about people remembering things better in the environment they learned them in (how many environments can you think of, and how many life domains? What’s the ratio between those numbers — in the hundreds?), here is an anecdote that really drove the point home for me:
I have a game I’m familiar with (Echoes), which requires concurrent joystick and mouse input, and I like to train myself to use various messed-up control schemes (for instance, axis inversion). For several days I have to make my movements using a very attention-hungry, slow, deliberate process; over time this process gets faster and less attention-hungry, reducing the frequency and severity of slip-ups until I am once again good at the game. I feel the parallels to a rationality practice are obvious.
Relevantly, the preference for the new control scheme then persists for some time… but, for instance, the last such scheme only activated when some deep pattern-matching hardware noticed that I had my hand on the joystick AND was playing that game AND was dodging (menus were no problem). If I withdrew any of those conditions, mouse control was again fluent; but put your hand back on the joystick, and three seconds later...
So, I suppose my point in this subsection is that you cannot safely assume that because you’ve observed yourself being “agenty” in (say) several relationship situations, you are acting with agency in any particular relationship, topic, time, place, or situation.
(Also, I expect, the above game-learning situation would provide a really good way to screen substances and other interventions for rationality effects, but I haven’t done enough experimentation with that to draw any conclusions about the technique or any specific substances.)