Explanations can be about a few different things. I’ve been having some luck splitting them into variant (who, what, where) and invariant (how, why) parts. Aristotle and Sun Tzu had their own type systems for causality, and I’m curious why I haven’t been able to find much in either philosophy or computer science about it. One guess is that I just haven’t found the right keywords yet, but I have gone hunting around citation chains from knowledge representation and Pearl’s stuff, as well as the stuff from here and here. The modal logic stuff seems promising, but most of what’s been built on it seems like epicycles.
In the ‘keep the organization from being overrun’ sense, see also sealioning. The search space of worthwhile things is very large and is explored idiosyncratically by well-meaning, intelligent people. Aggressive, value-laden ‘logical arguments’ often point to a tacit desire to have everyone converge on the same set of metaheuristics, because the person making them has a strong need for internal consistency that they are externalizing onto their social space. There’s nothing wrong with wanting internal consistency. But pressed hard, it is anti-truth-seeking as an aggregate strategy, because you lose the consilience of having different people pursue different search methods. Epistemology is a team sport. The objection would be: ‘but if we don’t then argue about what we’ve discovered, what’s the point?’ The point is that adversarial processes, as part of the truth-seeking process, need to be consensual. This applies doubly when you aren’t in a 101 space and people may be sick of a dynamic in which newer members, asking simple-seeming questions with complicated answers, feel entitled to the effort needed to explain those answers. This is one of the reasons well-written blog posts that can be referenced by name are so helpful for community discourse.
I like this post by the way and my comment wasn’t an objection to it.
In general: making somatic reactions part of the mutual understanding of what is happening. The more you are aware of when you’re abstracting from a somatic reaction the more you’ll recognize when others are doing it too. E-Prime is aimed at this, as the step of symbolizing one thing as another thing (X *is* Y) is a moment in which you can catch this.
I feel like the elephant in the room is that convincing logical arguments are often only weak to moderate evidence for something.
What does it mean to pay utility?
The bar that is set for appeals to consequences implies the sort of equilibrium world you’ll end up in. Err on the side of setting it higher: it is hard to move it back up later, since epistemic standards tend to slide in the face of local incentives.
I also want to note an argumentative tactic that occurs on the tacit level whereby people will push you into a state where you need to expend more energy on average per truth bit than they do, so they eventually win by attrition. Related to evaporative cooling. The subjective experience of this feels like talking to the cops. You sense that no big wins are available (because they have their bottom line) but big losses are, so you stop talking. If you’ve encountered this dynamic, you recognize things like this
> “You still haven’t refuted my argument. If you don’t do so, I win by default.”
as part of the supporting framework for the dynamic and it will make you very angry...which others will then use as part of the dynamic which makes you angry which......
Slipping out of zero-sum, production-possibility-frontier relations involves slipping sideways along some other dimension, the way gradient descent escapes saddle points. This is yet another reason why ‘if a problem seems hard, the representation is probably wrong’ often holds.
Higher in more expensive markets, but yes.
Workflowy. Dynalist and others have more features, but I don’t want more features. More features means more decisions. I organize by month and do a review at the end of months, plus a year end review when collapsing into my archive tab. Tags for things like book notes, quotes, routines etc.
Since The Rationality Quotient mostly showed that Rationality isn’t much of a thing on top of g, people who don’t care much about the quality of arguments aren’t suffering worse life outcomes for it. One can take that as an opportunity to be curious about why. What might others who seem less explicitly/verbally committed to truth be getting right in other ways? I’ve found that spiritual communities are good for this, and more open to reflection than most, once the right semantic flags are understood and translated.
I call it decision leverage, for handiness, and looking for it changes casual conversation as well as analysis.
Human civilizations have survivorship bias such that you’re seeing the iterated output of those that found parameter insensitive parts of critical distributions. Exogenous shocks are common, and civs that weren’t metastable disappeared. This happened on both the cultural and genetic level (you are already a civilization of trillions of cells). As the pace of change accelerates things might get more dangerous as people push farther/harder within the span of a single lifetime before natural self corrections kick in (four horsemen). The current trend could be seen as the human race essentially tinkering with cancer (runaway, uncontrollable growth) to see if it will grant us immortality before it kills us.
I liked the vulnerability of admitting to wanting attention and the exploration of that admission’s multiple consequences: the way coping strategies evolve and change under pressure.
I really appreciated this post.
What Liron said. A small number of days accounts for most stock gains. Sharpe concluded that you’d need to call the market right about 74% of the time for timing to beat buy-and-hold.
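To make the “small number of days” point concrete, here is a minimal sketch using synthetic returns (not real market data; the mean, volatility, and 10-day cutoff are all made-up illustration parameters): drop the best few days from a simulated decade and compare cumulative growth.

```python
import random

# Synthetic illustration of "a small number of days accounts for most gains":
# simulate ~10 trading years of daily returns, then remove the 10 best days.
random.seed(0)
daily_returns = [random.gauss(0.0004, 0.012) for _ in range(2520)]

def cumulative_growth(returns):
    """Multiply up (1 + r) for each daily return r."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total

full = cumulative_growth(daily_returns)
best_10 = set(sorted(daily_returns, reverse=True)[:10])
without_best = cumulative_growth([r for r in daily_returns if r not in best_10])

print(f"buy and hold:         {full:.2f}x")
print(f"missing 10 best days: {without_best:.2f}x")
```

Dropping just 10 of 2,520 days always leaves the timer meaningfully behind the buy-and-hold result, which is the asymmetry the 74% figure is pointing at.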
There are ways to systematically make money. Diversification, not timing the market, low time preference, leverage, and not being loss-averse allow you to harvest money over time from the poorly diversified, the market timers, the high-time-preference, the leverage-averse, and the loss-averse.
The popular saying is that bond yield inversions have predicted 6 of the last 3 recessions. In general, just run your same searches and append words like ‘myth’, ‘misconception’, or ‘doesn’t’. If you use only positive search methods, the internet will tell you whatever you want to hear.
More generally, market timers lose hard. Set your allocations so that you’re comfortable with the maximum drawdown, and plug money in reliably. Recessions last nine months on average, with an average drawdown of 20%; not much to worry about in the long run.
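The drawdown arithmetic behind “not much to worry about” is worth spelling out: a loss of fraction d needs a gain of d / (1 − d) to get back to even, so a 20% drawdown only requires a 25% recovery. A tiny sketch:

```python
# Drawdown arithmetic: recovering from a fractional drawdown d requires a
# gain of d / (1 - d), since (1 - d) * (1 + d / (1 - d)) == 1.
def recovery_gain(drawdown):
    return drawdown / (1.0 - drawdown)

# A 20% drawdown needs a 25% gain to break even: 0.8 * 1.25 == 1.0.
print(f"{recovery_gain(0.20):.0%}")
```

The required recovery grows nonlinearly (a 50% drawdown needs a 100% gain), which is why allocation should be set to the worst drawdown you can actually sit through.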
I think people have alarm fatigue because their priority function lacks the resolution to make decisions without enormous effort. A large number of things in the world are deeply important, but of upstream importance is knowing that some order will de facto be followed, whether or not you are aware of what determines that order, and that you won’t personally have time to get to almost any of the urgent things.
I’m pointing this out because lots of things shouting for attention with highest priority signals is a recipe for burnout, while at the same time we do in fact need people still capable of sitting bolt upright occasionally.
So is the bet for the construction of a financial instrument that moves like a leveraged Case-Shiller housing index for a selected city, with tracking error plus fees smaller by some factor than the average frictional costs in the buying market?
edit: oh, in this model we’re mostly concerned with the rent-hedging value rather than the speculative value (the latter being seen here mostly as a thing that subsidizes the rent hedge), so we’d really want something that pays you when the rent index deviates substantially from its trend. If we were happy with just hedging the long-term trend, I think that could be done cheaply with fixed income levered appropriately, but I’d have to show that it won’t suddenly change correlations under various conditions.
edit 2: I think there’s some aether variable-ing going on here. Rental markets are much more liquid and display lower volatility than housing markets. So in order to purchase a hedge against rental-price (cash-flow) appreciation, we expose ourselves to much more volatile net-worth swings? That only makes sense for people with poor savings (almost all ‘normal’ people, but normal people aren’t trying to buy in the most expensive markets anyway).
The actual numbers are somewhat idiosyncratic, but my main point is that in the process of investigating the numbers, most people don’t evaluate downsides, because the downsides don’t occur to them. Once these costs are taken into account, the marginal buyer will flip on the decision. In extremely hot markets you are much more likely to resemble the marginal buyer than the average buyer.