I’m slightly confused. Is it that we’re learning about which world we are in or, given that counterfactuals don’t actually exist, are we learning what our own decision theory is given some stream of events/worldline?
I think specifically porting quasi-transitivity from social choice back to decision theory is an interesting direction; i.e., VNM Axiom 2 is not sufficient for describing how preference transitivity might work. Some prior work referencing Harsanyi’s proposed solution here: http://www.harveylederman.com/Aggregating%20Extended%20Preferences.pdf
Try throwing in some coherence therapy cues as well. It sounds like your stance is still that the subconscious is dumb in many ways. Be open to the idea that it is your conscious mind that is wrong, with significant probability.
Non-central nit: you know the things in your past, so there is no need for probability there.
Doesn’t seem true.
>In fact I’m basically making that bet every day I decide not to be leveraged in the current market.
That’s what I was curious about.
The price is now very high.
What does the outside view say about that claim? https://www.bloomberg.com/view/articles/2017-10-10/cape-has-a-dismal-record-as-predictor-of-stock-performance
I.e., why do you expect what you said not to be already priced into current expectations?
Are you willing to bet on <2% expected real returns?
A single house in a single market levered up 5x isn’t the good kind of diversification and will make your expected portfolio performance worse. Lots of people diversify into a REIT index; I don’t know if that’s optimal or not.
There is no compelling reason to think such outperformance will continue into the future.
The market disagrees with you.
the US was then a wild frontier economy recovering from a ruinous civil war, and no-one would make a big bet on that… predictions are hard to make, especially about the future.
which is why you hold a globally diversified portfolio. You don’t place ‘bets.’
Related: if you think buying a house is an obvious win, please use the NY Times rent-vs-buy calculator. Tl;dr: if buying were an obvious win over renting, the market would arbitrage that away. In most popular cities, buying a house costs more than renting in the long run due to speculation and bias.
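To make the comparison concrete, here is a minimal sketch of the arithmetic such a calculator runs. All inputs (price, rent, rates) are illustrative assumptions, not claims about any real market, and it ignores transaction costs and tax deductions:

```python
# Rough annual-cost comparison of buying vs renting.
# All numbers are illustrative assumptions; plug in your own market's.

def annual_cost_of_owning(price, mortgage_rate=0.05, property_tax=0.01,
                          maintenance=0.01, appreciation=0.03):
    """Approximate yearly cost of owning: carrying costs minus expected
    price appreciation. Ignores closing costs and tax deductions."""
    carrying = price * (mortgage_rate + property_tax + maintenance)
    return carrying - price * appreciation

def annual_cost_of_renting(monthly_rent):
    return monthly_rent * 12

# A high price-to-rent ratio (typical of speculative popular cities)
# makes owning the more expensive option under these assumptions.
own = annual_cost_of_owning(price=1_000_000)    # 40000.0
rent = annual_cost_of_renting(monthly_rent=2_500)  # 30000
print(own, rent)
```

The interesting output is the price-to-rent ratio at which the two lines cross; the calculator is essentially this with amortization, opportunity cost of the down payment, and taxes filled in.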
A related, perhaps useful model: one interpretation of some Buddhist claims is that by default/habit, we/evolution hit upon using affective circuitry as a representation/processing aid for propagating probabilities in a belief network. It is incredibly common for people to assume that if they short-circuit this, their values/beliefs (and thus their ability to act) will disappear. The surprising thing is that they don’t. It appears that some other way of processing things is possible. People who reach certain fruitions often report surprise that they go in to work the next day and everything seems normal.
Really dig the list of criteria for building robust organizational capital. Wonder if there was more in the book on the distribution of enforcement costs, which AFAIK has only been highlighted as a critical parameter by fairly recent research.
I tried to track it down but could not. There appear to be some incentives fueling development in this area. (http://www.abc.net.au/news/rural/2015-04-21/oral-pain-relief-cattle/6409468)
IIRC an analysis was done of the cost to administer opioids to livestock at scale, and it winds up at pennies per pound. The only reason we don’t is negative consumer perception (appreciable quantities do not wind up in the consumed meat), similar to irradiation vs additives for food preservation. Animal charities have been reluctant to pursue further research for fear of pushing a narrative that makes it okay/gives people an ethical out, since opioids don’t actually eliminate all the suffering, just alleviate some fraction. There is similar contention around ‘improved’ living standards for the livestock.
Insight porn is more fun than training. LW does not have a training culture component. I blame the lack of a clear skill tree. I thought, a long time ago, that CFAR would eventually turn something like the rationality checklist into a proper skill tree with feedback loops.
Asymmetry in the penalties for type 1 vs type 2 errors.
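A minimal sketch of how that asymmetry sets a decision threshold (the cost numbers are made up for illustration): you flag whenever the expected cost of a miss exceeds the expected cost of a false alarm, so the flagging threshold on P(bad) is cost_fp / (cost_fp + cost_fn):

```python
# With asymmetric error costs, the optimal threshold on P(bad) is
# cost_false_alarm / (cost_false_alarm + cost_miss): the cheaper false
# alarms are relative to misses, the lower the bar for flagging.
# All costs below are illustrative.

def flag(p_bad, cost_false_alarm, cost_miss):
    """Flag iff the expected cost of passing exceeds that of flagging."""
    # E[cost | flag] = (1 - p_bad) * cost_false_alarm  (type 1 error)
    # E[cost | pass] = p_bad * cost_miss               (type 2 error)
    return p_bad * cost_miss > (1 - p_bad) * cost_false_alarm

# Misses 10x as costly as false alarms -> flag above ~9% probability.
print(flag(0.10, cost_false_alarm=1, cost_miss=10))  # True
print(flag(0.05, cost_false_alarm=1, cost_miss=10))  # False
```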
In this frame the difference between the characters is how granular their levers for changing things are, which seems closer to correct to me. Edgar simply has much too large a jump size to ever get lucky and land in a white zone.
That makes sense. I’d frame that last bit more as: which bit, if revealed, would screen off the largest part of the dataset? That might bridge this to more standard search strategies. Have you seen Argumentation in Artificial Intelligence?
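A minimal sketch of that selection rule (the toy dataset and names are my own, not from the discussion): treat each hypothesis as a bit-vector and reveal the bit that eliminates the most candidates in the worst case, i.e. the bit that splits the remaining set most evenly:

```python
# Pick the bit whose revealed value screens off the largest part of the
# candidate set in the worst case. A bit splitting the set 50/50 is best:
# whichever value comes back, half the candidates are eliminated.

def best_bit(candidates, n_bits):
    """candidates: set of ints, each an n_bits-wide hypothesis."""
    def worst_case_remaining(bit):
        ones = sum((c >> bit) & 1 for c in candidates)
        zeros = len(candidates) - ones
        return max(ones, zeros)  # survivors after the less informative answer
    return min(range(n_bits), key=worst_case_remaining)

# Toy example: only bit 1 splits this set evenly (2 vs 2), so it is the
# most informative question to ask next.
cands = {0b000, 0b001, 0b011, 0b111}
print(best_bit(cands, 3))  # -> 1
```

This is just greedy max-information-gain (the binary-search/twenty-questions strategy), which is presumably the “more standard” family of search strategies it would bridge to.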
Is this asking whether ontology generation via debate is guaranteed to converge? Is this moving Aumann’s agreement ‘up a level’?
Lossiness is itself an optimized-for quantity and varies in importance across differing domains with differing payoff structures. Clashes are often the result of two locally valid choices of lossiness function conflicting when attempts are made to propagate them more globally.
Better definitions -> lose less of the things that I think are important and more of the things I think are unimportant. People who have faced a different payoff structure will have strenuous objections. The law of large numbers says you will be able to find people who have faced a completely perverse dataset in terms of edge cases, and who thus have a radically different payoff structure. If there are such people at both ends of a particular distribution, then you get that effect no matter which end you optimize for.
Monocultures make this worse because, in effect, they prevent people from taking their ball and going home, i.e. deciding to use alternative functions for the assignation of meaning.