I assume this is where a “gut-check” came from.
I’m reminded of Malcolm Ocean’s article “questions are not just for asking.” Open curiosity feels more like holding a question while active curiosity is asking it. He also links to my favorite web-comic ever which seems to advocate a sort of open curiosity.
Off topic but… Is there something I don’t know about Einstein’s preferred pronouns? Did he prefer ey and eir over he and him?
I am familiar with derivatives. I don’t remember the properties of logarithms but I half remember the base change one :).
Babble vs. prune.
I’m not sure if this is the way I would think of it but I can kind of see it. I more think of them as different responses to the same sorts of stressors.
After having someone else on the EA forum also point me to the data on commodities, I’m now updating the post.
I was at a talk at the EA Hotel that claimed there’s evidence that a specific type of compassion meditation, practiced 30 minutes a day for a few weeks, has large effect sizes on compassion. I would be surprised, however, if this caused people to work on large global problems. I wouldn’t be surprised if the combination of interventions that improve compassion and interventions that improve rationality caused more people to work on large global problems.
> How my own driving skill differs from the average person feels to me a straightforward known unknown.
I didn’t think of a model where this mattered. I was thinking more of a model like “number of mistakes goes up linearly with alcohol consumption” than “number of mistakes gets multiplied by alcohol consumption.” If the latter, then this becomes an opaque risk (one that can be measured by tracking your number of mistakes in a given time period).
> For a business that sells crops it’s reasonable to buy options to protect against risk that come from the uncertainty about future prices.
Agreed. It also seems reasonable, when selecting which commodity to sell, to do a straight-up expected value calculation based on historical data and choose the one with the highest expected value. Thinking about it, perhaps there are “semi-transparent risks” that are not that dynamic or adversarial but do have black swans, and that should be its own category above transparent risks, under which commodities and utilities would go. However, I think the better way to handle this is to treat the chance of a black swan as model uncertainty that carries Knightian risk, and otherwise treat the investment as transparent based on historical data.
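To illustrate the kind of straight-up EV calculation I mean, here’s a minimal sketch. The numbers and commodity names are made up for illustration, not real historical data:

```python
# Hypothetical historical yearly returns per commodity (as fractions).
historical_returns = {
    "wheat": [0.04, -0.02, 0.06, 0.01],
    "corn":  [0.08, -0.05, 0.03, 0.02],
}

def expected_value(returns):
    """Naive expected value: the mean of historical returns."""
    return sum(returns) / len(returns)

# Pick the commodity with the highest historical mean return.
best = max(historical_returns, key=lambda c: expected_value(historical_returns[c]))
print(best)  # -> "wheat" (mean 0.0225 vs corn's 0.02)
```

The model-uncertainty caveat above applies here too: this treats the historical distribution as the whole story, which is exactly what a black swan breaks.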
Sort of both. Both optionality and the Pilot-in-the-Plane principle are like “guiding principles” of anti-fragility and effectuation respectively, from which the subsequent principles fall out. However, they’re also good principles in their own right and subsets of the broader concept. It might be that I should change the picture to reflect the second thing instead of the first, to prevent confusion like this.
A good exercise to see if you grok anti-fragility or effectuation is to go through each principle and explain how it follows from either Optionality or the Pilot-in-the-Plane principle, respectively.
Thanks! I do get the purpose/idea behind the Kelly criterion, but I don’t get how to actually do the math, nor how to think about it intuitively when making decisions, the way I intuitively think about expected value.
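For what it’s worth, the math for the simplest case is short. A minimal sketch (my own illustration of the standard binary-bet formula, not anything from the post):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction for a binary bet: win with probability p,
    winning pays b-to-1 on the stake, losing forfeits the stake.
    Formula: f* = (b*p - (1 - p)) / b. Negative means don't bet."""
    return (b * p - (1 - p)) / b

# A 60% chance to win an even-odds bet (b = 1):
print(kelly_fraction(0.6, 1.0))  # 0.2 -> bet 20% of your bankroll
```

The intuition hook I find useful: Kelly maximizes expected *log* wealth rather than expected wealth, which is why it punishes over-betting so harshly; repeated multiplicative losses compound, so the fraction matters more than any single bet’s EV.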
I didn’t make the leap from bits of information to feedback loops, but it makes intuitive sense. Transmitting information that compresses by giving you the tools to figure out the information yourself seems useful.
I also have this visceral feeling. It feels like a “subquestions” feature could fix both these issues.
That claim is something that often seems to be true, but it’s one of the things I’m unsure of as a general rule. I do know that in practice, when I try to mitigate risk in my own projects and think of anti-fragile and effectuative strategies, they tend to be at odds with each other (this is true of both the “0 to 1 Companies” and “AGI Risk” examples below).
The difference between hormesis and the lemonade principle is one of mindset.
In general, the anti-fragile mindset is “you don’t get to choose the game, but you can make yourself stronger according to the rules.” Hormesis from that mindset is “Given the rules of this game, how can I create a policy that tends to make me stronger against the different types of risks?”
The effectuative mindset is “rig the game, then play it.” From that perspective, the lemonade principle looks more like “Given that I failed to rig this game, how can I use the information I just acquired to rig a new game?”
You’re a farmer of a commodity and there’s an unexpected drought. The hormetic mindset is “store a bit more water in the future” (and do this every time there’s a drought). The lemonade mindset is “start a drought insurance company that pays out in water.”
Looking forward to this. Feel free to send me an invite to look over the google doc.
Really enjoyed this. I’ve found myself using this concept a few times in my thoughts just the past couple days since I read this.
I mostly agree with this. If rationality means “systematized winning” then I’m comfortable including Vibing in it, but if it means something more specific then I wouldn’t include this in rationality. However, I still think it belongs on LessWrong, which is more about creating common knowledge to allow for systematized winning.
Yes, I think I have different intuitions than Taleb here. When you think about risk in terms of the strategies you use to deal with it, it doesn’t make sense to use, for instance, anti-fragility to deal with drunk driving on a personal level. It might make sense to use anti-fragility in general for risks of death, but the inputs to your anti-fragile decision should basically take the statistics on drunk driving at face value. I think it’s pretty similar to a lottery ticket, in that 99% of the risk is transparent and the remaining small amount is model uncertainty due to unknown unknowns (maybe someone will rig the lottery). The ludic fallacy in that sense applies to every risk, because there’s always some small amount of model uncertainty (maybe a malicious demon is confusing me).
One way to think about this is that your base risk is transparent and your model uncertainty is Knightian—this is a sensible way to approach all transparent risks, and it’s part of the justification for the barbell strategy.
I would say almost all global catastrophic risks would be classified as Knightian Risk. An exception might be something like an asteroid strike, which would be more opaque.
Edit: changed meteor to asteroid.
Yes, this is very related to the benefits of vibing. I think “communication with emotional flow” is as close to a succinct description of vibing as I’ve gotten. By respecting the emotional energy in the room, you can be honest without breaking the vibe of flow.
On a more meta note, I really appreciate that all of your comments on my posts seem to make an effort to model the norms I put in my commenting guidelines. I don’t know if it’s intentional or not but it is appreciated.