a 404 error for a user page
The name does have an uncommon symbol (é), which doesn’t show up in the URL, if that changes anything.
Or maybe, “Robust Agency” makes sense as a thing to call one overall cluster of strategies, but it’s a subset of “Deliberate Agency.”
Where might “Robust Agency” not overlap with “Deliberate Agency”?
Another source of maintenance difficulty is laziness in writing software documentation.
Perhaps this is a variant of 4⁄2: lack of documentation-writing skill, or because ‘it takes time away from writing code’.
Is this a good place to post bugs? (Like consistently getting a 404 error for a user page, which prevents subscription.)
The don’t-compute-evil rule is pretty efficient even if it were arbitrarily chosen.
What if it’s more general—say, a prior to first employ actions you’ve used before that have worked well? (I don’t have a go-to example of something good to do that people usually don’t. Just ‘most people don’t go skydiving, and most people don’t think about going skydiving.’)
I will leave the advice on how to improve estimation for another time (please let me know if you are interested in this).
This is interesting.
These figures [are] not unrealistic either
how does that work?
I thought that the way it worked was that living beings that get cancer and die are less likely to have kids.
It’s interesting to think about the review effort in this light. (Also, material about doing group rationality stuff can fit in with timeless content, but less in a one-shot way.)
The lying theory is tricky, as it can explain anything.
The lying theory can explain away any “evidence”, but not tell you what the truth is—at best it can tell you where the truth is not.
This is a really great post.
EDIT: I’m confused by the downvote. Is there any specific critique?
Are mazes are where our human and/or social capital pays off?
Are mazes where our human and/or social capital pays off?
What’s the reason for the difference here?:
I realize some people have already become so trapped in mazes that they cannot walk away.
If you actually can’t walk away, see the last two questions.
If you actually can’t afford to quit, see the last three questions.
Are these words being used similarly or differently? (They both seem to be words associated with magic, but that could be a coincidence.)
Stop commoditizing startup wisdom; I feel it creates more failures than successes.
Advice should be pre-registered, so there isn’t publication bias from startup founders who succeed?
A classic cause of rationalization. Expecting good things feels better than expecting bad things, so you’ll want to believe it will all come out all right.
The opposite of wishful thinking. I’m not sure what the psychological root is, but it seems common in our community.
Together these may be black-and-white thinking.
If anyone has experience trying to develop this skill, please leave a comment.
Imagine two worlds—one where you come to conclusion A, one where you come to conclusion B. Do you have a strong reaction?
Nevertheless, if you find yourself arriving at the same conclusion as a large group of idiots,
Examples? (Aside from, ‘the same conclusion as a group because you like the group’.)
Vague language (and lack of detail) period.
and very easy to rationalize.
If you adjust (the results of calculations) away from a conclusion ‘because you’re biased’, then the direction you adjust them in is the way you’re biased.
What is Rationalization and Why is it Bad?
What is Rationalization and What are the benefits of identifying it?
It does not provide a lot of tools for you to use in protecting yourself from rationalization.
I look forward to this. Wait, my rationalization, or others’? Both?
If rationalization looks just like logic, can we ever escape Cartesian Doubt?
What is Cartesian Doubt? (If rationalization looks just like logic, how can we tell the difference?)
Given that any evidence of awakeness is a thing that can be dreamed,
Sort of. Have you ever read a book in a dream? (There’s evidence of dreaming that isn’t produced by reality; in practice this may be a matter of rates.)
The simplest ideal of thinking deals extensively with uncertainty of external facts, but trusts its own reasoning implicitly. Directly imitating this, when your own reasoning is not 100% trustworthy, is a bad plan.
Been wondering about how this could be programmed (aside from ‘run the calculation 3 times’).
Hopefully this sequence will provide some alternatives.
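The ‘run the calculation 3 times’ idea can be sketched as simple majority voting over repeated runs of an untrusted calculation (the function and names here are mine, just for illustration):

```python
from collections import Counter

def majority_vote(calculation, n_runs=3):
    """Run an untrusted calculation several times and keep the most
    common answer -- a crude guard against unreliable reasoning."""
    results = [calculation() for _ in range(n_runs)]
    answer, count = Counter(results).most_common(1)[0]
    if count <= n_runs // 2:
        raise ValueError("no majority answer; distrust the whole result")
    return answer

# Demo with a deterministic stand-in for a flaky calculation:
flaky_outputs = iter([5, 4, 5])  # the middle run "glitches" to 4
print(majority_vote(lambda: next(flaky_outputs)))  # prints 5
```

Of course this only helps against independent, transient errors; if the reasoning is systematically biased, all three runs fail the same way.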
Thanks for making this!
Interestingly, this notion is the central topic of the film Jurassic Park:
Miller’s principle: p(x|p(x) = y) = y
It could condition on this, making a much longer claim: p(B | (p(B|ω) = 0.3333+δ), ω, (p(B | (p(B|ω) = 0.3333+δ), ω) = 0+γ))
This equation didn’t have a final = and right side.
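For concreteness, Miller’s principle p(x | p(x) = y) = y can be checked in a toy model where the forecaster’s announcements are perfectly calibrated. The worlds and weights below are invented purely for illustration:

```python
from fractions import Fraction

# Toy sketch of Miller's principle, p(x | p(x) = y) = y:
# among the worlds where the forecaster announces probability y for
# event X, X is true with frequency exactly y.
# Each world is an (announced_y, X_happened, weight) triple.
worlds = [
    (Fraction(1, 5), True,  Fraction(1)),   # announces 0.2, X true
    (Fraction(1, 5), False, Fraction(4)),   # announces 0.2, X false
    (Fraction(4, 5), True,  Fraction(4)),   # announces 0.8, X true
    (Fraction(4, 5), False, Fraction(1)),   # announces 0.8, X false
]

def conditional_p_x(announced):
    """p(X | forecaster announced `announced`)."""
    total = sum(w for y, _, w in worlds if y == announced)
    hits = sum(w for y, x, w in worlds if y == announced and x)
    return hits / total

for y in (Fraction(1, 5), Fraction(4, 5)):
    assert conditional_p_x(y) == y  # the principle holds in this model
```

The interesting (and problematic) cases are the self-referential ones like the claim above, where the announced probability is itself about a statement that mentions the announcement.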
They can’t sell your degree; it’s valueless to them.
What if they could? It’s a piece of paper that people pay a lot of money for. (This would create an incentive for universities to have attendees who don’t later have their degrees repossessed.)