Asymmetry in the penalties for Type I vs Type II errors.
In this frame, the difference between the characters is how granular their levers for changing things are, which seems closer to correct to me. Edgar simply has much too large a jump size to ever get lucky and land in a white zone.
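The jump-size point maps onto step size in a simple local search: a searcher whose moves are all multiples of a large step can never land inside a narrow target zone, no matter how lucky the walk. A toy sketch (function name and numbers invented for illustration):

```python
# Toy model (hypothetical): a searcher starting at 0 whose moves are
# integer multiples of `step` can only ever visit multiples of `step`.
# A narrow "white zone" containing no such multiple is unreachable.
def can_ever_land(step, zone_lo, zone_hi):
    first_multiple = -(-zone_lo // step) * step  # smallest multiple of step >= zone_lo
    return first_multiple <= zone_hi

print(can_ever_land(3, 12, 15))   # fine-grained lever: True
print(can_ever_land(10, 12, 15))  # jump size too large: False
```

The asymmetry is one-directional: shrinking the step size only adds reachable points, while growing it can silently delete every point inside a narrow zone.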
That makes sense. I’d frame that last bit more as: which bit, if revealed, would screen off the largest part of the dataset? That might bridge this to more standard search strategies. Have you seen Argumentation in Artificial Intelligence?
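"Which bit, if revealed, would screen off the largest part of the dataset" is roughly the splitting rule behind binary search and decision-tree induction: prefer the question whose worst-case answer eliminates the most hypotheses. A minimal sketch with a made-up hypothesis set (all names hypothetical):

```python
# Hypothetical sketch: pick the bit whose answer screens off the most
# hypotheses in the worst case, i.e. the bit with the most even split.
def best_bit(hypotheses, bits):
    # hypotheses: list of dicts mapping bit name -> bool
    def worst_case_survivors(bit):
        trues = sum(1 for h in hypotheses if h[bit])
        return max(trues, len(hypotheses) - trues)
    return min(bits, key=worst_case_survivors)

hypotheses = [
    {"mammal": True,  "flies": False},
    {"mammal": True,  "flies": True},
    {"mammal": False, "flies": False},
    {"mammal": False, "flies": False},
]
# "mammal" splits the set 2/2; "flies" splits it 1/3.
print(best_bit(hypotheses, ["mammal", "flies"]))  # mammal
```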
Is this asking whether ontology generation via debate is guaranteed to converge? Is this moving Aumann’s agreement theorem ‘up a level’?
Lossiness is itself an optimized-for quantity, and it varies in importance across differing domains with differing payoff structures. Clashes are often the result of two locally valid choices of lossiness function conflicting when attempts are made to propagate them more globally.
Better definitions -> lose less of the things that I think are important and more of the things I think are unimportant. People who have faced a different payoff structure will have strenuous objections. The law of large numbers implies that you will be able to find people who have faced a completely perverse dataset in terms of edge cases and thus have a radically different payoff structure. If there are such people at both ends of a particular distribution, then you get that effect no matter which end you optimize for.
Monocultures make this worse because, in effect, they prevent people from taking their ball and going home, i.e. deciding to use alternative functions for the assignation of meaning.
Is there a term for distinguishing between disagreements where two people are arguing over how to weight the different factors in a model vs disagreements/misunderstandings where they have different factors in their models? There’s an extra layer of complication when languaging doesn’t directly point at the factors but only references them. I’ve been calling them factor disagreements vs model disagreements.
I bring this up because I repeatedly see written material needing to do scaffolding to build this concept up from scratch.
It’s also a useful concept for delineating two types of cruxing. The kind where you discover that two different weightings for a factor can be calibrated by data, and the kind where you find out things about unshared parts of the ontology of the problem.
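The factor-vs-model distinction can be made concrete if you treat each person’s model as a map from factors to weights: disagreeing on the weights over shared keys is a factor disagreement, while having different keys at all is a model disagreement. A toy illustration (factor names invented):

```python
# Toy sketch: the same decision modeled three ways.
model_a = {"cost": 0.7, "safety": 0.3}      # baseline
model_b = {"cost": 0.4, "safety": 0.6}      # same factors, different weights
model_c = {"cost": 0.5, "aesthetics": 0.5}  # different factors entirely

def disagreement_type(m1, m2):
    # Shared factor set -> the disagreement is over weights ("factor");
    # different factor sets -> the ontologies themselves differ ("model").
    return "factor" if m1.keys() == m2.keys() else "model"

print(disagreement_type(model_a, model_b))  # factor
print(disagreement_type(model_a, model_c))  # model
```

This also mirrors the two kinds of cruxing: weight differences can in principle be calibrated against shared data, while key differences first require surfacing the unshared parts of the ontology.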
That’s pretty interesting. Some practices seem squarely aimed at converting people from dystonic to syntonic. Heavy overlap with locus of control training.
The question was somewhat rhetorical. I’m pointing back at the generative seed for this post.
Variable reinforcement. Decreasing payouts over time in any given domain. But always some new arena to jump to. Insights can be lotus flavored.
But then they flow more easily once you can Look. How does that play out? For people at different stages of the addiction cycle?
What does it look like to be genre savvy about insights becoming cheaper/more numerous over time?
You are always already Looking no matter which glasses you have on. The point of swapping glasses, and noticing that you are swapping glasses, is to notice what remains the same no matter which pair you are wearing, in the same way that you spot the flaws in a lens by moving it against the background and noticing that the scratch is invariant.
Less valuable than trying is my guess. I don’t mean some heroic effort, I mean like a Yoda timer. If you do try it and it *doesn’t* work, that would be an interesting data point for me.
Religion impressed from childhood and deconverted from.
It’s not so much that these are *the* examples as that these are examples where you have the opportunity to notice the contrast. Everything is glasses. Your skepticism is glasses.
Section 2 is very important and there’s more there. I would highly recommend performing the section 2 heuristic on the section 2 phenomena.
Evidence in support of first-principles reasoning generally resorts to cherry-picking IME. In contrast, when I look through what methodology I can find on breakthrough thinkers in biographies and autobiographies, I find something less like ‘a flash of inside-view brilliance’ and more like ‘tried something over and over again in the presence of feedback loops and kept trying to find simple models that would explain most/the core of the data’ (to account for noise in the data-gathering process). Once a simple model was found, it was tested/extended to establish its domain of validity. These thinkers themselves often point out multiple false starts where elegant inside-view models were developed but eventually had to be abandoned. We don’t see as many of those looking back, since people rarely record them unless their abandonment was noisy. Scott points to several in the history of depression models, IIRC.
Which I suppose is to say that I don’t think you can pick winners using first-principles reasoning, even though first-principles reasoning is how we move forward. Like an exploratory/confirmatory thing.
I do agree that ‘thinking isn’t so great’ serves much more as an excuse to avoid the 99% perspiration than as a claim about the 1% inspiration. The ‘thinking isn’t so great’ framing can be helpful when it points people towards the idea that ‘summon sapience’ includes more than symbol-based analytic techniques. Presence is expensive, especially at first. So people try to avoid it.
Extremely useful and conveniently timely for my own work. Thanks for writing this up.
I wouldn’t be in favor of adding explicit rules, for Goodhart-related reasons. I think prizes and grants should have the minimum rules needed to account for basic logistics, and the rest should be illegible.
On reflection I strongly agree that social pressure around counterfactualness is a net harm for motivation.
> selecting for counterfactualness without producing weird side effects seems hard
Agreed; I just thought the winner in this case was over the top enough not to be in the fuzzy boundary but clearly on the other side.
Humanity has fallen into an ontological divot in the search space. This has epistemic considerations due to ontological commitments (Duhem-Quine indispensability).
This probably sounds a bit vague, so I’ll use a metaphor. Imagine that the Sapir-Whorf hypothesis were true and that the world had already gone through a cycle of amplification of Newspeak, including screening off the part of the language that would cause someone to independently reinvent the Sapir-Whorf hypothesis.