Hey, I’m Owen.
I think rationality is pretty rad.
I found your categorization of three ways to improve explanations useful, and it seems to cover most of the issues.
However, I feel like the bulk of the article itself was too short to give me a good sense of what canonical forms are like in math, or how to apply them conversationally. In particular, I think having more examples (or making the examples clearer) for each item on your list would have been helpful.
Also, I personally would have enjoyed a more technical explanation of how to think about canonical forms mathematically. (Which I would guess would help me understand the connection to conversations.)
I pattern-matched many of your between-task ambiguities to the different types of scheduling algorithms that can occur in operating systems.
I’ve been working through the textbook as well. I know I’ve commented a few times before, but, once again, thanks for writing up your thoughts for each chapter. They’ve been useful for summarizing / checking my own understanding.
For some recent meta-analyses, the OSC’s paper on reproducibility in psychology covers ~100 studies, and I think you can explore those and others at osf.io.
In general, I know that anchoring effects are quite reliable with a large effect size, while many priming effects have failed to replicate in recent years.
Wait, sorry, I misunderstood what you needed. Please disregard.
Desmos has a handy interactive calculator where you can adjust the parameters to get a better feel for what’s going on. I think that can potentially help.
The new design appears to have higher contrast between the foreground and background, which I’m a fan of. It’s an improvement, I think.
(Also hoping for reduced page weight and performance tweaks, but I get that they’re already in progress :P)
Specific stories from this list that I’ve enjoyed:
Following the Phoenix: probably my favorite continuation fic that ups the ante in an interesting way with a satisfying ending
Significant Digits: the famous one that got EY’s recommendation for worldbuilding. Very cool exploration of a potential future of HPMOR, but the characters’ personalities deviate from canon, perhaps too much.
Orders of Magnitude: an extension (side-quel?) to SD that also goes deep on the worldbuilding.
Reductionism for the Win: satisfying alternative ending arc.
Minds, Names, and Faces: also a fairly good alternative ending arc.
Revial also looks promising but I haven’t read it fully.
Oh, right, that’s a fair point.
Took a cursory look through Twitter and found several critical accounts spreading it, so as gilch said, it’s already happening to an extent :/
Is anyone worried about Streisand effect type scenarios with this?
I get that the alternative is Scott likely being doxxed by the article being published, so this pushback against the NYT seems like a much better outcome.
At the same time, it seems like this might also motivate some malicious people (now that they’ve heard of Scott through these channels) to figure out who he is and then share it with people Scott would prefer not to know?
Yes, having them in the margin is much, much better. :)
Can other people comment about the UX of preview on hover?
I dislike it because the pop-ups are often quite large, like on gwern.net, where they can completely block whatever it is I’m reading. Arbital-style tooltips and the Wikipedia ones are borderline okay, as they aren’t too large, but I find that the visual contrast is often too jarring for me :/
I think that, while it’s true that some people might do this, this seems like an especially steep price to pay if it’s the only benefit afforded to us by rationalization. (I realize you’re not necessarily claiming that here, just pointing out that rationalization seems to have some possible social benefits for a certain group of people.)
If we are crunching the numbers, though, it seems like the flip side is much, much more common, i.e. people doing things to benefit themselves under ostensibly altruistic motivations.
Also, I want to point out that, perhaps against better design judgment, most modern software engineering in industry has embraced the “agile” methodology, where the product is iterated on in short sprints. This means that the design team checks in with the users’ needs, changes are made, tests are added, and the cycle begins again. (Simplifying things here, of course.)
It was more common in the past to spend much more time understanding the clients’ needs up front, in what is termed the “waterfall” methodology, where code is shipped perhaps only a few times a year, rather than bi-weekly (or whatever your agile sprint duration is).
Just a note that your windfall clause link to your website is broken.
https://cullenokeefe.com/windfall-clause takes me to a “We couldn’t find the page you’re looking for” error.
That seems reasonable, yeah.
Goodhart’s Law also seems relevant to invoke here, if we’re talking about goal vs incentive mismatch.
You’re right that I’m making assumptions about insights which may not always be applicable. And I don’t mean to claim that theory isn’t useful. This post is partially also for me to push back against some default theorizing that happens.
I think that sometimes the right thing to do is to focus on just “reporting the data”, so to speak, to use an analogy from research papers. There are experimental papers which might do some speculation, but their focus is on the results. Then there are also papers which try to do more theorizing and synthesis.
I guess I’m trying to discourage what I see as experimental papers focusing too much on the theorizing aspect.