Doing? Very little. I lack the localized[1] subject-matter knowledge you have of your own project, so any ivory tower advice I’d give would likely make things worse, at least in the short run, if not longer.
Thinking? Only insofar as your writing about this topic reflects your thinking in its entirety, which I find unlikely. Nevertheless, the writing itself (as I mentioned above) does not directly address the topic of status considerations; it merely gestures around it and focuses on technical skills instead. In the early-stage planning of a project like yours, this works fine, because it’s easy to argue down people’s status-based skepticism as long as you’re working on a well-understood topic where you can easily refute it (cf. footnote 1). In the middlegame and endgame, when you are facing harder problems, perhaps even problems that nobody has ever successfully solved, it stops working as well, because that is a qualitatively different environment. There’s a problem[2] of generalizing out of distribution, of a sharp left turn of sorts. This is particularly likely when dealing with people who have already been exposed to promises/vibes about LW making society find truth faster than science, do better than Einstein (or not even bother), grok Bayesianism and grasp the deep truth of reality, etc., and then got hit in the face with said reality saying “no.” (See also the Eliezer excerpt here.)
Responding to your other comment here as well, for simplicity.

> I didn’t claim you should do impossible things. I said “you can do impossible-seeming things”.
No, that’s not correct. What you claimed, and what I responded to, is (a verbatim quote) “impossible problems can be defeated.” And as I said, that’s obviously false under the standard usage of these terms, and only makes sense under a different semantic interpretation; it is this interpretation that causes the status problems.[3] “Solve impossible problems” sounds much cooler and more metal than “solve impossible-seeming problems,” and carries with it an associated danger of inducing status-based skepticism. When this issue appears, it’s particularly important to have specific, concrete examples to point to: examples of difficult, genuinely important-seeming problems that got solved,[4] not just 4-star instead of 2-star problems in a physics textbook.
> > No, that’s not correct. What you claimed, and what I responded to, is (a verbatim quote) “impossible problems can be defeated.”
> Only if you, like, didn’t read any of the surrounding context. If you are not capable of distinguishing “this is slight poetic license for a thing that I just explained fairly clearly, and then immediately caveated”, I think that’s a you problem.
Perhaps so: it would be a reader problem if they don’t interpret the vibe of the text correctly. Just like it would be a reader problem if they don’t believe they can solve impossible (or “impossible-seeming”) problems when confronted with solid logic saying otherwise.
And yet, what if that’s what they do?[5] We don’t live in the should-universe, where people’s individual problems get assigned to them alone and don’t affect everyone else.
[1] And often ineffable, subconscious, S1-type.
[2] With a status-blind strategy.
[3] Particularly when talking about the project in the broadest terms, as you did in your shortform post, rather than in narrow descriptions of specific subtasks like solving Thinking Physics.
[4] Such as the probabilistic solution MIRI + Christiano came up with to overcome the Löbian obstacle to tiling agents (a rough formal gloss follows footnote 5).
[5] Or would do, in the middlegame and endgame, as I have claimed above.
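For readers who don’t recognize the reference in footnote 4: the Löbian obstacle stems from Löb’s theorem, and the workaround replaces provability with calibrated probability. The block below is my own rough gloss of the standard formulations (Löb’s theorem, plus the reflection schema from Christiano et al.’s “Definability of Truth in Probabilistic Logic”), not a quote from either source, so treat the exact shape of the schema as an approximation.

```latex
% Löb's theorem: if a theory T proves "provability of P implies P", then T proves P.
% Hence T cannot prove its own soundness schema without proving everything -- the
% Löbian obstacle: an agent reasoning in T cannot fully trust successors that also use T.
\[
  \vdash_T \bigl(\Box_T P \rightarrow P\bigr) \;\Longrightarrow\; \vdash_T P
\]
% The probabilistic workaround, as I understand the Christiano et al. result: there exists
% a coherent probability assignment \mathbb{P} over sentences satisfying, roughly, for
% every sentence \varphi and rationals a < b,
\[
  a < \mathbb{P}(\varphi) < b \;\Longrightarrow\;
  \mathbb{P}\bigl(\, a < \mathbb{P}(\ulcorner \varphi \urcorner) < b \,\bigr) = 1 ,
\]
% i.e. the system assigns probability 1 to interval-valued statements about its own
% probabilities, sidestepping the exact self-reference that Löb's theorem forbids.
```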