Connected some dots between my past posts:

In Struggling like a Shadowmoth, I note “sometimes, the only way you can learn something is by getting thrown into a situation where no one can save you and you have to figure out a skill or mindset, digging it out in soulful struggle” (as exemplified by the story of a moth in a cocoon, which will be too weak to fly if you cut it loose while it’s struggling to get out).
This prompts the obvious question “do you have to train skills via shadowmothing?”. Learning things via painful struggle kinda sucks. Do we gotta?
I think the answer is “yes, for now”, but, this is a skill issue. If your pedagogy didn’t suck, you’d probably be able to teach people things by telling them simple instructions. But, sometimes figuring out those step-by-step instructions is many years/decades/centuries away from getting invented.
I’ve historically taught “Metastrategic Brainstorming” by throwing people into impossible-feeling problems and forcing them to figure out how to deal with them, until they locate the metastrategic-brainstorm-muscle within themselves and learn to use it. But over the past year I’ve started accumulating some pieces of how to find that mental muscle without just flailing. (I’ve gotten started writing it up here, but there are a lot of pieces and I’m busy.)
But, sometimes knowledge is “experiential.” There’s “procedural knowledge of how to generate strategies”, and then there’s “believing deep in your soul that impossible-seeming problems can be defeated.”
Can you teach that without shadowmothing?
Maybe.
Not yet. Or at least I don’t know how to. But, I feel like I have a sense of what the building blocks might look like.
That is where my Subskills of “Listening to Wisdom” post is pointed. What are the necessary skills for listening to someone saying “impossible problems can be defeated” and really hearing them? What are the skills for communicating “impossible problems can be defeated” in a way that people will actually hear?
It’s a major step up to have stories like Shut Up and Do the Impossible in the context of the Sequences (Eliezer’s writing is often aiming to convey the experiential sense of what doing rationality is like). Nonetheless: I read that post, and Class Project, and that was enough to motivate me to try building a training program that would teach me how to do that. But I was still missing a piece in my soul until I actually faced a (relatively minor) impossible-seeming problem, found new tools to deal with it, and dealt with it. (There are probably tiers of believing “impossibility can be defeated,” and I am probably only on, like, the 2nd-lowest one. But it’s still a noticeable mindset shift.)
“Subskills of Listening to Wisdom” is aiming to someday, hopefully, fill the gaps there. But, admittedly, so far my answer is “well, it seems like if you learn a lot of introspective and epistemic and emotional-regulation skills, maybe you can listen to an old grizzled veteran’s story and deliberately feel the impact and reorganize your soulful insides as if you had actually had the experience.”
But, like, I think it’s 15 subskills, and learning them seems like way more work than “throwing someone at some impossible problems until they figure it out.”
But maybe, someday, each of those skills will be reduced to nice simple step-by-step instructions too.
What are the skills for communicating “impossible problems can be defeated” in a way that people will actually hear?
One of them seems to be the recognition that, as written, the statement is obviously false. Impossible means it cannot be defeated. Only by warping the definition of the word away from what it implies in common usage, and doing so for Rule of Cool purposes, does the statement actually make sense. But even when the reader/listener recognizes that, they are likely to turn up their nose at you because you’re Trying Too Hard to sound Cool and Awesome instead of using simple, descriptive words that don’t sneak in connotations about how Amazing you are for having done the “impossible.”[1]
Another one seems to be giving examples of supposedly “impossible” problems that have been defeated already.[2] Always give examples![3] An ounce of history is worth a volume of logic.[4] No matter how compellingly-written or self-consistent a purely theoretical framework seems to be, if it doesn’t map onto concrete results, people will dismiss it as useless.[5] Show us the cake, dammit!
“I’ve done the impossible!” and “I’ve done something really hard!” have different connotations, and thus demand different status-assignments, in typical parlance.
Specifically, defeated by you personally. Or at least by someone you know who is employing the same broad cognitive strategies as the ones you’re trying to teach. Newton and Einstein don’t count, unless you can convincingly argue why their example is similar in relevant ways to yours.
Examples that you can explain, mind you! Eliezer is optimizing for Deep Mystery in Shut up and do the impossible!, and he doesn’t describe any specific cognitive procedures he employed. It’s cool as a piece of antimeme-destroying evocative writing, but that’s not enough.
I was fairly careful here to say “impossible-feeling” and “impossible-seeming” at least a few of the times, although it looks like that ended up being only 3 out of 7 times.
The point here is not “you can defeat actual impossible problems”, the point is “people’s standards for what feels ‘actually impossible’ are way out of whack.”
This post is the short version of Subskills of “Listening to Wisdom”, where I go into a lot of examples and spell them out, and which I think is mostly caveated correctly[1]. That post is 12,000 words long; this is the shortform for roughly conveying why you might want to read it, which necessarily has fewer examples and caveats.
But the most relevant bit here is relatively standalone and probably worth copy-pasting here:
[Example 3] “The Thinking Physics student”
You’ve given a group of workshop participants some Thinking Physics problems.
As ~always, some people are like “I can’t do this because I don’t have enough physics knowledge.” You explain that the idea is that they figure out physics from first principles and their existing experience. They look at you skeptically but shrug and start dutifully doing the assignment.
An hour later, they think they probably have the answer. They look at the answer. They are wrong. You ask them to ask themselves “how could I have thought that faster?”, and they say “Well, it was impossible. Like I said, I didn’t have enough physics knowledge.”
You talk about how the idea is to approach this like the original physicists, who didn’t have physics knowledge and needed to figure it out anyway. And, while some of these problems are selected for being counterintuitive, they’re also selected for “smart non-physicists can often figure them out in a few hours.”
They hear: “The instructor wanted me to do something that would burn out my brain and also not work.” (In their defense, their instructor probably didn’t actually explain the mental motions very well)
They end up spending quite a lot of effort and attention on loudly reiterating why it was impossible, and ~0 effort on figuring out how they could have solved it anyway.
Okay, like, not actually impossible problems, obviously those are actually impossible. But, in the moment, when the student is frustrated and the problem is opaque and their working memory is full and they don’t have room to distinguish “this feels impossible” from “this is impossible”, the thing they actually need to know in their bones is that impossibility can be defeated, sometimes, if you reach beyond your excuses and frames and actually think creatively.
It is actually, practically important in my experience to have separate mental habits for handling “this is very hard” and “this is impossible.” They just feel very different in people’s brains and trigger different cognitive patterns.
Giving people the prompt “make a list of reasons why this is hard” versus “make a list of reasons why this is impossible” generates fairly different outputs, which lead to fairly different solutions to the (not actually impossible, or even really all that hard) problems[2] people are trying to solve.
“Impossible” starts at “the teacher gave me an unfair class assignment.” (Like, the people who don’t believe me aren’t disbelieving that they can cure cancer in a month. They are disbelieving that they could have solved a physics puzzle.)
(I will say fairly explicitly here: I have not done anything I’d expect anyone else to look at and say “wow, Ray has done something impossible-seeming.” The thing I claimed in the above post is “I’m probably on, like, the second tier of believing-in-your-heart-you-can-solve-impossible-seeming-problems,” which is, you know, the one right after “class exercises you don’t see any way to accomplish.”)
((FYI, I get a vibe that your comment was responding almost entirely on the dimension of “this feels like it’s claiming too much status.” While I think the correct amount of attention to pay to that is nonzero, I also think it’s a very distracting conversation that leads away from “what are the actual important gears here,” and your comment looks more like it’s trying to take me down a peg than figure out what’s true/useful.))
Where the correct amount of caveating is not “maximally”: I don’t think it’s actually correct to bog down your writing with every possible epistemic disclaimer.
Idk, I think you sell yourself short a bit here. If I asked a random person to make a new holiday[1], certainly not all of them, but a good fraction of them would say “Bah! Impossible!”
Definitional note: a holiday on the level of “many, many people in a subculture go to a mass-like thing, sing songs, and possibly take pilgrimages,” not on the level of “the UN now recognizes International Waffle Day.”
Maybe but also I did that like 14 years before training to do things-that-felt-impossible-ish so it’s at least not evidence of that training being useful.
((FYI, I get a vibe that your comment was responding almost entirely on the dimension of “this feels like it’s claiming too much status.” While I think the correct amount of attention to pay to that is nonzero, I also think it’s a very distracting conversation that leads away from “what are the actual important gears here,” and your comment looks more like it’s trying to take me down a peg than figure out what’s true/useful.))
I disagree it’s distracting. I suspect “this is claiming too much status” is a reason, or maybe even the key reason, why people might be skeptical of both this style of communication and of the overall project you’ve embarked on.[1] I don’t believe separating this and the gears-level[2] is useful or reasonable, because getting this is a gear for the whole endeavor.
(In any case, my original comment is only useful to you to the extent you’re interested in alternative responses to your original question, “What are the skills for communicating “impossible problems can be defeated” in a way that people will actually hear?” I don’t see explicit status discussions or considerations anywhere in your longer post about this topic. There are spots where your writing seems to approach them, but then it backs away towards what I’d describe as more “technical” gears-level matters; perhaps this means you think status considerations aren’t worth mentioning explicitly as such. And perhaps this also means your original question was only meant as a rhetorical tool and not a request for commentary. In that case, oh well.)
This post is the short version of Subskills of “Listening to Wisdom”, where I go into a lot of examples and spell them out and I think is mostly caveated correctly
Separately from the above, I believe the examples in this post, and their surrounding rhetoric, are unconvincing. Perhaps it’s worthwhile for me to write a broader post expressing my skepticism about these kinds of topics at some point.
Including for rationalists. See: Hero Licensing, Turntrout’s reporting of status games among alignment researchers, Anna Salamon’s comment here and the surrounding context, etc, as hopefully illustrative examples.
The specific context where this comes up is “person is trying to do a physics problem they don’t see how to do”, where I think there has been little/no discussion of “do crazy impossible things” beforehand. (in some cases, there has been such discussion, but the people have pretty explicitly/enthusiastically opted into it)
I don’t really see why your frame here would be very relevant there.
I believe you that many people reading my blogposts might have the reaction you’re having and that that would be a blocker for them getting into it, but I’m not particularly worried about that. But, I think you are flatly wrong about the phenomenon in the situation I’m most focused on.
Yes, my frame is not very relevant when the end goal is to get people to solve textbook physics problems.[1] To the extent the end goal becomes something broader than that (such as advancing the art of human rationality, iterating feedback loops and learning broader lessons about them to apply to confusing topics like community-building, AI safety, etc., all the good stuff LW says it’s about), my frame becomes relevant.[2]
Solving Thinking Physics is meaningful to LW as a stepping stone towards grokking the broader rigorous, grounded-to-reality thinking patterns this endeavor endows you with. It’s not a stand-alone purpose one would write a whole sequence about[3] (or even one frontpage post, frankly).
But in that case, pointing to examples of such problems being solved in the past is trivially easy to do anyway (the problem-writers, the professors teaching this material, you in cases when you’ve learned the relevant material, etc).
I’m reacting negatively to your comments because you are saying “One of them seems to be the recognition that, as written, the statement is obviously false.” That seems false to me, so I don’t get the rest of your argument, which seems based on a false thing.
I didn’t claim you should do impossible things. I said “you can do impossible-seeming things”. That seems obviously true. I agree you should be clear with people on “I mean viscerally impossible seeming things, not literally impossible things and your sense-of-what-is-impossible is miscalibrated.”
After doing that, what remaining problem are you anticipating? Am I misunderstanding your initial sentence? Do you disagree that “you can do impossible-seeming things, and your sense of what is impossible is miscalibrated” will be true for at least many people?
It seemed like your whole first paragraph was filled with mundane falsehood and I want to get that sorted out before worrying about the rest of your frame.
Doing? Very little. I lack the localized[1] subject-matter knowledge you have about your own project, so any ivory-tower advice I’d give would likely make things worse, at least in the short run if not beyond.
Thinking? Only insofar as your thinking is reflected in its entirety in your writing about this topic, which I find unlikely. Nevertheless, the writing itself (as I mentioned above) does not directly address the topic of status considerations, instead merely gesturing around it and focusing on technical skills. In the early-stage planning of a procedure like yours, this works fine, because it’s easy to argue down people’s status-based skepticism as long as you’re working on a well-understood topic where you can easily refute it (cf. footnote 1). In the middlegame and endgame, when you are facing harder problems, perhaps even problems so hard that nobody has ever successfully solved them, it stops working as well, because this is a qualitatively different environment. There’s a problem[2] of generalizing out of distribution, of a sharp left turn of sorts. This is particularly likely to be the case when dealing with people who have already been exposed to promises/vibes about LW making society find truth faster than science, do better than Einstein (or not even bother), grok Bayesianism and grasp the deep truth of reality, etc., and then got hit in the face with said reality saying “no.” (See also the Eliezer excerpt here.)
I didn’t claim you should do impossible things. I said “you can do impossible-seeming things”.
No, that’s not correct. What you claimed, and what I responded to, is (ad literam quote) “impossible problems can be defeated.” And as I said, that’s obviously false in the standard usage of these terms, and only makes sense under a different semantic interpretation; it is this interpretation that causes the status problems.[3] “Solve impossible problems” sounds much more metal and cool than “solve impossible-seeming problems,” and carries with it an associated status-skepticism-inducing danger. When this issue appears, it’s particularly important to have specific, concrete examples to point to: examples of difficult, actually-important-seeming problems that got solved,[4] not just 4-star instead of 2-star problems in a physics textbook.
Particularly when talking about the project in the broadest terms, as you did in your shortform post, instead of narrow descriptions of specific subtasks like solving Thinking Physics.
No, that’s not correct. What you claimed, and what I responded to, is (ad literam quote) “impossible problems can be defeated.”
Only if you, like, didn’t read any of the surrounding context. If you are not capable of distinguishing “this is slight poetic license for a thing that I just explained fairly clearly, and then immediately caveated”, I think that’s a you problem.
Perhaps so: it would be a reader problem if they aren’t interpreting the vibe of the text correctly. Just like it would be a reader problem if they don’t believe they can solve impossible (or “impossible-seeming”) problems when confronted with solid logic otherwise.
And yet, what if that’s what they do?[1] We don’t live in the should-universe, where people’s individual problems get assigned to them alone and don’t affect everyone else.