Maybe. It might be that if you described what you wanted more clearly, it would be the same thing that I want, and possibly I was incorrectly associating this with the things at CFAR you say you’re against, in which case sorry.
But I still don’t feel like I quite understand your suggestion. You talk of “stupefying egregores” as problematic insofar as they distract from the object-level problem. But I don’t understand how pivoting to egregore-fighting isn’t also a distraction from the object-level problem. Maybe this is because I don’t understand what fighting egregores consists of, and if I knew, then I would agree it was some sort of reasonable problem-solving step.
I agree that the Sequences contain a lot of useful deconfusion, but I interpret them as useful primarily because they provide a template for good thinking, and not because clearing up your thinking about those things is itself necessary for doing good work. I think of the cryonics discussion the same way I think of the Many Worlds discussion—following the motions of someone as they get the right answer to a hard question trains you to do this thing yourself.
I’m sorry if “cultivate your will” has the wrong connotations, but you did say “The problem that’s upstream of this is the lack of will”, and I interpreted a lot of your discussion of de-numbing and so on as dealing with this.
Part of what inspired me to write this piece at all was seeing a kind of blindness to these memetic forces in how people talk about AI risk and alignment research. Making bizarre assertions about what things need to happen on the god scale of “AI researchers” or “governments” or whatever, roughly on par with people loudly asserting opinions about what POTUS should do. It strikes me as immensely obvious that memetic forces precede AGI. If the memetic landscape slants down mercilessly toward existential oblivion here, then the thing to do isn’t to prepare to swim upward against a future avalanche. It’s to orient to the landscape.
The claim “memetic forces precede AGI” seems meaningless to me, except insofar as memetic forces precede everything (eg the personal computer was invented because people wanted personal computers and there was a culture of inventing things). Do you mean it in a stronger sense? If so, what sense?
I also don’t understand why it’s wrong to talk about what “AI researchers” or “governments” should do. Sure, it’s more virtuous to act than to chat randomly about stuff, but many Less Wrongers are in positions to change what AI researchers do, and if they have opinions about that, they should voice them. This post of yours right now seems to be about what “the rationalist community” should do, and I don’t think it’s a category error for you to write it.
Maybe this would be easier if you described what actions we should take conditional on everything you wrote being right.
There’s also the skulls to consider. As far as I can tell, this post’s recommendations are that we, who are already in a valley littered with a suspicious number of skulls (https://forum.effectivealtruism.org/posts/ZcpZEXEFZ5oLHTnr9/noticing-the-skulls-longtermism-edition, https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-the-skulls/), turn right towards a dark cave marked ‘skull avenue’ whose mouth is a giant skull, and whose walls are made entirely of skulls that turn to face you as you walk past them deeper into the cave.
The success rate of movements aimed at improving the long-term future or improving rationality has historically been… not great, but there are at least solid, concrete empirical reasons to think specific actions will help, and we can pin our hopes on that.
The success rate of, let’s build a movement to successfully uncouple ourselves from society’s bad memes and become capable of real action and then our problems will be solvable, is 0. Not just in that thinking that way didn’t help, but in that with near 100% success you just end up possessed by worse memes if you make that your explicit final goal (rather than ending up doing that as a side effect of trying to get good at something). And there are also no concrete paths to action to pin our hopes on.
“The success rate of, let’s build a movement to successfully uncouple ourselves from society’s bad memes and become capable of real action and then our problems will be solvable, is 0.”
I’m not sure if this is an exact analog, but I would have said the Scientific Revolution and the Age of Enlightenment were two pretty good examples of this (to be honest, I’m not entirely sure where one ends and the other begins, and there may be some overlap, but I think of them as two separate but related things) that resulted in the world becoming a vastly better place, largely through the efforts of individuals who realized that by changing the way we think about things we can better put human ingenuity to use. I know this is a massive oversimplification, but I think it points in the direction of there potentially being value in pushing the right memes onto society.
The success rate of developing and introducing better memes into society is indeed not 0. The key thing there is that the scientific revolutionaries weren’t just abstractly thinking “we must uncouple from society first, and then we’ll know what to do”. Rather, they wanted to understand how objects fell, how animals evolved, and lots of other specific problems, and they developed good memes to achieve those ends.
I’m by no means an expert on the topic, but I would have thought it was a result both of object-level thinking producing new memes that society recognized as true, and of some level of abstract thinking along the lines of “using God and the Bible as an explanation for every phenomenon doesn’t seem to be working very well, maybe we should create a scientific method or something.”
I think there may be a bit of us talking past each other, though. From your response, perhaps what I consider “uncoupling from society’s bad memes” you consider to be just generating new memes. It generally feels like a conversation where it’s hard to pin down exactly what people are trying to describe (starting from the OP, which I find very interesting but am still having some trouble understanding specifically), which is making it a bit hard to communicate.
Now that I’ve had a few days to let the ideas roll around in the back of my head, I’m gonna take a stab at answering this.
I think there are a few different things going on here which are getting confused.
1) What does “memetic forces precede AGI” even mean?
“Individuals”, “memetic forces”, and “that which is upstream of memetics” all act on different scales. As an example of each, I suggest “What will I eat for lunch?”, “Who gets elected POTUS?”, and “Will people eat food?”, respectively.
“What will I eat for lunch?” is an example of an individual decision because I can actually choose the outcome there. While sometimes things like “veganism” will tell me what I should eat, and while I might let that influence me, I don’t actually have to. If I realize that my life depends on eating steak, I will actually end up eating steak.
“Who gets elected POTUS” is a much tougher problem. I can vote. I can probably persuade friends to vote. If I really dedicate myself to the cause, and I do an exceptionally good job, and I get lucky, I might be able to get my ideas into the minds of enough people that my impact is noticeable. Even then though, it’s a drop in the bucket and pretty far outside my ability to “choose” who gets elected president. If I realize that my life depends on a certain person getting elected who would not get elected without my influence… I almost certainly just die. If a popular memeplex decides that a certain candidate threatens it, that actually can move enough people to plausibly change the outcome of an election.
However, there are limits on which memeplexes can become dominant and what they can tell people to do. If a hypercreature tells people not to eat meat, it may get some traction. If it tries to tell people not to eat at all, it’s almost certainly going to fail and die. Not only will it have a large rate of attrition from adherents dying, but it’s going to be a real hard sell to get people to take its ideas on, and therefore it will have a very hard time spreading.
My reading of the claim “memetic forces precede AGI” is that, like getting someone elected POTUS, the problem is simply too big for there to be any reasonable chance that a few guys in a basement can just go do it on their own when not supported by friendly hypercreatures. Val is predicting that our current set of hypercreatures won’t allow that task to be possible without superhuman abilities, and that our only hope is that we end up with sufficiently friendly hypercreatures that this task becomes humanly possible. Kinda like if your dream was to run an openly gay weed dispensary: it’s humanly possible today, but it wasn’t further in the past and isn’t in Saudi Arabia today; you need that cultural support or it ain’t gonna happen.
2) “Fight egregores” sure sounds like “trying to act on the god level” if anything does. How is this not at least as bad as “build FAI”? What could we possibly do which isn’t foolishly trying to act above our level?
This is a confusing one, because our words for things like “trying” are all muddled together. I think basically, yes, trying to “fight egregores” is “trying to act on the god level”, and is likely to lead to problems. However, that doesn’t mean you can’t make progress against egregores.
So, the problem with “trying to act on a god level” isn’t so much that you’re not a god and therefore “don’t have permission to act on this level” or “ability to touch this level”, it’s that you’re not a god and therefore attempting to act as if you were a god fundamentally requires you to fail to notice and update on that fact. And because you’re failing to update, you’re doing something that doesn’t make sense in light of the information at hand. And not just any information either; it’s information that’s telling you that what you’re trying to do will not work. So of course you’re not going to get where you want if you ignore the road signs saying “WRONG WAY!”.
What you can do, which will help free you from the stupefying factors and unfriendly egregores, and (Val claims) will have the best chance of leading to FAI, is to look at what’s true. Rather than “I have to do this, or we all die! I must do the impossible”, just “Can I do this? Is it impossible? If so, and I’m [likely] going to die, I can look at that anyway. Given what’s true, what do I want to do?”
If this has a ”...but that doesn’t solve the problem” bit to it, that’s kinda the point. You don’t necessarily get to solve the problem. That’s the uncomfortable thing we should not flinch away from updating on. You might not be able to solve the problem. And then what?
(Not flinching from these things is hard. And important.)
3) What’s wrong with talking about what AI researchers should do? There’s actually a good chance they listen! Should they not voice their opinions on the matter? Isn’t that kinda what you’re doing here by talking about what the rationality community should do?
Yes. Kinda. Kinda not.
There’s a question of how careful one has to be, and Val is making a case for much increased caution without really stating it this way explicitly. Bear with me here, since I’m going to be making points that will necessarily seem like “unimportant nitpicking pedantry” relative to an implicit level of caution that is more tolerant of rounding errors of this type. I’m not actually presupposing anything here about whether increased caution is necessary in general or as it applies to AGI. It is, however, necessary for understanding Val’s perspective on this, since it is central to his point.
If you look closely, Val never said anything about what the rationality community “should” do. He didn’t use the word “should” once.
He said things like “We can’t align AGI. That’s too big.” and “So, I think raising the sanity waterline is upstream of AI alignment.” and “We have an advantage in that this war happens on and through us. So if we take responsibility for this, we can influence the terrain and bias egregoric/memetic evolution to favor Friendliness”. These things seem to imply that we shouldn’t try to align AGI and should instead do something like “take responsibility” so we can “influence the terrain and bias egregoric/memetic evolution to favor Friendliness”, and as far as rounding errors go, that’s not a huge one. However, he did leave the decision of what to do with the information he presented up to you, and consciously refrained from imbuing it with any “shouldness”. The lack of “should” in his post or comments is very intentional, and is an example of him doing the thing he views as necessary for FAI to have a chance of working out.
In (my understanding of) Val’s perspective, this “shouldness” is a powerful stupefying factor that works itself into everything—if you let it. It prevents you from seeing the truth, and in doing so blocks you from any path which might succeed. It’s so damn seductive and self-protecting that we all get drawn into it all the time and don’t really realize—or worse, rationalize and believe that “it’s not really that big a deal; I can achieve my object-level goals anyway (or I can’t anyway, and so it makes no difference if I look)”. His claim is that it is that big a deal, because you can’t achieve your goals—and that you know you can’t, which is the whole reason you’re stuck in your thoughts of “should” in the first place. He’s saying that the annoying effort to be more precise about what exactly we are aiming to share, and to hold ourselves squeaky clean from any “impotent shoulding” at things, is actually a necessary precondition for success. That if we try to “Shut up and do the impossible”, we fail. That if we “Think about what we should do”, we fail. That if we “try to convince people”, even if we are right and pointing at the right thing, we fail. That if we allow ourselves to casually “should” at things, instead of recognizing it as so incredibly dangerous as to avoid out of principle, we get seduced into being slaves for unfriendly egregores and fail.
That last line is something I’m less sure Val would agree with. He seems to be doing the “hard line avoid shoulding, aim for maximally clean cognition and communication” thing and the “make a point about doing it to highlight the difference” thing, but I haven’t heard him say explicitly that he thinks it has to be a hard line thing.
And I don’t think it does, or should be (case in point). Taking a hard line can be evidence of flinching from a different truth, or a lack of self trust to only use that way of communicating/relating to things in a productive way. I think by not highlighting the fact that it can be done wisely, he clouds his point and makes his case less compelling than it could be. However, I do think he’s correct about it being both a deceptively huge deal and also something that takes a very high level of caution before you start to recognize the issues with lower levels of caution.
I feel seen. I’ll tweak a few details here & there, but you have the essence.
Thank you.
If this has a ”...but that doesn’t solve the problem” bit to it, that’s kinda the point. You don’t necessarily get to solve the problem. That’s the uncomfortable thing we should not flinch away from updating on. You might not be able to solve the problem. And then what?
Agreed.
Two details:
“…we should not flinch away…” is another instance of the thing. This isn’t just banishing the word “should”: the ability not to flinch away from hard things is a skill, and trying to bypass development of that skill with moral panic actually makes everything worse.
The orientation you’re pointing at here biases one’s inner terrain toward Friendly superintelligences. It’s also personally helpful and communicable. This is an example of a Friendly meme that can give rise to a Friendly superintelligence. So sincerely asking “And then what?” is important, as is holding the preciousness of the fact that we don’t yet have an answer, and that is enough. We don’t have to actually answer that question to participate in feeding Friendliness in the egregoric wars. We just have to sincerely ask.
That if we allow ourselves to casually “should” at things, instead of recognizing it as so incredibly dangerous as to avoid out of principle, we get seduced into being slaves for unfriendly egregores and fail.
That last line is something I’m less sure Val would agree with.
Admittedly I’m not sure either.
Generally speaking, viewing things as “so incredibly dangerous as to avoid out of principle” ossifies them too much. Ossified things tend to become attack surfaces for unFriendly superintelligences.
In particular, being scared of how incredibly dangerous something is tends to be stupefying.
But I do think seeing this clearly naturally creates a desire to be more clear and to drop nearly all “shoulding” — not so much the words as the spirit.
(Relatedly: I actually didn’t know I never used the word “should” in the OP! I don’t actually have anything against the word per se. I just try to embody this stuff. I’m delighted to see I’ve gotten far enough that I just naturally dropped using it this way.)
…I haven’t heard him say explicitly that he thinks it has to be a hard line thing.
And I don’t think it does, or should be (case in point). Taking a hard line can be evidence of flinching from a different truth, or a lack of self trust to only use that way of communicating/relating to things in a productive way. I think by not highlighting the fact that it can be done wisely, he clouds his point and makes his case less compelling than it could be.
I’m not totally sure I follow. Do you mean a hard line against “shoulding”?
If so, I mostly just agree with you here.
That said, I think trying to make my point more compelling would in fact be an example of the corruption I’m trying to purify myself of. Instead I want to be correct and clear. That might happen to result in what I’m saying being more compelling… but I need to be clean of the need for that to happen in order for it to unfold in a Friendly way.
However. I totally believe that there’s a way I could have been clearer.
And given how spot-on the rest of what you’ve been saying feels to me, my guess is you’re right about the how here, too.
Although admittedly I don’t have a clear image of what that would have looked like.
“…we should not flinch away…” is another instance of the thing.
Doh! Busted.
Thanks for the reminder.
This isn’t just banishing the word “should”: the ability not to flinch away from hard things is a skill, and trying to bypass development of that skill with moral panic actually makes everything worse.
Agreed.
We don’t have to actually answer that question to participate in feeding Friendliness in the egregoric wars. We just have to sincerely ask.
Good point. Agreed, and worth pointing out explicitly.
I’m not totally sure I follow. Do you mean a hard line against “shoulding”?
Yes. You don’t really need it, things tend to work better without it, and the fact that no one even noticed that it didn’t show up in this post is a good example of that. At the same time, “I shouldn’t ever use ‘should’” obviously has the exact same problems, and it’s possible to miss that you’re taking that stance if you don’t ever say it out loud. I watched some of your videos after Kaj linked one, and… it’s not that it looked like you were doing that, but it looked like you might be doing that. Like there wasn’t any sort of self-caricaturing or anything that showed me that “Val is well aware of this failure mode, and is actively steering clear”, so I couldn’t rule it out and wanted to mark it as a point of uncertainty and a thing you might want to watch out for.
That said, I think trying to make my point more compelling would in fact be an example of the corruption I’m trying to purify myself of. Instead I want to be correct and clear. That might happen to result in what I’m saying being more compelling… but I need to be clean of the need for that to happen in order for it to unfold in a Friendly way.
Ah, but I never said you should try to make your point more compelling! What do you notice when you ask yourself why “X would have effect Y” led you to respond with a reason to not do X? ;)