I like that you brought this up, and the tone with which you did so. The graph gives some nice mental handles. And I like that you’re basically highlighting a question rather than an answer; that tends to be richer for me to encounter.
I’d like to highlight a couple of implicit things lurking in the background here. They’re common in LW culture AFAICT, so this is something like an opportune case study.
You seem to be assuming that limerence messing with your rationality is bad because rationality is the thing you want to have govern your life. But if your CEV (coherent extrapolated volition) includes limerence, then this limerence override is actually revealing ways in which your rationality-as-is is incompatible with your CEV. That holds even if limerence is screwing up your life in ways that your rationality would successfully address if it weren’t for the limerence. If you have to choose, you want to live in global alignment with your CEV, not in local pointwise convergence with what you currently think your CEV is. This might mean seriously screwing things up locally in order to bridge the parts of you that you currently endorse with other parts that you don’t yet know you want to value.
Strong feelings overwhelming your thinking, and thereby warping your choices, is a problem only inasmuch as you base your choices on your thoughts. This basic inner design choice is also why Goodhart’s Law has room to reinforce cognitive biases, and why social pressure can work to make people do/think/believe things they know to be wrong. There’s an alternative that looks like building your capacity to be stably present with sensation prior to thought. This gives you a firm place to stand that’s outside of thinking. It’s the same place from which you can tell that a mathematical proof has “clicked” for you: it’s not just a matter of reviewing the logic; there’s some kind of deeper knowing that the logic is actually in service to. This strikes me as a glaring omission in the LW flavor of rationality, which AFAICT is almost entirely focused on how to arrange thinking patterns and program metacognition rather than on orienting to thoughts from somewhere else. (It’s actually a wonderfully clear fractal reflection of the AI alignment problem, if you view the thought-generator as the AI you’re trying to align.) I think this is an essential piece of how a mature Art of Rationality would address the puzzle about limerence you’re putting forward.
But if your CEV includes limerence, then this limerence override is actually revealing ways in which your rationality-as-is is incompatible with your CEV.
Not sure if this is engaging at the level you meant, but my assumption is that I broadly want to live in a world that has limerence in it, and has being-in-love, but that doesn’t mean any particular instance of limerence or love is that important, or more important than other things I value. (I certainly think it’s possible, and a particular failure mode of people attracted to LW, to have a warped relationship with limerence generally, such that one needs to go off and make some predictable mistakes along the path of growing.)
[…] my assumption is that I broadly want to live in a world that has limerence in it, and has being-in-love, but that doesn’t mean any particular instance of limerence or love is that important, or more important than other things I value.
Same.
My point is more that discovering that an instance of limerence is adversarial to other things you care about highlights a place where you’re not aligned with your own CEV. The solution of “override this instance of limerence in favor of current-model rational decisions about what I should or shouldn’t do or want” is not CEV-convergent.
…and neither is “trust in love [blindly]”. But that’s not a relevant LW error mode AFAICT.