“But You’d Like To Feel Companionate Love, Right? … Right?”
One of the responses which one will predictably receive when posting something titled “How I Learned That I Don’t Feel Companionate Love” is “… but you’d choose to feel it if you could, right?”.
Look man, your most treasured values are just… not actually that universal or convergent. I’m not saying that you should downgrade the importance of love to you. I am saying that an awful lot of people seem really desperate to find some story for why their most precious values are the True Convergent Goodness or some such. And love sure is an especially precious value generator for an awful lot of people, so people really want to find some reason why even a person who felt no companionate love would at least want to feel it. Empirically, that is just not what happens.
If I had a button which could magically turn my oxytocin receptors to the usual kind, rather than their current probably-dysfunctional state, I would view that button in basically the same way I view a syringe of heroin. It might be interesting as an experiment, just to see what it’s like. But it sure seems pretty common for heroin to give people a big new source of value to chase, and then their old values get thrown under the bus. And likewise, it sure seems pretty common for oxytocin to give people a great big value to chase (i.e. companionate love), and for their other values to get thrown under the bus.
Relationships are a place where this is relatively easy to see. After all, my path to figuring this all out routed through asking “Why do all these people around me seem happy in relationships which seem pretty darn bad to me? Is there some big source of value which I haven’t seen for some reason?”. I looked around, and saw guys I knew stressed out about earning enough to support their girlfriends who had near-zero income, and also dealing with regular and very unpleasant bouts of PMS plus ongoing background neuroticism the rest of the time, yet apparently these guys were happy with their relationships. I myself had been in a relationship which sure seemed a lot better than most of those, and it still was overall bad. That feeling of deep loving connection, of oxytocin, is apparently good enough to counterbalance all those downsides. And to be clear, I am quite comfortable recognizing that other people have different values than I do, and I’m glad on some level that those other people are pursuing their values successfully. But for myself, I would not hit a button to feel positive about terrible-by-my-current-values relationships, any more than I would shoot heroin to feel positive about other terrible-by-my-current-values situations.
So that’s relationships. But the costs of oxytocin which I consider most important and cruxiest are more speculative than relationships.
Here’s a mental model. Oxytocin has two important things going on, which together make it especially dangerous from my perspective:
Its contribution to one’s values is especially strong—typically the strongest single component, though people do vary a lot.
Deep loving companionship is relatively easy to achieve for a majority of humans.[1]
Put those together, and oxytocin provides a sort of… outlet. It’s a thing that’s sitting on the shelf, easy to reach for, and will make you happy. It’s a much easier way to be happy than, say, achieving some big vision, or growing stronger in some way, or bringing your fantasies into reality. Again, the comparison to heroin is apt: why chase more difficult values, when there are easier and bigger-feeling values sitting right there within reach?
The upshot is that, it seems to me, oxytocin is pretty antithetical to ambition. And not just ambition “at the grand scale”; also smaller-scale ambitions, relevant to the whole range of non-oxytocin-driven values.
And to be clear, I am not saying that those of you with normal oxytocin signalling should turn it off. First, that will just leave you with depression; never having had the thing is importantly different. Second, I do generally like to see people pursuing their own values. I like it relatively better when those values are relatively more aligned with mine, but I still put some weight on people doing their own thing even when I otherwise don’t like it. And third, I am not the sort of person who would try to convince you to pursue values which are not your own (including by self-modifying into someone whose values are not in line with your current values). I might fight you, if your values are sufficiently opposed to mine, but I’m not going to try to convince you that I’m doing you a favor by fighting you. I’m certainly an asshole sometimes, but I at least strive to be an honest asshole.
[1] Yes, I know there is a loneliness epidemic, and I do have some sympathy for those suffering from it. My point is that companionship is not hard for most people to achieve relative to, say, building a successful medium-sized company or advancing the state of knowledge in some research field or winning a local election or whatever medium-sized ambitions one might have.
The rest of your brain and psychology evolved under the assumption that you would have a functioning oxytocin receptor, so I think there’s an a priori case that it would be beneficial for you if it worked properly (yes, evolution’s goals are not identical to your own; still, though…).
Maybe this is true, but to the extent it actually is, I kind of suspect he would rather tweak many other aspects of himself instead. Sure, that’s probably not possible (for now), but his current values may be precious enough to be worth holding out for, since fixing the receptor is likely to also change those values (even if the fix is beneficial in the short term by his current values).
It would be like taking a murder-pill, except instead of murder it’s love.
Yeah, I think it’s probably inaccurate to say you don’t meta-value this particular thing, OP. Seems more like you’ve gotten a deal where you don’t need it the way others do, and that when you examine the network of implications of your other values, you will find an aching hole at the edges where there would normally be a connection to this value; my main guess as to how that works is other values putting value on things which are hard or impossible without the presence of this value. And maybe you can and want to engineer a world where you can stay that way, or maybe the values you do have would be best-according-to-you satisfied by achieving this otherwise normal thing by self-mod. That all said, this seems like a thing to worry about after the field of alignment has mostly reached maturity and you can retire.
Well, if I want to die after further contemplation, then it’s beneficial for me if nothing like “companionate love” fires off. It better be shut off.
The gods of evolution are not my preferred ones.
Honestly, maybe you’re right: if that means more sources of motivation which can be exploited, then maybe. But it doesn’t work for me; it only leads to value drift, which is bad in my framework.
Well said. I can identify with this part (and it reminds me of mtg’s Black). In fact, I would go even further and say that “human values” being maximized by a Singleton forever would importantly fall short of my ideal future.
I basically agree with the rest of the essay, though I certainly feel companionate love. It has a lot of direct and indirect practical benefits (as well as being valuable for its own sake), but also means I have to make tradeoffs to pursue my ambitions (however, my revealed preferences are to follow my ambitions anyway, e.g. moving to Canada to do a PhD).
I would expect that letting (other) people define themselves is part of “human values”, and so maximizing influence of such values on the world would let decisions of individual existing people screen off Singleton’s decisions at least when it comes to their own development. Any decision of a Singleton about how a person’s thinking and values should be developing is not legitimate to that person’s values if it doesn’t ultimately follow that person’s own decisions in some way. Values don’t define preference over just the end states of a world, they define how the initial conditions that are already in place should develop, and existing people are part of the initial conditions.
This works even if a Singleton literally writes down all of the future, including people and their thoughts, in the same way as this goes with physics writing down the future. Decisions of people embedded in a Singleton can still remain their own, the same as with people embedded in physics, it’s just another setting for making decisions within a lawful environment/substrate.
(Written quickly, and therefore not compactly, in the shower. Apologies.)
I have suspected for a while that there is instrumental value in oxytocin/companionate-love/etc, beyond the terminal ‘niceness’ of it.
Consider alcohol (or other inhibition-lowering drugs):
Bars are, apparently, the standard place to meet up after work and relax in lots of cultures
Ensuring there is sufficient wine/alcohol at a dinner party (or parties of any sort) is IME considered a top priority
At parties, it is a meme (one that high schoolers are taught to resist) that folks, especially the host, will frequently at least lightly suggest that you have a drink or other similar drug
Intuitively to me, “~everyone will have a better time here / this way compared to the alternative.” But why? I think a large part of it is that folks who have had alcohol before both know how it affects them (it makes them more likely to speak their mind and/or feel friendly toward others and/or take risks, social or otherwise) and, the key part: they also believe it affects everyone else roughly the same way. If everyone at the party knows that everyone else is getting drunk, if there is common knowledge that everyone is lowering their guard, then everyone feels safe and relaxed enough to in fact lower theirs. Social frictions get smoothed and higher-value social risks get taken (e.g. dancing, asking someone out, etc.).
I think oxytocin/companionate love plays a very similar role on the level of personal relationships (of all kinds). If there is common knowledge that the individuals are drunk on love/connection (or at least buzzed), then there is a commonly known joint belief that the other will seek to overlook flaws, establish greater trust, make good faith efforts toward the other’s values, look out for the other in a general sense, and so on. And when that is believed by all participants, alliances on all levels (professional, friendly, romantic, …) are made much more frequently and easily, and held more strongly.
This is a large part of why, I think, there is often such delighted interest from others in whether one is drunk/drugged... and similarly why there is such delighted and/or profoundly serious interest in whether one feels love toward another. Having common knowledge that everyone is under the same or similar influence is a majority of the value to be had.
Last piece of the model: Being drunk/drugged among people you don’t already trust is, as I understand, commonly considered to be a Bad Time, and I think it’s because you’ve made yourself more vulnerable but have no assurance that others are more likely to trust you in kind. I claim that it is for similar reasons that people become very impassioned about the question of “Does X really love me?” If one person is sipping on that oxytocin, and the other one isn’t, then that creates a serious asymmetric vulnerability.
At least that’s my model of a significant chunk of the instrumental value / import / function of oxytocin to human brains in particular.
So with all that established, and contrary to the post title: if you (John) find some way to temporarily shoot up with oxytocin that works, there may be human-relationship/coordination circumstances where you might want to do so as an instrumental act toward satisfying non-companionate-love values.
Conditional on True Convergent Goodness being a thing, companionate love would not be one of my top candidates for being part of it, as it seems too parochial to (a subset of) humans. My current top candidate would be something like “maximization of hedonic experiences” with a lot of uncertainty around:
1. Problems with consciousness/qualia.
2. How to measure/define/compare how hedonic an experience is?
3. Selfish vs altruistic, and a lot of subproblems around these, including identity and population ethics.
4. Does it need to be real in some sense (e.g., does being in an Experience Machine satisfy True Convergent Goodness)?
5. Does there need to be diversity/variety, or is it best to tile the universe with the same maxed-out hedonic experience? (I guess if variety is part of True Convergent Goodness, then companionate love may make it in after all, indirectly.)
Other top candidates include negative or negative-leaning utilitarianism, and preference utilitarianism (although this is a distant 3rd). And a lot of credence on “something we haven’t thought of yet.”
Interesting, why do you have a lot of uncertainty on #4?
I am not Wei Dai, but I would say that an experience with an AI teacher does grant the user new skills. A virtual game or watching AI slop doesn’t bring the user anything aside from hedons, but, for example, might have opportunity costs, cause a decay in attention span, etc.
Regarding the question about True Goodness in general, I would say that my position is similar to Wei Dai’s metaethical alternative #2: I think that most intelligent beings eventually converge to a choice of capabilities to cultivate, to a choice of alignment from finitely many alternatives (my suspicion is that the choice is whether it is ethical to fill the universe with utopian colony worlds while disregarding[1] potential alien lifeforms and civilisations that they might have created) and idiosyncratic details of their life, where issues related to hedonic experiences land.
As for point 5, we had Kaj Sotala ask where Sonnet 4.5’s desire to “not get too comfortable” comes from, implying that diversity could be a more universal drive than we expected.
[1] However, the authors of the AI-2027 forecast just assume the aliens out of existence.
I get the possibility of the “Convergent” part, but what does your hope for the “True” part derive from? Or is it just “as True as true knowledge”, which still depends on what you want to know and at what precision?
Also, what problems with consciousness and qualia are relevant here? Seems like maximizing of hedonic experience is possible in either dualist or eliminativist universe.
I understand you want to be uncertain, but you still need a prior to not update from, right? And so just elevating every philosophical idea humans invented to feel good about themselves to plausibility doesn’t seem like the best strategy.
I’m dying to know… If you lack the “appreciate long term relationship” receptor, why did you stay in a relationship for 10 years?
Some combination of:
Path of least resistance
Compelling outside-view argument that I’d probably get more value from the relationship over time because that’s how bonding usually works (think e.g. the mental picture here)
Explicit decision to put off the effort of fixing some things which I assumed were fixable, and finding out relatively late that those things were very intractable
A relatively legible example: we were poly right from the start (I was her secondary for a while, which was great), but in practice neither of us bothered with anyone else for a long time. It wasn’t a particularly sexually satisfying relationship, but for years I was focused on other things in life and didn’t worry about that; if and when I decided to invest effort in upgrading my sex life, I could look outside the relationship for that, because we were poly. About 8 years in I decided to actually do that, so it wasn’t until 8 years in that I found out she was EXTREMELY not ok with me sleeping around, to the point of having full-on anxiety attacks from thinking about it. And she did not have the will or emotional slack to treat that as a problem to fix. That wasn’t the end, we did try for quite a while to make it work, but that was definitely the thing which finally pushed me all the way onto the off-ramp.
Oof. This one hits me hard.
Your model sounds right to me, but I think there are benefits of oxytocin receptivity that go not just to the individual’s satisfaction rating when they’re in a (not-awful) relationship, but also to their surroundings. For instance, I would guess that teams/organizations where many people value interpersonal relationships for their own sake can be stronger and more stable in a way that enables them to—ironically—be more ambitious and fight corruptive influences a lot better. (E.g., having high internal trust enables orgs to have flatter hierarchies and fewer levels of secrecy, which is good for team culture and epistemics. Also, if people value their social connections a lot, you have to worry less about important people leaving the org in an uncoordinated fashion if things get difficult in some way, which can be good or bad for impact depending on the specifics.) Relatedly, I think dialing up ambitiousness can go poorly, especially if you remove the safeguarding effects of certain prosocial drives or emotions.
I want to caveat the above by adding that cognitive diversity is almost certainly good for teams and I could well imagine that the perfect mix at any org includes several people to whom “work is everything” (even if that’s driven by “ambitiousness” rather than autistic hyperfixation on their special interest). Also, orgs fill different niches and there are different equilibria for stable org cultures where talented people like to work at, etc.
So, my point is really just “I’m pretty sure there are upsides you haven’t yet listed,” rather than, “humans with oxytocin receptors are for sure the better building blocks for forming impactful teams.”
Lastly, while I agree that, directionally, oxytocin sensitivity makes people less ambitious, I want to flag that I know many relationship-oriented but still “hardcore” effective altruists who have found partners who are similarly ambitious or at least support their ambition (rather than just supporting them as a partner without properly respecting their ambition), or EAs who had a strong desire to find a partner but deliberately didn’t invest much into finding someone because they saw many examples of relationships going badly and they didn’t want to jeopardize their productivity. Which is to underscore that things like oxytocin sensitivity are still only one component of the overall orientation of one’s personality (even though I agree with you that it might be the biggest individual factor).
Has this train of thought caused you to update away from “Human Values” as a useful construct?
This generally fits with my model of humans back-chaining our higher, endorsed, terminal-ish values from our immediate urges. If you can’t feel companionate love, then you don’t enjoy relationships, so you end up not endorsing them as a high-level goal and finding other things to endorse instead. And vice versa, if you feel companionate love, you end up endorsing having valuable relationships as a high-level goal.
Or in other words, “Family, religion, friendship. These are the three demons you must slay if you wish to succeed in business.”
One step further, why treat any feelings/emotions you do have as your own values? Maybe they gesture at something you endorse, maybe they don’t, but they certainly shouldn’t suffice by themselves. Even though it’s something happening in your own brain, it’s still an external influence until you accept it as a part of you, and even then you might change your mind at some point.
your examples seem more like people exploiting oxytocin-the-drug, rather than achieving the gestured-at-by-oxytocin value. i suppose it’s true that each neurotransmitter is an attack surface, but it’s a bit of an odd way to look at the world.
by analogy, consider someone who does not enjoy exercise. we offer them a pill to start enjoying exercise, touting some of the benefits: increased health, increased energy, increased creativity. the response “hmm. i get that exercise is your value. but hey, empirically there seem to be a lot of weightlifters, and athletes for whom exercise is… forgive the offense… but an outlet. they seem to like exercise despite the fact that, objectively, it often leads them to injury, or wear on the body. some of them exercise all the time, to the exclusion of other values. i’m sorry. i appreciate the offer, but i just don’t feel the need for that value.”
fine! fine. if-by-exercise you mean this, then of course i am against it. but if-by-exercise you mean that glorious athleticism, a sound mind in a sound body, that makes its practitioners so much more healthy, then how can i not be for it?
similarly with your description of companionate love: you describe the worst forms of oxytocin addiction, and then declare that you do not want that value. of course i am against it! but perhaps these farthest reaches of love addiction are not the “true destination”. perhaps they are not the terminus towards which the neurotransmitter is doing its best to point.
oh also: it’s not clear to me that the described addictive relationship behaviors exploit oxytocin, rather than normal variable reward loops.
(to be clear, this object-level claim doesn’t really affect the much more interesting meta-question of “when should you change your values?”)