I’m a very confused person trying to become less confused. My history as a New Age mystic still colors everything I think even though I’m striving for rationality nowadays. Here’s my backstory if you’re interested.
If I were Bob I’d have told her to fuck off long ago and stopped letting some random person berate me for being lazy, just like my parents always have. This is basically guilt-tripping, not a beneficial way of approaching any kind of motivation, and it is absolutely guaranteed to produce pushback. But then, I’m probably not your target audience, am I?
Btw just to be clear, I think Said Achmiz explained my reaction better than I, who habitually post short reddit-tier responses, can. My specific issue is that Alice seems to be acting as if it’s any of her business what Bob does. It is not. Absolutely nobody likes being told they’re not being ethical enough. It’s why everyone hates vegans. As someone who doesn’t like experiencing such judgmental demands, I would have the kneejerk emotional reaction to want to become less of an EA just to spite her. (I would not of course act on this reaction, but I would start finding EA things to be in an ugh field because they remind me of the distress caused by this interaction.)
Holy heck I have been enlightened. And by contemplating nothingness too! Thanks for the clarification, it all makes sense now.
I really enjoy this sequence but there’s a sticking point that’s making me unable to continue until I figure it out. It seems to me rather obvious that… utility functions are not shift-invariant! If I denominate option A at 1 utilon and option B at 2 utilons, that means I am indifferent between a certain outcome of A and a 50% probability of B (the other 50% being an outcome worth 0 utilons)—and this indifference no longer holds if I shift my utility function even slightly while that zero stays put. Ratios of utilities mean something concrete and are destroyed by translation. Since your entire argument seems to rest on that inexplicably not being the case, I can’t see how any of this is useful.
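To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up numbers) of the comparison described above, assuming the lottery’s other branch is an outcome worth 0 utilons:

```python
# Expected-utility check: certain A vs. a 50/50 lottery between B and
# a "status quo" outcome. All names and numbers are illustrative.

def expected_utility(lottery):
    """lottery: list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

u_A, u_B, u_nothing = 1.0, 2.0, 0.0

certain_a = [(1.0, u_A)]
gamble = [(0.5, u_B), (0.5, u_nothing)]

# With these numbers the agent is exactly indifferent:
assert expected_utility(certain_a) == expected_utility(gamble)  # 1.0 == 1.0

# Shift EVERY outcome by the same constant: indifference survives,
# because E[u + c] = E[u] + c on both sides.
c = 5.0
shifted_certain = [(p, u + c) for p, u in certain_a]
shifted_gamble = [(p, u + c) for p, u in gamble]
assert expected_utility(shifted_certain) == expected_utility(shifted_gamble)

# Shift only A and B while holding the status-quo outcome fixed at 0
# (i.e., treat the ratio u_B / u_A as meaningful on its own):
# now the indifference breaks, which is the situation described above.
partial_certain = [(1.0, u_A + c)]
partial_gamble = [(0.5, u_B + c), (0.5, u_nothing)]
assert expected_utility(partial_certain) != expected_utility(partial_gamble)
```

Notably, whether this supports or dissolves the objection depends on whether the 0-utilon outcome is itself part of the utility function: if it gets shifted along with A and B, the indifference survives, and the ratio only carries meaning relative to a fixed zero point.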
I understand all this logically, but my emotional brain asks, “Yeah, but why should I care about any of that? I want what I want. I don’t want to grow, or improve myself, or learn new perspectives, or bring others joy. I want to feel good all the time with minimal effort.”
When wireheading—real wireheading, not the creepy electrode-in-the-brain sort that few people would actually accept—is presented to you, it is very hard to reject, particularly if you have a background of trauma or neurodivergence that makes coping with “real life” difficult to begin with. That is why so many people with brains like mine end up as addicts. Actually, by some standards, I am an addict, just not to any physical substance.
And to be honest, as a risk-averse person, it’s hard for me to rationally argue for why I ought to interact with other people when AIs are better, apart from the people I already know, trust, and care about. Like, where exactly is my duty to “grow” (from other people’s perspective, by other people’s definitions, because they tell me I ought to do it) supposed to be coming from? The only thing that motivates me, sometimes, to try to do growth-and-self-improvement things is guilt. And I’m actually a pretty hard person to guilt into doing things.
That’s a temporary problem. Robot bodies will eventually be good enough. And I’ve been a virgin for nearly 26 years, I can wait a decade or two longer till there’s something worth downloading an AI companion into if need be.
Neither of these really describes what childhood is for. Both of them are inventions of modern WEIRD society. I’d suggest you read “The Anthropology of Childhood: Cherubs, Chattel, Changelings” for a wider view on the subject… it’s pretty bleak though. The very idea that there is such a thing as an optimal childhood that parents ought to strive to provide for their children is also a modern, Western, extremely unusual one. Throughout most of history, in most cultures, children were just… little creatures that would eventually be adults and till then either got in the way or were used for something.
The norm appears to be “benevolent neglect”, at best—that is, children are not (outside of our Western bubble of reality, as well as East Asia which independently invented some of the same norms) actively taught or guided towards anything; mostly they are ignored and they teach themselves everything they need to know by mimicking adults. People spend time with their children, but it’s rarely a goal explicitly striven for (the way it is for Western parents); it’s just a side effect of their existing at all.
To be honest, I look forward to AI partners. I have a hard time seeing the point of striving to have a “real” relationship with another person, given that no two people are really perfectly compatible, no one can give enough of their time and attention to really satisfy a neverending desire for connection, etc. I expect AIs to soon enough be better romantic companions—better companions in all ways—than humans are. Why shouldn’t I prefer them?
Great, apparently I’m in just the right place… I’m always alone and have few friends who might influence me to give up my wacky ideas! Wonderful…
Those stories are surprisingly coherent and compelling. They were actually fun to read!
I’m not sure how useful the concept of boundary placement rebellion is, though. It certainly is a thing, but it’s also something basically everyone engages in. I pretty much constantly do it… though maybe that says more about me than anything...
“Thou strivest ever. Even in thy yielding, thou strivest to yield; and lo! thou yieldest not. Go thou into the outermost places, and subdue all things. Subdue thy fear and thy distrust. And then—YIELD.”—Aleister Crowley
I’m never really sure what there’s any point in saying. My main interests have nothing to do with AI alignment, which seems to be the primary thing people talk about here. And a lot of my thoughts require the already existing context of my previous thoughts. Honestly, it’s difficult for me to communicate what’s going on in my head to anyone.
No, it’s called “lying”. The text that he produces as a result of these social pressures does not reflect his actual thought processes. You can’t judge a belief on the basis of a bunch of ex post facto arguments people make up to rationalize it—the method by which they came to hold the belief is much more informative. And for those of us with very roundabout styles of thinking (such as myself), being forced into this self-censorship and reshaping of our thought patterns into something “coherent” and easy to read destroys all the evidence of how we actually came to the idea, and thus much of your ability to effectively examine its validity!
I feel the same as Adrian and Cato. I am very much the opposite of a rigorous thinker—in fact, I am probably not capable of rigor—and I would like to be the person who spews loads of interesting off-the-wall ideas for others to parse through, expanding upon those which are useful. But that kind of role doesn’t seem to exist here, and I feel very intimidated even writing comments, much less actual posts—which is why I rarely do. The feeling that I have to put tremendous labor into making a Proper Essay full of citations and links to sequences and detailed arguments and so on—it’s just too much work, and not worth the effort for something I don’t even know anyone will care about.
This makes me wonder if some proportion of “masculine” gay men are actually transwomen (of the early onset type) with autoandrophilia. I may even fit into that category myself. I didn’t care about masculinity and in fact found it somewhat abhorrent and not-me-ish until I started getting off to more masculine looking guys in porn. (When I first saw porn when I was 12 I mainly focused on twinks and wanted to look like them, and there’s still a part of me that feels that way, which wars with the part that wants to bulk up because masc dudes are also hot—and usually wins, because bulking is hard and I would rather read books.)
Of course, my natural femininity is not tremendous (I wasn’t flamboyant as a child and as far as I know never have been—I’ve always thought feminine-acting men were creepy—but I did flirt with identifying as nonbinary during my late teens, and used to have multiple female alters during the period where I thought I had multiple personalities), and most of my femininity is the result of misandry taught by the media and my mother (I believed for most of my childhood and early teens that masculinity is disgusting and bestial, and that only women can be powerful / noble, but later realized that like all other disgusting and bestial things, masculinity is sexy as fuck, which helped me get out of my misandry phase.)
Nowadays I think my gender identity is probably something like “true hermaphrodite / omega (as in the omegaverse fanfiction trope) male”, which unfortunately is not something that one can currently medically transition to, and I experience no dysphoria (and to be honest, the only reason I think it would be cool to have both male and female genitals is because it seems too asymmetric and unbalanced not to, and I’m very Libra [yes I know astrology isn’t real, but it’s still a helpful and / or fun language to describe personalities with]).
Well—actually, it’s possible I do experience dysphoria, but in which direction changes with my mood (I sometimes don’t feel masculine enough), and there’s an element of The Paraphilia Which Must Not Be Named [note: if you ask me, I will not name it, and I will neither confirm nor deny guesses, but you can probably figure it out based on what I’m not saying] which also interacts in weird ways with the whole thing, and overall I just find gender and sexuality stuff tiresome and confusing and sort of wish I didn’t have to deal with it.
Thanks for coming to my rambly asf TED talk.
This is interesting, and imo dystopian and dreadful, but it doesn’t belong on LessWrong. I downvoted.
I feel like consequentialists are more likely to go crazy because they aren’t grounded in deontological or virtue-ethical norms of proper behavior. It’s easy to think that if you’re on track to save the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn’t learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respected as individuals, so I have to respect them as individuals—whereas maximizing individual-respect might lead me to do all sorts of weird things to people now in return for some vague notion of helping lots more future people.
I was about to mention Piaget, but you referred to him at the end of the post. Definitely seems relevant, since we noticed the possible connection independently.
This reminds me strongly of the anarchist principle of unity of means and ends, which is why anarchists aren’t into violent revolution anymore—you can’t end coercion by coercive means.
Ooh! I don’t know much about the theory of reinforcement learning, could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)
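The discounting idea in that parenthetical can be sketched in a couple of lines. This is just one toy way to model it, and the numbers are made up:

```python
# Toy model: discount future money purely by the probability that it
# actually arrives. All probabilities here are illustrative.

def risk_discounted_value(amount, p_survive_per_year, years):
    """Expected value today of `amount` promised `years` from now,
    if each year there is probability `p_survive_per_year` that the
    promise survives (no default, you're still around, etc.)."""
    return amount * (p_survive_per_year ** years)

# $100 promised in 10 years, with a 97% chance per year of surviving,
# is worth about $73.74 in expectation today:
print(round(risk_discounted_value(100, 0.97, 10), 2))
```

Under these assumptions this recovers the usual exponential-discounting shape, with the per-year survival probability playing the role that 1/(1+r) plays in the standard interest-rate formula.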
It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there’s always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants’ planning is for nought and the grasshopper actually has the right idea. It doesn’t seem right to ever have zero credence of this (as that means being totally certain that the project of saving up resources for cosmic winter will go perfectly smoothly, and we can’t be certain of something that will literally take trillions of years), therefore it is actually optimal to always put some of your resources into living for right now, proportional to that uncertainty about the success of the project.
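One toy way to make “proportional to that uncertainty” precise (the log-utility assumption and all numbers here are mine, purely illustrative): let p be your credence that the saving-up project fails, and pick the fraction f of resources to enjoy now that maximizes expected utility across the two timelines.

```python
# Toy ant/grasshopper tradeoff: you hold 1 unit of resources and
# choose a fraction f to enjoy now. With probability p_fail the
# long-term project collapses and the saved (1 - f) is lost; with
# probability 1 - p_fail the savings pay off R-fold. Log utility
# gives diminishing returns, so the optimum is interior rather than
# all-or-nothing.
import math

def expected_log_utility(f, p_fail, payoff_multiplier):
    fail = math.log(f)                                   # only what you consumed counts
    success = math.log(f + payoff_multiplier * (1 - f))  # savings pay off
    return p_fail * fail + (1 - p_fail) * success

def best_fraction_now(p_fail, payoff_multiplier=10.0, steps=10_000):
    grid = [(i + 1) / (steps + 1) for i in range(steps)]
    return max(grid, key=lambda f: expected_log_utility(f, p_fail, payoff_multiplier))

# More doubt about the project means spending more on the present:
assert best_fraction_now(0.01) < best_fraction_now(0.30)
```

With these particular assumptions the optimum works out to f* = p·R/(R−1), so the fraction spent on the present really is (roughly) proportional to the credence that the long-term project fails.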