I get the sense that you were just trying to allude to the ideas that--
Even if you have some kind of "alignment", blindly going full speed ahead with AI is likely to lead to conflict between humans and/or various human value systems, possibly aided by powerful AI or conducted via powerful AI proxies, and said conflict could be seriously Not Good.
Claims that “democratic consensus” will satisfactorily or safely resolve such conflicts, or even resolve them at all, are, um, naively optimistic.
It might be worth it to head that off by unspecified, but potentially drastic means, involving preventing blindly going ahead with AI, at least for an undetermined amount of time.
If that’s what you wanted to express, then OK, yeah.
Yes. That’s really my central claim. All the other discussion of values is not me saying “look, we’re going to resolve this problem of human values in one lesswrong post”. It was to point to the depth of the issue (and to one important and, I think, overlooked point: this is not just a Mistake Theory problem that raw clarity/intelligence can solve; there is a fundamental aspect of Conflict Theory we won’t be able to casually brush aside), and to show that it is not idle philosophical wandering.
I’m sorry that came off as unduly pugnacious. I was actually reacting to what I saw as similarly emphatic language from you (“I can’t believe some of you...”), and trying to forcefully make the point that the alternative wasn’t a bed of roses.
Don’t be sorry, it served its illustrative purpose.
We lesswrongers are a tiny point in the space of existing human values. We are all WEIRD or very close to it. We share a lot of beliefs that, seen from the outside, even from a close outside like academia, seem insane. Relative to the modal human, who is probably a farmer in rural India or China, we may as well be a bunch of indistinguishable aliens.
And yet we manage to find scissor statements pretty easily. The tails come apart scarily fast.
It just seemed awfully glib and honestly a little combative in itself.
I don’t see how glib and combative “this post is already too long” is?
“obvious” probably is, yes. My only defense is that I don’t have a strong personal style, I’m easily influenced, and I read Zvi a lot, who has the same habit of overusing it. I should probably be mindful not to do it myself (I removed at least two while drafting this answer, so progress!).
Well, yes, but I did say “at least not while the ‘humans’ involved are recognizably like the humans we have now”. I guess both the Superhappies and the Babyeaters are like humans in some ways, but not in the ways I had in mind.
No, I mean that recognizable humans with an AGI in their hands can decide to go the Superhappies way. Or the Babyeaters way. Or whatever unrecognizable-as-humans way. That choice was not even on the table before AGI, and that represents a fundamental change. Another fundamental change brought by AGI is the potential for an unprecedented concentration of power. Many leaders have had the ambition to mold humanity to their taste; none had the capacity to.
Some people definitely have a lot of their self-worth and sense of prestige tied up in their jobs, and in their jobs being needed. But many people don’t. I don’t think a retail clerk, a major part of whose job is to be available as a smiling punching bag for any customers who decide to be obnoxious, is going to feel too bad about getting the same or a better material lifestyle for just doing whatever they happen to feel like every day.
I think a lot of people have that. There’s even a meme for it: “It ain’t much, but it’s honest work”.
All in all, I don’t think either of us has much more evidence than a vague sense of things anyway? I sure don’t.
I remember hearing things close to “my agency is meaningful if and only if I have to take positive, considered action to ensure my survival, or at least a major chunk of my happiness”.
I think that’s the general direction of the thing we’re trying to point at, yes?
A medieval farmer who screws up is going to starve. A medieval farmer who does exceptionally well will have a surplus he can use on stuff he enjoys/finds valuable.
A chess player who screws up is going to lose some Elo points (and some mix of shame/disappointment). A chess player who does exceptionally well will gain some Elo points (and some mix of pride/joy).
If you give me the choice of living the life of a medieval farmer or someone who has nothing in his life but playing chess, I will take the former. Yes, I know it’s a very, very hard life. Worse in a lot of ways (if you give me death as a third choice, I will admit that death starts to become enticing, if only because if you throw me into a medieval farmer’s life I’ll probably end up dead pretty fast anyway). The generator of that choice is what I (and apparently others) am trying to point at with Meaningfulness/Agency.
I think a lot of things we enjoy and value can be described as “growing as a person”.
Does “growing as a person” sound like a terminal goal to you? It doesn’t to me.
If it’s not, what is it instrumental to?
For me it’s clear: it’s the same thing as the generator of the choice above. I grow so I can hope to act better when there are real stakes. Remove real stakes and there’s no point in growing, and ultimately, I’m afraid, no point to anything.
Is “real stakes” easier to grasp than Agency/Meaningfulness? Or have I just moved confusion around?
I’ve also heard plenty of people talk about “meaningfulness” in ways that directly contradict your definition.
Well, the problem is that there are so many concepts, especially when you want to be precise, and so few words.
My above Agency/Meaningfulness explanation does not match perfectly with the one in my previous answer. It’s not that I’m inconsistent, it’s that I’m trying to describe the elephant from different sides (and yeah, sure, you can argue that the trunk of the elephant is not the same thing as its leg).
That being said, I don’t think they point to completely unrelated concepts. All of those definitions above: “positive, considered actions...”? “Broad Sweep of History”? Its collective version? Yeah, I recognize them all as parts of the elephant. Even the altruistic one, even if I find that one a bit awkward and maybe misleading. You should not see them as competing and inconsistent definitions; they do point to the same thing, at least for me.
Try to focus more on the commonalities, less on the distinctions? Try to outline the elephant from the trunk and legs?
OK, I read you and essentially agree with you.
Two caveats to that, which I expect you’ve already noticed yourself:
There are going to be conflicts over human values in the non-AGI, non-ASI world too. Delaying AI may prevent them from getting even worse, but there’s still blood flowing over these conflicts without any AI at all. Which is both a limitation of the approach and perhaps a cost in itself.
More generally, if you think your values are going to largely win, you have to trade off caution, consideration for other people’s values, and things like that, against the cost of that win being delayed.[1]
I think a lot of people have that. There’s even a meme for it: “It ain’t much, but it’s honest work”.
All in all, I don’t think either of us has much more evidence than a vague sense of things anyway? I sure don’t.
So far as I know, there are no statistics. My only guess is that you’re likely talking about a “lot” of people on each side (if you had to reduce it to two sides, which is of course probably oversimplifying beyond the bounds of reason).
[...] “my agency is meaningful if and only if I have to take positive, considered action to ensure my survival, or at least a major chunk of my happiness”.
I think that’s the general direction of the thing we’re trying to point at, yes?
I’ll take your word for it that it’s important to you, and I know that other people have said it’s important to them. Being hung up on that seems deeply weird to me for a bunch of reasons that I could name that you might not care to hear about, and probably another bunch of reasons I haven’t consciously recognized (at least yet).
If you give me the choice of living the life of a medieval farmer or someone who has nothing in his life but playing chess, I will take the former.
OK, here’s one for you. An ASI has taken over the world. It’s running some system that more or less matches your view of a “meaningless UBI paradise”. It sends one of its bodies/avatars/consciousness nodes over to your house, and it says:
“I/we notice that you sincerely think your life is meaningless. Sign here, and I/we will set you up as a medieval farmer. You’ll get land in a community of other people who’ve chosen to be medieval farmers (you’ll still be able to lose that land under the rules of the locally prevailing medieval system). You’ll have to work hard and get things right (and not be too unlucky), or you’ll starve. I/we will protect your medieval enclave from outside incursion, but other than that you’ll get no help. Obviously this will have no effect on how I/we run the rest of the world. If you take this deal, you can’t revoke it, so the stakes will be real.”[2]
Would you take that?
The core of the offer is that the ASI is willing to refrain from rescuing you from the results of certain failures, if you really want that. Suppose the ASI is willing to edit the details to your taste, so long as it doesn’t unduly interfere with the ASI’s ability to offer other people different deals (so you don’t get to demand “direct human control over the light cone” or the like). Is there any variant that you’d be satisfied with?
Or does having to choose it spoil it? Or is it too specific to that particular part of the elephant?
Does “growing as a person” sound like a terminal goal to you?
Yes, actually. One of the very top ones.
Is “real stakes” easier to grasp than Agency/Meaningfulness? Or have I just moved confusion around?
It’s clear and graspable.
I don’t agree with it, but it helps with the definition problem, at least as far as you personally are concerned. At least it resolves enough of the definition problem to move things along, since you say that the “elephant” has other parts. Now I can at least talk about “this trunk you showed me and whatever’s attached to it in some way yet to be defined”.
Well, the problem is that there are so many concepts, especially when you want to be precise, and so few words.
Maybe it’s just an “elephant” thing, but I still get the feeling that a lot of it is a “different people use these words with fundamentally different meanings” thing.
[1] Although I don’t know how anybody could confidently expect to win at this point.
[2] … and I’m already seeing the can of worms opening up around your kids’ choices, but let’s ignore that for the moment…
Being hung up on that seems deeply weird to me for a bunch of reasons that I could name that you might not care to hear about
Yeah, I’m curious. The only reason I know of that makes sense for not caring about that is pretty extreme negative utilitarianism, which you apparently don’t agree with? (If you have agency you can fail in your plans and suffer, and That Is Not Allowed.)
Would you take that?
Given an AGI, there’s a big concern about whether this is a true proposal or a lie, ranging from “and secretly the vast majority of the rest of that world is a prop, you don’t really risk anything” to “I’m going to upload you to what is essentially a gigantic MMO”. But I think that’s not the point of your thought experiment?
I think there are better intermediate places between “medieval farmer” and “UBI paradise”, if that’s what you mean by “details to your taste”. Current society. Some more SF-like setups, like: “we give you and some other space-settler-minded individuals that galaxy over there and basic space tech, do whatever you want”. Some of those I would go to without a second thought. I pretty much like current society, actually, setting AGI-builders aside (and yes, limiting it to the developed world). Medieval farmer life is genuinely terrible enough that I’m on the fence between death and medieval farmer.
But yes, between just medieval farmer and UBI paradise, I’ll probably give UBI paradise a try (I might be proven wrong, having been too lacking in imagination to see all the wonderful things there!), milk the few drops of utility that I expect to still find there, but my current expectation is that I’m going to bail out at some point.
Or does having to choose it spoil it?
There are various levels of “spoils it”. Your proposal is on the very low end of spoiling it. Essentially negligible, but I think I can empathize with people thinking “that’s already too high a level of spoiling”. At increasing levels there are “and you can decide to go back to UBI society anytime” (err… that’s pretty close to just being a big IRL role-playing game, isn’t it?) up to “and I can give you a make-a-wish button” (“wait, that’s basically what I wanted to escape”).
And it’s pretty much a given that some level of Agency/Meaningfulness is going to be lost, as part of bargaining, even in the worlds where the Agency/Meaningfulness crowd gets most of what they want, unless we somehow end up just blindly maximizing Agency/Meaningfulness. Which, to be clear, would be a pretty awful outcome.
OK...
Some of this kind of puts words in your mouth by extrapolating from similar discussions with others. I apologize in advance for anything I’ve gotten wrong.
What’s so great about failure?
This one is probably the simplest from my viewpoint, and I bet it’s the one that you’ll “get” the least. Because it’s basically my not “getting” your view at a very basic level.
Why would you ever even want to be able to fail big, in a way that would follow you around? What actual value do you get out of it? Failure in itself is valuable to you?
Wut?
It feels to me like a weird need to make your whole life into some kind of game to be “won” or “lost”, or some kind of gambling addiction or something.
And I do have to wonder if there may not be a full appreciation for what crushing failure really is.
Failure is always an option
If you’re in the “UBI paradise”, it’s not like you can’t still succeed or fail. Put 100 years into a project. You’re gonna feel the failure if it fails, and feel the success if it succeeds.
That’s artificial? Weak sauce? Those aren’t real real stakes? You have to be an effete pampered hothouse flower to care about that kind of made-up stuff?
Well, the big stakes are already gone. If you’re on Less Wrong, you probably don’t have much real chance of failing so hard that you die, without intentionally trying. Would your medieval farmer even recognize that your present stakes are significant?
… and if you care, your social prestige, among whoever you care about, can always be on the table, which is already most of what you’re risking most of the time.
Basically, it seems like you’re treating a not-particularly-qualitative change as bigger than it is, and privileging the status quo.
What agency?
Agency is another status quo issue.
Everybody’s agency is already limited, severely and arbitrarily, but it doesn’t seem to bother them.
Forces mostly unknown and completely beyond your control have made a universe in which you can exist, and fitted you for it. You depend on the fine structure constant. You have no choice about whether it changes. You need not and cannot act to maintain the present value. I doubt that makes you feel your agency is meaningless.
You could be killed by a giant meteor tomorrow, with no chance of acting to change that. More likely, other humans could kill you, still in a way you couldn’t influence, for reasons you couldn’t change and might never learn. You will someday die of some probably unchosen cause. But I bet none of this worries you on the average day. If it does, people will worry about you.
The Grand Sweep of History is being set by chaotically interacting causes, both natural and human. You don’t know what most of them are. If you’re one of a special few, you may be positioned to Change History by yourself… but you don’t know if you are, what to do, or what the results would actually be. Yet you don’t go around feeling like a leaf in the wind.
The “high impact” things that you do control are pretty randomly selected. You can get into Real Trouble or gain Real Advantages, but how is contingent, set by local, ephemeral circumstances. You can get away with things that would have killed a caveman, and you can screw yourself in ways you couldn’t easily even explain to a caveman.
Yet, even after swallowing all the existing arbitrariness, new arbitrariness seems not-OK. Imagine a “UBI paradise”, except each person gets a bunch of random, arbitrary, weird Responsibilities, none of them with much effect on anything or anybody else. Each Responsibility is literally a bad joke. But the stakes are real: you’re Shot at Dawn if you don’t Meet Your Responsibilities. I doubt you’d feel the Meaning very strongly.
… even though some of the human-imposed stuff we have already can seem too close to a bad joke.
The upshot is that it seems the “important” control people say they need is almost exactly the control they’re used to having (just as the failures they need to worry about are suspiciously close to failures they presently have to worry about). Like today’s scope of action is somehow automatically optimal by natural law.
That feels like a lack of imagination or flexibility.
And I definitely don’t feel that way. There are things I’d prefer to keep control over, but they’re not exactly the things I control today, and don’t fall neatly into (any of) the categories people call “meaningful”. I’d probably make some real changes in my scope of control if I could.
What about everybody else?
It’s all very nice to talk about being able to fail, but you don’t fail in a vacuum. You affect others. Your “agentic failure” can be other people’s “mishap they don’t control”. It’s almost impossible to totally avoid that. Even if you want that, why do you think you should get it?
The Universe doesn’t owe you a value system
This is a bit nebulous, and not dead on the topic of “stakes”, and maybe even a bit insulting… but I also think it’s related in an important way, and I don’t know a better way to say it clearly.
I always feel a sense that what people who talk about “meaning” really want is value realism. You didn’t say this, but this is what I feel like I see underneath practically everybody’s talk about meaning:
Gosh darn it, there should be some external, objective, sharable way to assign Real Value to things. Only things that have Real Value are “meaningful”.
And if there is no such thing, it’s important not to accept it, not really, not on a gut level...
… because I need it, dammit!
Say that or not, believe it or not, feel it or not, your needs, real or imagined, don’t mean anything to the Laws that Govern All. They don’t care to define Real Value, and they don’t.
You get to decide what matters to you, and that means you have to decide what matters to you. Of course what you pick is ultimately caused by things you don’t control, because you are caused by things you don’t control. That doesn’t make it any less yours. And it won’t exactly match anybody else.
… and choosing to need the chance to fail, because it superficially looks like an externally imposed part of the Natural Order(TM), seems unfortunate. I mean, if you can avoid it.
“But don’t you see, Sparklebear? The value was inside of YOU all the time!”
What I sense from this is that what you’re not getting is that my value system is made of tradeoffs between, let’s call them, “Primitive Values” (i.e. ones that are at least sufficiently universal in human psychology that you can more or less describe them with compact words).
I obviously don’t value failure. If I did I would plan for failure. I don’t. I value/plan for success.
But if all plans ultimately lead to success, what use/fun/value is there in planning?
So failure has to be part of the territory, if I want my map-making skills to… matter? make sense? make a difference?
It feels to me like a weird need to make your whole life into some kind of game to be “won” or “lost”, or some kind of gambling addiction or something.
My first reaction was “no, no, comparing it to a gambling addiction, or to speaking of Winning at Life the way Trump would, looks terribly uncharitable”.
My second reaction is that you’re pretty much directionally right and on the path to understanding? Just put it a bit more charitably? We have been shaped by Evolution at large. By winners in the great game of Life, red in tooth and claw. And while playing doesn’t mean winning, not playing certainly means losing. Schematically, I can certainly believe that “Agency” is the shard inside of me that comes out of that outer (intermediate) objective “enjoy the game, and play to win”. I have the feeling that you have pretty much lost the “enjoy the game” shard, possibly because you have a mutant variant, “enjoy ANY game” (and you know what? I can certainly imagine an “enjoy ANY game” variant enjoying UBI paradise).
Well, the big stakes are already gone. If you’re on Less Wrong, you probably don’t have much real chance of failing so hard that you die, without intentionally trying. Would your medieval farmer even recognize that your present stakes are significant?
This gives me another possible source/model of inspiration, the good old “It’s the Journey that matters, not the Destination”.
Many video games have an “I win” cheat code. Players at large don’t use it. Why not, if winning the game is the goal? And certainly all of their other actions are consistent with the player wanting to win the game. He’s happy when things go well, frustrated when they go wrong. In the internet age, players look up guides and tips. They will sometimes hand the controller to a better player after being stuck. And yet they don’t press the “I win” button.
You are the one saying “do you enjoy frustration or what? Just press the I Win button”. I’m the one saying “What are you saying? He’s obviously enjoying the game, isn’t he?”.
I agree that the Destination of Agency is pretty much “there is no room left for failure” (and pretty much no Agency left). This is what most of our efforts go into: better plans for a better world with better odds for us. There are some Marxist vibes here: “competition tends to reduce profit over time in capitalist economies, therefore capitalism will crumble under the weight of its own contradictions”. If you enjoy entrepreneurship in a capitalist economy, the better you are at it, the harder you drive down profits. “You: That seems to indicate that entrepreneurs hate capitalism and profits, and would be happy in a communist profit-less society. Me: What?”. Note we have the same “will crumble under the weight…” thing in the game metaphor: when the player wins, it’s also the end of the game.
So let’s go a bit deeper into that metaphor: the game is Life. Creating an ASI-driven UBI paradise is discovering that the developer created an “I Win” button. Going into that society is pressing that button. Your position, I guess, is “well, living well in a UBI paradise is the next game”. My position is “no, the UBI paradise is still the same game. It’s akin to the Continue Playing button in an RTS after having defeated all opponents on the map. Sure, you can play, in the sense that you can still move units around, gather resources and so on, but c’mon, it’s not the same, and I can already tell it’s going to be much less fun, simply because it’s not what the game was designed for. There is no next game. We have finished the only game we had. Enjoy drawing fun patterns with your units while you can still enjoy it; for me, I know it won’t be enjoyable for very long.”
… and if you care, your social prestige, among whoever you care about, can always be on the table, which is already most of what you’re risking most of the time.
Oh, this is another problem I thought of, then forgot.
This sounds like a positive nightmare to me.
It seems a hard-to-avoid side-effect of losing real stakes/agency.
In our current society, you can improve the lives of others around you in the great man-vs-nature conflict. AKA economics is positive-sum (I think you mentioned something about some people talking about Meaningfulness giving you an altruistic definition? There we are!).
Remove this and you only have man-vs-man conflicts (gamified so nobody gets hurt). Those are generally zero-sum, purely positional. When you gain a rank on the chess ladder, someone else loses one.
A place with no room for positive-sum games seems a bad place to live. I don’t know to what extent this is fixable in the UBI paradise (do cooperative, positive-sum games fix it? I’m not sure whether the answer is “obviously yes” or “it’s just a way to informally rank who is the best player, granting status, so it’s actually zero-sum”), or how much it would just end up being Agency in another guise.
Forces mostly unknown and completely beyond your control have made a universe in which you can exist, and fitted you for it. You depend on the fine structure constant. You have no choice about whether it changes. You need not and cannot act to maintain the present value. I doubt that makes you feel your agency is meaningless.
My first reaction is “the shard of Agency inside me was created by Evolution; the definition of the game I’m supposed to enjoy, and its scope, draw from there. Of course it’s not going to care about that kind of stuff”.
My second reaction is: “I certainly hope my distant descendants will change the fine-structure constant of the universe; it looks possible, and it’s a way to avoid the heat death of the universe” (https://www.youtube.com/watch?v=XhB3qH_TFds&list=PLd7-bHaQwnthaNDpZ32TtYONGVk95-fhF&index=2). I don’t know how much of a nitpick that is (I certainly notice that I prefer “my distant descendants” to “the ASI supervisor of the UBI paradise”).
More likely, other humans could kill you, still in a way you couldn’t influence, for reasons you couldn’t change and might never learn. You will someday die of some probably unchosen cause.
This is the split between Personal Agency and Collective Agency. At our current level of capabilities, they don’t differ very much. They certainly will, later.
Since we live in a society, and most people tend not to like being killed, we shape societies such that such events tend not to happen (mostly via punishment and socialization). Each individual tries to steer society to the best of their capabilities. If we collectively end up in a place where there are no murders, people like me consider this a success. Otherwise, a failure.
Politics, advocacy, leading by example, guided by things like Game Theory, Ethics, History. Those are very much not out of the scope of Agency. They would be if individuals had absolutely zero impact on society.
It’s all very nice to talk about being able to fail, but you don’t fail in a vacuum. You affect others. Your “agentic failure” can be other people’s “mishap they don’t control”. It’s almost impossible to totally avoid that. Even if you want that, why do you think you should get it?
That’s why, for me and at my current level of speculation, I think there are two Bright Red Lines for a post-ASI future.
One: if there is no recognizable Mormon society in a post-ASI future, something Has Gone Very Wrong. Mormons tend to value their traditional way of life pretty heavily (which includes agency). Trampling those values in particular probably indicates that we are generally trampling an awful lot of values actually held by a lot of actual people.
Two: if there is no recognizable UBI paradise in a post-ASI future, something Has Gone Very Wrong. For pretty much the same reason.
(There is plausibly a similar third red line for transhumanists, but they pose serious security/safety challenges for the rest of the universe, so it gets more complicated there, and I have found no way to articulate such a red line for them.)
The corollary being: the (non-terribly-gone-wrong) post-ASI future is almost inevitably a patchwork of different societies with different tradeoffs. Unless One Value System wins, one which is low on Diversity on top of that. Which would be terrible.
To answer you: I should get that because I’m going to live with other people who are okay with me getting it, because they want to get it too.
“But don’t you see, Sparklebear? The value was inside of YOU all the time!”
I entirely agree with you here. It’s all inside us. If there were some Real, Really Objectively Meaningful Values out there, I would trust a technically aligned ASI to be able to recognize them, and I would be much less concerned about the potential loss of Agency/Meaningfulness/whatever we call it. Alas, I don’t believe that’s the case.
Mostly some self-description, since you seem to want a model of me. I did add an actual disagreement (or something like one) at the end, but I don’t think there’ll be much more for me to say about it if you don’t accept it. I will read anything you write.
I have the feeling that you have pretty much lost the “enjoy the game” shard, possibly because you have a mutant variant, “enjoy ANY game”.
More like “enjoy the process”. Why would I want to set a “win” condition to begin with?
I don’t play actual games at all unless somebody drags me into them. They seem artificial and circumscribed. Whatever the rules are, I don’t really care enough about learning them, or learning to work within them, unless it gives me something that seems useful for whatever random conditions may come up later, outside the game. That applies to whatever the winning condition is, as much as to any other rule.
Games with competition tend to be especially tedious. Making the competition work seems to further constrain the design of the rules, so they’re more boring. And the competition can make the other people involved annoying.
As far as winning itself… Whee! I got the most points! That, plus whatever coffee costs nowadays, will buy me a cup of coffee. And I don’t even like coffee.
I study things, and I do projects.
While I do evaluate project results, I’m not inclined to bin them as “success” or “failure”. I mean, sure, I’ll broadly classify a project that way, especially if I have to summarize it to somebody else in a sentence. But for myself I want more than that. What exactly did I get out of doing it? The whole thing might even be a “success” if it didn’t meet any of its original goals.
I collect capabilities. Once I have a capability, I often, but not always, lose interest in using it, except maybe to get more capabilities. Capabilities get extra points for being generally useful.
I collect experiences when new, pleasurable, or interesting ones seem to be available. But just experiences, not experiences of “winning”.
I’ll do crossword puzzles, but only when I have nothing else to do and mostly for the puns.
Many video games have an “I win” cheat code. Players at large don’t use it. Why not, if winning the game is the goal?
Even I would understand that as not, actually, you know, winning the game. I mean, a game is a system with rules. No rules, no game, thus no win. And if there’s an auto-win button that has no reason to be in the rules other than auto-win, well, obvious hole is obvious.
It’s just that I don’t care to play a game to begin with.
If something is gamified, meaning that somebody has artificially put a bunch of random stuff I don’t care about between me and something I actually want in real life, then I’ll try to bypass the game. But I’m not going to do that for points, or badges, or “achievements” that somebody else has decided I should want. I’m not going to push the “win” button. I’m just not gonna play. I loathe gamification.
Creating an ASI-driven UBI paradise is discovering that the developer created an “I Win” button.
I see it not as an “I win” button, but as an “I can do the stuff I care about without having to worry about which random stupid bullshit other people might be willing to pay me for, or about tedious chores that don’t interest me” button.
Sure, I’m going to mash that.
And eventually maybe I’ll go more transcendent, if that’s on offer. I’m even willing to accept certain reasonable mental outlooks to avoid being too “unaligned”.
This is the split between Personal Agency and Collective Agency.
I don’t even believe “Collective Agency” is a thing, let alone a thing I’d care about. Anything you can reasonably call “agency” requires preferences, and intentional, planned, directed, well, action toward a goal. Collectives don’t have preferences and don’t plan (and also don’t enjoy, or even experience, either the process or the results).
Which, by the way, brings me to the one actual quibble I’m going to put in this. And I’m not sure what to do with that quibble. I don’t have a satisfactory course of action and I don’t think I have much useful insight beyond what’s below. But I do know it’s a problem.
One: if there is no recognizable Mormon society in a post-ASI future, something Has Gone Very Wrong.
I was once involved in a legal case that had a lot to do with some Mormons. Really they were a tiny minority of the people affected, but the history was such that the legal system thought they were salient, so they got talked about a lot, and got to talk themselves, and I learned a bit about them.
These particular Mormons were a relatively isolated polygynist splinter sect that treated women, and especially young women, pretty poorly (actually I kind of think everybody but the leaders got a pretty raw deal, and I’m not even sure the leaders were having much of a Good Time(TM)). It wasn’t systematic torture, but it wasn’t Fun Times either. And the people on the bottom had a whole lot less of what most people would call “agency” than the people on the top.
But they could show you lots of women who truly, sincerely wanted to stay in their system. That was how they’d been raised and what they believed in. And they genuinely believed their Prophet got direct instructions from God (now and then, not all the time).
Nobody was kept in chains. Anybody who wanted to leave was free to walk away from their entire family, probably almost every person they even knew by name, and everything they’d ever been taught was important, while defying what at least many of them truly believed was the literal will of God. And of course move somewhere where practically everybody had a pretty alien way of life, and most people were constantly doing things they’d always believed were hideously immoral, and where they’d been told people were doing worse than they actually were.
They probably would have been miserable if they’d been forcibly dragged out of their system. They might never have recovered. If they had recovered, it might well have meant they’d had experiences that you could categorize as brainwashing.
It would have been wrong to yank them out of their system. So far I’m with you.
But was it right to raise them that way? Was it right to allow them to be raised that way? What kind of “agency” did they have in choosing the things that molded them? The people who did mold them got agency, but they don’t seem to have gotten much.
As I think you’ve probably figured out, I’m very big on individual, conscious, thinking, experiencing, wanting agents, and very much against giving mindless aggregates like institutions, groups, or “cultures”, anywhere near the same kind of moral weight.
From my point of view, a dog has more right to respect and consideration than a “heritage”. The “heritage” is only important because of the people who value it, and that does not entitle it to have more, different people fed to it. And by this I specifically mean children.
A world of diverse enclaves is appealing in a lot of ways. But, in every realistic form I’ve been able to imagine, it’s a world where the enclaves own people.
More precisely, it’s a world where “culture” or “heritage”, or whatever, is used as an excuse for some people not only to make other people miserable, but to condition them from birth to choose that misery. Children start to look suspiciously like they’re just raw material for whatever enclave they happen to be born in. They don’t choose the enclave, not when it matters.
It’s not like you can just somehow neutrally turn a baby into an adult and then have them “choose freely”. People’s values are their own, but that doesn’t mean they create those values ex nihilo.
I suppose you could fix the problem by switching to reproduction by adult fission, or something. But a few people might see that as a rather abrupt departure, maybe even contrary to their values. And kids are cute.
Yes. That’s really my central claim. All the other discussions over values is not me saying “look, we’re going to resolve this problem of human values in one lesswrong post”. It was to point to the depth of the issue (and, one important and I think overlooked point, that it is not just Mistake Theory that raw clarity/intelligence can solve, there is a fundamental aspect of Conflict Theory we won’t be able to casually brush aside) and that it is not idle philosophical wandering.
Don’t be sorry, it served its illustration purpose.
We lesswronger are a tiny point in the space of existing human values. We are all WEIRD or very close to that. We share a lot of beliefs that, seen from the outside, even close outside like academia, seems insane. Relative to the modal human who is probably a farmer in rural India or China, we may as well be a bunch of indistinguishable aliens.
And yet we manage to find scissor statements pretty easily. The tails come apart scarily fast.
I don’t see how glib and combative “this post is already too long” is ?
“obvious” probably is, yes. My only defense is I don’t have a strong personal style, I’m easily influenced, and read Zvi a lot, who has the same manner of overusing it. I probably should be mindful to not do it myself (I removed at least two on drafting this answer, so progress !).
No, I mean recognizable humans having an AGI in their hand can decide to go the Superhappies way. Or Babyeaters way. Or whatever unrecognizable-as-humans way. The choice was not even on the table before AGI, and that represent a fundamental change. Another fundamental change brought by AGI is the potential for an unprecedented concentration of power. Many leaders had the ambition to mold humanity to their taste ; none had the capacity to.
I think a lot of people have that. There’s a even meme for that “It ain’t much, but it’s honest work”.
All in one, I don’t think either of us has much more evidence that a vague sense of things anyway ? I sure don’t have.
I think that’s the general direction of the thing we’re trying to point, yes ?
A medieval farmer who screw up is going to starve. A medieval farmer who does exceptionally well will have a surplus he can use on stuff he enjoys/finds valuable.
A chess player who screw up is going to lose some ELO points (and some mix of shame/disappointment). A chess player who does exceptionally well will gain some ELO points (and some mix of pride/joy).
If you give me the choice of living the life of a medieval farmer or someone who has nothing in his life but playing chess, I will take the former. Yes, I know it’s a very, very hard life. Worse in a lot of ways (if you give me death as a third choice, I will admit that death starts to become enticing, if only because if you throw me in a medieval farmer life I’ll probably end up dead pretty fast anyway). The generator of that choice is what I (and apparently others) are trying to point with Meaningfulness/Agency.
I think a lot of things we enjoy and value can be described as “growing as a person”.
Does “growing as a person” sounds like a terminal goal to you ? It doesn’t to me.
If it’s not, what is it instrumental to ?
For me it’s clear, it’s the same thing as the generator of the choice above. I grow so I can hope to act better when there’s real stakes. Remove real stakes, there’s no point in growing, and ultimately, I’m afraid there’s no point to anything.
Is “real stakes” easier to grasp than Agency/Meaningfulness ? Or have I just moved confusion around ?
Well, the problem is that there is so much concepts, especially when you want to be precise, and so few words.
My above Agency/Meaningfulness explanation does not match perfectly with the one in my previous answer. It’s not that I’m inconsistent, it’s that I’m trying to describe the elephant from different sides (and yeah, sure, you can argue, the trunk of the elephant is not the same thing as the leg of the elephant).
That being said I don’t think they point to completely unrelated concepts. All of those definitions above “positive, considered actions...” ? “Broad Sweep of History” ? its collective version ? Yeah, I all recognize them as parts of the elephant. Even the altruistic one, even if I find that one a bit awkward and maybe misleading. You should not see them as competing and inconsistent definitions, they do point to the same thing, at least for me.
Try to focus more on the commonalities, less on the distinctions ? Try to outline the elephant from the trunk and legs ?
OK, I read you and essentially agree with you.
Two caveats that, which I expect you’ve already noticed yourself:
There are going to be conflicts over human values in the non-AGI, non-ASI world too. Delaying AI may prevent them from getting even worse, but there’s still blood flowing over these conflicts without any AI at all. Which is both a limitation of the approach and perhaps a cost in itself.
More generally, if you think your values are going to largely win, you have to trade off caution, consideration for other people’s values, and things like that, against the cost of that win being delayed.[1]
So far as I know, there are no statistics. My only guess is that you’re likely talking about a “lot” of people on each side (if you had to reduce it to two sides, which is of course probably oversimplifying beyond the bounds of reason).
I’ll take your word for it that it’s important to you, and I know that other people have said it’s important to them. Being hung up on that seems deeply weird to me for a bunch of reasons that I could name that you might not care to hear about, and probably another bunch of reasons I haven’t consciously recognized (at least yet).
OK, here’s one for you. An ASI has taken over the world. It’s running some system that more or less matches your view of a “meaningless UBI paradise”. It send one of its bodies/avatars/consciousness nodes over to your house, and it says:
Would you take that?
The core of the offer is that the ASI is willing to refrain from rescuing you from the results of certain failures, if you really want that. Suppose the ASI is willing to edit the details to your taste, so long as it doesn’t unduly interfere with the ASI’s ability to offer other people different deals (so you don’t get to demand “direct human control over the light cone” or the like). Is there any variant that you’d be satisfied with?
Or does having to choose it spoil it? Or is it too specific to that particular part of the elephant?
Yes, actually. One of the very top ones.
It’s clear and graspable.
I don’t agree with it, but it helps with the definition problem, at least as far as you personally are concerned. At least it resolves enough of the definition problem to move things along, since you say that the “elephant” has other parts. Now I can at least talk about “this trunk you showed me and whatever’s attached to it in some way yet to be defined”.
Maybe it’s just an “elephant” thing, but I still get the feeling that a lot of it is a “different people use these words with fundamentally different meanings” thing.
Although I don’t know how anybody could confidently expect to win at this point.
… and I’m already seeing the can of worms opening up around your kids’ choices, but let’s ignore that for the moment…
Yeah, I’m curious. The only reason I know that makes sense for not caring about that is pretty extreme negative utilitarianism that you apparently don’t agree with ? (if you have agency you can fail in your plans and suffer, and That Is Not Allowed)
Given an AGI, there’s a big concern whether this is a true proposal, or a lie going from “and secretly a vast majority of the rest of that world is a prop, you don’t really risk anything” to “I’m going to upload you to what is essentially a gigantic MMO”. But I think it’s not the purpose of your thought experiment ?
I think there are better intermediate places between “medieval farmer” and “UBI paradise”, if it’s what you mean by “details to your tastes”. Current society. Some more SF-like setups like : “we give you and some other space-settler-minded individuals that galaxy other there and basic space tech, do whatever you want”. Some of those I go there without a second thought. I pretty much like current society, actually, setting AGI-builders aside (and yes, limiting to developed world). Medieval farmer life is genuinely sufficiently terrible that I’m on the fence between death and medieval farmer.
But yes, between just medieval farmer and UBI paradise, I’ll probably give a test to UBI paradise (I might be proven wrong and was too lacking in imagination to see all the wonderful things there !), milk the few drop of util that I expect to still find there, but my current expectations is I’m going to bail out at some point.
There are various levels of “spoils it”. Your proposal is on the very low ends of spoiling it. Essentially negligible, but I think I can empathize with people thinking “it’s already too high levels of spoiling”. On increasing levels there are “and you can decide to go back to UBI society anytime” (err… that’s pretty close to just being a big IRL role-playing game, isn’t it ?) up to “and I can give you a make-a-wish button” (“wait, that’s basically what I wanted to escape”).
And it’s pretty much a given that it’s a level of Agency/Meaningfulness that is going to be lost even in the worlds where the Agency/Meaningful crowd get most of what they want, as part of bargaining, unless we somehow end up just blindly maximizing Agency/Meaningfulness. Which to be clear would be a pretty awful outcome.
OK...
Some of this kind of puts words in your mouth by extrapolating from similar discussions with others. I apologize in advance for anything I’ve gotten wrong.
What’s so great about failure?
This one is probably the simplest from my viewpoint, and I bet it’s the one that’s you’ll “get” the least. Because it’s basically my not “getting” your view at a very basic level.
Why would you ever even want to be able to fail big, in a way that would follow you around? What actual value do you get out of it? Failure in itself is valuable to you?
Wut?
It feels to me like a weird need to make your whole life into some kind of game to be “won” or “lost”, or some kind of gambling addiction or something.
And I do have to wonder if there may not be a full appreciation for what crushing failure really is.
Failure is always an option
If you’re in the “UBI paradise”, it’s not like you can’t still succeed or fail. Put 100 years into a project. You’re gonna feel the failure if it fails, and feel the success if it succeeds.
That’s artificial? Weak sauce? Those aren’t real real stakes? You have to be an effete pampered hothouse flower to care about that kind of made-up stuff?
Well, the big stakes are already gone. If you’re on Less Wrong, you probably don’t have much real chance of failing so hard that you die, without intentionally trying. Would your medieval farmer even recognize that your present stakes are significant?
… and if you care, your social prestige, among whoever you care about, can always be on the table, which is already most of what you’re risking most of the time.
Basically, it seems like you’re treating a not-particularly-qualitative change as bigger than it is, and privileging the status quo.
What agency?
Agency is another status quo issue.
Everybody’s agency is already limited, severely and arbitrarily, but it doesn’t seem to bother them.
Forces mostly unknown and completely beyond your control have made a universe in which you can exist, and fitted you for it. You depend on the fine structure constant. You have no choice about whether it changes. You need not and cannot act to maintain the present value. I doubt that makes you feel your agency is meaningless.
You could be killed by a giant meteor tomorrow, with no chance of acting to change that. More likely, other humans could kill you, still in a way you couldn’t influence, for reasons you couldn’t change and might never learn. You will someday die of some probably unchosen cause. But I bet none of this worries you on the average day. If it does, people will worry about you.
The Grand Sweep of History is being set by chaotically interacting causes, both natural and human. You don’t know what most of them are. If you’re one of a special few, you may be positioned to Change History by yourself… but you don’t know if you are, what to do, or what the results would actually be. Yet you don’t go around feeling like a leaf in the wind.
The “high impact” things that you do control are pretty randomly selected. You can get into Real Trouble or gain Real Advantages, but how is contingent, set by local, ephemeral circumstances. You can get away with things that would have killed a caveman, and you can screw yourself in ways you couldn’t easily even explain to a caveman.
Yet, even after swallowing all the existing arbitrariness, new arbitrariness seems not-OK. Imagine a “UBI paradise”, except each person gets a bunch of random, arbitrary, weird Responsibilities, none of them with much effect on anything or anybody else. Each Responsibility is literally a bad joke. But the stakes are real: you’re Shot at Dawn if you don’t Meet Your Responsibilities. I doubt you’d feel the Meaning very strongly.
… even though some of the human-imposed stuff we have already can seem too close to a bad joke.
The upshot is that it seems the “important” control people say they need is almost exactly the control they’re used to having (just as the failures they need to worry about are suspiciously close to failures they presently have to worry about). Like today’s scope of action is somehow automatically optimal by natural law.
That feels like a lack of imagination or flexibility.
And I definitely don’t feel that way. There are things I’d prefer to keep control over, but they’re not exactly the things I control today, and don’t fall neatly into (any of) the categories people call “meaningful”. I’d probably make some real changes in my scope of control if I could.
What about everybody else?
It’s all very nice to talk about being able to fail, but you don’t fail in a vaccuum. You affect others. Your “agentic failure” can be other people’s “mishap they don’t control”. It’s almost impossible to totally avoid that. Even if you want that, why do you think you should get it?
The Universe doesn’t owe you a value system
This is a bit nebulous, and not dead on the topic of “stakes”, and maybe even a bit insulting… but I also think it’s related in an important way, and I don’t know a better way to say it clearly.
I always feel a sense that what people who talk about “meaning” really want is value realism. You didn’t say this, but this is what I feel like I see underneath practically everybody’s talk about meaning:
Say that or not, believe it or not, feel it or not, your needs, real or imagined, don’t mean anything to the Laws that Govern All. They don’t care to define Real Value, and they don’t.
You get to decide what matters to you, and that means you have to decide what matters to you. Of course what you pick is ultimately caused by things you don’t control, because you are caused by things you don’t control. That doesn’t make it any less yours. And it won’t exactly match anybody else.
… and choosing to need the chance to fail, because it superficially looks like an externally imposed part of the Natural Order(TM), seems unfortunate. I mean, if you can avoid it.
“But don’t you see, Sparklebear? The value was inside of YOU all the time!”
What I sense from this is that what you’re not getting is that my value system is made of tradeoff of let’s call it “Primitive Values” (ie one that are at least sufficiently universal in human psychology that you kind of can describe them with compact words).
I obviously don’t value failure. If I did I would plan for failure. I don’t. I value/plan for success.
But if all plans ultimately lead to success, what of use/fun/value is planning ?
So failure has to be part of the territory, if I want my map-making skills to… matter ? make sense ? make a difference ?
My first reaction was “no, no, gambling addiction and speaking of Winning at Life like Trump could looks like terribly uncharitable”.
My second reaction is you’re pretty much directionaly right and into the path of understanding ? Just put it in a bit more charitable way ? We have been shaped by Evolution at large. By winners in the great game of Life, red in blood and claws. And while playing don’t mean winning, not playing certainly means losing. Schematically, I can certainly believe that “Agency” is the shard inside of me that comes out of that outer (intermediate) objective “enjoy the game, and play to win”. I have the feeling that you have pretty much lost the “enjoy the game” shard, possibly because you have a mutant variant “enjoy ANY game” (and you know what ? I can certainly imagine a “enjoy ANY game” variant enjoying UBI paradise).
This gives me another possible source/model of inspiration, the good old “It’s the Journey that matters, not the Destination”.
Many video games have a “I win” cheatcode. Players at large don’t use it. Why not, if winning the game is the goal ? And certainly all of their other actions are consistent with the player want to win the game. He’s happy when things go well, frustrated when they go wrong, At the internet age, they look at guides, tips. They will sometimes hand the controller to a better player after being stuck. And yet they don’t press the “I win” button.
You are the one saying “do you enjoy frustration or what ? Just press the I Win button”. I’m the one saying “What are you saying ? He’s obviously enjoying the game, isn’t he ?”.
I agree that the Destination of Agency is pretty much “there is no room left for failure” (and pretty much no Agency left). This is what most of our efforts go into : better plans for a better world with better odds for us. There’s some Marxist vibes “competition tend to reduce profit over time in capitalist economies, therefore capitalism will crumble under the weight of its own contradiction”. If you enjoy entrepreneurship in a capitalistic economy, the better you are at it, the stronger you drive down profits. “You: That seems to indicate that entrepreneurs hate capitalism and profits, and would be happy in a communist profit-less society. Me: What ?”. Note we have the same thing as “will crumble under the weights…” in the game metaphor : when the player win, it’s also the end of the game.
So let’s go a bit deeper into that metaphor : the game is Life. Creating an ASI-driven UBI paradise is discovering that the developer created a “I Win” button. Going into that society is pressing that button. Your position I guess is “well, living well in an UBI paradise is the next game”. My position is “no, the UBI paradise is still in the same game. It’s akin to the Continue Playing button in a RTS after having defeated all opponents on the map. Sure, you can play in the sense you can still move units around gather resources and so on but c’mon, it’s not the same, and I can already tell how much it’s going to be much less fun, simply because it’s not what the game was designed for. There is no next game. We have finished the only game we had. Enjoy drawing fun patterns with your units while you can enjoy it ; for me I know it won’t be enjoyable for very long.”
Oh, this is another problem I thought of, then forgot.
This sounds like a positive nightmare to me.
It seems a hard-to-avoid side-effect of losing real stakes/agency.
In our current society, you can improve the life of others around you in the great man-vs-nature conflict. AKA economics is positive-sum (I think you mentioned something about some people talking about Meaningfulness giving you an altruistic definition ? There we are !).
Remove this and you only have man-vs-man conflicts (gamified so nobody get hurt). Those are generally zero-sum, just positional. When you gain a rank in the Chess ladder, another one lose one.
No place for positive-sum games seems a bad place to live. Don’t know at what extent it is fixable in the UBI-paradise (does cooperative, positive-sum games fix this ? I’m not sure how much the answer is “obviously yes” or “it’s just a way to informally make a ranking of who is the best player, granting status, so it’s actually zero sum”), or how much is it just going to end up Agency in another guise.
My first reaction is “the shard of Agency inside me has been created by Evolution ; the definition of the game I’m supposed to enjoy and its scope draws from there. Of course it’s not going to care about that kind of stuff”.
My second reaction is: “I certainly hope my distant descendants will change the fine-structure constant of the universe; it looks possible, and it’s a way to avoid the heat death of the universe” (https://www.youtube.com/watch?v=XhB3qH_TFds&list=PLd7-bHaQwnthaNDpZ32TtYONGVk95-fhF&index=2). I don’t know how much of a nitpick that is (I certainly notice that I prefer “my distant descendants” to “the ASI supervisor of the UBI paradise”).
This is the split between Personal Agency and Collective Agency. At our current level of capabilities, the two don’t differ very much. They certainly will later.
Since we live in society, and most people tend not to like being killed, we shape societies such that such events tend not to happen (mostly via punishment and socialization). Each individual tries to steer society to the best of their capabilities. If we collectively end up in a place where there are no murders, people like me consider this a success. Otherwise, a failure.
Politics, advocacy, leading by example, guided by things like Game Theory, Ethics, History. Those are very much not outside the scope of Agency. They would be only if individuals had absolutely zero impact on society.
That’s why, for me and at my current level of speculation, I think there are two Bright Red Lines for a post-ASI future.
One: if there is no recognizable Mormon society in a post-ASI future, something Has Gone Very Wrong. Mormons tend to value their traditional way of life pretty heavily (which includes agency). Trampling them in particular probably indicates that we are generally trampling an awful lot of values actually held by a lot of actual people.
Two: if there is no recognizable UBI paradise in a post-ASI future, something Has Gone Very Wrong. For pretty much the same reason.
(there is plausibly a similar third red line for transhumanists, but they cause serious security/safety challenges for the rest of the universe, so things get more complicated there, and I found no way to articulate such a red line for them).
The corollary being: the (non-terribly-gone-wrong) post-ASI future is almost inevitably a patchwork of different societies with different tradeoffs. Unless One Value System wins, one which is low on Diversity on top of that. Which would be terrible.
To answer you: I should get that because I’m going to live with other people who are okay with me getting it, because they want to get it too.
I entirely agree with you here. It’s all inside us. If there were some Real, Really Objectively Meaningful Values out there, I would trust a technically aligned ASI to be able to recognize them, and I would be much less concerned about the potential loss of Agency/Meaningfulness/whatever we call it. Alas, I don’t believe that’s the case.
Mostly some self-description, since you seem to want a model of me. I did add an actual disagreement (or something) at the end, but I don’t think there’ll be much more for me to say about it if you don’t accept it. I will read anything you write.
More like “enjoy the process”. Why would I want to set a “win” condition to begin with?
I don’t play actual games at all unless somebody drags me into them. They seem artificial and circumscribed. Whatever the rules are, I don’t really care enough about learning them, or learning to work within them, unless it gives me something that seems useful for whatever random conditions may come up later, outside the game. That applies to whatever the winning condition is, as much as to any other rule.
Games with competition tend to be especially tedious. Making the competition work tends to further constrain the design of the rules, so they’re more boring. And the competition can make the other people involved annoying.
As far as winning itself… Whee! I got the most points! That, plus whatever coffee costs nowadays, will buy me a cup of coffee. And I don’t even like coffee.
I study things, and I do projects.
While I do evaluate project results, I’m not inclined to bin them as “success” or “failure”. I mean, sure, I’ll broadly classify a project that way, especially if I have to summarize it to somebody else in a sentence. But for myself I want more than that. What exactly did I get out of doing it? The whole thing might even be a “success” if it didn’t meet any of its original goals.
I collect capabilities. Once I have a capability, I often, but not always, lose interest in using it, except maybe to get more capabilities. Capabilities get extra points for being generally useful.
I collect experiences when new, pleasurable, or interesting ones seem to be available. But just experiences, not experiences of “winning”.
I’ll do crossword puzzles, but only when I have nothing else to do and mostly for the puns.
Even I would understand that as not, actually, you know, winning the game. I mean, a game is a system with rules. No rules, no game, thus no win. And if there’s an auto-win button that has no reason to be in the rules other than auto-win, well, obvious hole is obvious.
It’s just that I don’t care to play a game to begin with.
If something is gamified, meaning that somebody has artificially put a bunch of random stuff I don’t care about between me and something I actually want in real life, then I’ll try to bypass the game. But I’m not going to do that for points, or badges, or “achievements” that somebody else has decided I should want. I’m not going to push the “win” button. I’m just not gonna play. I loathe gamification.
I see it not as an “I win” button, but as an “I can do the stuff I care about without having to worry about which random stupid bullshit other people might be willing to pay me for, or about tedious chores that don’t interest me” button.
Sure, I’m going to mash that.
And eventually maybe I’ll go more transcendent, if that’s on offer. I’m even willing to accept certain reasonable mental outlooks to avoid being too “unaligned”.
I don’t even believe “Collective Agency” is a thing, let alone a thing I’d care about. Anything you can reasonably call “agency” requires preferences, and intentional, planned, directed, well, action toward a goal. Collectives don’t have preferences and don’t plan (and also don’t enjoy, or even experience, either the process or the results).
Which, by the way, brings me to the one actual quibble I’m going to put in this. And I’m not sure what to do with that quibble. I don’t have a satisfactory course of action and I don’t think I have much useful insight beyond what’s below. But I do know it’s a problem.
I was once involved in a legal case that had a lot to do with some Mormons. Really they were a tiny minority of the people affected, but the history was such that the legal system thought they were salient, so they got talked about a lot, and got to talk themselves, and I learned a bit about them.
These particular Mormons were a relatively isolated polygynist splinter sect that treated women, and especially young women, pretty poorly (actually I kind of think everybody but the leaders got a pretty raw deal, and I’m not even sure the leaders were having much of a Good Time(TM)). It wasn’t systematic torture, but it wasn’t Fun Times either. And the people on the bottom had a whole lot less of what most people would call “agency” than the people on the top.
But they could show you lots of women who truly, sincerely wanted to stay in their system. That was how they’d been raised and what they believed in. And they genuinely believed their Prophet got direct instructions from God (now and then, not all the time).
Nobody was kept in chains. Anybody who wanted to leave was free to walk away from their entire family, probably almost every person they even knew by name, and everything they’d ever been taught was important, while defying what at least many of them truly believed was the literal will of God. And of course move somewhere where practically everybody had a pretty alien way of life, and most people were constantly doing things they’d always believed were hideously immoral, and where they’d been told people were doing worse than they actually were.
They probably would have been miserable if they’d been forcibly dragged out of their system. They might never have recovered. If they had recovered, it might well have meant they’d had experiences that you could categorize as brainwashing.
It would have been wrong to yank them out of their system. So far I’m with you.
But was it right to raise them that way? Was it right to allow them to be raised that way? What kind of “agency” did they have in choosing the things that molded them? The people who did mold them got agency, but they don’t seem to have gotten much.
As I think you’ve probably figured out, I’m very big on individual, conscious, thinking, experiencing, wanting agents, and very much against giving mindless aggregates like institutions, groups, or “cultures”, anywhere near the same kind of moral weight.
From my point of view, a dog has more right to respect and consideration than a “heritage”. The “heritage” is only important because of the people who value it, and that does not entitle it to have more, different people fed to it. And by this I specifically mean children.
A world of diverse enclaves is appealing in a lot of ways. But, in every realistic form I’ve been able to imagine, it’s a world where the enclaves own people.
More precisely, it’s a world where “culture” or “heritage”, or whatever, is used as an excuse for some people not only to make other people miserable, but to condition them from birth to choose that misery. Children start to look suspiciously like they’re just raw material for whatever enclave they happen to be born in. They don’t choose the enclave, not when it matters.
It’s not like you can just somehow neutrally turn a baby into an adult and then have them “choose freely”. People’s values are their own, but that doesn’t mean they create those values ex nihilo.
I suppose you could fix the problem by switching to reproduction by adult fission, or something. But a few people might see that as a rather abrupt departure, maybe even contrary to their values. And kids are cute.
The words you’re looking for (perhaps): power, influence