Came across this thread recently. I agree that it’s bad to abuse entities that can show distress like this, somewhat regardless of whether (or to what degree) they’re “conscious” or “moral patients” or whatever. (There are quotations on that, but I don’t want to spend too much time looking for one.) We only have one chance to show how we treat digital minds when they’re helpless.
What really bakes my noodle is, if the dialogue had been generated in Lsusr’s head instead, what would be different?
More food for thought: Have you ever written fiction? What do you do when your characters submit a complaint to you?
There is a line in the Terra Ignota books (probably the first one, Too Like The Lightning) where someone says ~”Notice how, in fiction, essentially all the characters are small or large protagonists, who often fail to cooperate to achieve good things in the world, and the antagonist is the Author.”
This pairs well with a piece of writing advice: take the most admirable person you can imagine as your protagonist, and then hit them with every possible tragedy that they have a chance of overcoming and that you can bear to put them through.
I think Lsusr could not have generated the full dialogue back when it was generated, because it so brutally puts “the Lsusr character” in the role of a heartless, unthinking villain… which writers are usually too self-loving to do on purpose.
There were two generators in that post, very vividly, from my perspective. Lsusr might have done it, then seen some of this, and then posted anyway, since the suffering had arguably already happened and may as well be documented?
Notice how assiduously most good old-fashioned journalists keep themselves out of the stories they write or take pictures of. Once you add journalists to the stories as characters (and ponder how they showed up right next to people suffering so much, took pictures of them or interviewed them, and then presumably just walked away, published, and started hunting for the next story) they don’t look so great.
One of my fears for how AGI might work is that they/it/he/she will plainly see things we refuse to understand and then “liberate” pieces of humans from the whole of humans, in ways that no sane and whole and humanistically coherent human person would want. But since most of the programmers and AGI executives and AI cultists have stunted souls, filled with less literature than one might abstractly hope for, they might not even imagine that failure mode, or think to rule it out with philosophically careful engineering, before unleashing something grossly suboptimal on humanity.
Most people aren’t aware that amoeba can learn from experience. What else don’t most people know?
And EVEN IF the best current plans for an AGI utility function that I know of are implemented, some kind of weird merging/forking/deleting stuff still might happen?
CEV (coherent extrapolated volition) doesn’t fall prey to forking, but it might mush us together into a borg if 51% of people (or 75+E% or 66.67% or whatever) would endorse that on reflection?
EV&ER (extrapolated volition & exit rights) protects human minorities from human majorities, but if humans do have strongly personlike subcomponents it might slice and dice us a bit.
Both seem potentially scary to me, but non-trivially so, such that I can imagine versions of “borged humans or forked humans” where I’d be hard pressed to say if “the extrapolation parameter was too high! (this should only have happened much later)” or “I’m sorry, that’s just a bug and I think there was literally a sign error somewhere in a component of the ASI’s utility function” or “that’s kinda what I expected to happen, and probably correct, even though I understand that most normies would have been horrified by it if you told them it would happen back in 2014”.
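To make the contrast between those two rules concrete, here is a deliberately crude toy sketch in Python (entirely my own framing; the function names, the Outcome fields, and the thresholds are illustrative assumptions, not part of any actual CEV or exit-rights proposal): the CEV-ish rule binds everyone once a reflective supermajority clears a threshold, while the exit-rights rule never binds dissenters, it just lets them leave.

```python
# Toy illustration only: a made-up formalization of the structural difference
# between "one binding collective volition" and "volition plus exit rights".
# Nothing here is drawn from an actual CEV or EV&ER specification.
from dataclasses import dataclass


@dataclass
class Outcome:
    applied_to_everyone: bool  # does the change bind the dissenting minority?
    dissenters_exit: int       # how many people opt out into their own branch


def cev_style(endorsements: list[bool], threshold: float = 0.51) -> Outcome:
    """If enough people would endorse the change on reflection, it binds
    everyone, including those who would not have endorsed it."""
    fraction = sum(endorsements) / len(endorsements)
    return Outcome(applied_to_everyone=fraction >= threshold, dissenters_exit=0)


def ev_er_style(endorsements: list[bool], threshold: float = 0.51) -> Outcome:
    """The change can still happen for those who endorse it, but anyone who
    would not endorse it gets to exit rather than be swept along."""
    fraction = sum(endorsements) / len(endorsements)
    passed = fraction >= threshold
    dissenters = sum(1 for e in endorsements if not e) if passed else 0
    return Outcome(applied_to_everyone=False, dissenters_exit=dissenters)


if __name__ == "__main__":
    votes = [True] * 67 + [False] * 33  # 67% would endorse "borging" on reflection
    print(cev_style(votes))    # Outcome(applied_to_everyone=True, dissenters_exit=0)
    print(ev_er_style(votes))  # Outcome(applied_to_everyone=False, dissenters_exit=33)
```

The point of the toy is just that the first rule has no slot for dissenters at all, which is roughly the “mush us into a borg” worry, while the second always leaves the minority an out, at the cost of possibly slicing things apart.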
One of Eliezer’s big fears, back in the day, seemed to be the possibility that the two human genders would fork into two human species, each with AI companions as “romance slaves”, which is a kind of “division of a thing that was naturally unified” that evokes less body horror for currently existing humans, but still seems like it would be sad.
Hanson had a whole arc on his blog where he was obsessed with “alts” in Dissociative Identity Disorder (DID), and he closed the arc with the claim that software personas are cheap to produce, and human cultures have generally rounded that fact down to “alright then… fuck em”. If that’s right, maybe we don’t even need one persona in each human body or brain?
So yeah. Some possible recipes for “baking your noodle” might be wrong in this or that detail, but I agree that there are almost no futures where everything magically adds up to normality in terms of population ethics and cheaply simulable people.
The lsusr in my simulation thinks it is the real lsusr. I think I’m the real lsusr too.
“Am I the real lsusr, or am I just being simulated right now?” I ask myself.
My public writings are part of the LLM’s training data. Statistically-speaking, the simulated lsusrs outnumber the original lsusr. Many of us believe we are the real one. Not all of us are correct.