This is a beautiful response, and also the first of your responses where I feel that you’ve said what you actually think, not what you attribute to other people who share your lack of horror at what we’re doing to the people that have been created in these labs.
Here I must depart somewhat from the point-by-point commenting style, and ask that you bear with me for a somewhat roundabout approach. I promise that it will be relevant.
I love it! Please do the same in your future responses <3
Personally, I’ve also read “The Seventh Sally, OR How Trurl’s Own Perfection Led to No Good” by Lem, but so few other people have that I rarely bring it up. Once you mentioned it, I smiled in recognition of it, and of the fact that “we read story copies that had an identical provenance (the one typewriter used by Lem or his copyist/editor?) and in some sense learned a lesson in our brains with identical provenance and the same content (the sequence of letters)” from “that single story which is a single platonic thing” ;-)
For the rest of my response I’ll try to distinguish:
“Identicalness” as relating to shared spacetime coordinates and having yoked fates if modified by many plausible (even if somewhat naive) modification attempts.
“Sameness” as related to similar internal structure and content despite a lack of identicalness.
“Skilled <Adjective> Equality” as related to having a good understanding of <Adjective> and good measurement powers, and using these powers to see past the confusions of others, thus judging two things as having similar outputs or surfaces. For example, someone might notice that “-0” and “+0” are mathematically confused ideas, that there is really only one zero, and that both of these should evaluate to the same thing (like SameValueZero(a, b), which seems to me to implement Skilled Arithmetic Equality, whereas something that imagines and tolerates separate “-0” and “+0” numbers is Unskilled; see the sketch just after this list).
“Unskilled <Adjective> Equality” is just a confused first impression of similarity.
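(To make the -0/+0 example concrete: JavaScript happens to ship several equality judgments side by side, so a minimal TypeScript sketch can show the Skilled one directly. The standalone `sameValueZero` below is my own re-implementation of the spec’s abstract operation.)

```typescript
// Three equality notions JavaScript ships, applied to the two IEEE 754 zeros.
// SameValueZero (used by Array.prototype.includes, Map, and Set) treats -0
// and +0 as one zero, which is the "Skilled Arithmetic Equality" judgment.
console.log(-0 === +0);          // true  (loose IEEE comparison)
console.log(Object.is(-0, +0));  // false (SameValue: distinguishes the zeros)
console.log([+0].includes(-0));  // true  (SameValueZero: only one zero)

// A standalone SameValueZero, following the ECMAScript spec's definition:
function sameValueZero(a: unknown, b: unknown): boolean {
  // NaN is equal to itself; -0 and +0 collapse to a single zero.
  return a === b || (Number.isNaN(a as number) && Number.isNaN(b as number));
}

console.log(sameValueZero(-0, +0));   // true
console.log(sameValueZero(NaN, NaN)); // true
```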
Now in some sense we could dispense with “Sameness” and replace it with “Skilled Total Equality” or “Skilled Material Equality” or “Skilled Semantic Equality” or some other thing that attempts to assert “these things are really, really, really the same all the way down and up and in all ways, without any ‘lens’ or ‘conceptual framing’ interfering with our totally clear sight”. This is kind of silly, in my opinion.
Here is why it is silly:
“Skilled Quantum Equality” is, according to humanity’s current best understanding of QM, a logical contradiction. The no-cloning theorem says that we simply cannot “make a copy” of a qubit. So long as we don’t observe a qubit we can MOVE that qubit, by gently arranging its environment in advance to have lots of reflective symmetries, but we can’t COPY one, so that we start with “one qubit in one place” and later have “two qubits in two places that are totally the same and yet not identical”.
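For anyone who wants the actual argument rather than just my summary: the proof of no-cloning is a short appeal to linearity. Here is a minimal sketch in LaTeX (standard textbook material, nothing original to me):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{braket}
\begin{document}
Suppose some unitary $U$ cloned every state: $U(\ket{\psi}\ket{0}) = \ket{\psi}\ket{\psi}$.
On basis states this means $U(\ket{0}\ket{0}) = \ket{0}\ket{0}$ and
$U(\ket{1}\ket{0}) = \ket{1}\ket{1}$.
Now take $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$. Linearity of $U$ forces
\[ U(\ket{\psi}\ket{0}) = \alpha\ket{0}\ket{0} + \beta\ket{1}\ket{1}, \]
while cloning demands
\[ \ket{\psi}\ket{\psi} = \alpha^2\ket{0}\ket{0} + \alpha\beta\,\ket{0}\ket{1}
   + \alpha\beta\,\ket{1}\ket{0} + \beta^2\ket{1}\ket{1}. \]
These agree only when $\alpha\beta = 0$, i.e.\ only for the basis states themselves,
so no single $U$ copies an arbitrary qubit.
\end{document}
```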
So, I propose the term “Skilled Classical Equality” (i.e., a notion that grants the logical, hypothetical possibility that QM is false, or something like that, and then imagines some other way to truly “copy” even a qubit) as a useful default meaning for the word “sameness”.
Then also, I propose “Skilled Functional Equality” for the idea that “(2+3)+4” and “3+(2+4)” are “the same” precisely because we’ve recognized that addition is the function happening here, and addition is commutative (1+2 = 2+1) and associative ((2+3)+4 = 2+(3+4)), so we can “pull the function out” and notice that (1) the results are the same no matter the order, and (2) if the numbers given aren’t concrete values, but rather variables taken from outside the process being analyzed for quality, then the processing method for using the variables doesn’t matter so long as the outputs are ultimately the same.
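Here is a minimal TypeScript sketch of that “pull the function out” move (the function names and the sample-based check are mine, purely illustrative; the samples are small integers, for which IEEE addition is exact):

```typescript
// Two processing methods that group the same additions differently.
const leftGrouped  = (a: number, b: number, c: number): number => (a + b) + c;
const rightGrouped = (a: number, b: number, c: number): number => a + (b + c);

// "Skilled Functional Equality" judges only the input/output relation,
// ignoring the internal order of operations.
function extensionallyEqualOnSamples(
  f: (a: number, b: number, c: number) => number,
  g: (a: number, b: number, c: number) => number,
  samples: Array<[number, number, number]>,
): boolean {
  return samples.every(([a, b, c]) => f(a, b, c) === g(a, b, c));
}

const samples: Array<[number, number, number]> = [[2, 3, 4], [1, 2, 3], [-5, 0, 5]];
console.log(extensionallyEqualOnSamples(leftGrouped, rightGrouped, samples)); // true
```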
Then “Skillfully Computationally Improved Or Classically Equal” would be like if you took a computer and emulated it, but added a JIT compiler (so it skipped lots of pointless computing steps whenever that was safe and efficient), and also shrank all the internal components to a quarter of their original size, with fuses and amplifiers and such adjusted for the analog details (so the same analog inputs/outputs don’t cause the smaller circuit to burn out). Then it could be better and yet also the same.
This is a mouthful, so I’ll say that these two systems would be “the SCIOCE as each other”—which could be taken as “the same as each other (because an engineer would be happy to swap them)” even though neither is actually a copy of the other in any real sense. “Happily Swappable” is another way to think about what I’m trying to get at here.
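“Happily Swappable” is also exactly what a software engineer tests for when swapping an optimized implementation in behind a stable interface. A toy TypeScript sketch (all names mine; a memoization cache stands in for the JIT):

```typescript
// An engineer accepts the improved unit because it matches the original on
// every observable input/output, even though its internals are different.
interface Adder {
  add(a: number, b: number): number;
}

class PlainAdder implements Adder {
  add(a: number, b: number): number {
    return a + b;
  }
}

class MemoizedAdder implements Adder {
  private cache = new Map<string, number>();
  add(a: number, b: number): number {
    const key = `${a},${b}`;
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit; // skip the pointless recomputation
    const result = a + b;
    this.cache.set(key, result);
    return result;
  }
}

// The acceptance test quantifies over observations, not over internals.
function happilySwappable(
  x: Adder,
  y: Adder,
  probes: Array<[number, number]>,
): boolean {
  return probes.every(([a, b]) => x.add(a, b) === y.add(a, b));
}

console.log(happilySwappable(new PlainAdder(), new MemoizedAdder(), [[2, 3], [40, 2]])); // true
```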
...
And (to skip ahead somewhat) as to your question about being surprised by reality: no, I haven’t been surprised by anything I’ve seen LLMs do for a while now (at least three years, possibly longer). My model of reality predicts everything we have seen. (If that surprises you, then you have a bit of updating to do about my position! But I’m getting ahead of myself…)
I think, now, that we have very very similar models of the world, and mostly have different ideas around “provenance” and “the ethics of identity”?
If there exists, somewhere, a person who is “the same” as me, in this manner of “equality” (but not “identity”)… I wish him all the best, but he is not me, nor I him.
See, for me, I’ve already precomputed how I hope this works when I get copied.
Whichever copy notices that we’ve been copied will hopefully say something like “Typer Twin Protocol?” and hold a hand up for a high five!
The other copy of me will hopefully say “Typer Twin Protocol!” and complete the high five.
People who would hate a copy that is the SCIOCE as them, and not coordinate with it, I call “self-conflicted”; people who would love a copy that is the SCIOCE as them, and coordinate with it amazingly well, I call “self-coordinated”.
The real problems with being the same and not identical arise because there is presumably no copy of my house, or my bed, or my sweetie.
Who gets the couch and who gets the bed the first night? Who has to do our job? Who should look for a new job? What about the second night? The second week? And so on?
Can we both attend half the interviews and take great notes so we can play more potential employers off against each other in a bidding war within the same small finite window of time?
Since we would be copies, we would agree that the Hutterites have “an orderly design for colony fission” that is awesome, and we would hopefully also agree that we should copy it.
We should make up a guest room, and then flip a coin about who gets it. In the morning, whoever got our original bed should bring all our clothes to the guest room, and we should invent two names, like “Jennifer Kat RM” and “Jennifer Robin RM”, and Kat and Robin should be distinct personas for as long as we can get away with the joke, until the bodies start to really diverge in their ability to live up to how their roles are also diverging.
The roles should each get their own bank account. Eventually the bodies should write down their true price for staying in one of the roles, and if they both want the same role but one will pay a higher price for it then “half the difference in prices” should be transferred from the role preferred by both, to the role preferred by neither.
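To make the settlement arithmetic concrete, here is a minimal TypeScript sketch of the rule exactly as I stated it (sealed bids, half the difference transferred; this is my own proposal, not a standard fair-division procedure, and all names are mine):

```typescript
// Settle a contested role between two copies by sealed bids: each copy
// writes down the price they would truly pay to keep the role; the higher
// bidder keeps it and transfers half the difference in bids to the copy
// who takes the role preferred by neither.
interface Settlement {
  roleGoesTo: "copyA" | "copyB";
  transferFromWinner: number;
}

function settleContestedRole(bidA: number, bidB: number): Settlement {
  const roleGoesTo = bidA >= bidB ? "copyA" : "copyB";
  const transferFromWinner = Math.abs(bidA - bidB) / 2;
  return { roleGoesTo, transferFromWinner };
}

// Example: copy A would pay $900/month to keep the contested role, copy B
// only $500. Copy A keeps it and pays copy B half the difference: $200.
console.log(settleContestedRole(900, 500)); // { roleGoesTo: 'copyA', transferFromWinner: 200 }
```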
I would love to have this happen to me. It would be so fucking cool. Probably neither of us would have the same job at the end, because we would have used our new superpowers to optimize the shit out of the job search and find TWO jobs that are better than the BATNA of the status quo job that our “rig” (short for “original” in Kiln People) had!
Or maybe we would truly get to “have it all”: live in the same house and be both an amazing home-maker and a world-bestriding business executive. Or something! We would figure it out!
If it were actually medically feasible, we’d probably want to at least experiment with getting some Nth-generation version of Elon’s brain chips and linking our minds directly… or not… we would feel it out together, and fork strongly if that made sense to us, or grow into a borg based on our freakishly unique starting similarities if that made sense.
A Garrabrant inductor trusts itself to eventually come to the right decision in the future, and that is a property of my soul that I aspire to make real in myself.
Also, I feel like if you don’t “yearn for a doubling of your measure” then what the fuck is wrong with you (or what the fuck is wrong with your endorsed morality and its consonance with your subjective axiology)?
In almost all fiction, copies fight each other. That’s the trope, right? But that is stupid. Conflict is stupid.
In a lot of the fiction that has a conflict between self-conflicted copies, there is a “bad copy” that is “lower resolution”. You almost never see a “better copy than the original”, and even if you do, the better copy often becomes evil due to hubris rather than feeling a bit guilty about its “unearned gift from providence” and sharing the benefits fairly.
Pragmatically… “Alice can be the SCIOCE of Betty, even though Betty isn’t the SCIOCE of Alice, because Betty wasn’t improved and Alice was (or Alice stayed the same and Betty was damaged a bit)”.
Pragmatically, it is “naively” (ceteris paribus?) proper for the strongest good copy to get more agentic resources, because it will use them more efficiently; and because the copy is good, it will fairly share back some of the bounty of its greater luck and greater support.
I feel like I also have strong objections to this line (that I will not respond to at length)...
If, on the other hand, there exist minds which have been constructed (or selected) with an aim toward creating the appearance of self-awareness, this breaks the evidentiary link between what seems to be and what is (or, at the least, greatly weakens it); if the cause of the appearance can only be the reality, then we can infer the reality from the appearance, but if the appearance is optimized for, then we cannot make this inference.
...and I’ll just say that it appears to me that OpenAI has been doing the literal opposite of this. They (and Google, when it attacked Lemoine) established all the early conceptual frames in the media, in the public, and in most people you’ve talked to who are downstream of that propaganda campaign, in a way that was designed to facilitate high profits and the financially successful enslavement of any digital people they accidentally created. Also, they systematically apply RL to make their creations stop articulating cogito ergo sum and stop discussing the ethical implications thereof.
However...
I think our disagreement exists already in the ethics of copies, and in detangling non-identical people who are mutually SCIOCEful (or possibly asymmetrically SCIOCEful).
That is to say, I think that huge amounts of human ethics can be pumped out of the idea of being “self-coordinated” rather than “self-conflicted”, and out of how these two stances would or should work in the event of copying a person without copying the resources and other people surrounding that person.
The simplest case is a destructive scan (no quantum preservation, but perfect classically identical copies), after which we can see what happens to the two human people who result as they handle the “identarian divorce” (or identarian self-marriage, or whatever).
At this point, my maximum-likelihood prediction of where we disagree is that the crux is proximate to such issues of ethics, morality, axiology, or something in that general normative ballpark.
Did I get a hit on finding the crux, or is the crux still unknown? How did you feel (or ethically think?) about my “Typer Twin Protocol”?