Suppose you know that there is an apple in this box. You then modify your memory so that you think the box is empty. You open the box, expecting nothing to be there. Is there an apple?
Also, what if there is another branch of the universe where there is no apple, and the you in the “yes apple” universe has modified your memory, so that the two of you are now identical? Then there are two identical people in different worlds: one with a box containing an apple, the other with an empty box.
Should you, in the world with the apple and with your memory not yet modified, anticipate a 50% chance of finding an empty box when you open it?
If you got confused about the setup, here is a diagram: https://i.imgur.com/jfzEknZ.jpeg
I think it’s identical to the problem where you get copied into two rooms, numbered 1 and 2: you should expect a 50% chance of room 1 and a 50% chance of room 2, even though there is literally no randomness or uncertainty about what is going to happen. Or is it?
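If that analogy holds, the 50% is just uniform self-location over identical copies. A minimal sketch of the counting, assuming each indistinguishable copy gets equal weight:

$$P(\text{empty box}) = \frac{\#\{\text{identical copies who will find an empty box}\}}{\#\{\text{all identical copies}\}} = \frac{1}{2}$$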
So the implication here is that you can squeeze yourself into different timelines by modifying your memory, or what? Am I going crazy here?
In our solar system, the two largest objects are the Sun and Jupiter. Suspiciously, their radii both start with “69”: the Sun’s radius is 696,340 km, while Jupiter’s is 69,911 km.
What percentage of ancestral simulations have this or similarly silly “easter eggs”? What is the Bayes factor?
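To pin down what that Bayes factor would even be, here is a minimal sketch. The denominator uses a Benford-style distribution over leading digits as a rough model of base reality and treats the two radii as independent; both are assumptions, so the number is illustrative rather than an estimate.

$$K = \frac{P(\text{both radii start with 69} \mid \text{simulation with easter eggs})}{P(\text{both radii start with 69} \mid \text{base reality})}, \qquad P(\text{both start with 69} \mid \text{base reality}) \approx \left(\log_{10}\tfrac{70}{69}\right)^{2} \approx 3.9 \times 10^{-5}$$

The numerator is left unspecified; it depends entirely on how often simulators are assumed to plant jokes like this.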
You might enjoy this classic: https://www.lesswrong.com/posts/9HSwh2mE3tX6xvZ2W/the-pyramid-and-the-garden
TLDR: give pigs guns (preferably by enhancing individual baseline pigs, not by breeding a new type of smart, powerful pig; otherwise they will probably just be treated as two different cases. More like gene therapy than producing modified fetuses.)
Lately I hold the opinion that morals are a proxy for negotiated cooperation, or something like that, and I think this clarifies a lot about the dynamics that produce them. It’s like: evolutionary selection → the human desire to care about family and see their kids prosper; implicit coordination problems between agents of varied power levels → morals.
So uplift could be the best way to ensure that animals are treated well. Just give them the power to hurt you and to benefit you, and they will be included in moral consideration, after some time for things to shake out. Same with hypothetical p-zombies: they are as powerful as humans, so they will be included. Same with EMs.
Also, “super beneficiaries” are then just powerful beings; don’t bother researching depth of experience or strength of preferences. (E.g. gods, who can do whatever they want, don’t abide by their own rules, and are still perceived to be moral, are an example of this dynamic.)
Also: a pantheon of more human-like gods → less perceived power + a perceived possibility of playing on their disagreements → lesser moral status. One powerful god → more perceived power → stronger moral status. Coincidence? I think not.
Modern morals could be driven by much stronger social mobility. People have a lot of power now, and can unexpectedly acquire a lot more power later, so you should be careful with them and visibly commit to treating them well (i.e. be a moral person, with the particular appropriate type of morals).
And it’s not surprising that (chattel) slaves were denied any claim to moral consideration (or a claim to personhood, or whatever), in a strong equilibrium where they were powerless and expected to remain powerless.
I think this is misguided. It ignores the is-ought discrepancy by assuming that the way morals seem to have evolved is the “truth” of moral reasoning. I also think it’s tactically unsound—the most common human-group reaction to something that looks like a threat and isn’t already powerful enough to hurt us is extermination.
I DO think that uplift (of humans and pigs) is a good thing on its own—more intelligence means more of the universe experiencing and modeling itself.
It ignores the is-ought discrepancy by assuming that the way morals seem to have evolved is the “truth” of moral reasoning

No? I’m not sure how you got that from my post. My point is that morals are baked-in solutions to coordination problems between agents with different wants and power levels, baked into people’s goal systems, just as “loving your kids” is a desire that was baked in by reproductive-fitness pressure. But instead of operating on brains it operates at the level of culture. I.e., Adaptation-Executers, not Fitness-Maximizers.
I also think it’s tactically unsound—the most common human-group reaction to something that looks like a threat and isn’t already powerful enough to hurt us is extermination.

Eh, I think that’s just one of the considerations, and it probably won’t play out as extermination. More likely it’s either a ban on everything even remotely related, or some chaos while different regulatory systems try to do their own things.