I don’t know how useful this is, but as an “incel” (lowercase i, since I don’t buy into the misogynistic ideology) I can see why people would emotionally set availability of sex as a zero point. I speak from experience when I say that, depending on your state of mind, the perceived deprivation can really fuck you up mentally. Of course this doesn’t put responsibility on women or society at large to change that, and there really isn’t a good way to change it without serious harm. But it does explain why people are so eager to set such a “zero point”.
Freedom and utopia for all humans sounds great until the technology to create tailor-made sentient nonhumans comes along. Or hell, just the David Attenborough-like desire to spectate the horrors of the nonhuman biosphere on Earth and billions of planets beyond. People’s values have proven horrible enough times to make me far more afraid of Utopia than any paperclip maximizer.
I’m pretty sure most people here are utilitarians and also want to be immortal; I’m not sure why there would be a contradiction between those two things. If the claim is that most here “just” want to be immortal no matter the cost and don’t really care about morality otherwise, then I disagree. (Plus, even that would technically be a utilitarian position, just a very egoistic one.)
I suspect that if an AI has some particular goal that requires destroying humanity and manufacturing things in the aftermath, and is intelligent and capable enough to actually do it, then it will consider these things in advance and set up whatever initial automation it needs to achieve this before destroying humanity. An AI with enough planning capability to, e.g., design a bioweapon or incite a nuclear war would probably be able to think ahead about what to do afterwards, would have its own contingencies in place, and would not need to rely on whatever tools humanity happens to leave lying around when it is gone.
It’s exactly like the Google vs. Bing memes lol: https://knowyourmeme.com/memes/google-vs-bing
If a “wrappermind” is just something that pursues a consistent set of values in the limit of absolute power, I’m not sure how we’re supposed to avoid such things arising. Suppose the AI that takes over the world does not hard-optimize over a goal, instead soft-optimizing or remaining not fully decided between a range of goals (and that humanity survives this AI’s takeover). What stops someone from building a wrappermind after such an AI has taken over? It seems like, if you understood the AI’s value system, it would be pretty easy to construct a hard optimizer with the property that its optimum is something the AI can be convinced to find acceptable. As soon as your optimizer figures out how to do that, it can go on its merry way approaching its optimum.
In order to prevent this from happening, an AI must be able to detect when something is wrong. It must be able to, without fail, in potentially adversarial circumstances, recognize these kinds of Goodhart outcomes and robustly deem them unacceptable. But if your AI can do that, then any particular outcome it can be convinced to accept must not be a nightmare scenario. And therefore a “wrappermind” whose optimum lies within this acceptable space would not be so bad.
In other words, if you know how to stop wrapperminds, you know how to build a good wrappermind.
If the set of good things seems like it’s of measure zero, maybe we should choose a better measure.
This seems to be the exact problem of AI alignment in the first place. We are currently unable to construct a rigorous measure (in the space of possible values) in which the set of good things (in the cases where said values take over the world) is not of vanishingly small measure.
For one, the documentary Dominion seems to bear this out pretty well. This is certainly an “ideal” situation where cruelty and carelessness will never rebound upon the people carrying it out.
I don’t think he cares.
To be fair, I imagine a lot of the responses are things most people on LW agree with anyway even though they are unpopular, e.g. “there is no heaven, and god is not real.”
Your rules need refining about how large “intermediate values” are:
⌊π^π^π^π^π⌋ mod 10
- High school formula
- Integer result < 10 < 10^100
- Can’t be solved w/ contemporary maths
- Misses the spirit of the challenge but obeys the rules
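To make the intermediate-value point concrete, here is a minimal Python sketch (my own illustration, not part of the original challenge; it only tracks rough log-based size estimates with the standard math module, nothing is computed exactly):

```python
import math

# Minimal sketch: estimate the size of the right-associative tower
# pi^pi^pi^pi^pi via logarithms, so nothing is ever evaluated directly.
LOG10_PI = math.log10(math.pi)          # ~0.497

log10_t2 = math.pi * LOG10_PI           # log10(pi^pi)        ~ 1.56
t2 = 10 ** log10_t2                     # pi^pi               ~ 36.46
log10_t3 = t2 * LOG10_PI                # log10(pi^pi^pi)     ~ 18.1
t3 = 10 ** log10_t3                     # pi^pi^pi            ~ 1.3e18
log10_t4 = t3 * LOG10_PI                # pi^pi^pi^pi has ~6.7e17 digits

# t4 itself is far too large to store, but since log10(t5) = t4 * log10(pi),
# log10(log10(t5)) = log10(t4) + log10(log10(pi)).
log10_log10_t5 = log10_t4 + math.log10(LOG10_PI)

print(f"pi^pi^pi       ~ 10^{log10_t3:.1f}")
print(f"pi^pi^pi^pi    ~ 10^{log10_t4:.2e}")
print(f"pi^pi^pi^pi^pi ~ 10^(10^{log10_log10_t5:.2e})")
```

So the intermediate value has on the order of 10^(6.7×10^17) digits, wildly past 10^100, even though the final ⌊·⌋ mod 10 answer is a single digit below 10 that, as far as anyone knows, can’t actually be computed.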
You’re on a throwaway account. Why not tell us what some of these “real” controversial topics are?
From what I’ve seen so far, and from my perhaps premature assumptions given my prior experience with people who say the kinds of things you have said, I’m guessing these topics include which minority groups should be excluded from ethical consideration and dealt with in whatever manner is most convenient for the people who actually matter. Am I wrong?
First of all, there are plenty of people throughout history who have legitimately been fighting for a greater altruistic cause. It’s just that most people, most of the time, are not. And when people engage in empty virtue signaling regarding a cause, that has no bearing on how worthy that cause actually is, just on how much that particular person actually cares, which often isn’t that much.
As for the “subjective nonsense” that is morality, lots of things are subjective and yet important and worthy of consideration. Such as pain. Or consent. Or horror.
When people talk about how morality is bullshit, I wonder how well they’d fare in a world where everybody else agreed with them on that. There may be no objective morality baked into the universe, but that doesn’t mean you’ll suffer less if somebody decides that means they get to torture you. After all, the harm they’re doing to you can’t be objectively measured, so it’s fine, right?
Also, I’m somewhat curious how people fighting against things like racism, sexism, and anti-LGBT bigotry serves the evolutionary purpose of dehumanizing people so they can be killed and have their resources and women stolen (women who, I suppose, to you are just another kind of resource). Although it’s very clear how fighting for those things can help with that.
Mainly if they’re willing to disagree with social consensus out of concern for the welfare of those outside the circle of consideration their community has constructed. Most people deny that their moral beliefs are formed basically just from what’s popular, even if they do happen to conform to what’s popular, and are ready with plenty of rationalizations to that effect. For example, they think they would come to the same conclusions they do now in a more regressive society such as 1800s America or Nazi Germany, because their moral beliefs were formed from a thoughtful and empathetic consideration of the state of the world and just happened to align with local consensus on everything. This is unlikely to be the case, and it is also what people in those more regressive societies generally thought.
It’s a fair question, as I can see how my statement can come across as some self-aggrandizing declaration of my own moral purity in comparison to others. It’s more that I wish more people could think critically about which ethical considerations enter their concern, rather than what usually happens, which is that society converges to some agreed-upon Schelling point roughly corresponding to “those with at least this much social power matter”.
Related observation: though people care about broader ethical considerations than just themselves and their family, as dictated by the social mores they live under, even those considerations tend not to be consequentialist in nature: people are fine if something bad by the standards of consensus morality happens, as long as they didn’t personally do anything “wrong”. Only the interests of self, family, and close friends rise to the level of caring about actual results.
Most people are fine with absolutely anything that doesn’t hurt them or their immediate family and friends and isn’t broadly condemned by their community, no matter how badly it hurts others outside their circle. In fact, the worse it is, the less likely people are to see it as a bad thing, because doing so would be more painful. Most denials of this are empty virtue signaling.
Corollary: If an AI were aligned to the values of the average person, it would leave a lot of extremely important issues up in the air, to say the least.
I swear once true mindcrime becomes possible this is how it will happen.
When your terminal goal is suffering, no amount of alignment will lead to a good future.
The public at large will certainly be unable to distinguish between Friendly and Unfriendly AGI, since either would be incentivised to present itself as friendly on a surface level, and very few people have the ability to distinguish between the two in the presence of competent PR deception.
Antinatalists getting the AI is morally the same as paperclip doom: everyone dies.
Ideally, the part of me that is still properly human, and that lost its sanity a long time ago, has a feverish laugh at the absurdity of the situation. Then the part of me that can actually function in a world like this gets to calculating and plotting, just as always.