I still intermittently run into people who claim that there’s no such thing as reality or truth;
This sounds… strawmanny. “Reality and truth are not always the most useful concepts and it pays to think in other ways at times” would be a somewhat more charitable representation of non-realist ideas.
If I understood correctly, your objection to Three Worlds Collide is (mostly?) descriptive rather than prescriptive: you think the story is unrealistic, rather than dispute some normative position that you believe it defends.
I am not a moral realist, so I cannot dispute someone else’s morals, even if I don’t relate to them, as long as they leave me alone. So, yes, descriptive, and yes, I find the story a great read, but that particular element, moral expansionism, does not match the implied cohesiveness of the multi-world human species.
Do you believe real world humans are “slow to act against the morals it finds abhorrent”?
how do you explain all (often extremely violent) conflicts over religion and political ideology over the course of human history?
Generally, economic or some other interests in disguise, like distracting the populace from internal issues. You can read up on the reasons behind the Crusades, the Holocaust, etc. You can also notice that when morals do lead the way, extreme religious zealotry leads to internal instability, like the fractures inside Christianity and Islam. So, my model that you call “factually wrong” seems to fit the observations rather well, though I’m sure not perfectly.
Whatever explanation you provide to this survival, what prevents it from explaining the continued survival of the human species until the imaginary future in the story?
My point is that humans are behaviorally both much more and much less tolerant of the morals they find deviant than they profess. In the story I would have expected humans to express extreme indignation over babyeaters’ way of life, but do nothing about it beyond condemnation.
It’s frustrating when an honest exchange fails to achieve any noticeable convergence… Might try once more and if not, well, Aumann does not apply here, anyhow.
My main point: “to survive, a species has to be slow to act against the morals it finds abhorrent”. I am not sure if this is the disagreement, maybe you think that it’s not a valid implication (and by implication I mean the contrapositive, “intolerant ⇒ stunted”).
I had a pair programming experience at my first job back in the late 80s, before it was a thing, and my coworker and I clicked well, so it was fun while it lasted. Never had a chance to do it again, but miss it a lot. Wish I could work at a place where this is practiced.
I still don’t understand, is your claim descriptive or prescriptive?
Neither… Or maybe descriptive? I am simply stating the implication, not prescribing what to do.
I don’t understand what you’re saying here at all.
Yes, we do have plenty of laws, but no one goes out of their way to find and hunt down the violators. If anything, the more horrific something is, the more we try to pretend it does not exist. You can argue and point at law enforcement, whose job it is, but it doesn’t change the simple fact that you can sleep soundly at night ignoring what is going on somewhere not far from you, let alone in the babyeaters’ world.
“Universal we!right” is a contradiction in terms.
We may not have agreed on the meaning. I meant “human universal”, not some species-independent morality.
in a given debate about ethics there might be hope that the participants can come to a consensus
I find it too optimistic a statement for a large “we”. The best one can hope for is that logical people can agree with an implication like “given this set of values, this is the course of action someone holding these values ought to take to stay consistent”, without necessarily agreeing with the set of values themselves. In that sense, again, it describes self-consistent behaviors without privileging a specific one.
In general, it feels like this comment thread has failed to get to the crux of the disagreement, and I am not sure if anything can be done about it, at least without using a more interactive medium.
Re “tenability”, today’s SMBC captures it well: https://www.smbc-comics.com/comic/multiplanetary
If interpreted in the logical sense, I don’t think your argument makes sense: it seems like trying to derive an “ought” from an “is”.
Hmm, in my reply to OP I expressed what the moral of the story is for me, and in my reply to you I tried to justify it by appealing to the expected stability of the species as a whole. The “ought”, if any, is purely utilitarian: to survive, a species has to be slow to act against the morals it finds abhorrent.
Also, the actual distance between those diverging morals matters, and baby eating surely seems like an extreme example.
Uh. If you live in a city, there is a 99% chance that there is a little girl within a mile of you being raped and tortured by her father/older brother daily for their own pleasure, yet no effort is made to find and save her. I don’t find the babyeaters’ morals all that divergent from human; at least the babyeaters had a justification for their actions based on the need for the whole species to survive.
I don’t claim that leaving the Baby-eaters alone is necessarily we!wrong, but it is not obvious to me that it is we!right
My point is that there is no universal we!right and we!wrong in the first place, yet the story was constructed on this premise, which led to the whole species being hoist by its own petard.
it is supposed to be a “weird” culture by modern standards), much less an alien culture like the Super-Happies
Oh. It never struck me as weird, let alone alien. The babyeaters are basically Spartans and the super-happies are hedonists.
The near-universal reaction of the crew to the baby-eaters’ customs is not just horror and disgust, but also the moral imperative to act to change them. It’s as if there existed a species-wide objective “species!wrong”, which is an untenable position, and, even less believably than that, as if there existed a “universal!meta-wrong” where anyone not adhering to your moral norms must be changed in some way to make them palatable (the super-happies are willing to go the extra mile to change themselves in their haste to fix things that are “wrong” with others).
This position is untenable because it would lead to constant internal infighting, as customs and morals naturally drift apart for a diverse enough society. Unless you impose a central moral authority and ruthlessly weed out all deviants.
I am not sure how much of the anti-prime-directive morality is endorsed by Eliezer personally, as opposed to merely being described by Eliezer the fiction writer.
I liked the story, but could never relate to its Eliezer-imposed “universal morality” of forcing others to conform to your own norms. To me the message of the story is “expansive metaethics leads to trouble, stick to your own and let others live the way they are used to, while being open to learning about each other’s ways non-judgmentally”.
I tried to learn the basics of category theory some years ago, already having some background in algebraic topology, mathematical physics and programming. And, presumably, in rationality. I got glimpses of how interesting it is and how it could be useful, but was never quite able to make use of it. Very curious if your series of posts can change that for me. Keep going!
This was a very speculative if exciting essay, and I don’t believe that there has been any serious research done in this area, in part because it is unclear where one would start without having a better understanding of the measurement problem. Certainly online search comes up empty. I think the main value of this work is that a computer scientist and part-physicist (though Scott Aaronson would probably deny that he is the latter) can make a non-trivial contribution to the age-old philosophical questions of free will and consciousness.
On general principles, given the Lizardman’s constant of 4-5%, one would expect at least several people to nuke the site. Strange that it didn’t happen.
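The expectation above can be sketched with a back-of-envelope calculation, assuming each participant independently has a 4% chance of pressing the button (the participant count below is hypothetical, not from the comment):

```python
# Expected number of "nukers" among N participants, each with an
# independent 4% chance (low end of the Lizardman's constant).
# N is a hypothetical illustration, not the actual Petrov Day count.
N = 200
p = 0.04

expected_nukers = N * p           # linearity of expectation
prob_nobody_nukes = (1 - p) ** N  # chance that no one presses the button

print(expected_nukers)     # several people expected
print(prob_nobody_nukes)   # vanishingly small
```

Even at the low end of the constant, the chance that nobody at all presses the button shrinks geometrically with the number of participants, which is why "several" nukers would be the default expectation.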
That was an interesting exposition. Rare people like Petrov save millions of lives from extinction. There are probably dozens more people like him, all over the world, most never getting any recognition or even acknowledgment, and likely persecuted for going against authority and regulations.
Increasing complexity also increases fragility. #ProgrammingTruths
Good therapy and a good emotional support network are not competitors, but rather two great tastes that taste great together.
The hard part is finding friends who would give you emotional support by actively listening to you, without giving you unsolicited advice or trying to solve your problems. An even harder part is being a friend like that to others. I used to volunteer on an emotional support website for several hours a day for a couple of years doing just that, and it’s amazing how much we all crave an empathetic listener while rarely being one.
This “proxy fireworks” where expanding the system causes various proxies to visibly split and fly in different directions is definitely a good intuitive way to understand some of the AI alignment issues.
What I am wondering is whether it is possible to specify anything but a proxy, even in principle. After all, humans as general intelligences fall into the same trap of optimizing proxies (instrumental goals) instead of the terminal goals all the time. We are also known for our piss-poor corrigibility properties. Certainly the maze example or the clean-room example is simple enough, but once we ramp up the complexity, odds are that the actual optimization proxies start showing up. And the goal is for an AI to be better at discerning and avoiding “wrong” proxies than any human, even though humans are apt to deny that they are stuck optimizing proxies despite the evidence to the contrary. The reaction (except in a PM from Michael Vassar) to an old post of mine is a pretty typical example of that. So, my expectation would be that we would resist even considering that we are optimizing a proxy when trying to align a corrigible AI.
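The “proxy fireworks” pattern can be illustrated with a toy Goodhart-style sketch (the goal and proxy functions here are my own invented illustration, not anything from the original discussion): a proxy that correlates with the true goal at low optimization pressure diverges from it once the optimizer pushes hard enough.

```python
# Toy illustration of proxy divergence (hypothetical functions):
# the true goal rewards staying near a target, the proxy rewards raw
# magnitude. They agree while x is small, then split apart.

def true_goal(x):
    # True utility: best at x == 10, falls off on either side.
    return -(x - 10) ** 2

def proxy(x):
    # Proxy: correlated with the true goal on the way up to 10,
    # but keeps rewarding increases forever.
    return x

x = 0
for _ in range(50):
    x += 1  # greedy hill-climbing on the proxy

print(proxy(x))      # proxy score keeps climbing
print(true_goal(x))  # true utility has collapsed
```

Up to x = 10 the two scores move together; past that point the proxy keeps improving while the true goal craters, which is the “split and fly in different directions” behavior from the parent comment.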
Not sure about studies, seems more like a useful cultural thing, one of those traditions whose internal justifications have nothing to do with the reason they are useful:
In this case it is probably about having a balanced diet with enough calories, vitamins, minerals etc. It is probably far from optimal, but it is good-enough, simple advice that an average human can follow.
I was tempted to downvote your post, but refrained, seeing how much effort you put into it. Sadly, it seems to miss the point of non-realism entirely, at least the way I understand it. I am not a realist, and have been quite vocal about my views here. Admittedly, they are rather more radical than those of many here. Mostly out of necessity, since once you become skeptical about one realist position, then to be consistent you have to keep decompartmentalizing until the notions of reality, truth and existence become nothing more than useful models. This obviously applies to normative claims, as well, and so cognitivism is not wrong, but meaningless.
if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true
Without a better description of the “you” in this setup, I doubt one can fruitfully answer this question. In general, however, my preferred resolution to the Fermi paradox is that there is no intelligent life out there because there isn’t one even here, on Earth. Because the notion of life abstracted from “self-reproducing proteins” loses its coherency. But, as a fellow chemical reaction, I am looking forward to other views.
Why are you worried about egg consumption specifically?
I would like to see some concrete examples of this DDDiscourse. And with fewer Zs :)