PS: Would anyone be interested in a top-level discussion post of some of my more advanced thoughts and arguments on this? Or have I just been sloppy and missed relevant material that already covers it? :)
I don’t think anything like this has been posted before, or has it? I do agree most posters haven’t devoted much thought to it. How can they be so certain this process is something worth keeping, something that works on all of mankind and would still be here even if a few random events in our evolutionary past or even written history had happened differently, yet be so sure it would not apply to an AI? Think about that for a bit: practically everyone agrees that FAI is important precisely because they are sure this process isn’t going to kick in for the AI. Yet most also seem to think it was guaranteed to act on us in some way even if we humans had had a very different history (the only alternative to this interpretation is anthropics: it feels right to us because we are in a very, very lucky universe where the conditions were just right, so the process is turning out fine). For that matter, they seem to implicitly think this process is much stronger than, or at the very least as strong as, genetic evolution (since we can now be pretty sure humans have been changing biologically even within recorded history) and memetic evolution on the scale of a few centuries or millennia.
I can’t quite put it into words, but I feel like not having slaves and not allowing rape within marriage are both good things that are morally superior for reasons beyond simply “I believe this and people long ago didn’t.”
I mean, how is it possible that this process inspires such confidence in us, while evolution, a process that has so far also given us comparably felicitous change (change that feels so right we often invoke an omnipotent benevolent being to explain it), can terrify us once we think about it clear-headedly?
I have a hunch that if we looked at the guts of this process we might find more old, sanity-shattering Outer Gods waiting for us.
Considering established nomenclature perhaps we should call it Yidhra or the Nameless Mist. ^_^
Members of Yidhra’s cult can gain immortality by merging with her, though they become somewhat like Yidhra as a consequence. … She usually conceals her true form behind a powerful illusion, appearing as a comely young woman; only favored members of her cult can see her as she actually is.
Anyone who hands his utility function over to her wisdom for modification is basically home safe, because future development will still read as progress to his eyes. Moral judgement becomes a snap: all you need to do is wait long enough for society to get more stuff “right,” and the stuff that isn’t “right” in that way and gets lost is just random baggage you shouldn’t have valued anyway.
Thanks! I must admit I’m behind on my reading of the metaethics stuff. Some of the other sequences were much more interesting to me personally, and until recently I was binging on them with little regard for anything else.
Edit: Interestingly, the article barely has a few upvotes; considering this is EY, that increases the probability that it hasn’t been widely read or discussed in the last year or two.
Eliezer gave it brief mention in his metaethics sequence, in posts such as Whither Moral Progress?
Thanks for the link!
I recalled reading something like that on OB, I think this is where I first stumbled upon the “random walk morality” challenge.