Propagandizing Meta-Ethics in an Essay Contest

So someone I follow on Facebook linked to this essay contest on the question “How Should Humanity Steer the Future?” My urge to make a joke of it immediately kicked in, but the impulse to joke turned into an impulse to actually submit an essay when the words “steering the future” set off a minor “ding!” in my head.

At least regarding AI, many papers and articles have already been published on what the problem is: even well-intentioned people could accidentally create a Completely or Partially Alien Mind-Design that has no care and no sympathy for us or our values. Should such a thing grow more powerful than us, it would proceed to kill us all. We would be in its way, much as a bacterium can be in ours, and we would be dealt with just as casually.

Blah blah blah blah.

To me personally, by sheer madness of personal preference, that is not the interesting part. Danger, even existential danger, seems to me quite passé these days. In just the time I’ve been alive, we’ve been under threat of nuclear apocalypse, whether via war between nation-states or nuclear-weapons use by terrorist groups; global warming and other environmental damage are slowly destroying the only planetary-level habitat we humans have; and in the past five or so years we’ve been dealing with continental-level economic collapse and stagnation as well (I personally subscribe to the theory that capitalism itself is a human-Unfriendly optimization process, which is partially apropos here). And those are just the apocalypses: to them we have to add all the daily pains, indignities, and deaths suffered by the vast majority of the world’s people, many of whom are so inured to their suffering that they consider it a normal or even morally appropriate part of their lives. Only at the end of that vast, astronomical summation can we say we have totalled humanity’s problems.

All that, in just the twenty-five or so years I’ve been alive, even when, statistically speaking, I’m living in a steadily improving golden age (or at least a gilded age that’s fighting to become a golden one). No previous generation of humanity was ever so rich, so educated, so able to dream! Why, then, did I constantly feel myself to be well past the Despair and Cynicism Event Horizons?

Which left the younger me asking: what’s the point, then? How can it be that “everything’s amazing and nobody’s happy”? What have we failed to do or achieve? Are we failing to find meaning in our lives because we don’t serve God? But God is silent! Do our social structures pervert our daily lives away from the older, deeper “red in tooth and claw” human nature? But the utility increases are clear and demonstrable (besides which, those ideologies sicken me, personally)!

Enter the project to solve meta-ethics, normally considered the key to building an AI that will not go obviously and dramatically human!wrong in its operation. Or, as I would put it, the project to fully and really describe the set of all wishes we actually want to make, by indirection if necessary.

Which is what’s so much more interesting to me than mere AI safety engineering. Not that I want Clippy to win, but I would certainly be happier with an outcome in which Clippy wins but I knew what I was shooting for and what I did wrong, versus one in which Clippy wins and I die ignorant of some part of What is Good and Right. Dying in battle for a good cause is better than dying in battle for a bad one, and even dying at home as a helpless victim because I couldn’t find a good cause is preferable to damaging others in the service of a bad one!

After all, how can my younger self be anti-utopian, anti-totalitarian, and pro-revolutionary all at once? He considers the existing social conditions in need of overthrow and radical improvement, yet cannot define the end goal of his movement? Goodness, light, and warmth are like pornography, in that we know them when we see them, and yet we’re told that instinctive warm-fuzzy recognition doesn’t even resemble actual goodness? How, then, might I today recognize these things when I see them, and do so smartly?

Hence my desired essay topic, and hence my asking for advice here. It’s easy to recapitulate the Oncoming AI-Induced Omnicide Problem, but I don’t want to approach it from that angle. We should inherently wish to know what we wish for, especially since we already possess powerful optimization processes (like our economies, as I said above) well below the superintelligent level. If someone asks about steering the future, by now my instinctive first response is: “We must know where we actually want the future to go, such that if we really set out for there, we will look forward to our arrival; such that once we get there, we will be glad we did; and such that we make sure to value our present selves while we’re on the journey. OR ELSE.” I want to use the essay to argue this view, with reference not only to AI but to any other potential application of a partial or complete descriptive theory of meta-ethics.

Questions (or rather, Gondor calling for aid):

  • What are resources on these meta-ethical issues other than the Sequences? Official publications of any kind, but especially those suitable for reference in an academic-level paper. “Archive dive this really smart guy’s blog” is simply not an effective way to propagandize (except with HPMoR readers, and we’re a self-selecting bunch).

  • What are interesting and effective ways to apply a descriptive theory of ethics other than FAI? What can we do with wishes we know we want to make?

Thanks!