Reminder: Morality is unsolved

Here is a game you can play with yourself, or others:

a) You have to decide on a moral framework that can be explained in detail to anyone.

b) It will be implemented worldwide tomorrow.

c) Tomorrow, every single human on Earth, including you and everyone you know, will also have their life randomly swapped with someone else's.

This means that you are operating under the veil of ignorance. You should make sure that the morality you decide on is beneficial to whoever you turn out to be once it takes effect.

Multiplayer: The first player to convince all the others wins.

Single player: If you play alone, you just need to convince yourself.

Good luck!
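For what it's worth, the game has a formal skeleton. Below is a minimal Python sketch of it, under loud assumptions: welfare is reduced to a single number per life, and the two example "frameworks" are toy stand-ins I made up. It scores a framework by the expected welfare of a randomly swapped life, which is the veil-of-ignorance evaluation the game asks you to run in your head (a Rawlsian would look at the worst-off life instead).

```python
import random

# A toy sketch of the game above. Everything here is hypothetical:
# the "welfare" scores, the two example frameworks, and the scoring
# rule are stand-ins, not a claim about what morality actually is.

def society_under(framework, n_people=1000, seed=0):
    """Build a society: a list of welfare scores, one per life."""
    rng = random.Random(seed)
    base = [rng.random() for _ in range(n_people)]  # unequal starting lots
    return framework(base)

def laissez_faire(base):
    # Leaves the random allocation untouched.
    return base

def redistributive(base):
    # Blends every lot halfway toward the mean (a crude stand-in).
    mean = sum(base) / len(base)
    return [0.5 * w + 0.5 * mean for w in base]

def veil_score(framework, trials=10_000, seed=1):
    """Score a framework from behind the veil: you will be dropped
    into a uniformly random life, so estimate your expected welfare.
    A Rawlsian player might look at min(outcomes) instead."""
    rng = random.Random(seed)
    outcomes = society_under(framework)
    draws = [outcomes[rng.randrange(len(outcomes))] for _ in range(trials)]
    return sum(draws) / trials, min(outcomes)

for fw in (laissez_faire, redistributive):
    expected, worst = veil_score(fw)
    print(f"{fw.__name__}: expected {expected:.3f}, worst-off {worst:.3f}")
```

Of course, the hard part of the game is exactly what this sketch assumes away: that welfare can be collapsed into a number everyone accepts, and that a framework is just a reallocation rule.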

Morality is unsolved

Let me put this another way: Did your mom ever tell you to be a good person? Do you ever feel that sometimes you fail that task? Yes?

In your defense, I doubt anybody ever told you exactly what a good person is, or what you should do to become one.

*

Morality is a famously unsolved problem, in the sense that we don't have any ethical framework that is complete and consistent and that everyone can agree on.

We don’t have a universally accepted set of moral rules to start with either.

An important insight here is that the disagreements often end up being about whom the rules should apply to.

For example, if you say that everyone should have equal rights to liberty, the question is: who is everyone?

If you say "all persons" you have to define what a person is. Do humans in a coma count? Elephants? Sophisticated AIs? How do you draw the line?

And if you start having different rules for different "persons", then you don't have a consistent and complete framework, but a patchwork of rules, much like our current mess(es) of judicial systems.

We also don’t understand metaethics well.

Here are two facts about what the situation is actually like right now:

a) We are currently in a stage where we want and believe different things, some of which are fundamentally at odds with each other.

This is important to remember. We are all subjective agents, with our own collection of ontologies, and our own subjective agendas.

b) We spend very little time, politically and technically, working on ethics and moral problems.

Implications for AI

This has huge implications for the future of AI.

First of all, it means that there is no universally consistent framework (one that doesn't need constant manual updating) that we can put into an AI.

At least not one that everyone, or even a majority, will morally agree on.

If you think I am wrong about this, I challenge you to tell us what that framework would be.

So, when people talk about solving alignment, we must ask: aligning towards what? For whom?

Secondly, this same problem also applies to any principal who is put in charge of the AI. What morality should they adopt?

Open question.

These are key reasons why I am in favour of distributed AI governance. It's like democracy: flawed on its own, but at least it distributes risk. More people should have a say. No unilateral decisions.

Alignment focus on metaethics

As for alignment, I am among those who think that the theory builders should spend some serious effort working on metaethics now.

Morality is intrinsically tied to ontology and epistemology, to our understanding of this world and reality itself.

Consider this idea: Solving morality may require scientific advancement to the level where we don't need to discover anything fundamentally new, a level where basic empirical research is somewhat complete.

It means working within an ontology where we no longer change our physical models of the universe, only refine them. A level where we have reconciled subject and object.

Sidenote 1: For AI problems, it often doesn't matter whether moral realism is true or not; the problems we currently face look the same either way. We should not get hung up on moral realism.

Sidenote 2: As our understanding of ethics evolves, there may be fundamental gaps in understanding between the future developers of AI and the current ones, just as there are already fundamental gaps between religious fundamentalists and other religious factions with more complex moral beliefs.

This is another argument for working on metaethics first, AI later.

Since this will likely not happen, I would argue that it is, indirectly, an argument for keeping AI narrow and keeping humans in control (human-in-the-loop).

Perhaps not a very strong argument on its own, but an argument nonetheless. This way, moral problems are divided, like we are. And hopefully, one day, conquered.