A research agenda for the final year

Since the start of 2026, I’ve been thinking: suppose this is the final year before humanity loses control to AI. What should I do, and where should I focus? I now have an answer. The plan is to tackle three questions:

What is the correct ontology?

What is the correct ethics?

What are ontology and ethics in an AI?

A few comments about my perspective on these questions...

What is the correct ontology?

The standard scientific answer would be to say that the world consists of fundamental physics and everything made from that. That answer defines a possible research program.

However, we also know that we don’t know how to understand anything to do with consciousness in terms of that framework. This is a huge gap, since the entirety of our experience occurs within consciousness.

This suggests that in addition to (1) the purely physics-based research program, we also need (2) a program to understand the entirety of experience as conscious experience, and (3) research programs that take the fuzzy existing ideas about how consciousness and the physical world are related, and develop them rigorously and in a way that incorporates the whole of (1) and (2).

In addition to these, I see a need for a fourth research program, which I’ll just call (4) philosophical metaphysics. Metaphysics in philosophy covers questions like: What is existence? What is causality? What are properties? What are numbers? Some of these questions also arise within the first three ontological research programs, but it’s not yet clear how it will all fit together, so metaphysics gets its own stream for now.

What is the correct ethics?

In terms of AI, this question bears on the part of alignment where we ask: what should the AI be aligned with, and what should its values be?

But I’m not even sure that “ethics” is exactly the right framework. I could say that ethics is about decision-making that involves “the Good”, but is that the only dimension of decision-making we need to care about? Classically in philosophy, in addition to the Good, people might also talk about the True and even the Beautiful. Could it be that a correct theory of human decision-making would say that multiple kinds of norms lie behind our decisions, and that it’s a mistake to reduce them all to ethics?

This is a bit like saying that we need to know the right metaethics as well as the right ethics. Perhaps we could boil it down to these two questions, which define two ethical research programs:

(1) What is the correct ontology of human decision-making?

(2) Based on (1), what is the ideal to which AI should be aligned?

What are ontology and ethics in an AI?

My assumption is that humanity will lose control of the world to some superintelligent decision-making system: it might be a single AI, or it might be an infrastructure of AIs. The purpose of this 2026 research agenda is to increase the chances that this superintelligence will be human-friendly, or that it will be governed by the values that it should be governed by.

Public progress in the research programs above has a chance of reaching the architects of superintelligence (i.e. everyone working on frontier AI) and informing their thinking and their design choices. However, it’s no good if we manage to identify the right ontology and the right ethics but don’t know how to impart them to an AI. Knowing how to do so is the purpose of this third and final leg of the 2026 research agenda.

We could say that there are three AI research programs here:

(1) Understand the current and forthcoming frontier AI architectures (both single agent and multi-agent)

(2) Understand, in terms of their architecture, what the ontology of such an AI would be

(3) Understand, in terms of their architecture, what the ethics or decision process of such an AI would be

Final comments

Of course, this research plan is provisional. For example, if epistemology proved to require top-level attention, a fourth leg might have to be added to the agenda, built around the question “What is the correct epistemology?”

The plan is also potentially vast. Fortunately, in places it significantly overlaps with major recognized branches of human knowledge. One hopes that specific important new questions will emerge as the plan is refined.

A rather specific but very timely question is how a human-AI hivemind like Moltbook could contribute to a broad fundamental research program like this. I expect that the next few months will provide some answers to that question.