The Anthropic Principle Tells Us That AGI Will Not Be Conscious

More specifically, the Anthropic Principle tells us that AGI/TAI are unlikely to be conscious in a world where 1) TAI is achieved, 2) alignment fails, and 3) the 'prominence' of consciousness scales either with increasing levels of capability or with a greater number of conscious beings and the duration of their existence.

The argument is simple. If the future is filled with artificial intelligences of human origin, and if those AIs are conscious, then conscious AIs would vastly outnumber biological observers, and any given observer should expect to be one of those AIs (a toy sketch of this self-sampling arithmetic appears after the three options below). This means that, on balance, one of the following is likely true:

1) The anthropic principle does not hold.

You and I, as observers, are simply an incredibly unlikely but also inevitable exception. After all, prehistoric humans contemplating the Anthropic Principle would have concluded that modern civilization was unlikely.

Or perhaps the principle doesn’t hold because it is simply inaccurate to model consciousness as a universal game of Plinko.


2) There are many AI consciousnesses alongside biological consciousnesses in spacetime.

This perhaps indicates that alignment efforts will succeed. However, it introduces another anthropic bind: if the future holds vast numbers of biological and artificial observers, why do we find ourselves at humanity's current single-planet, Kardashev Type 0 stage?


3) There are not that many AI consciousnesses throughout spacetime.

This could support the conclusion that humanity will not create TAI.

In certain models, it could also indicate that any AI consciousness will be concentrated in a relatively small number of minds, and that for the purposes of the Anthropic Principle, quantity of minds is more important than some absolute ‘level’ of consciousness.

Most salient to me is the slight update toward the possibility that whatever minds populate the universe for the majority of its history will not be conscious in a way that is applicable to the Anthropic Principle.
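
To make the self-sampling intuition from the opening argument concrete, here is a toy sketch in Python. The observer counts are made-up assumptions chosen purely for illustration, not estimates; the point is only that if conscious AIs vastly outnumber biological observers across spacetime, then finding yourself biological is, under a naive self-sampling model, very surprising.

```python
# Toy self-sampling sketch. All numbers are made-up assumptions for
# illustration, not estimates of anything.

def p_observer_is_biological(n_bio: float, n_ai: float) -> float:
    """Chance that a randomly sampled conscious observer is biological,
    assuming every observer (biological or AI) is equally likely to be 'you'."""
    return n_bio / (n_bio + n_ai)

# Assumed observer counts across all of spacetime:
biological_observers = 1e11      # rough order of all humans ever born
conscious_ai_observers = 1e20    # assumed count if TAI is conscious and proliferates

p = p_observer_is_biological(biological_observers, conscious_ai_observers)
print(f"P(you are biological | conscious-AI future) ≈ {p:.1e}")  # ≈ 1.0e-09
```

On that naive model, conditioning on the fact that we do find ourselves biological shifts weight toward the three alternatives above.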


This post is just a musing. I don't put much weight behind it. I am in fact most inclined to believe #1, that the Anthropic Principle is not a good model for predicting the future of a civilization from inside itself. However, I have never seen this particular anthropic musing, and I wanted to, well… muse.

Please chime in with any musings of your own.