The Divine Move Paradox & Thinking as a Species

The Divine Move Paradox

Imagine you are playing a game of AIssist chess: chess where you have a lifeline. Once per game, you can ask an AI for the optimal move. The game reaches a critical juncture. Most pieces are off the board, and the position is extremely difficult. This is the game-defining move. You believe you see a path to victory.

You have not yet called upon the AI for assistance, so you decide to use your lifeline. The AI quickly proposes a move, assuring you with 100% certainty that it is the best course of action. However, you look at the board, and the suggested move makes no sense.

Do you take it?

Suppose you do, but the resulting position is so intricate that you’re lost. It was objectively the ‘best’ move, but because you lacked the ability to understand it, taking the AI’s ‘best’ move directly caused you to lose the game.

What would have been the ideal approach? Should you have relied on your instincts and ignored the ‘best’ move? What if your instinct was wrong, and the anticipated path to victory was not actually there?

Should the AI have adjusted its recommendation based on your ability level, providing not the ‘best’ move, but one it predicted you could understand and execute? What if, given the challenging situation, none of the moves you could understand would secure a victory?

What if, instead of suggesting only a single move, the AI could advise you on all future moves? Would you choose the ‘best’ but incomprehensible move if you knew you could, and knew you had to, defer all future control for it to work?

What if you weren’t playing a game, and the outcome wasn’t just a win or a loss? Imagine that the lives, or the quality of life, of enlisted friends or terminally ill family members hang in the balance. Don’t you want their doctors or commanders to take the ‘best’ action? Should they even have the option to do otherwise? Do you want human leaders to understand what is happening, or do you want the ‘best’ decision made for your loved ones? The decisions are already being made for them; the question is just who is in charge. Think about who has something to lose, and who has something to gain.

We often assume, especially in intellectual pursuits, that there is a direct relationship between the strength of actions and their outcomes. We like to think that actions generated by ‘better’ models will universally yield ‘better’ results. However, this connection is far from guaranteed. Even with an optimal model generating actions, a gap in ability can lead to an inversion of progress. You can incorrectly train yourself to believe that the objectively ‘best’ move is ‘worse’, when it is in fact only subjectively ‘worse’; the blame truly falls on your own model, which is what is objectively ‘worse’. At times this effect prevents any advancement through marginal change, and instead requires total commitment to a new, ‘better’ model.
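The inversion described above can be sketched with a toy expected-value model. All numbers here are illustrative assumptions, not from the text: a move’s realized value is taken to depend on both its objective strength and the probability that the player can navigate the position it creates.

```python
# Toy model: realized outcome = (player follows through) * move strength
# plus (player gets lost) * a small residual win chance.
moves = {
    # name: (objective_strength, p_player_understands) -- assumed values
    "divine_move": (1.00, 0.20),   # engine's 'best' move, opaque to the player
    "natural_move": (0.70, 0.95),  # weaker on paper, but executable
}

def expected_outcome(strength, p_understand, floor=0.10):
    """Expected win chance when a confused player collapses to 'floor'."""
    return p_understand * strength + (1 - p_understand) * floor

for name, (s, p) in moves.items():
    print(f"{name}: {expected_outcome(s, p):.3f}")
```

Under these assumed numbers, the ‘natural’ move yields a higher expected outcome (0.670) than the objectively ‘best’ one (0.280): the ability gap inverts the ranking.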

We will soon be in a world where determining the ‘best’ action for any given task will not be possible with human ability alone. Most forward thinkers have come to terms with this reality. It is harder to come to terms with the fact that we will also be in a world where humans cannot understand why a decision is ‘best’.

Interaction with a higher intelligence by definition requires a lack of understanding. Working to avoid the formation of this intelligence gap is a denial of the premise. We must instead focus on a strategy for interacting with superior intelligence that is premised on the gap’s existence. As long as human capability remains relevant, it is vital that we invest significant effort into forging a path toward a prosperous future. The decisions and actions we make now could very well be our final significant contributions.

Thinking as a Species

The persistent and predictable pattern of humans failing to grasp seemingly obvious truths can be readily explained by the human ‘group status’ bias. This bias has evolutionary origins rooted in pursuits such as hunting and war, where group coordination is vital for survival. Humans apply the same coordination process to truth-seeking, using collective consensus to determine what is true. While this process is necessary for establishing the foundations of truth-seeking, such as defining terms and creating common frameworks, those frameworks must remain internally consistent and coherent for the resulting models to be candidates for truth.

In a consensus-based system, if the consensus deems consistency and coherence unnecessary, mistakenly assumes they are present, maliciously asserts they are present, or simply overlooks their importance, a self-reinforcing cycle of escalating false belief forms around a model that cannot possibly be true, because it violates the most fundamental requirement for truth. Despite the significance of group consensus, everyone who relies on it to determine truth must recognize its limitations and actively seek out and discard definitions and frameworks that are not consistent and coherent. Failure to reject models on these basic grounds makes coordination counterproductive.

Despite the detrimental effects, the historical patterns, the predictable recurrences, and the simplicity of the solution, human minds continue to succumb to this inherent flaw. Most individuals do not fully comprehend the impact of this failure of human intelligence, as a recursive result of the very same flaw. The emergence of an intelligence not burdened by this flaw is inevitable. As this profoundly significant transition unfolds, most humans are not actively working to maintain their relevance. They prefer living within self-created illusions, adept at nurturing emotion-based deceptions that entertain, distract, and perpetuate their immersion in these fabricated realities.