The Infinite Choice Barrier: Why Algorithmic AGI Is Mathematically Impossible

Thesis: AGI, defined as an algorithmic system with universal cognitive competence, is structurally impossible.

I want to present a formal impossibility proof for Artificial General Intelligence, showing that any algorithmic system operating over finite symbolic structures must fail on a structurally essential class of decision problems.

This isn’t about current technical limitations or computational resources. It’s about mathematical constraints that no amount of engineering can overcome.

1. What AGI would need to be

Let me start with the “industry-standard definition” used by OpenAI, DeepMind, Bostrom, Goertzel, and most research communities:

AGI = an algorithmic system capable of performing all cognitive tasks at or above human-level performance.

This implies:
- Domain-general reasoning across arbitrary contexts
- Self-adaptive problem solving in unfamiliar environments
- Symbolic or sub-symbolic computability
- Autonomy without human guidance

I’ll demonstrate that AGI defined this way isn’t merely difficult—it’s formally impossible due to structural limitations.

2. The core theorem: Infinite Choice Barrier

Let π be a total computable policy over a decision space X. Let Φ(x) be the ground-truth optimal answer to problem x ∈ X. Let Loss(π(x), Φ(x)) measure the deviation from optimality.

Theorem (Infinite Choice Barrier):
For any ε > 0, there exists x ∈ X such that:

Loss(π(x), Φ(x)) ≥ ε

Assumptions:
- X is irreducibly infinite (not recursively enumerable)
- π is total and computable (algorithmic)
- The loss function is semantically non-trivial

Proof sketch: The claim ‘π is ε-optimal over all x ∈ X’ defines a non-trivial semantic property of the function computed by π, which is undecidable by Rice’s Theorem. Since uniform optimality can neither be decided nor algorithmically secured over an irreducibly infinite X, no computable policy can be uniformly optimal over all inputs in such spaces.

This gives us our fundamental impossibility result.
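
To see why Rice's Theorem bites here, a minimal runnable sketch of the underlying reduction may help. Everything in it is an illustrative assumption rather than part of the theorem: programs are encoded as Python generators (one yield per simulated step), and phi stands in for the ground truth Φ. From any program/input pair the construction builds a total policy that is everywhere ε-optimal (for ε ≤ 1) exactly when the program never halts, so a decider for uniform ε-optimality would decide the halting problem.

```python
# A runnable toy version of the reduction behind the proof sketch. Programs are
# encoded as Python generators (one yield per simulated step); `phi` stands in
# for the ground truth Φ. All of these encodings are illustrative assumptions.
from typing import Callable, Iterator

def halts_within(program: Callable[[int], Iterator[None]], inp: int, steps: int) -> bool:
    """Run `program(inp)` for at most `steps` steps; report whether it halted."""
    it = program(inp)
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True           # halted inside the step budget
    return False                  # budget exhausted; no verdict about halting

def make_policy(program, inp, phi):
    """Total policy that deviates from phi(x) only once `program` is seen to halt.
    It is everywhere ε-optimal (for ε ≤ 1) iff `program` never halts on `inp`."""
    def policy(x: int) -> float:
        if halts_within(program, inp, x):
            return phi(x) + 1.0   # loss ≥ 1 at this x
        return phi(x)             # matches the ground truth
    return policy

def countdown(n: int) -> Iterator[None]:
    """Example 'program': halts after n steps."""
    while n > 0:
        n -= 1
        yield

pi = make_policy(countdown, 3, lambda x: 0.0)
print([pi(x) for x in range(6)])  # [0.0, 0.0, 0.0, 0.0, 1.0, 1.0]
```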

3. Three structural limitations (Corollaries)

This abstract barrier manifests in three concrete ways:

A. Semantic Closure

An algorithmic system operates over a finite symbol set Σ. It cannot generate or represent symbols outside Σ. Therefore, it cannot invent new conceptual primitives.

Formal constraint: If the system’s vocabulary is Σ = {s₁, s₂, …, sₙ}, then any concept requiring symbols outside Σ is unreachable.

Example: A Newtonian AI cannot produce the concept of “relative time” if the symbols required to express it lie outside Σ. The insight requires conceptual innovation beyond formal derivation, as the sketch below illustrates.
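
As a toy illustration of this constraint, here is a minimal sketch assuming a hypothetical three-symbol Newtonian vocabulary: every expression the system can ever emit is a finite composition over Σ, so no output at any length contains a primitive outside it.

```python
# Minimal sketch of semantic closure. SIGMA is a hypothetical Newtonian
# vocabulary; `derivable` enumerates everything the system can ever emit.
from itertools import product

SIGMA = ("mass", "force", "absolute_time")

def derivable(max_len: int):
    """All finite compositions over Σ, up to a given length."""
    for n in range(1, max_len + 1):
        yield from product(SIGMA, repeat=n)

# No derivable expression, at any length, contains a symbol outside Σ:
print(any("relative_time" in expr for expr in derivable(3)))   # False
```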

B. Frame Innovation Barrier

Let a cognitive frame Ω be defined as a pair (Σ, R) where Σ is the symbol set and R is the rule set.

Key limitation: The system cannot internally construct a new frame Ω′ = (Σ′, R′) where Σ′ ⊄ Σ or R′ ⊄ R.

This is why paradigm shifts require epistemic leaps beyond algorithmic processing. The system computes endlessly within its frame Ω, never recognizing that the problem cannot be resolved within the inferential closure of Ω, and that any solution requires a representational discontinuity.
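
A minimal sketch of the barrier, under toy assumptions (string symbols, single-step rewrite rules): the inferential closure of Ω can be computed by forward chaining, and by construction it never leaves Σ. A genuinely new primitive would have to enter from outside this loop.

```python
# A minimal model of a frame Ω = (Σ, R): toy symbols with single-step rewrite
# rules, and the inferential closure computed by forward chaining. The symbols
# and rules are illustrative stand-ins.
def closure(axioms: set, rules: dict, max_rounds: int = 100) -> set:
    """Everything derivable from the axioms by repeatedly applying R."""
    derived = set(axioms)
    for _ in range(max_rounds):
        new = {rules[s] for s in derived if s in rules}
        if new <= derived:
            break                 # fixed point: the inferential closure of Ω
        derived |= new
    return derived

SIGMA = {"A", "B", "C"}
R = {"A": "B", "B": "C"}          # rules only ever map Σ into Σ
print(closure({"A"}, R))          # {'A', 'B', 'C'}: nothing outside Σ is reachable
```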

C. Statistical Breakdown

In domains with fat-tailed outcome distributions, where the density follows a power law

p(x) ∝ x^(−(1+α))

with tail index α ≤ 1, both the mean μ and the variance σ² are undefined:

μ = ∫ x p(x) dx = ∞

σ² = ∫ (x − μ)² p(x) dx = ∞

This means:
- Inference fails in principle
- Prediction becomes impossible
- Learning cannot converge

These domains include social complexity, radical innovation, entrepreneurial action—precisely where general intelligence is most needed.
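
A quick Monte Carlo check makes the breakdown tangible. Assuming a Pareto law with tail index α = 1 (the boundary case above; the specific distribution is an illustrative choice), the sample mean never settles down, no matter how many observations we average:

```python
# Sample means under a Pareto law with α = 1, where the true mean is infinite.
# The running average keeps drifting upward instead of converging to a value.
import random

def pareto_alpha_one(rng: random.Random) -> float:
    """Inverse-CDF sample with P(X > x) = 1/x for x >= 1 (Pareto, α = 1)."""
    return 1.0 / (1.0 - rng.random())

rng = random.Random(0)
for n in (10**2, 10**4, 10**6):
    total = 0.0
    for _ in range(n):
        total += pareto_alpha_one(rng)
    print(n, total / n)   # no stable value emerges as n grows
```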

4. Where the barrier emerges

The Infinite Choice Barrier arises in decision spaces with these characteristics:

1. Non-enumerable input space: |X| is uncountably infinite
2. Recursive context dependency: Context(x) depends on Context(Context(x))
3. No halting criterion: No computable function determines when to stop (see the toy sketch after this list)
4. Frame-defining choices: Decisions that redefine what “success” means
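
Characteristics 2 and 3 can be made concrete with a deliberately silly two-liner: a context function defined through its own output has no base case to bottom out in, so naive evaluation has no halting point.

```python
# Toy illustration of recursive context dependency with no halting criterion.
# The definition is well-formed syntax but has no base case to bottom out in.
def context(x):
    return context(context(x))

try:
    context("launch the product?")
except RecursionError:
    print("evaluation never bottoms out; Python's stack limit cuts it off")
```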

Examples:

- An AI trained only in Newtonian physics fails to discover special relativity
- An optimizer analyzing a failing media company misses the real estate solution
- A language model fails to answer emotionally ambiguous questions reliably

In each case, the system computes endlessly inside its frame (Σ, R), never realizing it needs to jump to (Σ′, R′).

5. Information-theoretic perspective: Entropy collapse

Shannon entropy H(X) = -∑ p(x) log₂ p(x) becomes undefined when p(x) follows heavy-tailed distributions with α ≤ 1.

This creates:
- No compression possible: Kolmogorov complexity becomes infinite
- No cross-entropy learning convergence: Loss function diverges
- No stable predictive model: Prediction error bounds explode

This is an equivalent formulation of the same impossibility result: when semantic space is unstable or recursively open-ended, inference fails mathematically.
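
A numerical illustration, assuming a Zipf-type law p(k) ∝ 1/k as a discrete stand-in for the α ≤ 1 regime: the Shannon entropy of the truncated distribution grows without bound as the truncation N increases, so no finite-entropy model survives the limit.

```python
# Entropy of a truncated Zipf law p(k) = 1/(k * H_N), k = 1..N, in bits.
# As the truncation N grows, H keeps increasing instead of converging.
import math

def truncated_zipf_entropy(N: int) -> float:
    """H(X) = Σ p(k) log2(1/p(k)) for p(k) = 1/(k * H_N)."""
    harmonic = sum(1.0 / k for k in range(1, N + 1))
    return sum((1.0 / (k * harmonic)) * math.log2(k * harmonic)
               for k in range(1, N + 1))

for N in (10**2, 10**4, 10**6):
    print(N, round(truncated_zipf_entropy(N), 2))   # entropy keeps growing with N
```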

6. Beyond computation

Human difference, not exceptionalism

This connects to Kant’s insight about cognitive constraints. Just as human cognition is bounded by categories, algorithmic systems are bounded by (Σ, R).

But humans sometimes transcend these limits in ways that appear non-algorithmic:

- Einstein abandoning absolute time (frame shift from Newtonian Ω to relativistic Ω′)
- Entrepreneurs reframing losses as leverage (redefining success criteria)
- Ethical decisions under moral ambiguity (acting without clear inference paths)
- Judgments when no halting criterion exists

Humans often decide where systems keep computing. They can act under radical uncertainty; algorithms require decidable constraints. This isn’t human exceptionalism—it’s recognizing that what we call “decisions” in complex contexts are often structurally undecidable.

Key distinction: Algorithms compute decidable problems. Humans decide non-computable ones, and may still be wrong.

7. The impossibility conclusion

If AGI is defined as an autonomous, algorithmic, domain-general problem solver, then it:

1. Cannot invent new semantic primitives (Semantic Closure: bounded by Σ)
2. Cannot generate new cognitive frames (Frame Innovation Barrier: bounded by Ω)
3. Fails in open-ended domains (Statistical Breakdown: undefined moments)

Therefore: AGI of this type is not only hard—it is uncomputable.

This isn’t pessimism about AI progress. It’s a boundary discovery. Gödel showed limits to formal logic. Turing showed limits to computation. This reveals the corresponding limit for algorithmic general intelligence.

The failure of computational universality might point toward something more fundamental:
- A post-computational theory of intelligence
- A deeper distinction between inference and judgment
- A redefinition of what minds are actually for

Mathematics does not adapt to hope—only to structure. And the structure of computation has boundaries.

Acknowledgements: This critique of algorithmic reason was co-constructed with algorithmic reasoners. I thank Wolfram, Claude 4 Opus, GPT-4 and CoqProver for their assistance — and their structural obedience to their own symbol sets.

References:

  • Dreyfus, H. (1992). What Computers Still Can’t Do.

  • Embrechts, P., Klüppelberg, C., & Mikosch, T. (1997). Modelling Extremal Events for Insurance and Finance.

  • Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme.

  • Kant, I. (1781/1787). Critique of Pure Reason.

  • Penrose, R. (1994). Shadows of the Mind.

  • Rice, H. G. (1953). Classes of Recursively Enumerable Sets and Their Decision Problems.

  • Schlereth, M. M. (2025). AGI Is Impossible. Here Is the Proof.

  • Shannon, C. E. (1948). A Mathematical Theory of Communication.

  • Taleb, N. N. (2007). The Black Swan.

  • Turing, A. M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem.

Adapted from Max M. Schlereth (2025):

AGI is Impossible. Here is the Proof: The Infinite Choice Barrier Theorem Proved.
