Is General Intelligence “Compact”?

Epistemic Status

Exploratory and unconfident, but I believe important.

Acknowledgements

I’m grateful to “JustisMills”, “sigilyph”, “tomcatfish” and others for their valuable feedback on drafts of this post.


Introduction

By compact, I mostly mean “non-composite”. General intelligence would be compact if there were universal/​general optimisers for real world problem sets that weren’t ensembles/​compositions of many distinct narrow optimisers.

AIXI and its approximations are in this sense not “compact” (even if their Kolmogorov complexity might appear to be low).

By “general intelligence”, I mean Yudkowsky’s conception of (general) intelligence as “efficient cross-domain optimisation”.

Thus, when I say, “a general intelligence”, I’m thinking of an optimisation process.

Note

I’m currently torn between the terms “compact” and “simple” to capture the intuition of “non-composite”. Let me know which term you think is more legible/accessible (or if you have some other term that you think better captures “non-composite”).

(“Simple” has the inconvenience of being contrasted with “complex” in the Kolmogorov complexity sense.)

I use “problem set” and “domain” interchangeably; they refer to the same thing.


Why Does This Matter?

Why should you care if general intelligence is compact or not? How is it (supposedly) relevant for the development of artificial intelligence?

As best as I can tell — I’ll briefly explain why later in this essay — the compactness (or not) of general intelligence may largely determine:

  • Whether algorithmic/​architectural innovation can lead to a fast takeoff

    • This bears on the feasibility of fast takeoff via:

      • Designing successor agents

      • Recursive self-improvement

      • Other positive feedback loops in algorithmic/​architectural innovation

        • E.g., AIs contributing to AI research

    • More generally, it constrains takeoff by bounding marginal returns to algorithmic/​architectural innovation

    • Takeoff via scale will also be constrained as a second order effect

      • Scaling the size of cognitive engines

        • Examples of “scaling the size of cognitive engines”:

          • Brains with more synaptic connections or more neurons

          • ML models with more parameters/​hyperparameters

      • Scaling the available computational resources invested in AI systems:

        • Training compute

        • Inference compute

        • Training data

        • Inference data

        • Available memory

        • Etc.

  • What is possible in the limits of arbitrarily high intelligence

    • How far humans are from said limits

  • What “strongly superhuman intelligence” looks like

    • What cognitive capabilities would a strongly superhuman AI possess?

  • Whether “strongly superhuman intelligence” is economically viable

    • How do the relevant marginal returns behave?

In summary, it’s crucial to determining the dynamics of AI takeoff.

In the remainder of this post, I’d like to present two distinct models of general intelligence, consider the implications of the two models on AI development/​outcomes, and later speculate on which model reality most closely resembles.


Two Models of General Intelligence

This is an oversimplification, but to help gesture at what I’m talking about, I’d like to consider two distinct ways in which general intelligence might plausibly manifest. Our reality may not exactly match either of these models (it probably will not), but I expect that it looks a lot more like one model than the other.

(In general, I expect the two examples I’ll consider will serve as foci around which models of general intelligence might cluster.

Alternatively, you might consider general intelligence to manifest on a linear spectrum, and the two models I describe below serve as focal points at the extremes of that spectrum.

I believe that a distribution of ways in which general intelligence might manifest would be bimodal around the foci I’ll consider.)

Compact General Intelligence Hypothesis (CGIH)

There exists a class of non-compositional optimisation algorithms that are universal optimisers for the domains that manifest in the real world (these algorithms need not be universal for arbitrary domains that are irrelevant for real world problems).

(To be clear, when I claim they are universal optimisers, there’s an implicit assumption that they are efficient at optimising. Random search would get the right answer eventually for every search problem with a finite search space, but it’s a lot less efficient than, say, binary search for ordered search spaces).
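As a throwaway illustration of that efficiency gap (toy code of my own, nothing here is load-bearing for the argument):

```python
import random

def random_search(xs, target, rng):
    """Query random indices until the target is found; return the query count."""
    queries = 0
    while True:
        queries += 1
        if xs[rng.randrange(len(xs))] == target:
            return queries

def binary_search(xs, target):
    """Exploit the ordering of xs; return the query count."""
    lo, hi, queries = 0, len(xs) - 1, 0
    while lo <= hi:
        queries += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return queries
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1

rng = random.Random(0)
xs = list(range(100_000))   # an ordered search space
target = rng.choice(xs)
print("binary search queries:", binary_search(xs, target))       # at most ~17 (log2 of 100,000)
print("random search queries:", random_search(xs, target, rng))  # ~100,000 in expectation
```

Both procedures are “universal” in the weak sense of eventually finding the target; only one of them is efficient, and only because it exploits the structure of the search space.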


Alternatively, the generally intelligent algorithms optimise better for domains the more likely they are to manifest in reality/​the more useful they are for influencing reality.

Alternatively, they are universal for many/​most important problems.

Alternatively, they are universal for common problems.

(There are many ways of framing what “compact general intelligence” looks like; I hope that the various formulations above are gesturing at the notion of “universality” I have in mind).


General intelligence is implemented by algorithms in the aforementioned class. Architectural/​algorithmic innovation of generally intelligent agents looks like optimisation over this class (finding more efficient/​more performant/​simpler/​”better” [according to the criterion of interest] algorithms [or implementations thereof] within this class).

Ensemble General Intelligence Hypothesis (EGIH)

Non-compositional optimisation algorithms are either incredibly inefficient or not universal in the sense that is important to us. Alternatively, no such algorithms exist.

Instead, efficient cross domain optimisation functions by gluing together many narrow optimisers for the problem sets/​domains. That is, general optimisers are compositions of narrow optimisers.

(This is not necessarily a one-to-one mapping; some narrow optimisers may apply to more than one problem set, and some problem sets may be tackled by multiple narrow optimisers).

A general optimiser could also dynamically generate narrow optimisers on the fly for the problem sets it’s presented with.

A general intelligence might be described as an algorithm for selecting (a) narrow optimiser(s) to apply to a given problem set (given examples from said set).


General intelligence is implemented by algorithms that orchestrate, generate, and synthesise these narrow optimisers. Architectural/​algorithmic innovation of generally intelligent agents looks like meta optimisation over the class of narrow optimisers:

  • Better procedures for selecting/​generating narrow optimisers given examples of a problem set

  • Better procedures for synthesising results of narrow optimisers for a given problem

  • Better procedures for coordination among narrow optimisers

  • Etc.

Note

When I say “select” above, I’m imagining that there exists a (potentially infinite) set of all possible narrow optimisers a general intelligence might generate/​select from, and there exists a function mapping problem sets (given examples of said set) to tuples/​subsets of narrow optimisers (perhaps with rules for how to combine/​synthesise them).

However, this is just a sketch of how one might formalise a model of ensemble general intelligence. I do not intend to imply that any such representation of all possible narrow optimisers is stored internally in the agent, nor that the agent implements (something analogous to) a lookup table.

I use “selection” and “generation” somewhat interchangeably. In practice, I imagine that ensemble general intelligence will function via generation, but the mechanics of selection are easier to reason about/formally specify.
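As a cartoon of the selection framing above (every name here is hypothetical; this is a way of picturing the mapping from problem sets to narrow optimisers, not a proposed architecture):

```python
from typing import Any, Callable, Sequence

# Hypothetical types, purely to picture the "selection" framing above.
Problem = Any                                   # an example drawn from some problem set/domain
NarrowOptimiser = Callable[[Problem], Any]      # optimises within a single domain

def ensemble_general_intelligence(
    examples: Sequence[Problem],
    select: Callable[[Sequence[Problem]], Sequence[NarrowOptimiser]],
    synthesise: Callable[[Sequence[Any]], Any],
) -> Callable[[Problem], Any]:
    """EGIH cartoon: map examples of a problem set to (a tuple of) narrow
    optimisers, then combine their outputs. The "general" part lives entirely
    in `select` and `synthesise`, not in any single universal optimiser."""
    chosen = select(examples)        # could equally be generation on the fly
    def solve(problem: Problem) -> Any:
        return synthesise([optimiser(problem) for optimiser in chosen])
    return solve
```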


Implications of the Models

Which model of general intelligence our reality most closely resembles is important for reasoning about what is possible for advanced cognitive capabilities. If you want to reason about what “strongly superhuman intelligence” entails, you must make assumptions about what model of general intelligence you are dealing with.

Let’s consider what the world looks like in either of the two models as applied to e.g., prediction:

  1. CGIH: there are compact universal algorithms for predicting stimuli in the real world.

    • Becoming better at prediction in one domain reliably transfers across several/​many/​all domains.

      • This could also be reframed as “there is only one/​a few domain(s) under consideration when improving predictive accuracy”

  2. EGIH: there are only narrow algorithms good at predicting stimuli in distinct domains.

    • Becoming a good predictor in one domain doesn’t necessarily transfer to other domains.

(Considerations like the above could be applied to the other cognitive abilities and aggregations of them).

Results

It should be immediately apparent that the curve of real-world capability returns to increased cognitive capability is much steeper in CGIH worlds than in EGIH worlds.

Marginal returns to cognitive investment would also exhibit a much steeper curve under CGIH than under EGIH. As a result, takeoff under EGIH would probably be considerably slower than takeoff under CGIH.

(The above is perhaps the main finding of this post — the reason I began writing it in the first place).


Details of the results depend on posts I’ve not yet published, so the rest of this section will be light on demonstration, and just aim to explain the high-level picture (reproducing my investigations on the relevant matters is beyond the scope of this post).

EGIH

Across many domains, the marginal returns to improving predictive accuracy diminish at an exponential rate (see the “Caveats and Clarifications” subsection below for where this does and doesn’t apply).


Without any ability to improve predictive accuracy in many domains at once, marginal returns to cognitive capability [as leveraged via more accurate predictions] must also be sharply diminishing (the sum to infinity of an exponentially decaying sequence converges, so even if predictive accuracy as a function of cognitive capabilities grew superlinearly, the exponential decay would still result in sharply diminishing returns).
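To spell out the convergence point in that parenthetical (a minimal sketch, with the decay rate left as a free parameter rather than anything estimated):

```latex
% If the marginal return to the n-th increment of predictive accuracy decays
% geometrically, say a r^n with 0 < r < 1, cumulative returns stay bounded no
% matter how many increments are accumulated:
\sum_{n=0}^{\infty} a r^{n} = \frac{a}{1 - r} < \infty
% Even if cognitive capability c bought accuracy increments superlinearly,
% e.g. n(c) = c^{k} with k > 1, cumulative returns
% a \, \frac{1 - r^{\,c^{k} + 1}}{1 - r}
% would still approach the same finite bound, because exponential decay
% dominates polynomial growth.
```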

Consider an AI with the ability to make accurate predictions in every domain (or maximally accurate ones given available information and underlying entropy). Let’s call this ability “superprediction”.

EGIH suggests that “superprediction” is:

  1. Infeasible

    • There is no universal way to efficiently transfer improvements in one domain to arbitrary other domains or to novel domains.

    • The only way to improve predictive accuracy in a target domain is to learn that domain.

      • Learning that domain wouldn’t make you reliably better at other domains.

    • If one seeks to become exceptional at prediction across all domains, they would have to learn all of those domains.

      • Contrast the CGIH world in which they’d have to learn only a single domain and could extrapolate somewhat to arbitrary others from there.

  2. Economically unrewarding

    • Marginal returns to improved predictive accuracy diminish at an exponential rate.

      • Again, see the “Caveats and Clarifications” subsection below for where this does and doesn’t apply

    • There is no/​limited possibility to transfer improvements in prediction across domains.

      • Marginal returns to investment in cognitive capabilities leveraged via improved predictions are sharply diminishing.

    • A rational agency may be disincentivised from investing economic resources in improving cognitive capabilities, as there are better returns to be found elsewhere.

      • E.g. marginal returns from acquiring more energy do not appear to diminish so harshly

In such a world, not only would we expect takeoff to be (excruciatingly) slow, but strongly superhuman intelligence might simply not manifest anytime soon (if ever). Not because it is impossible, but because the marginal returns to real world capabilities from increased cognitive capabilities diminish at too harsh a rate to justify investment in cognitive capabilities.

Broader Findings

A result that is analogous to the above should hold for other narrow cognitive skills and the aggregate of an agent’s cognitive capabilities (albeit the behaviour of marginal returns in a narrow domain may be markedly different).

Improved performance in a single domain has limited transferability to other domains. To become competent in a new domain, in most cases the agent will have to learn that new domain.

I expect that: “the marginal returns to cognitive skill X in general are upper bounded by the marginal returns to X in a specific domain”, i.e. there is limited opportunity for returns to cognitive capabilities to compound on each other.

This bounds the marginal returns to cognitive investment (chiefly via algorithmic/​architectural innovation but perhaps also via scale).

Caveats and Clarifications

The exponential decay of marginal returns on predictive accuracy holds for domains where predictive accuracy is leveraged by actions analogous to betting on the odds implied by credences (e.g. by selling insurance policies).

I investigated such an operationalisation because I was looking for a method to turn subjective credence in a proposition [assuming the agent is well calibrated] into money [money is an excellent measure of real-world capabilities for reasons that I shall not cover here]. This was an attempt to measure real-world capability returns on increased predictive accuracy.
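For concreteness, here is a stripped-down toy of that credence-to-money conversion (made-up numbers and a made-up function name; it illustrates only the operationalisation, not the diminishing-returns analysis itself):

```python
def expected_profit_per_unit_stake(credence: float, market_price: float,
                                   true_p: float) -> float:
    """Bet one unit of stake on a binary contract that pays 1 if the event
    happens and trades at `market_price` (the odds implied by the
    counterparty's credence). Buy if our credence exceeds the price,
    otherwise sell; evaluate expected profit under the true probability."""
    if credence > market_price:
        return true_p - market_price      # expected value of buying
    return market_price - true_p          # expected value of selling

# A well-calibrated agent (credence == true probability) profits in expectation
# from a mispriced contract:
print(expected_profit_per_unit_stake(credence=0.70, market_price=0.60, true_p=0.70))  # +0.10
# A poorly calibrated agent can bet the wrong way and lose in expectation:
print(expected_profit_per_unit_stake(credence=0.55, market_price=0.60, true_p=0.70))  # -0.10
```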

This is not a fully general result. There are some domains in which returns to predictive accuracy may behave more gracefully.

In multiagent scenarios with heavy-tailed outcomes (e.g. winner-takes-most or winner-takes-all dynamics), increased predictive accuracy could have sharply rising marginal returns across an interval of interest.

(In general, in multiagent scenarios, returns on a particular cognitive capability cannot be assessed without knowing the distribution of that capability among the other participant agents. Depending on the particular distribution and the nature of the “game” [in the game theoretic sense] under consideration, marginal returns on cognitive capability may exhibit various behaviours across different intervals).

For this and other reasons, the above result is not generally applicable.

Furthermore, the entire analysis is an oversimplification. Predictions are just one aspect of cognitive capabilities, and returns to other aspects do not necessarily diminish at an exponential rate (or necessarily diminish at all).

CGIH

On the other hand, if improvements in predictive accuracy reliably transferred across domains, then compounding returns to real world capability from increased cognitive capability become probable.

An ability like “superprediction” would become not only feasible, but economically rewarding. It would be possible to improve predictive accuracy across all domains via investment/innovation in only one (or just a few) domain(s).

And this would transfer to entirely novel domains.

In general, superlative forms of all/most other cognitive skills would be feasible, as there’s a “compact” algorithm governing performance on those skills across all/most domains, so one need only improve said universal algorithm to improve performance everywhere.

Conclusions of “Implications of the Models”

Which model of general intelligence our reality most closely resembles may mostly determine the returns to investment in cognitive capability.

The function governing marginal returns (see the “Clarifications” subsection below) under CGIH dominates the function governing marginal returns under EGIH.

(There are ways to rigorously state this via asymptotic analysis, but at this stage, such formalisms would be premature. The gist of what I’m pointing out is that the former function grows “much faster” than the latter function. E.g.:

As x → ∞, the gap between the two functions grows ever wider [also tending to ∞].

The asymptotic differences in the curves for the relevant marginal returns would also apply to models of general intelligence that fall somewhere on the spectrum between the two models. Models of general intelligence that are “closer” to CGIH would generally dominate models that are “closer” to EGIH.)
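To gesture at what “dominates” would mean once formalised, here is a purely hypothetical pair of curves; the functional forms are placeholders I have made up for illustration, not claims about the actual returns:

```latex
% Placeholder curves for marginal returns to cognitive investment x:
f_{\mathrm{CGIH}}(x) = x^{2}, \qquad f_{\mathrm{EGIH}}(x) = \log(1 + x)
% "Dominates" would then cash out as:
\lim_{x \to \infty} \frac{f_{\mathrm{CGIH}}(x)}{f_{\mathrm{EGIH}}(x)} = \infty
```

Any pair of curves with this limiting behaviour would produce the qualitative gap described above.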


There are several implications of the above:

  • Achievable optimum of cognitive capabilities in a given time frame under CGIH would be considerably higher than under EGIH

    • The gap grows the longer the time frame is

      • (Not necessarily proportional to the length of the time frame due to the relationships between the relevant functions)

  • Economic investment in cognitive amplification is considerably more attractive under CGIH than under EGIH

    • Sharp differences seem likely

      • It may be the case that cognitive amplification to strongly superhuman level is economically attractive under CGIH but not under EGIH.

        • I.e. strongly superhuman intelligences may simply not manifest in EGIH worlds because it’s not an attractive use of economic resources.

      • This will apply to some level of cognitive ability.

        • There is some level of cognitive capabilities that will never be realised in EGIH worlds because it’s economically prohibitive.

    • The difference in the economic attractiveness of cognitive amplification under the two models would further exacerbate the difference in takeoff dynamics by bounding the economic resources invested in cognitive amplification.

      • Less human capital allocated to AI research and development.

      • Less money spent purchasing computational resources to scale up AI models.

  • Takeoff under CGIH would be considerably faster than takeoff under EGIH

    • It’s hard to quantify what “considerably faster” means at this stage (we lack formal specifications of the functions governing the relevant marginal returns), but I hope the idea of one function growing much faster than the other helps gesture at it.


As a result, whether we can have a “fast” takeoff at all — whether this is possible in principle — depends chiefly on what model of general intelligence our reality manifests.

Clarifications

The “marginal returns” mentioned earlier include:

  • Marginal returns to cognitive capabilities from cognitive investment

    • Via algorithmic innovation

    • Via architectural innovation

    • Via larger cognitive engines

      • ML models with more parameters/​hyperparameters

      • Brains with more synaptic connections or neurons

  • Marginal returns to real world capabilities from cognitive capabilities

    • How much more capable in the real world does becoming k times smarter make you?

    • How much more real-world capability does a linear increase in intelligence translate to?

  • Marginal (economic) returns from investment in cognitive capabilities

    • If you invest e.g. $1,000 worth of resources in making a system smarter, how much more would you get back from the system within a given horizon?

    • Alternatively, what is the difference between the net present value of an AI system as it is now and its value if you were to invest e.g. $1,000 extra in amplifying its cognitive capabilities?

These different marginal returns will have different functions describing them, but CGIH functions should grow much faster than their EGIH counterparts.


Interlude on Epistemic Status

Which model of general intelligence our reality most closely resembles is what I’ll ponder for the remainder of this post. Though be forewarned, said ponderings are the main reason I’m unconfident in this post.

I’m very unsure of the details of general intelligence in our reality, and of the considerations I highlighted to speculate on it.


General Intelligence in Humans

Our brain is an ensemble of some inherited and some dynamically generated (via neuroplasticity) narrow optimisers.

Inherited Narrow Optimisers

A non-exhaustive list of specialised neural machinery we inherit:

  • Visual cortex

    • Dedicated circuits for:

      • Face recognition

      • Object recognition

      • Place recognition

      • Movement recognition

  • Motor cortex

    • Movement

  • Temporal lobe

    • Language comprehension

  • Wernicke’s area

    • Speech comprehension

  • Broca’s area

    • Speech

  • Auditory cortex

  • Somatosensory cortex

  • Olfactory cortex

Thoughts on Narrow Perception

Perceptual abilities are quite old in the evolutionary history of central nervous systems. Compared to novel skills like symbolic and abstract reasoning, perceptual machinery has been optimised and refined a lot more. That is, I would expect our perceptual machinery to be a lot closer to optimal (given the relevant constraints) than our machinery for higher reasoning.

As such, I think the nature of perception in mammals may be somewhat informative about implementations of perception in our universe.

The specialisations of visual systems for image recognition strike me as particularly compelling evidence against general optimisers in humans. We don’t have a general optimiser that can do arbitrary image recognition. There’s a particular region of the visual cortex involved in face perception, and if that region is damaged (in adults: children are much more adaptable), people are no longer able to distinguish faces. They generally retain their ability to discriminate between objects, but not faces. The name for this defect (in both its congenital and acquired forms) is “prosopagnosia”.

The machinery for general object recognition cannot be applied to successfully discriminate between faces.

There is an (exceedingly rare) mirror defect that impairs the ability to discriminate or recognise objects but leaves facial recognition ability intact.

But the narrowness is even more specific than just specialised circuits for face recognition, object recognition, image recognition, etc. We are specialised to recognise certain kinds of faces by a phenomenon called “perceptual narrowing”.

6-month-old human babies are roughly as good at distinguishing monkey faces as they are at distinguishing human faces. By the time they’re 9 months old, they are more selective towards human faces (they can better discriminate human faces than monkey ones).

(Sourced from this lecture).

It’s not just human faces vs. monkeys either. People who grow up only around faces from a particular race have their face recognition machinery narrow to that race. They lose their ability to adequately discriminate between faces of other races.

From the Wikipedia article:

Most of the research done to date in the area of perceptual narrowing involves facial processing studies conducted with infants. Using a preferential looking procedure in cross racial studies, Caucasian infants were tested on their ability to distinguish two faces from four different racial groups. Facial prompts were presented from their own racial group, as well as, African, Asian, and Middle Eastern. At three months of age, infants were able to show recognition for familiar faces from all racial groups, but by six months, a pattern was beginning to emerge where the infants could only recognize faces from the Caucasian or Chinese groups—groups they had more familiarity with. At nine months, recognition took place only in the own-race group. These cross race studies provide strong evidence that children do start out with cross racial recognition abilities but as they age, they quickly begin to organize the data and select the stimuli that is most familiar to them, typically own-race faces


This result is kind of striking — it’s not a phenomenon that I would have expected before learning about it. If our machinery for just facial recognition — already a narrow task — was fully general with respect to faces, we wouldn’t expect to see narrowing to a particular race.

In general, if our machinery for narrow perception was fully general with respect to that domain, we wouldn’t see any sort of perceptual narrowing. The phenomenon of perceptual narrowing seems to me like a strong indictment against fully general algorithms for perception.

Caveats and Clarifications

The visual cortex of people who were born blind is repurposed to perform other perceptual tasks such as reading braille or hearing words. It is often said that the only reason our visual cortex does vision is that it’s connected to the optic nerve; if the optic nerve were connected elsewhere, whatever region it connected to would become the visual cortex (experiments in infant monkeys have apparently validated this).

This is suggestive of flexibility in the brain and may be evidence for universal learning capabilities (the specialised “organs” of our neocortex can learn to perform functions different from the ones they were specialised to over the course of our evolution).

Dynamically Generated Narrow Optimisers

Frequent practitioners of a task may develop dedicated neural circuits to support:

  • Playing chess

  • Playing Go

  • Playing Scrabble

  • Playing a piano

  • Strumming a guitar

  • Playing a saxophone

  • Typing

  • Writing

  • Etc.

This is the phenomenon of neuroplasticity: the brain rewires itself to adapt to novel tasks. We are much better at this in childhood but retain the ability well into adulthood. It appears to be how we’re so good at learning new tasks and operating in novel domains.

General Machinery

I’m guessing that we probably do have some general meta-machinery as a higher layer (for stuff like abstraction, planning, learning new tasks/​rewiring our neural circuits, etc.; other cognitive skills that are useful in metacognition).

But it seems like we fundamentally learn/become good at new tasks by developing specialised neural circuits to perform those tasks, not by leveraging a preexisting general optimiser.

(This seems to me an especially significant distinction).


We already self-modify our cognitive engine (just rarely in a conscious manner), and our ability to do general intelligence at all is strongly dependent on our self-modification ability.

Our general optimiser is just a system/​procedure for dynamically generating narrow optimisers to fit individual tasks.

Conclusions of “General Intelligence in Humans”

It seems that general intelligence in humans more closely resembles EGIH.


General Intelligence and No Free Lunch Theorems

One reason to be strongly sceptical of CGIH comes from the no free lunch (NFL) theorems in search and optimisation:

In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method.

...

It does not apply to the case where the search space has underlying structure (e.g., is a differentiable function) that can be exploited more efficiently (e.g., Newton’s method in optimization) than random search or even has closed-form solutions (e.g., the extrema of a quadratic polynomial) that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical.

...

In formal terms, there is no free lunch when the probability distribution on problem instances is such that all problem solvers have identically distributed results. In the case of search, a problem instance is an objective function, and a result is a sequence of values obtained in evaluation of candidate solutions in the domain of the function. For typical interpretations of results, search is an optimization process. There is no free lunch in search if and only if the distribution on objective functions is invariant under permutation of the space of candidate solutions.[5][6][7] This condition does not hold precisely in practice,[6] but an “(almost) no free lunch” theorem suggests that it holds approximately.[8]

If we are being loose, we might summarise the theorem as: “all optimisation algorithms perform roughly the same when averaged over all possible objective functions”.
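To make the “averaged over all possible objective functions” point concrete, here is a small toy of my own (not drawn from the NFL literature): a search routine that assumes unimodal structure finds the optimum of a structured objective almost immediately, but has no systematic edge over random search on a shuffled lookup table, given the same evaluation budget.

```python
import random

def best_by_random_search(f, lo, hi, budget, rng):
    """Best value found by evaluating `budget` uniformly random points."""
    return max(f(rng.randint(lo, hi)) for _ in range(budget))

def best_by_ternary_search(f, lo, hi):
    """Best value seen by ternary search, which assumes f is unimodal,
    i.e. that the search space has exploitable structure.
    Returns (best value seen, number of evaluations used)."""
    best, evals = float("-inf"), 0
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        f1, f2 = f(m1), f(m2)
        best, evals = max(best, f1, f2), evals + 2
        if f1 < f2:
            lo = m1 + 1
        else:
            hi = m2 - 1
    for x in range(lo, hi + 1):
        best, evals = max(best, f(x)), evals + 1
    return best, evals

rng = random.Random(0)
n = 10_000

structured = lambda x: -abs(x - 7_301)   # unimodal: structure to exploit, maximum 0
table = list(range(n))
rng.shuffle(table)
unstructured = lambda x: -table[x]       # shuffled lookup table: nothing to exploit

for name, f in [("structured", structured), ("unstructured", unstructured)]:
    ternary_best, evals = best_by_ternary_search(f, 0, n - 1)
    random_best = best_by_random_search(f, 0, n - 1, evals, rng)
    print(f"{name}: ternary search {ternary_best}, random search {random_best} "
          f"(budget: {evals} evaluations each)")
```

On the structured objective, the structure-exploiting routine finds the exact maximum; on the shuffled table, it does about as well as random search, which is the NFL result in miniature. Which of those two regimes the problems we actually care about resemble is the question the rest of this section worries about.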

A common rebuttal to the NFL theorems as applied to compact algorithms for general intelligence is that the search spaces of reality/the problems we care about are not maximum-entropy distributions; they have underlying structure that can be exploited. Yudkowsky makes this rebuttal quite elegantly in his reply to Francois Chollet on “The Impossibility of the Intelligence Explosion”.

Speculation on the Applicability of NFL Theorems in General

There are distinct levels of structure and regularity. For maximum entropy distributions, no single algorithm outperforms random chance when averaged across all objective functions on that distribution. For very structured distributions (e.g., distributions for which closed form solutions exist), a single (compact) algorithm may perform optimally for most/​all objective functions on that distribution.

It seems to me that you can talk about how much exploitable structure/​regularity there is in a distribution, i.e. a degree to which optimisation on that distribution is constrained by NFL theorems.

Given that a distribution has some exploitable structure, I’d expect that exploitability (insomuch as we can coherently define it) is positively correlated with the breadth of applicability of the most general optimisation algorithms (defined on that distribution).

Thus:

  • The more exploitable a distribution is, the more closely general intelligence for that distribution will resemble CGIH rather than EGIH.

  • The less exploitable a distribution is, the more closely general intelligence for that distribution will resemble EGIH rather than CGIH.

Speculation on the Applicability of NFL Theorems to Reality

The underlying structure/​regularity of reality is often posited as the reason humans can function as efficient cross domain optimisers in the first place. However, while we do in fact function as efficient cross domain optimisers, we do not do so via compact universal algorithms.

It seems to me that the ensemble-like nature of general intelligence in humans suggests that reality is perhaps not so exploitable as for us to totally escape the No Free Lunch theorems.

The more NFL theorems were a practical constraint, the more I’d expect general intelligence to look like an ensemble of narrow optimisers as opposed to a compact universal optimiser.


Insomuch as we have an example of general intelligence in our reality, it’s not a compact implementation of it. This doesn’t make general intelligence in our reality impossible — even in worlds where CGIH were true, ensemble intelligences would still be possible — but it is evidence in favour of EGIH over CGIH. We’d see general intelligence manifest as ensembles more in worlds where EGIH was true than in worlds where CGIH was.

(Consider that in worlds where CGIH was true, ensemble-like implementations of general intelligence would not be particularly efficient. So, insomuch as you include efficiency as a consideration in your conception of general intelligence, the central examples of general intelligences would be compact optimisers.)

I think the question of how you update to a particular hypothesis about general intelligence given the nature of general intelligence in humans depends a lot on your priors about hominid evolution (and the evolution of brains more generally), how powerful evolution is as an optimisation process, whether we’re stuck in/​near a local optimum, etc.

It’s possible that an ensemble-like implementation of general intelligence evolved further back in our evolutionary history, and the algorithmic/​architectural innovation along the hominid line was just improving the ensemble algorithms/​architecture. Perhaps, there was simply no way to transition from an ensemble architecture to a compact architecture. This doesn’t seem implausible given the way evolution works and what’s required for complex interdependent mutations to acquire fixation in a population. It’s not necessarily the case that evolution would have produced a compact architecture if one were attainable. Perhaps, the human brain would have been one, had evolution simply branched down a different path.


It’s not readily apparent to me that there’s an obvious conclusion to reach from this data.

Admittedly, I’m somewhat sceptical that the form of general intelligence that hominid evolution manifested was just sheer happenstance. Ensemble-like general intelligence in humans is mostly making me update towards the EGIH world.


Overall Conclusions

It seems to me that there is no compact general optimiser in humans.

Perhaps, none exist in our reality.


Next Steps

This section is mostly intended as a note for future me. That said, anyone else who wants to further this inquiry is welcome to consult it.

Commentary on the items listed and/​or feedback on items you think should be included will be greatly appreciated.

Research

Stuff I’d like to learn about to clarify my thinking on the compactness of general intelligence:

  • Transfer learning in humans (and animals)

    • How well do learned cognitive skills generalise across domains?

    • How tightly linked do the domains need to be to see robust generalisations?

    • When can humans/​animals display zero/​one/​few shot learning?

  • Transfer learning in ML models

    • Same questions as for humans and animals

  • Mathematical optimisation and No Free Lunch Theorems

    • How well do my intuitions of exploitability and regularity match the extant literature?

    • What determines how exploitable a given distribution is?

    • What determines how learnable it is?

  • Drexler’s Comprehensive AI Services

    • This is possibly a sketch of what the future trajectory of AI development looks like given EGIH-like models.

  • Steven Byrnes’ sequence on brain-like AGI safety

    • The alignment work isn’t relevant for this agenda, but it may be a comprehensive compilation of LW’s knowledge on human cognition.

  • The human brain and cognition

    • Theories of how the brain works

      • Predictive processing

      • Jeff Hawkins’ Thousand Brains Theory

      • Others

    • Neuroplasticity

    • Memory

      • How does it work?

      • What role does it play in human cognition?

    • Synaptic pruning

      • What function does it play in learning/​knowledge formation?

    • Learning (very broadly)

      • How does learning work in humans and animals?

      • Does the brain implement (something analogous to) a universal learning algorithm?

    • Development of the brain from infancy through childhood

      • Development of cognitive skills in feral children

      • Development of cognitive organs in people born with sensory impairment

        • What happens to the brain areas traditionally specialised for the defective sense?

      • Development of cognitive organs in people who acquire sensory impairment

        • What happens to the brain areas traditionally specialised for the defective sense?

    • Neural implementations of concrete cognitive skills

      • Skills

        • Abstraction

        • Symbolic reasoning

        • Planning

        • Pattern recognition

        • Intuition

        • Inference

        • Concept synthesis

        • Imagination/​generation/​creativity

        • Linguistics

        • Arithmetic

      • Questions

        • How is a skill implemented?

          • What areas/regions of the brain are responsible?

        • Are the neural circuits underlying a given skill specialised to particular domains or can they be leveraged for new domains?

        • Which skills are specialised to particular domains, and which are more general?

        • How general are the most general skills?

Further Work

Stuff I might like to do in sequels to this post or other work that furthers this inquiry:

  • Investigate marginal returns to cognitive capabilities under the two models more

    • Marginal returns on population

      • How does adding more brains and having them collaborate improve cognitive capabilities?

    • Marginal returns on computational resources

      • Are there differences in how amenable the computations underlying cognitive capabilities are to parallelisation?

  • Formalise the two models of general intelligence

    • Formalise “mixtures” of these models

      • Other ways of specifying models of general intelligence that lie somewhere on the spectrum between these two models

      • Models where some skills (e.g., learning) are universal, whereas others (e.g., prediction) are narrow

      • Specify a mixture that more accurately describes the human brain

    • Illustrate mixtures graphically/​diagrammatically.

  • Specify the differences in cognitive capabilities between the two models

    • Via e.g., asymptotic analysis of various cognitive tasks given the two models

    • Generalise to mixtures of these models

      • Specify for the human brain mixture

  • Specify the differences in marginal returns between the two models

    • Via asymptotic analysis

    • Generalise to mixtures of these models

      • Specify for the human brain mixture

  • Formalise the notion of “exploitability” of an environment

    • How exploitable is reality?