DeepMind has made a general inductor (“Making sense of sensory input”)


> Our system [the Apperception Engine] is able to produce interpretable human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the unity conditions. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and impute (fill in the blanks of) missing sensory readings, in any combination.
>
> We tested the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine’s ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The Apperception Engine performs well in all these domains, significantly out-performing neural net baselines. We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.

If we were to take AIXI literally, we should be somewhat concerned: induction (the generation of predictive models from observation) appears to provide about half of general intelligence, with decision theory providing the rest. It also seems noteworthy that the models the Apperception Engine produces are reductive enough to be human-readable: analyzable, classifiable, and comprehensible enough to be intelligently worked with as components in an intellectual medium. That is to say, they may be amenable to a process of self-improvement informed by consciously applied principles and meta-knowledge, which could in turn be improved in similar ways. So we should probably pay attention to this sort of thing.