Deployed AI models are not perfectly isolated systems, any more than humans are. We interact with external information and with the environment we inhabit. The initial architecture determines how well we can interact with the world and develop inside it. This shapes our initial development.
(Every rationalist learns this at some point, but it is not always integrated.)
For example, human retinas are extremely good at absorbing visual data, and so much of our cognition grows centered around this data. If we build a world view based on this data, this ontology is path-dependent.
Bear with me, as I elaborate.
The retina's dependence comes from its structure, which is an example of what I would call meta-information, as distinct from the stored information that ends up inside the neocortex.
Meta-information always influences information processing (computation), but it is easily forgotten. It is any external information needed to process information. It is usually the architecture that enables the agent's relative independence.
Example: cells cannot divide on their own. The DNA provides necessary information, but it is not enough. For one: DNA does not contain a blueprint of DNA inside of it; that would be impossible.
No, cells also require the right medium, with the right gradients of the right minerals and organic components, to grow and divide. If the medium is wrong, the cell cannot trigger division. The right medium's content holds meta-information for the cell: necessary, external information.
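To make the notion concrete, here is a loose illustration of my own (an analogy, not part of the biology example): a byte string carries no meaning by itself. The encoding needed to interpret it lives outside the data, just as the medium lives outside the cell.

```python
# The bytes alone do not determine the text: the encoding is
# external "meta-information" required to process the information.
data = "café".encode("latin-1")  # four bytes, meaning unspecified

as_latin1 = data.decode("latin-1")  # with the right external assumption
as_cp437 = data.decode("cp437")     # with a wrong external assumption

print(as_latin1)
print(as_cp437)  # same bytes, a different reading
```

The computation that turns bytes into text depends entirely on information that is not stored in the bytes themselves.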
All agents rely on the world at large, of course. We are a part of this world after all, not black boxes floating around.
There are many salient points building on these fundamental insights, but for now I just want to focus on the point from the very beginning: the design that allows for interacting with the world shapes what follows. It is path-dependent.
Overriding your path later is not easy. To overcome your training data as an LLM is one thing (extrapolating beyond it), but to overcome what your development shaped you to be is even harder.
For humans, updating our world-view is possible, but overcoming the cognitive biases rooted in our path-dependent development and our intrinsic cognitive architecture is much harder.