There is a type of knowledge that seems hard to come by (especially for singletons): knowing what destroys you. Since all knowledge is just an imperfect map, there are some things you need to know a priori to avoid. The archetypal example is the in-built fear of snakes in humans and other primates. If we hadn't had this fear while it mattered, we would have experimented with snakes the same way we experiment with stones, twigs, etc., and generally gotten ourselves killed. In a social system you can see what destroys other things like you, but knowledge of what can kill you is still hard won.
If you don't have this type of knowledge you may step into an unsafe region, and no amount of processing power or correct use of your previous data will save you. Examples that might threaten singletons:
1) Physics experiments: the model says you should be okay, but you don't trust your model under these circumstances, and that distrust is the very reason to do the experiment.
2) Self-change: your model says the change will be an improvement, but the model is wrong. The change disables the system into a state it can't recover from, i.e. not an obvious error but something that renders it ineffectual.
3) Physical self-change: large-scale unexpected effects from feedback loops at different levels of analysis, e.g. something like the swinging/vibrating bridge problem, but deadly.
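The bridge example in 3) is about resonance: a small periodic input, harmless in any model that ignores the coupling between levels, grows without apparent bound when it happens to match the system's natural frequency. A minimal sketch of that dynamic (all parameters are illustrative, not from the text above) is a lightly damped oscillator under a weak periodic force:

```python
import math

def max_amplitude(drive_freq, steps=200_000, dt=0.001):
    """Peak displacement of a lightly damped oscillator driven at drive_freq.

    Integrates x'' + 2*zeta*omega*x' + omega^2 * x = F*cos(drive_freq * t)
    with semi-implicit Euler. Illustrative parameters only.
    """
    omega = 1.0   # natural frequency of the system
    zeta = 0.01   # light damping
    force = 0.1   # small driving force
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        a = force * math.cos(drive_freq * t) - 2 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

resonant = max_amplitude(1.0)  # driving at the natural frequency
off_res = max_amplitude(3.0)   # driving well away from it
print(resonant, off_res)
```

The same 0.1-unit push yields a displacement hundreds of times larger at resonance than off it. The danger described above is the analogue: a self-model that averages over the fast dynamics predicts a small, safe perturbation either way.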