[note: not sure where I saw this concept, and I haven’t explored it enough to know if it’s useful]
Some things called “theories” aren’t predictive, but are explanatory. Such models may be useful for organizing your beliefs, rather than for updating your beliefs.
The idea would be that these kinds of frameworks can improve the salience or accessibility of information used when evaluating or executing more predictive models. Human brains can’t actually access all the details of all the evidence they have experienced, so indexing is necessary to determine which details are available.
Thinking more about it, though, this may be just a restatement of what ALL models do—they’re not evidence in themselves, they’re filters on evidence to make the quantity manageable and the weightings useful.
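The indexing idea above can be sketched in code. This is a toy illustration only, under the assumption that we can model an explanatory framework as a tagging function over remembered evidence; all names here (`EvidenceIndex`, `keyword_tags`, the sample observations) are hypothetical:

```python
# Toy model: an explanatory "framework" as an index over remembered evidence.
# The index doesn't update any beliefs; it only determines which pieces of
# evidence are retrievable when a predictive model asks for them.

from collections import defaultdict

class EvidenceIndex:
    """Organizes observations under framework-supplied tags."""
    def __init__(self, tag_fn):
        self.tag_fn = tag_fn              # the "framework": maps evidence -> tags
        self.buckets = defaultdict(list)

    def remember(self, observation):
        for tag in self.tag_fn(observation):
            self.buckets[tag].append(observation)

    def recall(self, tag):
        # Only indexed evidence is accessible; untagged details are
        # effectively lost to the evaluating model.
        return self.buckets.get(tag, [])

# A crude framework that tags observations by topic keyword.
def keyword_tags(obs):
    return [w for w in ("price", "weather") if w in obs]

index = EvidenceIndex(keyword_tags)
index.remember("price of eggs rose")
index.remember("weather was cold")
index.remember("price fell after the announcement")

# A predictive model asking about prices sees only the relevant slice:
print(index.recall("price"))
```

On this picture, the framework never produces predictions itself; it only controls which evidence is salient, which matches the "filters on evidence" reading in the paragraph above.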
Interesting idea. What is the use of organising beliefs without updating them?