Most people in the rationality community are more likely to generate correct conclusions than I am, and are in general better at making decisions. Why is that the case?
Because they have more training data, and are in general more competent than I am. They actually understand the substrate on which they make decisions, and what is likely to happen, and therefore have reason to trust themselves based on their past track record, while I do not. Is the solution therefore just “git gud”?
This sounds unsatisfactory: it compresses competence to a single nebulous attribute rather than recommending concrete steps. It is possible that there are in fact generalizable decision-making algorithms or heuristics that I am unaware of, which I could use to actually generate decisions with good outcomes.
It might then be that I simply haven’t had enough training. When was the last time I actually made a decision unguided by the epistemic-modesty-shaped thing I otherwise fall back on, since relying on my own thought processes is known to have bad outcomes and mess things up?
In which case the solution might be to train in a low-stakes environment, where I can mess up without consequence and learn from it. Problem: such environments are hard to construct in a way that carries over cross-domain. If I trust my decision process about which tech to buy in Endless Legend, that says nothing about my decision process about what to do when I graduate.
Endless Legend is simple, and the world is complicated. I can therefore fully understand “this is the best tech to research: I need to convert a lot of villages, so I need influence to spend on that, and this tech generates a lot of it.” Figuring out what path to take such that the world benefits the most, on the other hand, requires understanding what the world needs, an unsolved problem in itself, as well as the various effects each path is likely to have. Or, even on the small scale: where to put a particular object in the REACH that doesn’t seem to have an obvious location is a question I would go to another host for.
Both of these are problems without a simple, understandable solution that makes everything fall into place; they do not take place in a universe I can trust my reasoning about, unlike, say, a Python script. And yet people seem to be relatively good at making decisions in hard-to-understand environments with many uncertainties (such as the actual world).
So the low-stakes environment to train in still has to be hard to understand, and the decisions must be things that affect this environment in non-abstractable ways… or do they? It’s possible that what training data gives a person is a good way of abstracting things into understandable chunks that can be trusted to work on their own. I suppose this is what we call “concepts” or “patterns”; it took me long enough to arrive at something so obvious.

So does this mean that I need a better toolbox of these, so that I can understand the world better and therefore make better decisions? That seems a daunting task: the world is complicated, and how can I abstract things away as non-lossily as possible, especially for patterns I cannot receive training data on, such as “what AGI (or something like it) will look like”? The question then becomes: how do I acquire a large enough toolbox of these that the world appears simpler and more understandable? (The answer is likely “slowly, and with a lot of revisions to perceived patterns as you get evidence against them or evidence clarifying what they look like.”)