1) You are assuming that your mind did or did not do certain things in those moments when it was quietly answering the question.
This is a fair criticism, and you’re right: I can’t say with definite certainty that these things were never considered at all. Still, if they were considered, they don’t seem to be reflected in the final output. If instead of saying “the thing is, it probably didn’t”, I said “the thing is, it probably didn’t, or if it did, it’s difficult to notice from the provided answer”, would you consider that acceptable?
2) You seem to be framing a lot of scenarios as if they were all instances of the same type of problem...
I think you might be somewhat misinterpreting me here. I didn’t say that substitution is necessarily a problem; I specifically said it probably works pretty well most of the time. Heck, I imagine that if I were building an AI, I would explicitly program something like a substitution heuristic into it, to be used most of the time, because difficult problems are genuinely difficult: they are computationally expensive, and they require information that isn’t usually at hand. A system that always tried to compute the exact answer for everything would never get anything done. Much better to usually employ some quick heuristic that tends to at least point in the right direction, and to spend more effort on a problem only if it seems to be important.
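To make that concrete, here’s a minimal sketch of the pattern I have in mind. Everything in it (the function names, the importance estimate, the threshold) is hypothetical; it just illustrates “cheap substitute by default, expensive computation only when the stakes are high”:

```python
import random

def quick_heuristic(question: str) -> str:
    """Cheap substitute: answer an easier, related question instead."""
    # Placeholder: a real system would run a fast approximation here.
    return f"rough answer to a simpler version of {question!r}"

def full_computation(question: str) -> str:
    """Expensive path: actually work the original question out."""
    # Placeholder: a real system would run a slow, exact computation here.
    return f"careful answer to {question!r}"

def estimated_importance(question: str) -> float:
    """How much does getting this exactly right matter? (0 to 1)"""
    # Placeholder: a real system would estimate the stakes from context.
    return random.random()

def answer(question: str, threshold: float = 0.8) -> str:
    """Use the cheap substitute by default; escalate only when it matters."""
    if estimated_importance(question) < threshold:
        return quick_heuristic(question)
    return full_computation(question)

print(answer("is this mushroom safe to eat?"))
```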
For that matter, you could say that a large part of science consists of a kind of substitution. Does System 1 actually ignore all those complicated considerations when producing its answer? Well, we can’t really answer that directly… but we can substitute the question with “does it seem to be ignoring them in certain kinds of experimental setups”, and then reflect upon what the answers to that question seem to tell us. This is part of the reason why I felt confident in saying that the brain probably never did take all the considerations into account: simplifying problems into easier ones is such an essential part of getting anything done at all that it would seem odd if the brain didn’t do it.
So I agree that there are many cases (including your chess example) where substitution isn’t actually a problem, but rather the optimal course of action. And I agree that there are also many cases of bias where the substitution frame isn’t the best one.
The reason why I nevertheless brought it up was that, if I were building my hypothetical AI, there’s still one thing I’d do differently from how the human brain seems to do it. A lot of the time, humans seem to be completely unaware of the fact that they are making a substitution, and treat the answer they get as the actual answer to the question they were asking. Like I mentioned in my other comment, I think the substitution principle is valuable because it gives us a rule of thumb for noticing when we might be mistaken, and might need to think about the matter a bit more before assigning our intuitive result complete confidence.
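Continuing the earlier sketch (and reusing its hypothetical stubs), the fix is simple in principle: have the system tag every answer with whether a substitution was made, so that downstream reasoning knows not to treat a heuristic answer as the real one:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    value: str
    substituted: bool  # did we answer a cheaper substitute question?

def tagged_answer(question: str, threshold: float = 0.8) -> Answer:
    """Like answer() above, but the substitution is made explicit."""
    if estimated_importance(question) < threshold:
        return Answer(quick_heuristic(question), substituted=True)
    return Answer(full_computation(question), substituted=False)

result = tagged_answer("is this a good long-term plan?")
if result.substituted:
    # Unlike the human case, the system knows its answer was a substitute,
    # and can withhold full confidence or think longer before acting.
    print("Heuristic answer, worth double-checking:", result.value)
```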