The basic problem is that, generically, if your model uses more free parameters than data points, it is mathematically trivial to fit your data set exactly, regardless of what the data are. Thus you've provided exactly zero Bayesian evidence that your model describes this particular phenomenon.
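A toy illustration of the point (this is nobody's actual model; the numbers are arbitrary): a polynomial with as many free coefficients as there are data points passes exactly through any data set, even pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(5.0)
y = rng.normal(size=5)   # five arbitrary "data points": pure noise

# A model with five free parameters: a degree-4 polynomial.
coeffs = np.polyfit(x, y, deg=4)
fit = np.polyval(coeffs, x)

# The fit is exact (up to floating point) no matter what y was,
# so the quality of the fit tells us nothing about the model.
assert np.allclose(fit, y)
```

Rerun this with any other five values of `y` and the assertion still passes, which is exactly why a perfect fit in this regime carries no evidential weight.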
I’m not sure I follow you. I didn’t get the impression that Marken’s model had more tunable parameters than there were data points under study, or that it actually was tunable in such a way as to create any desired result.
If every cognitive circuit is so complicated that you can’t make an observable prediction (about an individual in varying circumstances, or different people in the same circumstances, etc) without assuming more parameters than data points...
I don’t follow how this is the case. If I establish that a person is controlling for, say, “having a social life”, and I know that one of the sub-controlled perceptions is “being on Twitter”, then I can predict that if I interfere with their Twitter usage they’ll try to compensate in some way. I can also observe whether a person’s behavior matches their expressed priorities (a mismatch being akrasia) and attempt to directly identify the variables they’re controlling.
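The compensation prediction can be sketched as a minimal negative-feedback loop (hypothetical gain and reference values; an illustration of the control-of-perception idea, not anyone's published simulation): disturb the controlled quantity and the controller's output shifts to cancel the disturbance, so the perception ends up near the reference either way.

```python
# Minimal PCT-style control loop: the output integrates the error,
# so the perception tracks the reference even under a disturbance.
def run(disturbance, reference=10.0, gain=0.5, steps=200):
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance  # environment sums output and disturbance
        error = reference - perception
        output += gain * error             # integrate the error
    return perception, output

p1, o1 = run(disturbance=0.0)
p2, o2 = run(disturbance=5.0)

# Perception ends near the reference in both cases...
assert abs(p1 - 10.0) < 1e-6 and abs(p2 - 10.0) < 1e-6
# ...because the output changed by exactly enough to oppose the disturbance.
assert abs(o2 - (o1 - 5.0)) < 1e-6
```

The observable signature is in the outputs, not the perception: interfering with the controlled variable produces opposing behavior, which is what "they'll try to compensate" predicts.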
If at this point, you say that this is “obvious” and not supportive of PCT, then I must admit I’m still baffled as to what sort of result we should expect to be supportive of PCT.
For example, let’s consider various results that (ISTM) were anticipated to some extent by PCT. The Dunning-Kruger effect says that people who aren’t good at something don’t know whether they’re doing it well. PCT said, many years earlier AFAICT, that the ability to perceive a quality must precede the ability to consistently control that quality.
Which directly implies that “people who are good at something must have good perception of that thing”, and “people who are poor at perceiving something will have poor performance at it.”
That’s not quite D-K, of course, but it’s impressive for a couple of decades ahead of them. It also directly implies that the people who are best at something are more likely to be aware of their errors than anyone else, an observable phenomenon among high performers in almost any field.
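The "perception bounds control" claim is likewise easy to simulate (the noise levels are assumed for illustration, not data from the PCT literature): the same controller tracking the same reference does measurably worse when its perceptual signal is noisier.

```python
import random

def tracking_error(perceptual_noise, reference=10.0, gain=0.5, steps=2000, seed=1):
    """RMS deviation of the controlled quantity from the reference."""
    rng = random.Random(seed)
    output, total_sq_err = 0.0, 0.0
    for _ in range(steps):
        true_value = output                       # the quantity actually controlled
        perceived = true_value + rng.gauss(0.0, perceptual_noise)
        output += gain * (reference - perceived)  # controller sees only the noisy percept
        total_sq_err += (reference - true_value) ** 2
    return (total_sq_err / steps) ** 0.5

sharp = tracking_error(perceptual_noise=0.1)
blurry = tracking_error(perceptual_noise=2.0)
assert sharp < blurry   # worse perception implies worse control
```

Nothing else about the controller differs between the two runs; degrading only the perceptual input degrades performance, which is the direction of implication claimed above.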
I’m still quite able to revise my probability estimate upwards if presented with a legitimate experimental result, but at the moment PCT is down in the “don’t waste your time and risk your rationality” bin of fringe theories.
This baffles me, since AFAICT you previously agreed that it appears valid for “motor” functions, as opposed to “cognitive” ones.
I consider this boundary to be essentially meaningless myself, btw, since I find it almost impossible to think without some kind of “motor” movement taking place, even if it’s just my eyes flitting around, but more often, my hands and voice as well, even if it’s under my breath.
It’s also not evolutionarily sane to assume some sort of hard distinction between “cognitive” and “motor” activity, since the former had to evolve from some form of the latter.
In any event, the nice thing about PCT is that it is the most falsifiable psychological model imaginable, since we will sooner or later get hard results from neurobiology that confirm or refute it at successively higher levels of abstraction. As has previously been pointed out here, neuroscience has already uncovered four or five of PCT’s expected 9–12 hardware-distinct controller levels. (I don’t know how many of these were known about at the time of PCT’s formulation, alas.)
I’m not sure I follow you. I didn’t get the impression that Marken’s model had more tunable parameters than there were data points under study, or that it actually was tunable in such a way as to create any desired result.
In the section “Quantitative Validation”, under Table 1, it says (italics mine):
The model was fit to the data in Table 1 by adjusting *only* the speed parameter, s, for each prescription component control system… The results in Table 1 show that the distribution of error types produced by the model corresponds almost exactly to the empirical distribution of these rates. The values of s that produced these results were 0.000684, 0.000669, 0.000731 and 0.000738 for the Drug, Dosage, Route and Other component writing control systems, respectively.
As you vary each component’s speed parameter within the model, the fraction of errors from that component varies all the way from 0 to 1, roughly independently of the others. Thus for any empirical or made-up distribution of the four error types, Marken could have found values of his four parameters that made the model match the four data points; so despite his claims, the empirical data offer literally zero evidence in favor of his model. The same goes for his claim that his model predicts the overall error rate.
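To make the objection concrete, here is a schematic stand-in (this is not Marken's actual model; the error-rate function and the target rates are invented): with four effectively independent monotone knobs, any four-way error distribution whatsoever can be matched by tuning each knob separately, e.g. by bisection.

```python
import math

def error_fraction(s):
    # Stand-in for "error rate as a smooth, monotone function of the
    # speed parameter s" (an assumed shape, not Marken's equations).
    return 1.0 - math.exp(-1000.0 * s)

def solve_for(target, lo=0.0, hi=0.02, iters=60):
    # Bisection: find s such that error_fraction(s) equals the target.
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if error_fraction(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# ANY made-up distribution of the four error types can be matched.
targets = [0.31, 0.22, 0.29, 0.18]   # arbitrary invented rates summing to 1
params = [solve_for(t) for t in targets]
assert all(abs(error_fraction(s) - t) < 1e-9 for s, t in zip(params, targets))
```

Since this procedure succeeds for made-up targets just as well as for real data, a successful four-parameter fit to four rates cannot count as evidence for the model.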
I consider this boundary to be essentially meaningless myself, btw, since I find it almost impossible to think without some kind of “motor” movement taking place, even if it’s just my eyes flitting around, but more often, my hands and voice as well, even if it’s under my breath.
Or as Rodolfo Llinás puts it:
“… thinking may be nothing else but internalized movement.”
“So thinking is a premotor act.”
I’ll get to the rest of this later.