You say that the above assumes that time spent examining new objects should be consistent—are you saying that variations in the new-object times imply confounding factors which corrupt the results?
That’s how I read it. The hypothesis is that rats spend a fairly consistent amount of time investigating new objects, and that their deviations from that consistent amount of time can be used to gauge whether they perceive an object as new or old. In practice, rats don’t spend a consistent amount of time investigating new objects—by whatever definition of ‘new object’ the test in question was using—so the concept ‘deviations from that consistent amount of time’ isn’t coherent and can’t be used.
It should work, if you have enough rats. And standard statistics should tell you whether you have enough rats. But it looks very suspicious in this case, even though they satisfied the statistical rules of thumb.
As a general rule of thumb, BTW, any time someone says “95% confidence”, you should interpret it as less than 95% confidence. The assumptions that go into those confidence calculations are never completely true; and the ways in which they are untrue usually make the justified confidence smaller, not larger.
But since we don’t know how much less than 95%, we just say “95%”, and hope the listener knows that doesn’t really mean 95%.
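The claim that nominal 95% confidence usually overstates the justified confidence can be checked by simulation. Below is a minimal sketch (hypothetical numbers, Python standard library only, not from the study under discussion): we build standard 95% t-intervals for a mean while secretly violating the independence assumption, by giving every observation in a sample one shared hidden shift. The interval's actual coverage falls well below 95%.

```python
import random, statistics, math

random.seed(0)

def coverage(n_sims=2000, n=30, trial_sd=1.0, noise_sd=1.0):
    """Empirical coverage of a nominal 95% t-interval for a mean of 0.

    Each simulated sample shares one hidden shift (trial_sd), which
    violates the i.i.d. assumption behind the interval.
    """
    hits = 0
    t_crit = 2.045  # approximate two-sided 95% t critical value, df = 29
    for _ in range(n_sims):
        shift = random.gauss(0, trial_sd)  # shared hidden confounder
        xs = [shift + random.gauss(0, noise_sd) for _ in range(n)]
        m = statistics.mean(xs)
        se = statistics.stdev(xs) / math.sqrt(n)
        if abs(m) <= t_crit * se:  # does the interval cover the true mean 0?
            hits += 1
    return hits / n_sims

# With trial_sd = 0 the assumptions hold and coverage is near 0.95;
# with trial_sd = 1 the "95%" interval covers the truth far less often.
print(coverage(trial_sd=0.0), coverage(trial_sd=1.0))
```

The trick is that the shared shift moves the whole sample without inflating the within-sample standard deviation, so the interval stays narrow while the estimate wanders. This is one concrete way "the assumptions are never completely true" translates into less-than-advertised confidence.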
This could be a function of the fact that I have very little training in statistics and am trying to get by on common sense and raw intelligence, but it seems to me that ‘enough rats’ implies, among other things, enough rats to see a ‘fairly’ (or ‘statistically’) consistent amount of time spent investigating new objects, if the first part of the hypothesis as I stated it is true. Suppose instead that how much time the rats spend investigating new objects depends on how recently they investigated a different new object, or on some other variable that affects all the rats in a given trial—rather than being consistent, or random, or affected by something that hits a random subset of rats independently of the trial. Then I don’t see how adding more rats will help: you’d just get a clearer picture of the fact that the time spent investigating new objects varies with some unconsidered variable that your test is allowing to affect the situation, which you’d then need to find and control for.
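The point that more rats can't average away a trial-level variable can be made concrete with a small simulation (hypothetical numbers, stdlib Python; a sketch of the argument, not the actual experiment). Rat-level noise shrinks as you add rats, but a shared per-trial shift does not, so the spread of trial means stays roughly constant no matter how large each trial is:

```python
import random, statistics

random.seed(1)

def trial_mean(n_rats, trial_sd=1.0, rat_sd=1.0):
    # One hidden trial-level variable shifts every rat in the trial equally.
    shift = random.gauss(0, trial_sd)
    return statistics.mean(shift + random.gauss(0, rat_sd)
                           for _ in range(n_rats))

def spread(n_rats, n_trials=500):
    # Standard deviation of the trial means across many trials.
    return statistics.stdev(trial_mean(n_rats) for _ in range(n_trials))

# Going from 10 rats to 1000 rats per trial barely changes the spread:
# it falls toward trial_sd, not toward zero.
print(spread(10), spread(1000))
```

In other words, extra rats buy you a sharper estimate of each trial's mean, which just reveals more clearly that the means themselves are being moved around by the unconsidered variable—exactly the "clearer picture" described above.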
That’s a good point. If baseline rat curiosity can suddenly drop by half, then the baseline differential between time spent exploring new and old objects could also suddenly change.