I think I might be spazzing out a bit (I haven’t been sleeping well), so let me try to get this straight:
The hypothesis is that a rat will spend more time investigating a new object than an old object—the old object will be ignored as familiar, the new object will be attended to as unfamiliar.
Therefore, if the rats recognize the old object, they will ignore it (time spent examining it will be less); if they do not recognize the old object, they will attend to it (time spent examining it will be more).
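The recognition logic above is usually summarized with a discrimination index. This is a common convention in novel-object-recognition studies generally, not necessarily the exact measure used in the paper under discussion; the sketch below is my own illustration.

```python
# Hypothetical sketch of the usual novel-object-recognition readout.
# t_new and t_old are seconds spent investigating each object; a
# positive discrimination index means the rat preferred the new
# object, i.e. it recognized the old one as familiar.

def discrimination_index(t_new: float, t_old: float) -> float:
    """(T_new - T_old) / (T_new + T_old); ranges from -1 to +1."""
    return (t_new - t_old) / (t_new + t_old)

# A rat that remembers the old object spends most of its time on the new one:
print(discrimination_index(30.0, 10.0))  # 0.5
# A rat with no memory of the old object explores both about equally:
print(discrimination_index(20.0, 20.0))  # 0.0
```

Note that the index moves whether it is the old-object time or the new-object time that changes, which is exactly the ambiguity being argued about below.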
...and here is where I get lost. You say that the above assumes that time spent examining new objects should be consistent—are you saying that variations in the new-object times imply confounding factors which corrupt the results?
That’s how I read it. The hypothesis is that rats spend a fairly consistent amount of time investigating new objects, and that their deviations from that consistent amount of time can be used to gauge whether they perceive an object as new or old. In practice, rats don’t spend a consistent amount of time investigating new objects—by whatever definition of ‘new object’ the test in question was using—so the concept ‘deviations from that consistent amount of time’ isn’t coherent and can’t be used.
It should work, if you have enough rats. And standard statistics should tell you whether you have enough rats. But it looks very suspicious in this case, even though they satisfied the statistical rules-of-thumb.
As a general rule of thumb, BTW, any time someone says “95% confidence”, you should interpret it as less than 95% confidence. The assumptions that go into those confidence calculations are never completely true; and the ways in which they are untrue usually make the justified confidence smaller, not larger.
But since we don’t know how much less than 95%, we just say “95%”, and hope the listener knows that doesn’t really mean 95%.
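The point about “95%” really meaning less than 95% can be made concrete with a toy simulation (my own construction, not anything from the paper): when every rat in a trial shares a hidden trial-level shift, the naive per-rat standard error never sees that shift, and the nominal 95% interval covers the truth far less often than advertised.

```python
# Toy demonstration that a nominal 95% CI under-covers when its
# independence assumption is violated.  Each simulated experiment adds
# a shared trial-level shift (e.g. something about that day's
# conditions) to every rat's score; the naive SE only reflects
# rat-to-rat noise.

import random
import statistics

random.seed(0)

def coverage(n_rats: int, n_experiments: int, trial_sd: float) -> float:
    """Fraction of naive 95% CIs that actually contain the true mean (0)."""
    hits = 0
    for _ in range(n_experiments):
        shift = random.gauss(0.0, trial_sd)          # shared by all rats
        data = [shift + random.gauss(0.0, 1.0) for _ in range(n_rats)]
        m = statistics.fmean(data)
        se = statistics.stdev(data) / n_rats ** 0.5  # assumes independence
        if abs(m) <= 1.96 * se:
            hits += 1
    return hits / n_experiments

print(coverage(20, 2000, 0.0))  # no shared shift: close to 0.95
print(coverage(20, 2000, 1.0))  # shared shift: far below 0.95
```

With the shared shift turned on, the “95%” interval covers the true mean only a minority of the time, which is the sense in which the justified confidence is smaller than the stated one.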
This could be a function of the fact that I have very little training in statistics and am trying to get by on common sense and raw intelligence, but it seems to me that ‘enough rats’ implies, among other things, enough rats to see a ‘fairly’ (or ‘statistically’) consistent amount of time spent investigating new objects, if the first part of the hypothesis as I stated it is true. Suppose the time rats spend investigating new objects is affected by how recently they investigated a different new object, or by some other variable that affects all the rats on a given trial, rather than being consistent, random, or driven by something that affects a random subset of rats independently of the trial. Then I don’t see how adding more rats will help: you’d just get a clearer picture of the fact that the time spent investigating new objects varies with some unconsidered variable that your test is allowing to affect the situation, which you’d then need to find and control for.
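The “adding more rats won’t help” intuition has a simple variance-decomposition form. Under a model I’m assuming for illustration (a shared trial effect plus independent per-rat noise), the variance of a trial’s mean is the trial-level variance plus the per-rat variance divided by n, so more rats shrink only the second term:

```python
# Sketch of why more rats don't fix a trial-level confound.  If every
# rat in a trial shares a disturbance with variance trial_var, the
# variance of the trial mean has an irreducible floor at trial_var.

def var_of_trial_mean(n_rats: int, rat_var: float, trial_var: float) -> float:
    """Variance of the per-trial mean under a shared trial effect."""
    return trial_var + rat_var / n_rats

for n in (10, 100, 10_000):
    print(n, var_of_trial_mean(n, rat_var=4.0, trial_var=1.0))
# The result approaches trial_var (1.0), not zero, as n grows.
```

The only way to get below that floor is to identify the trial-level variable and control for it, which is the commenter’s conclusion.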
That’s a good point. If baseline rat curiosity can suddenly drop by half, then the baseline differential between time spent exploring new and old objects could also suddenly change.
are you saying that variations in the new-object times imply confounding factors which corrupt the results?
Technically, yes. But phrasing it that way sounds like the test algorithm should include, “Check that the new object times are consistent”. That’s not how I detected the error. I said, “Remember that what we originally wanted to know is whether the old object times are different—and they aren’t.”
The data show the rats spending the same amount of time examining the old objects in all cases. The investigators concluded that the rats didn’t recognize the old objects in those cases where they spent less time than usual examining new objects. That interpretation requires believing that it’s more likely that the new-object times plot a strange but reliable function f(M), describing how curious rats are about new objects M minutes after being exposed to a different object, than that your experiment is messed up.
Note also the leftmost two points in figure 1B. These show that the control rats and the gene-therapy rats both spent the same amount of time investigating the old objects. So now, to continue with the interpretation that the new-object time is a good control, you have to believe that the gene therapy has both improved the rats’ object-recognition memory (ORM) and made them more inherently curious about objects shown to them 60 minutes after being shown some other object.
In other words, if the setup were good, we ought to see the old-object time increase, rather than the new-object time decrease.
That’s what I’d expect to see.