Hmm. If your replication attempt was good science, you could help the world by publishing it. If it wasn’t good science, you probably shouldn’t update on it very strongly.
I don’t know anyone at Singularity Institute who believes either of those statements.
Did you mean this seriously or is this a level?
EDIT: I was thinking of “good science” as the social process of science. By that standard, it’s well known on Less Wrong that lots of junk can be published, that many published results are bogus, and that lots of useful experiments you could meaningfully update on could not be published.
That’s why I thought you must be joking. It’s like if my friend were at a gay pride rally and said, “We’re just trying to undermine family values and destroy America.” I would laugh. It’s such a naive and backwards statement, given what everyone around him believes, that I could safely assume it was complete sarcasm (even though the sentence could be taken another way).
Also, Anna claims to believe your statements, so I was wrong to say no one at Singularity Institute believes them. I can’t rightfully claim to speak for others like that, so I’m sorry for any confusion. In her defense, Anna must be reading “good science” and “publishing” as referring to the theoretical ideals of Science and Journals. In that case, I’d guess lots of people could agree with your statements.
I’m not sure what the opinions of folks at SIAI have to do with this (merely mentioning them doesn’t constitute a valid argument; I’m not at SIAI, and I was speaking seriously), but I can recall a quote from Eliezer expressing a sentiment that’s pretty close to my second one:
“What happened to me personally is only anecdotal evidence,” Harry explained to her. “It doesn’t carry the same weight as a replicated, peer-reviewed journal article about a controlled study with random assignment, many subjects, large effect sizes and strong statistical significance.”
-- HP:MOR, Ch.6
But that’s not very relevant. What’s relevant is this: if I believed in the efficacy of the DNB task, it would be wrong for me to change my opinion substantially after reading your comment on LW. Thirteen people, one week, methodology and results unpublished and unverified?
The opinion of folks at Singularity Institute is VERY relevant when discussing the methodology of studies done at Singularity Institute. They laughed at the thought of going through the motions of collecting the extra evidence and doing the extra rituals with statistics to impress journal editors. They did the Bayesian version instead.
If you want an enormous, controlled, statistically significant study that’s been published in a high-quality, peer-reviewed journal, check out this study of brain training software in Nature.
I actually just found this study in the past hour. Their conclusion: Brain Training software doesn’t work. But you get better at the games!
They cover Lumosity too, which is funny because I was just looking into them myself. I got a bit concerned when I tried to look up the evidence for how “scientifically proven” Lumosity is (since they claim it ALL OVER THEIR SITE) and realized that the extent of their published findings was two conference posters that aren’t available anywhere online and that were accepted to conferences I’ve never even heard of.
I think I’m gonna go with the researchers from Nature and Singularity Institute on this one.
This seems to miss the point cousin_it was making.
The finding is surprising.
The training experience that I have most appreciated is that of pushing my brain towards the state of flow: releasing any stress or rumination, and constantly letting go of my attachment to the frustration of failure while also not being frustrated by the fact that I may be frustrated about failure. This has a strong overlap with the process involved in some forms of meditation and is certainly the kind of thing that I would expect to have generalized benefit, albeit not necessarily an improvement on tests of general intelligence. The format of a game with a score to be maximised invokes my rather strong competitive instincts, and so is rather more motivating than the abstract thought “I should do meditation because meditation is good for me”.
The abstract is too vague for me to tell whether their studies relate directly to the kind of training that I am interested in. I would expect not, since my interest is in things that are rather hard to test! My curiosity is not quite sufficient for me to bypass the feeling of disgust and frustration at the paywall and track down the rest of the document.
Re your edit: yes, I was referring to the theoretical ideal of Science. (Don’t care about Journals.) If a lot of published science is bogus, IMO the right response is to try to do better and nail down our results with more precision, not less. Especially in a topic as important as intelligence amplification.
That said, I wasn’t very convinced by the evidence in favor of the DNB task in the first place. In my mind the jury’s still out.
Agreed. Better is better.
and nail down our results with more precision, not less
I disagree. For example, you can rule out Dual N-Back as a possible intelligence-amplification intervention with less precision than Jaeggi used to repeatedly mis-prove it as one. It depends on what you mean by precision, I suppose. If you mean more time, effort, people, and statistical significance, then more precision is not needed. If by precision you just mean being more right… well, I agree, we should be more right.
Most bogus science is very precise: that’s why it looks stronger than it is. Poor methodology and experimental design will still let someone “prove” almost any correlation at p < 0.05 significance. If I want to disprove someone who published an incorrect result, should I have to expend more time, people, and resources than they did just to over-prove the counter-claim with “more precision”, even though their claim was never wrong for “lack of precision” in the first place?
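The point about p < 0.05 being cheap under poor methodology can be sketched numerically. This is a purely illustrative simulation, not a model of any study mentioned here: the numbers (20 noise outcomes, 13 subjects per group) are assumptions, and the p-value uses a crude normal approximation to the two-sample t-test rather than a proper test.

```python
import math
import random

random.seed(42)

def p_value_two_sample(a, b):
    # Two-sided p-value for a difference in means, via a normal
    # approximation to the two-sample t-test (crude but stdlib-only).
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = abs(mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def best_p_from_noise(n_outcomes=20, n_per_group=13):
    # Measure 20 unrelated noise "outcomes" on the same tiny sample
    # and report only the most impressive p-value.
    return min(
        p_value_two_sample(
            [random.gauss(0, 1) for _ in range(n_per_group)],
            [random.gauss(0, 1) for _ in range(n_per_group)],
        )
        for _ in range(n_outcomes)
    )

# How often does a study of pure noise clear p < 0.05 on at least one
# outcome? Analytically about 1 - 0.95**20, i.e. roughly two-thirds.
rate = sum(best_p_from_noise() < 0.05 for _ in range(500)) / 500
print(rate)
```

With no real effect anywhere, well over half of these simulated “studies” can report a significant finding, which is the sense in which a precise-looking p-value says little about whether the methodology was any good.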
Calling for “more precision” is like calling for “more preparation”. It has 100% applause appeal and costs nothing for people to call for. But it costs people actually doing research a lot of time. When you advocate for smarter people to use “more precision”, you’re also advocating for smarter people to do “less research”… the extra precision comes from somewhere.
Are you actually in favor of smarter people doing less research than they currently do?
Let’s agree on the interpretation “we should be more right” and skip over the issues of time and costs.
Sometimes a published result can indeed be overturned by a small amount of Bayesian evidence. But that’s only possible if you also prove that your methodology was much more right than the original paper’s methodology. Right now I have no way of knowing that from your comments. If you add a critique of Jaeggi’s study and an explanation why your study was better, that will work for me.
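As a toy illustration of that updating logic (every number below is a made-up assumption, not an estimate from the actual studies), Bayes’ rule in odds form shows why a small, unverified null result barely moves the needle unless its methodology justifies a likelihood ratio far from 1:

```python
def update_odds(prior_odds, likelihood_ratio):
    # Odds form of Bayes' rule:
    # posterior odds = prior odds * P(evidence | H) / P(evidence | not-H)
    return prior_odds * likelihood_ratio

prior = 0.1                # assumed prior odds of 1:10 that DNB raises IQ
published_positive = 4.0   # assumed weight of the published positive result
small_null_study = 0.8     # assumed weight of a 13-person, 1-week null result

posterior = update_odds(update_odds(prior, published_positive), small_null_study)
print(round(posterior, 3))  # 0.32
```

A likelihood ratio near 1 (here 0.8) is what “a small amount of Bayesian evidence” cashes out to; overturning the published result would require showing the replication’s methodology warrants a ratio much further from 1.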
Are you actually in favor of smarter people doing less research than they currently do?
Considering how much research, given the low levels of confidence warranted by its methodology, is essentially worthless: yes, I am willing to say without reservation that there is dead wood to cut away.
Downvoted for this piece of empty rhetoric.
Upvoted; I think this is a good downvoting policy but hope that whoever uses it takes the time to point out what they perceive as empty rhetoric. (I think the habit of spouting such rhetoric is particularly poisonous and particularly easy to stop, making it rather worth the effort of correcting.)