It’s interesting what this means for psychiatry as a science. If two psychiatrists who use the same technique have radically different effects on their clients, the assumptions behind evidence-based psychiatry are shaky, and it’s basically what Feynman called cargo-cult science.
The evidence-based medicine paradigm won’t deliver the goods it promises when different therapists applying the same standardized technique get radically different results.
We need to stop treating the evidence-based ideology as an ideal and move on to prediction-based medicine, which can actually fulfill the promise of telling us whether treatment A or treatment B is more likely to cure us.
One of the best predictors of successful outcomes for a therapy patient is that the patient trusts the therapist.
https://en.wikipedia.org/wiki/Therapeutic_relationship
This position is roughly 80 years old. Personally, I think the best heuristic for telling whether a therapist is any good is whether they believe the connection between the two of you matters more than their personally preferred theoretical approach.
Relevant scholarship:
https://en.wikipedia.org/wiki/Dodo_bird_verdict
https://en.wikipedia.org/wiki/Common_factors_theory
I model therapy as an art and a craft, a special subset of social skills like empathetic listening or building rapport, rather than a deterministic applied science like electrical or mechanical engineering. On that model, getting different results from different therapists using the same technique is exactly what I would expect, because they probably aren’t doing all the non-verbal communication that makes the largest impact in social situations. Just as Scott has the passive ability to make people around him settle into really civil, calm discussions, a desired social effect is not something easily reproduced by others, nor learned in many cases, even ones less rare than Scott’s reality-distortion field of civility.
I feel like I’m being a little mean and condescending here. My apologies, I’ve been up too late. But I think it’s poor practice to throw stones at the academy’s glass houses without doing some good scholarship first.
I’m aware of the studies suggesting that empathy and alliance matter more than the therapist’s theoretical approach, and I made that point previously on LW. It’s just that there are so many possible stones to throw that I’m not throwing them all at once.
Why do you expect prediction markets to be more useful for this than evidence-based methods that take into account interactions between the practitioner’s characteristics and whatever method they are using?
I’m not advocating what Hanson calls prediction markets. I’m advocating a different setup, which is described in the linked article.
The core problem is that even if it were possible, in a perfect world, to run evidence-based studies to gather this knowledge, under the present system nobody has an economic incentive to run the required studies in a way that’s likely to lead to effective clinical predictions. There’s no accountability pushing clinical trial design toward clear clinical benefits. The incentives are mostly about overstating the effect of the intervention being studied.
Even if there were a sincere attempt at running the required studies, it would be much more expensive than the way we currently study interventions. That means we would likely study fewer interventions and thereby slow down innovation by making the invention of new interventions more costly.