Just like gold farmers in online games can sell virtual items to people with dollars, entities within the computational market could sell reputation or other results for real money in the external market.
Oh—when I use the term “computational market”, I do not mean a market using fake money. I mean an algorithmic market using real money. Current financial markets are already somewhat computational, but they also have rather arbitrary restrictions and limitations that preclude much of the interesting computational space (such as generalized bet contracts à la prediction markets).
Pharmaceutical companies spend their money on advertising and patent wars instead of research.
There is nothing inherently wrong, or even obviously suboptimal, about these behaviours. Advertising can be good and necessary when you have information which has high positive impact only when promoted—consider the case of smoking and cancer.
The general problem—as I discussed in the OP—is that the current market structure does not incentivize big pharma to solve health.
I am interested in concrete proposals to avoid those issues, but to me the problem sounds a lot like the longstanding problem of market regulation.
Well … yes.
How, specifically, will computational mechanism design succeed where years of social/economic/political trial and error have failed?
Current political and economic structures are all essentially pre-information-age technologies. There are many things which can only be done with big computers and the internet.
Also, I don’t see the years of trial and error so far as outright failures—it’s more of a mixed bag.
Now I realize that doesn’t specifically answer your question, but a really specific answer would involve a whole post or more.
But here’s a simple summary. It’s easier to start with the public single payer version of the idea rather than the private payer version.
The government sets aside a budget—say $10 billion a year or so—for a health prediction market. It collects data from all the hospitals, clinics, etc., then aggregates and anonymizes that data (with opt-in incentives for those who don’t care about anonymity). Anybody can download subsets of the data to train predictive models. There is an ongoing public competition—a market contest—where entrants attempt to predict various subsets of the new data before it is released (every month, week, or day).
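Mechanically, the scoring step of such a contest can be as simple as applying a proper scoring rule when each data batch is released. Here is a minimal sketch using the log score on binary outcomes—all entrant names and numbers are illustrative, not part of any real proposal:

```python
import math

def log_score(prob: float, outcome: int) -> float:
    """Log score for one binary outcome: higher is better.
    Probabilities are clamped away from 0/1 to avoid -infinity."""
    p = prob if outcome else 1 - prob
    p = min(max(p, 1e-9), 1 - 1e-9)
    return math.log(p)

def rank_entrants(predictions: dict, outcomes: list) -> list:
    """Rank entrants by total log score over the released batch of outcomes."""
    totals = {
        name: sum(log_score(p, o) for p, o in zip(probs, outcomes))
        for name, probs in predictions.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Toy batch: did each cohort improve? (1 = yes, 0 = no)
outcomes = [1, 0, 1, 1]
predictions = {
    "informed":  [0.9, 0.2, 0.8, 0.7],  # tracks the data
    "coin_flip": [0.5, 0.5, 0.5, 0.5],  # no information
}
print(rank_entrants(predictions, outcomes))  # informed model ranks first
```

The log score is "proper": an entrant maximizes their expected payout only by reporting their true beliefs, which is what makes paying out contest winnings by score an incentive-compatible way to surface real predictive ability.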
The best winning models are then used to predict the effects of possible interventions: what if demographic B3 was put on 2000 IU of vitamin D? What if demographic Z2 stopped drinking coffee? What if demographic Y3 was put on drug ZB4? And so on.
This lets the market solve the hard prediction problems—by properly incentivizing the flow of resources to the individuals and companies that actually know what they are doing and have real predictive ability. The government then mainly needs to decide roughly how much money these questions are worth.
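Once a winning model is in hand, an intervention query is just a comparison of its predictions on modified inputs. A toy sketch—the model, its coefficients, and all field names here are made up for illustration (and note that reading causal intervention effects off a purely predictive model is itself a nontrivial assumption):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Profile:
    """Illustrative stand-in for an anonymized demographic record."""
    age: int
    vitamin_d_iu: int
    coffee_cups: int

def predict_risk(p: Profile) -> float:
    """Stand-in for a contest-winning model; toy linear coefficients."""
    return max(0.0, 0.01 * p.age - 0.00001 * p.vitamin_d_iu + 0.02 * p.coffee_cups)

# "What if demographic B3 was put on 2000 IU of vitamin D?"
baseline = Profile(age=60, vitamin_d_iu=0, coffee_cups=3)
treated = replace(baseline, vitamin_d_iu=2000)
effect = predict_risk(treated) - predict_risk(baseline)
print(effect)  # negative means the toy model predicts a benefit
```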
What about predictions of the form “highly expensive and rare treatment F2 has marginal benefit at treating the common cold”, which could drive a side market in selling F2 just to produce data for the competition? Especially if there are advertisements saying “Look at all these important/rich people betting that F2 helps to cure your cold”, in which case the placebo effect will tend to bear out the prediction. What if tiny demographic G, given treatment H2, is shorted against life expectancy by the doctors/nurses who secretly administer cyanide along with H2? There is already market pressure to distort the reporting of drug prescriptions/administration and unfavorable outcomes, not to mention outright insurance fraud. Adding more money will reinforce that behavior.
And how is the null prediction problem handled? I can predict pretty accurately that cohort X given sugar pills will have results very similar to the placebo effect. I can repeat that for sugar-pill cohorts X2, X3, …, XN and look like a really great predictor. It seems like judging the efficacy of tentative treatments is a prerequisite for judging the efficacy of predictors. Is there a theorem showing it’s possible to distinguish useful predictors from useless ones in most scenarios? Especially when allowing predictions over subsets of the data? I suppose one could decline to reward predictors who make vacuous predictions ex post facto, but that might have a chilling effect on predictors who would otherwise bet on homeopathy looking like a placebo.
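One standard mitigation for the null-prediction problem in forecasting contests—a sketch of common practice, not anything proposed above—is to score each entrant relative to a public baseline. A predictor who merely restates the consensus (sugar pills ≈ placebo, N times over) then earns exactly zero: no reward, but also no penalty, which avoids the chilling effect on obvious-but-true bets. All numbers below are illustrative:

```python
import math

def log_score(prob: float, outcome: int) -> float:
    """Log score for one binary outcome, clamped to avoid -infinity."""
    p = prob if outcome else 1 - prob
    p = min(max(p, 1e-9), 1 - 1e-9)
    return math.log(p)

def skill(entrant_probs, baseline_probs, outcomes):
    """Entrant's total log score minus the public baseline's.
    Echoing the baseline scores exactly zero; only information
    beyond the consensus earns a positive score."""
    return sum(log_score(p, o) - log_score(b, o)
               for p, b, o in zip(entrant_probs, baseline_probs, outcomes))

baseline = [0.5, 0.5, 0.5]   # public consensus forecast
vacuous  = [0.5, 0.5, 0.5]   # N "predictions" that add nothing
informed = [0.8, 0.3, 0.9]   # genuine signal
outcomes = [1, 0, 1]

print(skill(vacuous, baseline, outcomes))   # 0.0
print(skill(informed, baseline, outcomes))  # positive
```

This doesn’t resolve the self-fulfilling-prophecy or data-manipulation objections, but it does show that distinguishing vacuous predictors from informative ones is a solved sub-problem when a baseline forecast is published alongside the contest.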
Basically, any sort of self-fulfilling prophecy looks like a way to siphon money away from solving the health care problem.