This makes sense. Using Bayes' rule to develop the weights was the missing link for me. I was trying to do it all conditional on the possible outcomes.
Correct me if I’m wrong, but shouldn’t the weights on the models differ across the range of the dependent variable? When the dependent variable is near its mean, the regressions will have narrower forecast distributions, so less weight should go to the insider methodology.
This particular method doesn’t do that. Think of the weight for a given model as the probability that the model is ‘true.’
I think you can make the weights depend on the dependent variable by specifying the prior weights conditional on the dependent variable. For example, if your dependent variable, x, is continuous, you might set P(M1|x) = P(M2|x) = invlogit(x)/2 and P(M3|x) = 1 − invlogit(x), where invlogit is the inverse-logit (logistic) function, so all three weights lie in (0, 1) and sum to 1 for any real x. The key would be choosing appropriate functions of x that reflect your actual prior knowledge.
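A minimal sketch of what such x-dependent prior weights might look like. The function names and the three-model setup are illustrative, not from any particular library; the inverse-logit (logistic) function is used so the weights are valid probabilities for any real x:

```python
import math

def inv_logit(z):
    """Inverse-logit (logistic) function: maps the real line to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def prior_weights(x):
    """Hypothetical x-dependent prior weights over three models M1, M2, M3.

    For any real x the weights are nonnegative and sum to 1, so they form
    a valid prior over the models.
    """
    p = inv_logit(x)
    return {"M1": p / 2, "M2": p / 2, "M3": 1 - p}

# As x increases, prior weight shifts toward M1 and M2; as it decreases,
# it shifts toward M3.
print(prior_weights(0.0))   # inv_logit(0) = 0.5, so M1 = M2 = 0.25, M3 = 0.5
```

These prior weights would then be updated by each model's marginal likelihood in the usual Bayes-rule way to get the posterior model weights.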
On the other hand, there’s probably a method that automatically takes into account each model’s prediction error as a function of the dependent variable(s), but I’m not aware of it.