In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model …
I don’t think this actually makes sense. Models only make predictions when they’re instantiated, just as algorithms only generate output when run. And models can only be instantiated in someone’s head[1].
… the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide …
This is a statement about philosophy of mathematics, and not exactly an uncontroversial one! As such, I hardly think it can support the sort of rhetorical weight you’re putting on it…
[1] Or, if the model is sufficiently formal, in a computer—but that is, of course, not the sort of model we’re discussing.
I think models can be run on computers, and I think people passing papers around can work as a computer. I do think it’s possible to have an organization that does informational work that none of its human participants do. I do appreciate that such work is often secondary to the work that actual individuals do. But I think that if someone aggressively tried to build a system that would survive a “bad faith” human actor, it might well be possible, even feasible.