If you start with the model of an embedded agent in a partially internally predictable world (it has to be at least partially predictable from the inside, otherwise embedded agency would not make sense), the rest falls out of that. If you define an embedded agent as a subsystem that has a coarse model of the world, a set of goals to optimize the world for, and a way to interact with the outside world, then “evidence” is just that interaction with the outside world, processed and incorporated into the map, and sometimes into the goals. So the assumption “evidence exists” is grounded in the idea of embedded agency.
If, on the other hand, you reject that approach in favor of another one, it pays to explicate your model of the world first. Is it solipsism? Cartesian dualism? Something else?
I think you’re spot on with the “internally predictable world”. One can observe patterns in the world and bet that these originate from an underlying regularity.
Then there is no evidence as such, only experiences that are more compatible (in the Bayesian sense) with one model than with another, which you use to bet that this model is more likely than the others to predict the regularities well.
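To make the “more compatible in the Bayesian sense” idea concrete, here is a minimal illustrative sketch (my own example, not from the comment above): two hypothetical models of a coin-like process, where experiences are “more compatible” with a model when they have a higher likelihood under it, and the bet on which model predicts the regularities well is just the posterior odds after updating.

```python
# Two hypothetical models of a simple process, each assigning a
# probability to observing "heads" (1). These model names and numbers
# are assumptions made up for illustration.
from math import prod

models = {"fair": 0.5, "biased": 0.8}

# A run of experiences: 1 = heads, 0 = tails.
observations = [1, 1, 0, 1, 1, 1, 0, 1]

def likelihood(p_heads, obs):
    """Probability of the whole observation sequence under a model."""
    return prod(p_heads if o else 1 - p_heads for o in obs)

# With equal priors, the posterior odds reduce to the likelihood ratio:
# how much more compatible the experiences are with one model than the other.
lr = likelihood(models["biased"], observations) / likelihood(models["fair"], observations)
print(f"Odds favoring 'biased' over 'fair': {lr:.2f}")
```

Here the observed run is more compatible with the “biased” model, so the rational bet shifts toward it, without ever calling any single experience “evidence” in an absolute sense.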