Sometimes toy models are helpful, and sometimes they are distractions that lead nowhere or embody a mistaken preconception. I see you as claiming these models are distractions, not that no model is possible. Accurate?
I very much favor bottom-up modelling based on real evidence rather than mathematical models that come out looking neat by imposing our preconceptions on the problem a priori.
The classes U and F above, should something like that ever come to pass, need not be AIXI-like (nor need they involve utility functions).
Right. Which is precisely why I don’t like it when we attempt to do FAI research under the assumption of AIXI-like-ness.
I don’t think I understand what you mean here. Everyone favors modeling based on real evidence as opposed to fake evidence, and everyone favors avoiding the import of false preconceptions. It sounds like you prefer more constructive approaches?
(edit: I think I might understand after all; it sounds like you’re claiming AIXI-like things are unlikely to be useful, since they’re based mostly on preconceptions that are likely false?)
I agree if you’re saying that we shouldn’t assume AIXI-like-ness to define the field. I disagree if you’re saying it’s a waste for people to explore that idea space, though: it seems ripe to me.
I don’t think it’s an active waste of time to explore the research that can be done with things like AIXI models. I do, however, think that, for instance, flaws of AIXI-like models should be taken as flaws of AIXI-like models, rather than generalized to all possible AI designs.
So for example, some people (on this site and elsewhere) have said we shouldn’t presume that a real AGI or real FAI will necessarily use VNM utility theory to make decisions. For various reasons, I think that exploring that idea-space is a good idea: relaxing the VNM utility and rationality assumptions can take us closer both to how real, actually-existing minds work and to how we normatively want an artificial agent to behave.
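To make “relaxing VNM” concrete: the standard Allais-paradox gambles are a case where the modal human preference pattern can’t be captured by any VNM utility function at all. Here’s a purely illustrative toy sketch (my own code, nothing canonical about the names) that sweeps candidate utilities and finds none that reproduce both of the common choices:

```python
def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs; u: dict outcome -> utility."""
    return sum(p * u[outcome] for p, outcome in lottery)

# The four Allais gambles, as probability/outcome pairs (outcomes in dollars).
g1a = [(1.00, "1M")]
g1b = [(0.89, "1M"), (0.10, "5M"), (0.01, "0")]
g2a = [(0.11, "1M"), (0.89, "0")]
g2b = [(0.10, "5M"), (0.90, "0")]

# Fix u($0)=0 and u($5M)=1 (any increasing utility over money can be rescaled
# this way, since VNM utilities are only defined up to a positive affine
# transformation) and sweep u($1M) over a fine grid.
consistent = []
steps = 10_000
for i in range(steps + 1):
    u = {"0": 0.0, "1M": i / steps, "5M": 1.0}
    prefers_1a_over_1b = expected_utility(g1a, u) > expected_utility(g1b, u)
    prefers_2b_over_2a = expected_utility(g2b, u) > expected_utility(g2a, u)
    if prefers_1a_over_1b and prefers_2b_over_2a:
        consistent.append(u["1M"])

# Prints []: no utility assignment reproduces the very common human pattern
# of preferring 1A over 1B *and* 2B over 2A, so that pattern violates the VNM
# axioms (specifically independence), even though it doesn't look crazy.
print(consistent)
```

That’s all the example is meant to show: preference patterns that real minds exhibit, and that we might even endorse, can fall outside what the VNM framework can represent, which is why treating VNM (or AIXI-like-ness) as a definitional assumption seems premature to me.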
Modulo nitpicking, agreed on both points.