I think if you start having meta-priors, then what, you gotta have meta-meta-priors and so on? At some point that’s just having more basic, fundamental priors that embrace a wider range of possibilities. The question is what those would look like, and whether being general enough just collapses into a completely uniform (or barely informative) prior that is essentially of no help; you can think anything, but the trade-off is that it’s always going to be inefficient.
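To make that trade-off concrete, here’s a minimal sketch (a toy Beta-Bernoulli coin example I’m making up purely to illustrate, not anything from the discussion above): a uniform prior can represent any coin bias, but it sharpens more slowly than a prior that already expects roughly the right answer.

```python
# Toy illustration of the generality/efficiency trade-off:
# a uniform Beta(1,1) prior over a coin's bias vs. an informed Beta(14,6) prior.

def beta_posterior(alpha, beta, heads, tails):
    """Conjugate update: Beta prior + Bernoulli observations -> Beta posterior."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

data = (7, 3)  # 7 heads, 3 tails from a coin whose true bias is about 0.7

# Maximally general prior: uniform over [0, 1] ("you can think anything").
uniform = beta_posterior(1, 1, *data)
# Informative prior: already concentrated near a bias of 0.7.
informed = beta_posterior(14, 6, *data)

print(posterior_mean(*uniform))   # ~0.667, still pulled toward 0.5 after 10 flips
print(posterior_mean(*informed))  # 0.700, near the truth with the same data
```

Both priors get there eventually; the uniform one just needs more evidence to say anything useful, which is the inefficiency I mean.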
True, but I think in this case there’s at least no risk of an infinite regress. At one end, yes, it bottoms out in an extremely vague and inefficient but general hyperprior. From the little I’ve read, I would guess that in humans these are the layers that govern how we learn from even before we’re born. I would imagine an ASI would have at least one layer more fundamental than this, which would enable it to change various assumptions that are fixed in humans.
At the other end would be the most specific, or most abstracted, layer of priors that has proven useful to date. Somewhere in the stack are your current best processes for deciding whether particular priors, or layers of priors, are useful and worth keeping, or whether you need a new one.
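If it helps, here’s a rough sketch of the kind of stack I’m picturing (my own toy hierarchical-model analogy, not a claim about how brains or an ASI would actually implement it): a vague hyperprior at the bottom, more specific layers drawn from it, and data only touching the top.

```python
# Toy generative "stack of priors": hyperprior -> group-level priors -> observations.
import numpy as np

rng = np.random.default_rng(0)

# Bottom layer: an extremely vague but general hyperprior over where
# group-level effects might live at all.
hyper_mu = rng.normal(0.0, 100.0)         # almost no commitment about location
hyper_sigma = abs(rng.normal(0.0, 10.0))  # likewise about spread

# Middle layer: more specific priors for particular domains/groups,
# drawn from (and constrained by) the hyperprior.
group_means = rng.normal(hyper_mu, hyper_sigma, size=5)

# Top layer: the most specific level, the one that actually meets the data.
observations = [rng.normal(m, 1.0, size=20) for m in group_means]
```

In this analogy, “deciding whether a layer is worth keeping” would look something like model comparison: checking whether the extra layer predicts held-out data better than a flatter model would.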
I am actually not sure whether ‘prior’ is quite the right term here? Some of it feels like the distinction between thingspace and conceptspace, where the priors might be more about expectations of what things exist, where natural concept boundaries lie, and how to evaluate and re-evaluate those?