I lack motivation myself. I’m interested in AI risk, but I think exploring abstract decision theories that ignore the cost of the computation needed to make a decision is like trying to build a vehicle while ignoring drag entirely.
I may well be wrong, so I still skim the agent foundations work, but I remain unconvinced of its practicality. So I’m unlikely to comment on it or participate in it.
Maybe you’ve heard this before, but the usual story is that the goal is to clarify conceptual questions that exist in both the abstract and more practical settings. We are moving towards considering such things though—the point of the post I linked was to reexamine old philosophical questions using logical inductors, which are computable.
Further, my intuition from studying logical induction is that practical systems will be “close enough” to satisfying the logical induction criterion that many things will carry over (much of this is just intuition one could also get from online learning theory). For example, in the logical induction decision theory post, I expect the individual points made using logical inductors to mostly or entirely apply to practical systems, and you can use the fact that logical inductors are well-defined to test further ideas building on these.
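For readers who haven’t seen it, the criterion being referenced can be stated roughly as follows. This is my paraphrase of the definition in Garrabrant et al.’s “Logical Induction” paper, compressed and informal; see the paper for the precise formalization of markets, traders, and deductive processes:

```latex
% Rough, informal statement of the logical induction criterion
% (paraphrase of Garrabrant et al., 2016 -- not the exact definitions).
A market $\overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \ldots)$ satisfies the
\emph{logical induction criterion} relative to a deductive process $\overline{D}$
if no efficiently computable trader $\overline{T}$ exploits it, where
$\overline{T}$ exploits $\overline{\mathbb{P}}$ iff the set of plausible values
of its accumulated holdings,
\[
  \Big\{\, \mathbb{W}\Big(\textstyle\sum_{i \le n} T_i\Big) \;:\;
      n \in \mathbb{N},\ \mathbb{W} \text{ a world consistent with } D_n \,\Big\},
\]
is bounded below but unbounded above.
\end{document-fragment-note}
```

The intuition behind the “close enough” claim above is that the criterion is a no-exploitation condition against a class of resource-bounded adversaries, which is structurally similar to regret bounds in online learning; a practical system that is merely hard (rather than impossible) to exploit would plausibly inherit many of the same qualitative properties.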
When computations have costs, I think the nature of the problems changes drastically. I’ve argued here that we need to move up to meta-decision theories because of this.
The idea of Solomonoff induction is not needed for building neural networks (or useful for reasoning about them). So my pragmatic heart is cold towards a theory of logical induction as well.