I agree that we need a notion of “intent” that doesn’t require a purely behavioral notion of a model’s objectives, but I think it should also not be limited strictly to mesa-optimizers, which neither Rohin nor I expect to appear in practice. (Mesa-optimizers appear to me to be the formalization of the idea “what if ML systems, which by default are not well-described as EU maximizers, learned to be EU maximizers?” I suspect MIRI people have some unshared intuitions about why we might expect this, but I currently don’t have a good reason to expect it.)
For myself, my reaction is “behavioral objectives also assume a system is well-described as an EU maximizer”. In either case, you’re assuming that you can summarize a policy by a function it optimizes; the difference is whether you think the system itself thinks explicitly in those terms.
I haven’t engaged that much with the anti-EU-theory stuff, but my experience so far is that it usually involves a pretty strict idea of what is supposed to fit EU theory, and often misunderstandings of EU theory itself. I have my own complaints about EU theory, but they just don’t seem to resonate at all with other people’s complaints.
For example, I don’t put much stock in the idea of utility functions, but I endorse a form of EU theory which avoids them. Specifically, I believe in approximately coherent expectations: you assign expected values to events, and a large part of cognition is devoted to making these expectations as coherent as possible (updating them based on experience, propagating expectations of more distant events back to nearer ones, etc.). This is in contrast to keeping some centrally represented utility function and devoting cognition to computing its expectations.
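To make that concrete, here’s a toy Python sketch of the kind of thing I have in mind. Everything in it (the event graph, the numbers, names like `propagate` and `observe`) is illustrative, not a formal proposal:

```python
# Toy sketch: expected values attach directly to events; there is no
# centrally represented utility function. "Cognition" here just nudges
# the expectations toward mutual coherence. All structure and numbers
# are illustrative.

events = {"shop": 0.0, "cook": 0.0, "eat": 1.0}  # expected value per event
leads_to = {"shop": "cook", "cook": "eat"}       # toy causal links

ALPHA = 0.1  # step size for coherence updates

def propagate():
    """Pull each event's expectation toward its successor's, propagating
    value from more distant events back to nearer ones."""
    for event, successor in leads_to.items():
        events[event] += ALPHA * (events[successor] - events[event])

def observe(event, outcome_value):
    """Update an expectation directly from experience."""
    events[event] += ALPHA * (outcome_value - events[event])

observe("eat", 0.8)      # experience revises the "eat" expectation a bit
for _ in range(200):
    propagate()          # ...and the revision flows back to "cook", "shop"
print(events)            # all three end up near the revised "eat" value
```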
In this picture, there is no clear distinction between terminal values and instrumental values. Something is “more terminal” if you treat it as more fixed (you resolve contradictions by updating the other values), and “more instrumental” if its value is more readily revised in light of other things.
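In the same toy terms, the terminal/instrumental continuum might look like a per-value “plasticity” knob (again, purely illustrative): when two expectations conflict, the more plastic one does most of the moving:

```python
# Same toy picture, with per-event plasticity: "terminal vs. instrumental"
# becomes a continuum of how fixed an expectation is. All numbers here are
# illustrative assumptions.

events = {"health": 1.0, "exercise": 0.3}        # current expected values
plasticity = {"health": 0.01, "exercise": 0.5}   # low = treated as terminal

def reconcile(a, b):
    """Resolve a contradiction between two linked expectations by moving
    each in proportion to its own plasticity: the near-fixed ("terminal")
    one barely budges, the changeable ("instrumental") one does most of
    the moving."""
    gap = events[b] - events[a]
    share_a = plasticity[a] / (plasticity[a] + plasticity[b])
    events[a] += share_a * gap          # terminal side: tiny adjustment
    events[b] -= (1 - share_a) * gap    # instrumental side: big adjustment

reconcile("health", "exercise")
print(events)  # both land near 1.0: "exercise" moved almost all the way
```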
I want to be able to talk about how we can shape goals that may be messier things: perhaps somewhat-competing internal representations, heuristics, or proxies that determine behavior.
(Possibly you should consider my “approximately coherent expectations” idea.)
Is this related to your post “An Orthodox Case Against Utility Functions”? It’s been on my to-read list for a while; I’ll be sure to give it a look now.
Right, exactly. (I should probably have just referred to that, but I was trying to avoid reference-dumping.)