There are still a few issues here. First, instrumental convergence is motivated by a particular (vague) measure on goal-directed agents. Any behaviour that completes any particular long-horizon task (or even any list of such behaviours) will have tiny measure here, so a low P(X|Y) is easily satisfied.
Secondly, just because "long-horizon task achievement" is typical of instrumental convergence doom agents, it does not follow that instrumental convergence doom is typical of long-horizon task achievers.
The task I chose seems to be a sticking point, but you totally can pick a task you like better and try to run the arguments for it yourself.
You need to banish all the talk of "isn't it incentivised to…?"; that framing isn't justified yet.
I was going with X being the event of an entity doing long-horizon things, not any specific one. As such, a small P(X|Y) is not so trivially satisfied. I agree this is vague, and if you could make it specific, that would be a great paper.
Sure, typicality isn't symmetric, but the assumptions above (X is a subset of Y, and P(A|Y) ≈ 1) mean that I'm interested in whether "long-horizon task achievement" is typical of instrumental convergence doom agents, not the other way around. In other words, I'm checking whether P(X|Y) is large or small.
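For concreteness, here is one way to spell out why P(X|Y) is the quantity doing the work, using only the assumptions stated above (X is a subset of Y, and P(A|Y) ≈ 1) and nothing else about the measure:

$$
P(\lnot A \mid X) = \frac{P(\lnot A \cap X)}{P(X)} \le \frac{P(\lnot A \cap Y)}{P(X)} = \frac{P(\lnot A \mid Y)\,P(Y)}{P(X \mid Y)\,P(Y)} = \frac{P(\lnot A \mid Y)}{P(X \mid Y)}.
$$

The inequality uses X being a subset of Y, and the denominator uses P(X) = P(X|Y)P(Y). So if P(X|Y) is not small, the near-certainty of A on Y carries over to X; if P(X|Y) is tiny, the bound is vacuous and nothing transfers. Hence the question of whether P(X|Y) is large or small.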
Make money. Make lots of spiral-shaped molecules (colloquially known as paper clips). Build stadiums, where more is better. Explore the universe. Really, any task that does not have an end condition (and isn't "keep humans alive and well") is an issue.
Regarding this last point, could you explain further? We are positing an entity that acts as though it has a purpose, right? It is, e.g., moving the universe towards a state with more stadiums. Why not model it using "incentives"?
You do realise that the core messages of my post are a) that weakening your premise and forgetting the original can lead you astray, and b) that the move from "performs useful long-horizon action" to "can be reasoned about as if it is a stereotyped goal-driven agent" is blocked, right? Of course, if you reproduce the error and assume the conclusion, you will find plenty to disagree with.
By way of clarification: there are two different data-generating processes, both thought experiments. One proposes useful things we'd like advanced AI to do; this leads to things like the stadium builder. The other proposes "agents" of a certain type, and leads to the instrumental convergence thesis. The upshot is that you can choose a set with high probability according to the first process that ends up having low probability according to the second.
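To make that concrete, here is a deliberately toy sketch (the distributions and numbers are invented for illustration; they do not come from either thought experiment): the same event can be near-certain under one generating process and rare under another.

```python
# Toy illustration only: two "data generating processes" over the same
# outcome space, and one event that is near-certain under the first
# process but rare under the second. All numbers are made up.

outcomes = range(1000)

def p_tasks(o):
    # Process 1: "useful things we'd like advanced AI to do"
    # (here: all mass on outcomes 0-9).
    return 0.1 if o < 10 else 0.0

def p_agents(o):
    # Process 2: "agents of a certain type"
    # (here: mass spread uniformly over all 1000 outcomes).
    return 1 / 1000

event = [o for o in outcomes if o < 10]  # a set chosen via the first process

print(sum(p_tasks(o) for o in event))   # ~1.0  : near-certain under process 1
print(sum(p_agents(o) for o in event))  # ~0.01 : rare under process 2
```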
You are not proposing tasks; you are proposing goals. A task here is like "suppose it successfully Xed", without commitment to whether it wants X or in what way it wants X.