You could also simply continue working on the review: you are clearly motivated to explore these issues deeper so why not start fleshing out the paper?

Note that I said “continue” rather than start. The barrier is often not the ideas themselves but getting them written into something approaching a complete paper. This is still the issue for me, and I have 50+ peer-reviewed papers in the past 20 years (although not in this field).

I will then.

I suggest you check with Nate what exactly he thinks, but my opinion is:

I think Nate agrees with this, and any lack of functional equivalence is due to not being able to fully specify that yet.

Can’t this be modelled as uncertainty over functional equivalence? (or over input-output maps)?

Hm, that’s an interesting point. Is what we care about just the brute input-output map? If we’re faced with a black-box predictor, then yes, all that matters is the correlation even if we don’t know the method. But I don’t think any sort of representation of computations as input-output maps actually helps account for how we should learn about or predict this correlation—we learn and predict the predictor in a way that seems like updating a distribution over computations. Nor does it seem to help in the case of trying to understand to what extent two agents are logically dependent on one another. So I think the computational representation is going to be more fruitful.
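The “updating a distribution over computations” picture can be made concrete with a toy sketch. This is purely illustrative (none of these names or hypotheses come from the discussion): we hold a prior over a small set of candidate programs the black box might be running, and condition on observed input-output pairs.

```python
# Toy sketch, all names hypothetical: Bayesian updating over a small
# hypothesis space of candidate computations, given observed input-output
# behaviour of a black-box predictor.

# Candidate computations the black box might be running.
hypotheses = {
    "identity": lambda x: x,
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
}

# Uniform prior over computations.
posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}

def update(observations):
    """Zero out hypotheses inconsistent with the observed I/O pairs, then
    renormalise (deterministic likelihoods: 1 if consistent, else 0)."""
    global posterior
    for x, y in observations:
        for name, f in hypotheses.items():
            if f(x) != y:
                posterior[name] = 0.0
    total = sum(posterior.values())
    posterior = {name: p / total for name, p in posterior.items()}

# Observing (2, 4) is consistent with "double" and "square", not "identity".
update([(2, 4)])
# Observing (3, 9) then singles out "square".
update([(3, 9)])
```

The contrast with a raw input-output representation is that the posterior over *programs* makes predictions on unseen inputs, and lets two agents running correlated computations reason about each other; a bare table of observed pairs supports neither.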