I’d be very sceptical of applying something like this to experts in a rich-domain, somewhat-pre-paradigmatic field like, say, conceptual alignment. Their expertise is their particular set of tools. And in a rich domain like this, there are likely to be many other tools that let you work on the problems productively. Even if you concluded that the paradigmatic tools seem best suited to the problems, you may still wish to maximise the chance that you’ll end up with a productively different set of tools, just because they allow you to pursue a neglected angle of attack. If you look overmuch to how the experts are doing it, you’ll Einstellung yourself into their paradigm and end up hacking at an area of the wall that’s proven to be very sturdy indeed.
For pre-paradigmatic fields, I agree that the insights you extract have a good chance of not being useful. But if you have people who are talking past each other because they can’t understand each other’s viewpoints, then I would expect this sort of thing to help make both groups legible to one another. Which is certainly true of the AI safety field. And communicating each other’s models is precisely what is being advocated now, and by the looks of it, not much progress has been made.
To me, it is pretty plausible that Yudkowsky’s purported knowledge is tacit, given his failures to communicate it so far. Hence, I think it would be valuable if someone tried ACTA on Yudkowsky. He seems to be focusing on communicating his views and giving his brain a break, so now would be a good time to try.