This post is probably right that illegible skills rely on tracking non-obvious bits of information. But I don’t think that discovering that info is as simple as asking “What Are You Tracking in Your Head”. Remember that there’s a lot of inferential distance between you and an expert, and they’ve likely forgotten all that you don’t know.
Thankfully the problem of getting tacit knowledge out of someone has a growing literature on it that is quite useful. The field of Naturalistic Decision Making developed some techniques to do this, one of which is fairly simple. It is called Applied Cognitive Task Analysis. Here’s a summary of it from CommonCog[1]:
There are four techniques in ACTA, and all of them are pretty straightforward to put to practice:
You start by creating a task diagram. A task diagram gives you a broad overview of the task in question and identifies the difficult cognitive elements. You’ll want to do this at the beginning, because you’ll want to know which parts of the task are worth focusing on.
You do a knowledge audit. A knowledge audit is an interview that identifies all the ways in which expertise is used in a domain, and provides examples based on actual experience.
You do a simulation interview. The simulation interview allows you to better understand an expert’s cognitive processes within the context of a single incident (e.g. a firefighter arrives at the scene of a fire; a programmer is handed an initial specification). This allows you to extract cognitive processes that are difficult to get at using a knowledge audit, such as situational assessment, and how such changing events impact subsequent courses of action.
You create a cognitive demands table. After conducting ACTA interviews with multiple experts, you create something called a ‘cognitive demands table’ which synthesises all that you’ve uncovered in the previous three steps. This becomes the primary output of the ACTA process, and the main artefact you’ll use when you apply your findings to course design or to systems design.
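To make the final artefact more concrete, here is a minimal sketch of how a cognitive demands table could be represented. The column names follow the four steps described above, but the field names and the example row are my own invention, not taken from the ACTA literature:

```python
from dataclasses import dataclass

@dataclass
class CognitiveDemand:
    """One row of a cognitive demands table, synthesised from ACTA interviews."""
    difficult_element: str     # a hard cognitive element from the task diagram
    why_difficult: str         # what the knowledge audit revealed about it
    common_errors: str         # mistakes novices make, per the experts
    cues_and_strategies: str   # what experts track and do (simulation interview)

# Hypothetical example row for a firefighting domain:
table = [
    CognitiveDemand(
        difficult_element="Sizing up the scene on arrival",
        why_difficult="Many cues must be read at once under time pressure",
        common_errors="Fixating on the visible fire, missing structural cues",
        cues_and_strategies="Smoke colour and volume, building type, vent points",
    ),
]

for row in table:
    print(row.difficult_element)
```

The point of the table is that each row ties a difficult element back to the concrete cues experts use, which is exactly the non-obvious tracked information the post is about.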
The blog post goes in depth on this method, the theory that undergirds it, and how to notice and acquire the perception that experts possess.
As to your actual question, I guess I’d say that the same holds true for video games. If you want to beat a difficult boss, then try to gather info first on what the timing is like, what cues there are for attacks and so forth.
Another area is QFT calculations: you need to keep track of the interaction terms and the free-field terms in the Lagrangian in order to turn the time-evolution operator into a series of Feynman diagrams, without ever bothering to expand out the power series and apply Wick’s theorem by hand. It makes writing out scattering amplitudes less of a chore.
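For readers who haven’t seen it, the machinery being skipped here is the standard Dyson expansion of the S-matrix in the interaction picture:

```latex
S = T \exp\!\left(-i \int d^4x \, \mathcal{H}_I(x)\right)
  = \sum_{n=0}^{\infty} \frac{(-i)^n}{n!}
    \int d^4x_1 \cdots d^4x_n \,
    T\{\mathcal{H}_I(x_1) \cdots \mathcal{H}_I(x_n)\}
```

Wick’s theorem turns each time-ordered product into sums of contractions (propagators) and vertices, and each such term is one Feynman diagram. The trick being described is tracking which terms of $\mathcal{H}_I$ contribute, so you can write down the diagrams directly instead of grinding through this expansion.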
Also, Feynman’s trick applies to reading nearly anything. Keep an example in your head and see if it matches what the text says about it. Most of the time when I get confused by a text, doing this will clear things up.
P.S.
This is unrelated to your post, but if you could choose anyone to work on AI alignment, who’d you pick?
A fantastic blog that is concerned with applied rationality. It outlines how to find and acquire tacit knowledge. I’d recommend starting from the Tacit Knowledge Series.
At first glance, CommonCog looked kinda MBA-flavored bullshitty (especially alongside the ACTA thing, which also sounds MBA-flavored bullshitty). But after reading a bit, it is indeed pretty great! Thanks for the link.
I’d be very sceptical of applying something like this to experts in a rich-domain/somewhat-pre-paradigmatic field like, say, conceptual alignment. Their expertise is their particular set of tools. And in a rich domain like this, there are likely to be many other tools that let you work on the problems productively. Even if you concluded that the paradigmatic tools seem most suited for the problems, you may still wish to maximise the chance that you’ll end up with a productively different set of tools, just because they allow you to pursue a neglected angle of attack. If you look overmuch to how experts are doing it, you’ll Einstellung yourself into their paradigm and end up hacking at an area of the wall that’s proven to be very sturdy indeed.
For pre-paradigmatic fields, I agree that the insights you extract have a good chance of not being useful. But if you have some people who are talking past each other because they can’t understand each other’s viewpoints, then I would expect this sort of thing to help make both groups legible to one another, which is certainly the situation in the AI safety field. And communicating each other’s models is precisely what is being advocated now, and by the looks of it, not much progress has been made.
To me, it is pretty plausible that Yudkowsky’s purported knowledge is tacit, given his failures to communicate it so far. Hence, I think it would be valuable if someone tried ACTA on Yudkowsky. He seems to be focusing on communicating his views and giving his brain a break, so now would be a good time to try.