Interesting post!
What it feels like for me to understand something is that I have a mental model of its gears and can trace through their interactions. This is usually a mental picture with a bunch of details I can explore. Usually it feels like a circuit diagram (a biased bit of my ontology that's the result of my education), and I'm literally tracing the flow of electrons through the circuit to see what signal comes out the end or what actuators get activated. This is true even of my models of things like mathematics, biology, and other people.
My models are also full of black boxes, though, where signals go in and noisy signals come out. When I don't understand something, it's a black box. As I study the signals, I slowly work out a theory of what's inside the black box. Sometimes I get so far as to have a nice circuit drawn up where the black box was. Other times I just have a note pinned to the box with my findings. Over time I try to convert black boxes into inspectable circuits.