I think this raises the question of what it even means to have a biological explanation (or explanation on any other specific level of abstraction), rather than a psychological one.
In a literal sense, it’s true that any human trait must be explainable biologically. Even something like preferring Star Wars to Star Trek: If you had a 100% accurate model of the biology of a human, you could load up that model with a scan, play a simulated version of both series, and look for simulated signs of approval.
But it feels a bit brute-forcey, doesn’t it? Like not a real explanation?
One idea I’ve had is that an explanation on a specified level of abstraction should be in terms of simple features of that abstraction, such as linear and low-order polynomial functions, rather than deeply nested, complex simulations. This has practical utility, in that very shallow functions are much easier to work with, and it also captures the notion that reductionism can bring you to an inappropriate level of abstraction if you are working with information that is nonlinearly encoded into an underlying substrate.
For an example of how to apply this, imagine that you were trying to explain a bug in some code as a program is running. Technically this is reducible to an electronic level of abstraction, but the memory locations the program uses will be unpredictable, depending on the allocators involved, so attempts at actually explaining it electronically would require strange nonlinear features whose main job is to extract the computational abstractions. It wouldn’t actually be an electronic rather than computational explanation. On the other hand, if e.g. a powerful cosmic ray entered the computer and broke it, then you would have a much more straightforward electronic explanation, and a more ad-hoc computational explanation.
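The allocator point can be made concrete with a tiny sketch (a hypothetical illustration, assuming CPython on a typical OS with allocator and address-space randomization in play): the computational-level fact about the program is stable across runs, while the substrate-level fact of where the bytes live is not.

```python
# Computational level: the program's logical state is reproducible.
data = [1, 2, 3]
print(data)  # same every run

# Substrate level: the memory address of that state depends on the
# allocator's history and address-space randomization, so it can
# differ from run to run. An "electronic" explanation of a bug in
# `data` would have to chase this moving target.
print(hex(id(data)))  # allocator-dependent; not stable across runs
```

An explanation phrased in terms of `data` is shallow and portable; one phrased in terms of the address would need to re-derive the allocator's behavior just to find the thing it is explaining.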
In terms of transness, a simple biological MIGI explanation could be something like “this hormone interacting with this cell starts a developmental cascade for gender identity, and it can be interfered with through these mechanisms, which cause transness”. Meanwhile, a simple biological AGP explanation could be something like “this area in male brains recognizes that one is pursuing attractive women, and under ordinary circumstances this other brain region sends a suppressing signal to it when one is considering oneself, but for AGPs it doesn’t do that”. However, one could have more complex explanations that don’t fit a simple biological story. For instance, the meme that AGP is caused by a culture in which women are presented as desirable and men are not presumably relies on complex, open-ended cognition that can vary in ways similar to how a memory allocator can vary.