I think I disagree, or need some clarification. As an example, the phenomenon in question is that the physical features of children look more or less like combinations of the parents’ features. Is the right kind of abstraction a taxonomy and theory of physical features at the level of nose shapes and eyebrow thickness? Or is it at the low-level ontology of molecules and genes, or is it in the understanding of how those levels relate to each other?
Or is that not a good analogy?
I’m unsure whether it’s a good analogy. Let me make a remark, and then you could reask or rephrase.
The discovery that the phenome is largely a result of the genome is of course super important for understanding, and also useful. The discovery of mechanically how the phenome is a result of the genome (transcription, splicing, translation, enhancing/promoting/silencing, trans-regulation, …) is separately important, and still ongoing. The understanding of “structurally how” characters are made, both in ontogeny and phylogeny, is a blob of open problems (evodevo, niches, …). Likewise, more simply, “structurally what”—how to even think of characters. Cf. Günter Wagner, Rupert Riedl.
I would say the “structurally how” and “structurally what” are most analogous. The questions we want to answer about minds aren’t like “what is a sufficient set of physical conditions to determine—however opaquely—a mind’s effects”, but rather “what smallish, accessible-ish, designable-ish structures in a mind can [understandably to us, after learning how] determine a mind’s effects, specifically as we think of those effects”. That is more like organology and developmental biology and telic/partial-niche evodevo (<- a made-up term, but hopefully you see what I mean).
https://tsvibt.blogspot.com/2023/04/fundamental-question-what-determines.html
I suppose it depends on what one wants to do with their “understanding” of the system? Here’s one AI safety case I worry about: if we (humans) don’t understand the lower-level ontology that gives rise to the phenomenon that we are more directly interested in (in this case I think that’s something like an AI system’s behavior/internal “mental” states—your “structurally what”, if I’m understanding correctly, which to be honest I’m not very confident I am), then a sufficiently intelligent AI system that does understand that relationship will be able to exploit the extra degrees of freedom in the lower-level ontology to our disadvantage, and we won’t be able to see it coming.
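To make the “extra degrees of freedom” worry concrete, here’s a toy sketch (purely illustrative; names like hidden_trigger are made up, not any real system): two low-level states that differ in a way the high-level description never reports look identical to anyone who only monitors the high level.

```python
# Toy illustration of hidden low-level degrees of freedom (made-up names).
from dataclasses import dataclass

@dataclass
class LowLevelState:
    weights_checksum: int   # stands in for opaque internals we do track
    hidden_trigger: bool    # a low-level fact the high-level view never reports

def high_level_view(state: LowLevelState) -> str:
    # The coarse-graining deliberately ignores hidden_trigger, mirroring an
    # "understanding" that only covers the level we directly care about.
    return f"behaves-normally(checksum={state.weights_checksum % 10})"

benign = LowLevelState(weights_checksum=42, hidden_trigger=False)
corrupt = LowLevelState(weights_checksum=42, hidden_trigger=True)

# Identical at the level we monitor, different in a way we can't see coming.
assert high_level_view(benign) == high_level_view(corrupt)
```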
I very much agree that structurally what matters a lot, but that seems like half the battle to me.
But somehow this topic is not afforded much care or interest. Some people will pay lip service to caring, others will deny that mental states exist, but either way the field of alignment doesn’t put much force (money, smart young/new people, social support) toward these questions. This is understandable, as we have much less legible traction on this topic, but that’s… undignified, I guess is the expression.
Even if you do understand the lower level, you couldn’t stop such an adversarial AI from exploiting it, or exploiting something else, and taking control. If you understand the mental states (yeah, the structure), then maybe you can figure out how to make an AI that wants to not do that. In other words, understanding the lower level is not sufficient, and probably not necessary / not a priority.