I am assuming that the twins communicating thoughts requires an act of will like speaking does. I do have reasons for this. Watching their faces when they communicate thoughts makes it seem voluntary.
But most of what you are doing when speaking is already subconscious: One can “understand” the rules of grammar well enough to form correct sentences on nearly all attempts, and yet be unable to explain the rules to a computer program (or to a child or ESL student). There is an element of will, but it’s only an element.
It may be the case that even with a high-bandwidth direct-brain interface it would take a lot of time and practice to understand another’s thoughts. Humans have a common cognitive architecture by virtue of shared genes, but most of our individual connectomes are randomized and shaped by individual experience. Our internal representations may thus be highly idiosyncratic, meaning a direct interface would be ad-hoc and only work on one person. How true this is, I can only speculate without more data.
In your programming language analogy, these data types are only abstractions built on top of a more fundamental CPU architecture where the only data types are bytes. Maybe an implementation of C# could be made that uses exactly the same bit pattern for an int as Haskell does. Human neurons work pretty much the same way across individuals, and even cortical columns seem to use the same architecture.
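To make the byte-level point concrete, here is a minimal Python sketch (Python standing in for C# and Haskell, which is an assumption of convenience): a typed value is just one interpretation of a bit pattern, so two implementations that agree on the bit layout can exchange values directly.

```python
import struct

# A 32-bit int is just four bytes; the "int" type is an abstraction
# the language layers on top of raw memory.
bits = struct.pack("<i", 42)  # little-endian 32-bit signed int
print(bits)  # b'*\x00\x00\x00'

# The same four bytes can be read back under a different type.
# Two languages agreeing on this layout could share the value as-is.
(as_uint,) = struct.unpack("<I", bits)
(as_float,) = struct.unpack("<f", bits)
print(as_uint)   # 42 -- compatible interpretation, same bit pattern
print(as_float)  # a tiny denormal float -- incompatible interpretation
```

The analogy to brains: shared low-level hardware (bytes, neurons) is not enough by itself; the *interpretations* layered on top also have to line up.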
I don’t think the inability to communicate qualia is primarily due to the limitation of language, but due to the limitation of imagination. I can explain what a tesseract is, but that doesn’t mean you can visualize it. I could give you analogies with lower dimensions. Maybe you could understand well enough to make a mental model that gives you good predictions, but you still can’t visualize it. Similarly, I could explain what it’s like to be a tetrachromat, how septarine and octarine are colors distinct from the others, and maybe you can develop a model good enough to make good predictions about how it would work, but again you can’t visualize these colors. This failing is not on English.
Sure, the difference between hearing about a tesseract and being able to visualise it is significant, but I think the difference might not be an impossibility barrier, just a matter of imaginative skill.
Having learned some echolocation, the qualia involved in my hearing have changed, and that makes a similar transition from a trichromat visual space to a tetrachromat one seem possible. The weird thing about it is that my ear receives just as much information as it did before; I just pay attention to it differently. Deficient understanding in the sense of getting things wrong is an easy line to draw, but it seems that at some point understanding becomes vivid rather than merely theoretical.