Loosely speaking, it feels like knowing the relative distances between concepts should determine the locations of all of the concepts “up to rotation,” and then knowing the locations of the low-level concepts should determine the “angle of rotation,” at which point everything is determined.
In the second appendix, I explain why this seemingly can’t be true. I think the counterpoint I give is decisive.
If you make a truly flexible intelligence that learns its concepts from scratch, you’re going to have a hard time making it do what you want.
One person’s modus ponens is another’s modus tollens; this is the opposite of the inference I draw from the reasoning I present in the post. Despite information inaccessibility, despite the apparent constraint that the genome defines reward via shallow sensory proxies, people’s values are still bound to predictable kinds of real-world objects like dogs and food and family (although, of course, human values are not bound to inclusive genetic fitness in its abstract form; I think I know why evolution couldn’t possibly have pulled that off; more on that in later posts).
I assume this is the part of the second appendix you’re referring to:
A congenitally blind person develops dramatically different functional areas, which suggests in particular that their person-concept will be at a radically different relative position than the convergent person-concept location in sighted individuals. Therefore, any genetically hardcoded circuit which checks at the relative address for the person-concept which is reliably situated for sighted people, will not look at the right address for congenitally blind people.
I really wouldn’t call this decisive. You’re citing a study that says the physical structure of the brain is different in blind people. The problem is that we seem to have no idea how the physical structure of the brain corresponds to the algorithm it’s running. It could be that these physical differences do not affect the person-concept or the process that checks for it.
More generally, I’m skeptical that neuroscience studies can tell us much about the brain. I see a lot of observations about which neurons fire in different circumstances but not a lot of big-picture understanding. I’m sure neuroscience will get there eventually, but for now, if I wanted to know how the brain works, I would go to a machine learning researcher before a neuroscientist.