A mind can only represent a complex concept X by embedding it into a tightly interwoven network of other concepts that combine to give X its meaning.
I’m going to object right there. A mind can represent a concept as a high-level regularity of sensory data. For example, “cat” is the high-level regularity that explains the sensory data obtained from looking at cats. Cats exhibit many regularities: they are solid objects which have a constant size and a shape that varies only in certain predictable ways. There is more than one cat in the world, and they have similar appearances. They also behave similarly.
This “concept-as-regularity” idea means that you don’t have a symbol grounding problem, you don’t have to define the semantics of concepts in terms of other concepts, and you don’t have the problem of having to hand-pick an ontology for your system; rather, it generates the ontology that is appropriate for the world that it sees, hears and senses.
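To make the “generates its own ontology” point a little more concrete, here is a minimal sketch of one way regularities could be pulled out of sensory data without hand-picking concepts: cluster feature vectors and treat each stable cluster as a candidate concept. The synthetic feature vectors, the two-cluster setup, and the k-means choice are all illustrative assumptions of mine, not anything claimed above.

```python
# Toy sketch: "concepts" as clusters (regularities) in sensory feature space.
# The synthetic feature vectors below stand in for whatever a real perceptual
# front-end would produce; nothing here is a claim about how such features
# should actually be extracted.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are feature vectors from many observations of two kinds of
# objects (say, "cats" and "chairs") -- two regularities in the data.
cats = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
chairs = rng.normal(loc=[-1.0, 0.5], scale=0.3, size=(50, 2))
observations = np.vstack([cats, chairs])

def kmeans(x, k, iters=50):
    """Minimal k-means: each resulting cluster is a candidate 'concept'."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(axis=-1), axis=1)
        centers = np.array([x[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers

labels, centers = kmeans(observations, k=2)
print("discovered concept prototypes:\n", centers)
```

Nothing in the code names “cat”; the concept is just whatever cluster the data supports, which is the sense in which the ontology comes from the world rather than from the designer.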
Of course, you’re still taking sensory inputs as primitives. How do you then evaluate changes to your sensory apparatus?
In the most basic case, simply ignore the possibility that this can happen.
In the more advanced case, I would say that you need to identify robust features of external reality using the first sensory apparatus you have, i.e. construct an ontology. Once you have that, you can use a different set of sensory apparatus, and note that many robust features of external reality manifest themselves as an isomorphic set of regularities in the new apparatus.
For example, viewing a cat through an IR camera will not yield all and only the regularities that we see when looking at a cat through echo-location or visible light. But there will be a mapping, mediated by the fact that these sensor systems are all looking at the same reality.
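Here is a small numerical sketch of that mapping claim, under invented assumptions (a three-dimensional “reality” and two linear, noisy sensors): channels from two different sensors watching the same scenes can be paired up purely by how they co-vary over time, which is the sense in which the mapping is mediated by the shared reality.

```python
# Sketch of the "mapping mediated by shared reality" idea: two different
# sensors observe the same sequence of scenes; channels of one sensor are
# matched to channels of the other by how well they co-vary over time.
# The scene variables and sensor models are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

scenes = rng.normal(size=(200, 3))   # hidden state of reality over time

# Each sensor sees the same reality through its own (unknown) linear
# transformation plus its own noise -- e.g. visible light vs. infrared.
sensor_a = scenes @ rng.normal(size=(3, 4)) + 0.1 * rng.normal(size=(200, 4))
sensor_b = scenes @ rng.normal(size=(3, 5)) + 0.1 * rng.normal(size=(200, 5))

# Correlate every channel of A with every channel of B across time and
# pair each A-channel with its best-matching B-channel.
corr = np.corrcoef(np.vstack([sensor_a.T, sensor_b.T]))[:4, 4:]
matches = np.abs(corr).argmax(axis=1)
print("sensor-A channel -> best-matching sensor-B channel:", matches)
```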
In the simplest case, the initial agent doesn’t allow changes in its I/O construction. Any modified agent would be a special case of what the initial agent constructs in the environment, acting through the initial I/O, using the initial definition of preference expressed in terms of that initial I/O. Since the initial agent is part of the environment, its control over the environment allows it, in particular, to deconstruct or change the initial agent, understood as a pattern in the environment (in the model of sensory input/reaction to output, seen through preference).
Yup. And for preference, it’s the same situation, except that there is only one preference (expressed in terms of I/O) and it doesn’t depend on observations (but it determines what should be done for each possible observation sequence). As concepts adapt to actual observations, so could representations of preference, constructed specifically for efficient access in this particular world (but they don’t take over the general preference definition).
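A type-level sketch of the setup being described, with every name below invented for illustration: the I/O channels are fixed, a policy says what to do after any possible observation history, and the single preference ranks whole policies rather than adapting to the observations that actually arrive.

```python
# Sketch only: types chosen here to illustrate the fixed-I/O, fixed-preference
# setup discussed above, not a proposal for how such an agent is built.
from typing import Callable, Sequence

Observation = bytes          # whatever arrives on the initial input channel
Action = bytes               # whatever leaves on the initial output channel
History = Sequence[Observation]

# A policy determines what should be done for each possible observation
# sequence.
Policy = Callable[[History], Action]

# The single, observation-independent preference: a ranking over policies,
# expressed entirely in terms of the initial I/O. Any successor agent the
# initial agent constructs in the environment is still scored by this same
# function, through the initial I/O.
Preference = Callable[[Policy], float]
```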
I agree with the “concept as regularity” concept. You can see that in how computers use network packets to communicate with each other: they don’t define a packet as a discrete message from another computer, they just chop the data up and process it according to its regularities.
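As a toy version of that packet analogy, here is a sketch in which the receiver is never handed “one discrete message”; it just applies a structural regularity to an undifferentiated byte stream. The length-prefixed framing is made up for this example rather than taken from any particular real protocol.

```python
# Toy illustration of the packet analogy: the receiver applies a structural
# regularity (a made-up length-prefixed framing) to a raw byte stream instead
# of being told where one "message" ends and the next begins.
def split_frames(stream: bytes):
    """Yield payloads from a stream of [1-byte length][payload] records."""
    i = 0
    while i < len(stream):
        length = stream[i]
        yield stream[i + 1 : i + 1 + length]
        i += 1 + length

stream = b"\x05hello\x05world\x01!"
print(list(split_frames(stream)))   # [b'hello', b'world', b'!']
```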
This leads to problems when trying to point at humans in an AI motivational system, though, which you have to build yourself. The problem is this: starting at the level of visual and audio input signals, build a regularity parser that returns a 1 when it apprehends a human and a 0 when it apprehends anything else. You have to future-proof it so that it recognises post-/transhumans as humans (else it might get confused when we seem to want to wipe ourselves out), and make sure it is not fooled by pictures, mannequins, answerphones, or chat bots.
Basically you have to build a system that can abstract out the computational underpinning of what it means to be human, and recognise it from physical interaction. And not just any computational underpinning: since physics is computational, there is plenty of physics in our brains that we don’t care about, such as exactly how we get different types of brain damage from different types of blunt trauma. So you have to build a regularity processor that abstracts out what humans think is important about the computational part of humans.
If you understand how it does this, you should be able to make uploads.
We develop an understanding of what it means to be human through interactions with humans, using a motivational system that can be somewhat gamed by static images and simulations, and one we don’t fully trust. This, however, leads to conflicting notions about humanity: whether uploads are humans or not, for example. So this type of process should probably not be used for something that might go foom.
I’ve kind of wanted to write about the concept-as-regularity thing for a while, but it seems akrasia is getting the best of me. Here’s a compressed block of my thoughts on the issue.
Concept-as-regularity ought to be formalized. Roughly, one can conclude that a new concept makes sense when several existing concepts are mutually correlated with no apparent determining factor. Since a Y-delta transformation on a Bayesian network looks like CAR, I’m guessing that the required number of mutually correlated concepts is three. Formalizing CAR would allow us to “formally” define lots of concepts, hopefully all of them. Bleggs and rubes are a perfect example of what CAR is useful for.
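A numerical illustration of that delta-to-Y intuition, with a generative model invented purely to make the point: three observed variables that are all pairwise correlated “with no apparent determining factor” can be explained by a single hidden node (the new concept), and conditioning on that node removes the pairwise correlations.

```python
# Three observables that all reflect one hidden "concept" plus noise.
# The mutual pairwise correlations form the "delta"; the latent variable
# at the hub is the "Y". Model and numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(2)

latent = rng.normal(size=10_000)                  # the hidden "concept"
a = latent + 0.5 * rng.normal(size=10_000)
b = latent + 0.5 * rng.normal(size=10_000)
c = latent + 0.5 * rng.normal(size=10_000)

# All three pairwise correlations are high (about 0.8): the "delta".
print(np.round(np.corrcoef([a, b, c]), 2))

# Subtracting the latent variable leaves (nearly) uncorrelated residuals:
# the "Y" picture, with the concept at the hub explaining the correlations.
print(np.round(np.corrcoef([a - latent, b - latent, c - latent]), 2))
```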
OK, now I see what a Y-delta transform is, but I doubt that anything that simple is the key to a rigorous definition of “concept as regularity”. Better, see the paper “The discovery of structural form” by Charles Kemp and Joshua B. Tenenbaum.
What’s a Y-delta transformation?
I [no longer] believe it’s the Yang–Baxter equation (http://en.wikipedia.org/wiki/Yang-Baxter_equation).
Whilst it would be intellectually pleasing if this were the concept that Warrigal is referencing, I doubt it.
I didn’t think it was the electrical engineering trick of turning a star-connected load into a triangle-connected one, but on further reflection, we are talking about a network...
The electrical engineering trick was several decades before Yang and Baxter and has its own Wikipedia entry.
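For reference, the electrical-engineering transform being referred to: a triangle (delta) of resistances between three terminals can be replaced by an equivalent star (Y), and vice versa. With delta resistances $R_{ab}$, $R_{bc}$, $R_{ca}$ and star resistances $R_a$, $R_b$, $R_c$, the standard conversion is

$$
R_a = \frac{R_{ab} R_{ca}}{R_{ab} + R_{bc} + R_{ca}}, \qquad
R_b = \frac{R_{ab} R_{bc}}{R_{ab} + R_{bc} + R_{ca}}, \qquad
R_c = \frac{R_{bc} R_{ca}}{R_{ab} + R_{bc} + R_{ca}},
$$

and, going the other way, $R_{ab} = (R_a R_b + R_b R_c + R_c R_a)/R_c$, with the other two edges obtained by cycling the indices.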