Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?
Not really. If I were serious about implementing this, I would start collecting distinct instances of omelette-concepts and analyzing them for variation, but I’m not going to do that. My expectation is that if I did, the most useful dimensions of variability would not map to any attributes that we would ordinarily think of or have English words for.
Perhaps what I have in mind can be said more clearly this way: there’s a certain amount of information that picks out the space of all human omelette-concepts from the space of all possible concepts… call that bitstring S1. There’s a certain amount of information that picks out my particular omelette-concept from the space of all human omelette-concepts… call that bitstring S2.
S2 is much, much shorter than S1.
It’s inefficient to have 7 billion human minds each taking up valuable bits storing its own copy of S1 along with its individual S2: 7 billion redundant copies in total. Why in the world would we do that, positing an architecture that didn’t physically require it? Run a bloody compression algorithm, store S1 somewhere once, and have each human mind refer to it.
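The store-it-once arrangement has a direct analogue in ordinary compression libraries. Here is a toy sketch in Python; the strings and byte counts are illustrative stand-ins, not a claim about how concepts are actually encoded. zlib’s preset-dictionary feature plays the role of S1, stored once and referenced by every compressor, while each message pays only for its own S2-like residue:

```python
import zlib

# Illustrative stand-in for S1: material common to all omelette-concepts.
SHARED = b"eggs butter pan whisk fold heat salt pepper fillings cheese herbs"

# Two individual concepts: mostly shared material, plus a small personal residue (their S2).
mine  = b"eggs butter pan whisk fold heat cheese herbs extra runny"
yours = b"eggs butter pan whisk fold heat salt pepper well done"

def size_without_dict(data):
    # Each mind stores its own full copy: compress with no shared context.
    return len(zlib.compress(data))

def size_with_dict(data):
    # SHARED is stored once, elsewhere; this compressor merely refers into it.
    c = zlib.compressobj(zdict=SHARED)
    return len(c.compress(data) + c.flush())

for label, data in (("mine", mine), ("yours", yours)):
    print(label, "alone:", size_without_dict(data),
          "with shared dict:", size_with_dict(data))
```

The per-individual output shrinks because long runs already present in the dictionary compress to short back-references. The catch is the same as in the argument above: the decompressor must have the shared dictionary on hand, which is exactly the “store S1 somewhere, have each mind refer to it” arrangement.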
I have no idea what S1 or S2 are.
And I don’t expect that they’re expressible in words, any more than I can express which pieces of a movie are stored as indexed substrings… it’s not like MPEG compression of a movie of an auto race creates an indexed “car” data structure with parameters representing color, make, model, etc. It just identifies repeated substrings and indexes them, and takes advantage of the fact that sequential frames share many substrings in common if properly parsed.
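The repeated-substring point can be made concrete with a toy LZ77-style pass. This is a deliberately simplified stand-in (real video codecs use motion compensation and transform coding, not literal substring matching), but it shows the key property: the output contains only literals and (offset, length) back-references, and no “car” structure ever appears anywhere:

```python
def lz77_ish(data, window=64, min_len=4):
    """Toy repeated-substring indexer: emit an (offset, length) back-reference
    wherever a run repeats earlier bytes, and a literal byte otherwise.
    There is no semantic structure here, only raw repetition."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        if best_len >= min_len:
            out.append(("ref", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def decode(tokens):
    # Reconstruct the original by re-expanding the back-references.
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])
    return bytes(out)

# Adjacent "frames" sharing most of their bytes compress to a few references.
frames = b"red car red car blue car"
tokens = lz77_ish(frames)
assert decode(tokens) == frames
```

Nothing in `tokens` labels any span as a car; the repetition is exploited without ever being named, which is the sense in which S1 and S2 need not correspond to anything we have words for.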
But I’m committed enough to a computational model of human concept storage that I believe they exist. (Of course, it’s possible that our concept-space of an omelette simply can’t be picked out by a bit-string, but I can’t see why I should take that possibility seriously.)