People who want to read more about this topic online may find that it is sometimes referred to as a “humongous” (slang for huge) lookup table, or HLUT. Googling that term will turn up some additional hits.
Psy-Kosh’s point about implementations that use lookup tables of various sizes internally echoes, I think, Moravec’s point in Mind Children: you could replace various sub-parts of your conscious AI with LUTs, ranging all the way from trivial substitutions up to a GLUT for the whole thing. Then, as he asks, when and where is the consciousness?
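The substitution being described can be illustrated in miniature: take a pure function over a finite input domain and replace it with a precomputed table. This is just a toy sketch (the function and names are illustrative, not from Moravec), but it shows why the two implementations are behaviorally indistinguishable from the outside:

```python
def compute(x: int) -> int:
    """Some 'inner' sub-computation we might wish to replace."""
    return (x * x + 3) % 17

# Build the lookup table by exhaustively tabulating the function
# over its (finite) input domain.
DOMAIN = range(17)
LUT = {x: compute(x) for x in DOMAIN}

def compute_via_lut(x: int) -> int:
    """A drop-in replacement: no computation happens here, only retrieval."""
    return LUT[x]

# The two implementations agree on every input in the domain,
# so no external test can tell them apart.
assert all(compute(x) == compute_via_lut(x) for x in DOMAIN)
```

Scaling this substitution up from one small helper function to the entire input-output mapping of the system is what yields the GLUT, which is what makes the question of where the consciousness "went" so pointed.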
I would suggest that the answer is meaningless, that consciousness cannot necessarily be localized in the same way as some other properties. Where, after all, in our own brains, is the consciousness, if we zoom in and look at individual neurons? Is there a “consciousness scalar field” where we can indicate, at each point in the brain, how much consciousness is present there? I doubt it.
One other question this raises is the issue of implementation. There is an extensive philosophical debate (also involving Chalmers) on when a given system can be said to implement a given computation, in particular a conscious computation. I recall Eliezer writing on these topics several years back; at the time, he saw this as a major stumbling block for functionalism. I would be interested in hearing how his thoughts have evolved, and I hope he can write about this soon.