The difference between code and computation is that code (i.e. program text) is merely one particular way of expressing a computation with particular properties. Perceiving the properties of that computation is what I imagine the job of a “sensory modality for code” would be. (Just like the job of vision is more general than extracting properties of any one visual field.)
One way to think about this is to consider the towers of Hanoi puzzle.
ToH is a relatively simple computation; some of its salient features are “intuitive” even to humans. The elementary recursive solution can be expressed as “move all but the bottom disk to a storage peg, so that the bottom disk can be moved at once to the target peg”. I’d suppose that anyone with some programming experience in modern languages will directly perceive the feature “recursion”. (But maybe not a COBOL programmer of old?)
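A minimal sketch of that recursive solution in Python (the peg labels and function signature are my own choices, not anything canonical):

```python
def hanoi(n, source, target, spare, moves):
    """Append to `moves` the steps that transfer n disks from source to target."""
    if n == 0:
        return
    # Move all but the bottom disk out of the way, onto the spare peg...
    hanoi(n - 1, source, spare, target, moves)
    # ...so the bottom disk can be moved at once to the target peg...
    moves.append((source, target))
    # ...then bring the rest over on top of it.
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7 moves for 3 disks
```

The “recursion” feature the paragraph mentions is visible at a glance: the function calls itself twice on a smaller instance.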
However it takes some cognitive work to get at subtler features, like computational cost of the solution (e.g. in number of moves), or the existence and “shape” of the non-recursive algorithm, or the Sierpiński Triangle showing up in a graph representation.
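Two of those subtler features can at least be checked mechanically: the move count satisfies T(n) = 2·T(n−1) + 1, i.e. T(n) = 2^n − 1, and a non-recursive algorithm exists (one standard formulation: on odd-numbered moves, shift the smallest disk one peg in a fixed cyclic direction; on even-numbered moves, make the only other legal move). A sketch of that iterative algorithm, with pegs numbered 0–2:

```python
def hanoi_iterative(n):
    """Iterative Towers of Hanoi: move n disks from peg 0 to peg 2."""
    pegs = [list(range(n, 0, -1)), [], []]  # disk n at the bottom of peg 0
    # Cycle direction for the smallest disk: 0->1->2 for even n,
    # 0->2->1 for odd n, so that the tower ends up on peg 2.
    step = 1 if n % 2 == 0 else 2
    moves = []
    small = 0  # peg currently holding disk 1
    for m in range(1, 2 ** n):  # exactly 2^n - 1 moves
        if m % 2 == 1:
            # Odd-numbered move: shift the smallest disk one peg over.
            dst = (small + step) % 3
            pegs[dst].append(pegs[small].pop())
            moves.append((small, dst))
            small = dst
        else:
            # Even-numbered move: the only legal move not using disk 1.
            a, b = [p for p in range(3) if p != small]
            if not pegs[a] or (pegs[b] and pegs[b][-1] < pegs[a][-1]):
                a, b = b, a
            pegs[b].append(pegs[a].pop())
            moves.append((a, b))
    return moves, pegs
```

The point of the paragraph stands: nothing about this loop-and-parity “shape” is perceptually obvious from the recursive statement of the rules.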
So as a first approximation of what it would feel like to have a sensory modality for code, I might imagine being able to directly intuit these properties, merely by “looking at” a description of the rules for moving disks around.
That seems to involve “easily perceiving” solutions to NP problems (general proofs of properties or existence and shapes of algorithms), and I’m not sure what simplifications could be used to avoid this without getting a ton of false negatives. Also, how would this help the AI think high-level thoughts about computation?
Worse than that. Most of the properties you care about in code aren’t even in NP. NP is the set of decision problems such that a “yes” answer can be verified in polynomial time, given a witness string. Properties like “this program is secure/deterministic/terminates” don’t, in general, have short proofs. Many of the properties you care about are undecidable if you assume unlimited memory, and intractable even if you don’t.
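A standard illustration of that last point (my example, not from the thread): the following loop is syntactically trivial, yet proving that it terminates for every starting value is exactly the Collatz conjecture, which remains open.

```python
def collatz_steps(n):
    """Iterate the Collatz map (n -> n/2 if even, 3n+1 if odd) until reaching 1.
    Whether this loop halts for *every* n >= 1 is an open problem."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 27 is a famously slow starter, but it does halt
```

No “sensory modality” that runs in bounded time can be expected to perceive termination here, since no human has managed to prove it either.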
In contrast, the human visual system, as I understand it, mostly does constant-time work, like edge detection, checking for color differences, etc.
“how would this help the AI think high-level thoughts about computation”
I’m checking out of the discussion temporarily while I reread the LOGI paper. I want to make sure I have the proper context to think of the above question.