Eliezer Yudkowsky has written about the idea of a “codic cortex”; that is, a specialized mental module for modelling the behavior of executable code.
And something like that would be really useful! For instance, there’s fundamentally no good reason to have any implementation bugs when writing code, or to fail to notice them easily when reading it. The techniques for proving code correct are well known; but in practice, for a human programmer to actually use them is so expensive (in terms of productivity) that it’s usually more efficient to skip them and find and fix bugs after the fact.
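To make the cost concrete, here’s a minimal sketch of my own (the function and annotations are invented for illustration): even a trivial loop, annotated with the preconditions, invariants, and postconditions a correctness proof would track, carries bookkeeping that working programmers usually skip.

```python
# Hypothetical sketch: counting positive numbers, annotated in the
# precondition / invariant / postcondition style a correctness proof uses.
def count_positives(xs: list[int]) -> int:
    # Precondition: xs is a finite list of integers.
    count = 0
    for i, x in enumerate(xs):
        # Loop invariant (before this iteration): count equals the number of
        # positive elements in xs[:i].
        if x > 0:
            count += 1
        # Invariant restored for xs[:i + 1].
    # Postcondition: count equals the number of positive elements in xs.
    return count

assert count_positives([3, -1, 0, 7]) == 2
```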
This is despite the fact that humans already have (in absolute terms) very good models of how the code they write works; if we didn’t, we couldn’t do nontrivial programming at all. But those models are sloppy, and there are details that humans easily miss. If we instead built the models from a formal and precise analysis of the code at hand, we’d get much better predictions out of them.
A lot of programming language/environment development has been concerned with having the compiler or runtime handle certain things (translating high-level structures to machine code, garbage collection, type safety, etc.) so that the programmer doesn’t need to worry about getting them right—both so they don’t end up getting them wrong, and so that having to painstakingly get them right doesn’t drain their productivity. But such approaches usually come with some performance cost, and in the end they’re crutches to deal with the fact that humans are no good at programming. None of them would be necessary for an intelligence that had a decent specialized module to handle code modelling.
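As a small illustration (my own sketch, not anything from the original): with type annotations and a static checker such as mypy, one category of mistakes is caught by the tooling rather than by the programmer’s mental model, at the cost of some annotation effort up front.

```python
# Hypothetical sketch: the annotations let a static checker (e.g. mypy) reject
# misuse before the code ever runs, so the programmer never has to model it.
def mean(values: list[float]) -> float:
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))   # fine
# mean(["1.0", "2.0"])         # a static checker flags this call as a type error
```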
When programming, I frequently find myself kind of guessing at what code will work, pumping some input through the function I write and checking to see if the output is consistent with my expectations.
I do not always bother to figure out whether a loop should start at zero or one before I just try it.
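For instance, something like this (a made-up sketch, not code from any real project):

```python
# Made-up example of the guess-and-check workflow: sum the integers 1..n.
def sum_up_to(n: int) -> int:
    total = 0
    for i in range(1, n + 1):  # first guess was range(0, n); a quick test caught it
        total += i
    return total

# Pump an input through and check the output against expectations.
assert sum_up_to(4) == 10
```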
Yes, this can cause problems, but that process seems to run counter to what is said here.
As you say, there are differences between individual human programmers in just how detailed their models of code behavior are (and indeed, for the same human depending on mental state; lack of concentration can lead to sloppiness in this activity as in any other).
Even so, I maintain that if your software works at all, you have had a much-better-than-nothing model in your mind (and conversely, if you ever have implementation bugs, that model is not perfect). You might not model precisely where a loop should start, but you probably had a good reason to put a loop there (as opposed to, say, a triple pointer dereference), and to make it depend on certain specific data (even if the precise dependence isn’t clear in your mind without experimentation), rather than just trying out random variables you have lying around as the (approximate) loop iteration count.
This is in contrast to an unintelligent process like natural evolution, which would try entirely random things and simply look at how well they perform. You couldn’t reasonably program anything in that manner at human typing speeds; the number of attempts required would make it infeasible.