I suspect you hit hard theoretical limits fairly quickly. Many questions about programs are undecidable in general (by Rice's theorem, every non-trivial semantic property of programs is undecidable). This tends not to be a major problem for working programmers, because humans tend to write programs that humans can reason about, and to keep comments, annotations, etc. around to aid that understanding.
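To make the undecidability point concrete, here is a toy sketch (function names are my own, not from the thread): a short, perfectly readable loop whose termination is equivalent to Goldbach's conjecture. Deciding whether this clearly-written program halts would settle an open problem in number theory, so no general analyzer can answer such questions for all programs.

```python
import itertools

def is_prime(n: int) -> bool:
    """Trial-division primality check."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n: int) -> bool:
    """True if the even number n >= 4 is a sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def search_for_counterexample():
    # This loop halts if and only if Goldbach's conjecture is false.
    # The code is trivially intelligible, yet "does it terminate?"
    # is (as of today) an open mathematical question.
    for n in itertools.count(4, 2):
        if not goldbach_holds(n):
            return n
```

The practical point stands, though: code like this is pathological, and the programs humans actually write rarely smuggle open problems into their control flow.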
But that doesn’t mean it’s possible to reason about arbitrary correct and useful code. There might be a one-way transform that converts an arbitrary intelligible program into a likely-unintelligible form: program shrouding with cryptographic guarantees, as it were.
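A limited version of this shrouding already exists. Point-function obfuscation via hashing is a standard example: the sketch below (my own illustration, not something from the thread) produces a checker whose behavior is easy to run but whose embedded secret cannot feasibly be read off from the code, since that would require inverting SHA-256. General-purpose versions of this idea go under the name indistinguishability obfuscation.

```python
import hashlib
import secrets

def shroud(secret: str):
    """Return a checker for `secret` that hides the secret itself.

    The returned closure contains only a random salt and a SHA-256
    digest; recovering `secret` from them means inverting the hash.
    """
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + secret.encode()).digest()

    def check(guess: str) -> bool:
        return hashlib.sha256(salt + guess.encode()).digest() == digest

    return check

check = shroud("hunter2")
# The checker still computes the original predicate...
assert check("hunter2")
assert not check("password")
# ...but inspecting `check` reveals only a salt and a digest.
```

This only obfuscates a single equality test, not arbitrary program logic, but it shows the flavor of transform the comment is gesturing at.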
Good point. But I think it’s fine not to be prepared for arbitrary code, similar to how visual systems aren’t prepared for arbitrary visual input. Human-style code currently dominates, and if AIs want to write in a different style they’ll have to figure out how to think about it anyhow; the only obscured code would be deliberately obscured.