Consider a first-order language and a theory in that language (defining the way the agent reasons, the kinds of concepts it can understand, and the kinds of statements it can prove). This could be a set theory such as ZFC or a theory of arithmetic such as PA. The theory should provide sufficient tools to define recursive functions and/or other necessary concepts.
This is unclear to me, and I’ve read and understood Enderton. I would have thought that ZFC and PA were sets of axioms and would say nothing about how an agent reasons.
Also,
In first-order logic, all valid statements are also provable by a formal syntactic argument.
Do you mean in the context of some axioms? (of course, you can always talk about whether the statement “PA implies X” is valid, so it doesn’t really matter).
I haven’t read the rest yet. I’m confident that you have a very precise and formally defined idea in mind, but I’d appreciate it if you could spell out your definitions, or link to them (mathworld, wikipedia, or even some textbook).
I would have thought that ZFC and PA were sets of axioms and would say nothing about how an agent reasons.
The other way around: the agent reasons using ZFC or PA. (And not just the sets of axioms, but the associated deductive system, i.e., the rules governing what can be proved and how.)
In first-order logic, all valid statements are also provable by a formal syntactic argument.

I simply mean completeness of first-order logic.
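(For reference, Gödel's completeness theorem: for a set of first-order sentences $\Gamma$ and a sentence $\varphi$,

$$\Gamma \models \varphi \quad\Longleftrightarrow\quad \Gamma \vdash \varphi,$$

i.e., $\varphi$ holds in every model of $\Gamma$ exactly when $\varphi$ is derivable from $\Gamma$ in the deductive system. Taking $\Gamma$ to be the axioms of PA or ZFC, "valid given the axioms" and "provable from the axioms" coincide.)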
An agent that reasoned by proving things in ZFC could exist.
Stupid argument: “This program, run on this data, produces this result” is a statement in ZFC and is provable or disprovable as appropriate.
Obviously, a real ZFC-based AI would be more efficient than that.
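A minimal sketch of that brute-force agent, assuming a hypothetical `is_zfc_proof` checker (everything named here is illustrative; no real verifier library is intended). The point is that checking a finished proof is decidable, so enumerating candidate proofs eventually decides any statement that ZFC settles:

```python
from itertools import count

def is_zfc_proof(candidate: str, statement: str) -> bool:
    """Hypothetical placeholder: does `candidate` encode a valid ZFC
    proof of `statement`? Verifying a finished proof is a mechanical,
    decidable task, unlike searching for one."""
    raise NotImplementedError("stand-in for an actual proof verifier")

def strings_of_length(n: int, alphabet: str = "01"):
    """Yield every string of length n over a fixed proof-encoding alphabet."""
    if n == 0:
        yield ""
        return
    for prefix in strings_of_length(n - 1, alphabet):
        for ch in alphabet:
            yield prefix + ch

def decide_by_proof_search(statement: str) -> bool:
    """Enumerate candidate proofs in order of length. For a statement of
    the form 'this program, run on this data, produces this result',
    ZFC proves either it or its negation, so the search terminates
    (in principle; the running time is astronomical)."""
    negation = "not (" + statement + ")"
    for n in count():
        for candidate in strings_of_length(n):
            if is_zfc_proof(candidate, statement):
                return True
            if is_zfc_proof(candidate, negation):
                return False
```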
ZFC is nice because Newton’s laws, for example, can be formulated in ZFC but aren’t computable. A computable agent could reason about those laws using ZFC, for example deriving the conservation of energy, which would allow it to compute certain things.
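To make that concrete (a standard textbook calculation, not something from the original post): for a particle obeying Newton's law $m\ddot{x} = -V'(x)$, the energy $E = \tfrac{1}{2}m\dot{x}^2 + V(x)$ satisfies

$$\frac{dE}{dt} = m\dot{x}\ddot{x} + V'(x)\dot{x} = \dot{x}\bigl(m\ddot{x} + V'(x)\bigr) = 0,$$

so $E$ is constant along every trajectory. Each step is expressible as a ZFC theorem, and the conclusion lets the agent compute, say, a final speed without integrating the full equations of motion.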
Okay, thanks. I’ll certainly read the rest tomorrow :)
A first-order logic comes with a set of axioms by definition.
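(A terminology note that may resolve this: the axioms built into first-order logic itself are the logical axioms of, e.g., a Hilbert-style system, schemas such as

$$\varphi \to (\psi \to \varphi),$$

which are valid in every model. ZFC and PA add non-logical axioms, such as Extensionality or Induction, on top of those; both levels together with inference rules like modus ponens make up the deductive system mentioned above.)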