You may have misunderstood the point of Botworld. It is a tool for understanding our progress on open problems in FAI, not a tool for solving such problems directly.
We’re trying to learn more about naturalistic agents and overcome certain obstacles of self-reference. Many classical models of machine intelligence fall short in counter-intuitive ways, and many proposed solutions are quite abstract. Botworld gives us a way to concretely illustrate both the flaws and the proposals.
We release Botworld not because it’s the cutting edge of our research, but because, when we do start talking about the research we’ve been doing, it will be helpful to have a concrete way to illustrate the problems that we have found and the solutions that we’re exploring.
Despite that, from the descriptions, I would have labelled it a reasonably significant research or open-source project in its own right.
I appreciate the effort, but you want to study agents that solve problems by using mutual program analysis and self-modification (in the form of generating different successors). Will you come up with non-trivial examples where such strategies pay off in your simulator? It seems quite hard to me. Due to technical limitations, anything involving automated theorem proving or complex planning is going to be off the table.
From the technical report:

> In this report, the register machines use a very simple instruction set which we call the constree language. A full implementation can be found in Appendix B. However, when modelling concrete decision problems in Botworld, we may choose to replace this simple language by something easier to use. (In particular, many robot programs will need to reason about Botworld’s laws. Encoding Botworld into the constree language is no trivial task.)

So the next version will accept robot programs written in Coq, I suppose ;)
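For readers who haven’t seen the report: the constree language operates on “cons trees,” a single universal datatype built from nil and pairs, much like Lisp cells. Here’s a minimal sketch of that kind of value and one possible unary number encoding; the class names and the encoding are my own illustrative assumptions, not the report’s actual specification.

```python
# Illustrative sketch of cons-tree values (not Botworld's actual spec).
# Every value is either Nil or a Cons of two sub-trees; registers in a
# constree-style machine would hold values of this one datatype.

class Nil:
    def __repr__(self):
        return "Nil"

class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

    def __repr__(self):
        return f"Cons({self.car!r}, {self.cdr!r})"

NIL = Nil()

def encode_int(n):
    """Encode a non-negative integer as a unary cons tree (assumed encoding)."""
    tree = NIL
    for _ in range(n):
        tree = Cons(NIL, tree)
    return tree

def decode_int(tree):
    """Count the spine length of a unary-encoded cons tree."""
    n = 0
    while isinstance(tree, Cons):
        n += 1
        tree = tree.cdr
    return n

print(decode_int(encode_int(3)))  # prints 3
```

The appeal of a single universal datatype is that the machine model stays tiny, but as the quoted passage notes, that same austerity is what makes encoding anything non-trivial (like Botworld’s own laws) such hard work.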