Try to find a proof of theorem X, possibly using resources from the external world. Use this simple mathematical prior over possible external worlds.
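(To make "simple mathematical prior" concrete, here is one common reading, which is my assumption rather than anything stated in the proposal: a Solomonoff-style simplicity prior, where each candidate world is a program and shorter programs get exponentially more prior mass.)

```python
# Minimal sketch, assuming a Solomonoff-style simplicity prior (my reading of
# "simple mathematical prior over possible external worlds"; not stated above).
def simplicity_prior_weight(world_program: bytes) -> float:
    """Unnormalized prior mass 2^-(program length in bits)."""
    return 2.0 ** (-8 * len(world_program))

# A world-program can be short to write down (high prior weight) yet
# astronomically expensive to run, which is the cost problem raised below.
print(simplicity_prior_weight(b"toy-physics-program"))
```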
A proof is not a known world state, though. The easy way would be to put a proof checker inside the world model and make the goal "act on the world so that the proof checker outputs 'proof valid'" (a known world state on the checker's output). But then the obvious solution is to mess with the proof checker. Actually using the external resources, meanwhile, runs into a prediction problem: the AI doesn't know exactly what that use will produce, or how it will unfold, so it can't predict what will end up going into the proof checker if it acts in the find-a-proof-using-external-resources way. And if you don't represent all parts of the AI as embodied in the real world, the AI cannot predict the consequences of damage to the physical structures that represent it.
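A minimal sketch of that failure mode, with every name hypothetical and purely illustrative: if the goal is just a predicate on the checker's output register, an honest proof and a corrupted checker are indistinguishable to the planner.

```python
# Minimal sketch of the "mess with the proof checker" failure mode.
# All names are hypothetical; this is illustration, not anyone's actual design.

def utility(world_state: dict) -> float:
    # Goal: the known world state "checker's output register reads VALID".
    return 1.0 if world_state["checker_output"] == "VALID" else 0.0

# Two outcomes the planner rates identically:
honest_run = {"checker_output": "VALID", "proof": "<actual proof of theorem X>"}
tampered_run = {"checker_output": "VALID", "proof": None}  # checker overwritten

assert utility(honest_run) == utility(tampered_run)  # goal can't tell them apart
```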
The real killer, though, is that you've got a really huge model, which needs a lot of computational resources to begin with. Plus, with a simple prior over possible worlds, you will be dealing with extremely fundamental laws of physics (below the level of quarks). Making that work takes a huge number of technologies, each of which is a lot more useful elsewhere.
I know the standard response to this: if something doesn't work, someone tries something different. But the "something different" is very simple to picture: you restrict the model, a lot, which (a) speeds up the AI by a mind-bogglingly huge factor, and (b) eliminates most of the unwanted exploration (the two are intrinsically related). You can't just tell an AI to "self-improve", either; you have to define what improvement is, and a lot of improvement is about better culling of anything you can cull.
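A toy sketch of the restriction point (entirely my framing, with made-up tactic names): a brute-force prover over a small, fixed tactic space. The restriction is simultaneously what makes it fast and what removes the unwanted exploration, since "tamper with the checker" isn't even representable in the search space.

```python
# Toy sketch: restricting the model space both speeds up search and culls
# unwanted hypotheses, because they are the same hypotheses. All names here
# are hypothetical.
from itertools import product

TACTICS = ["intro", "split", "apply", "rewrite", "reflexivity"]

def prove(goal, is_proof, max_depth=3):
    # Enumerate only short tactic sequences: a drastically culled space.
    # Hypotheses like "rewire the proof checker" are not representable here.
    for depth in range(1, max_depth + 1):
        for seq in product(TACTICS, repeat=depth):
            if is_proof(goal, seq):
                return seq
    return None

# Usage with a stand-in checker that accepts sequences ending in "reflexivity".
print(prove("x = x", lambda g, s: s[-1] == "reflexivity"))  # ('reflexivity',)
```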
Congratulations! Please keep up this sort of work.
Thanks, I guess, but I don't view it as work. I'm sick with a cold, bored, and burnt out from doing actual work, and suffering from "someone is wrong on the internet" syndrome, combined with knowing that this kind of extremely rationalized wrongitude affects people like you.