The questions you wanted to ask in that thread were for a poly-time algorithm for SAT and for short proofs of math theorems. For those, why do you need to instantiate an AI in a simulated universe (which allows it to potentially create what we’d consider negative utility within the simulated universe) instead of just running a (relatively simple, sure to lack consciousness) theorem prover?
Is it because you think that being “embodied” helps with the ability to do math? Why? And does the reason carry through even if the AI has a prior that assigns probability 1 to a particular universe? (It seems plausible that experience dealing with empirical uncertainty might help with handling mathematical uncertainty, but that doesn’t apply if you have no empirical uncertainty...)
An AI in a simulated universe can self-improve, which would make it more powerful than the theorem provers of today. I’m not convinced that AI-ish behavior, like self-improvement, requires empirical uncertainty about the universe.
But self-improvement doesn’t require interacting with an outside environment (unless “improvement” means increasing computational resources, but the outside being simulated nullifies that). For example, a theorem prover designed to self-improve can do so by writing a provably better theorem prover and then transferring control to (i.e., calling) it. Why bother with a simulated universe?
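To make the “write a provably better prover and call it” loop concrete, here is a toy sketch (my own illustration, not anyone’s actual proposal; every name is made up, and the hard part, proving a successor better, is abstracted into a synthesizer that is assumed to return a successor only after verifying it):

```python
def self_improving_prover(prove, synthesize_successor, goal, depth=0):
    """Toy self-improvement loop with no outside environment.

    `synthesize_successor` is assumed to return a successor prover
    only when it has already *proved* the successor better; otherwise
    it returns None.
    """
    successor = synthesize_successor(depth)
    if successor is not None:
        # Transfer control to the provably better prover.
        return successor(prove, synthesize_successor, goal, depth + 1)
    # No further improvement found: just try to prove the goal directly.
    return prove(goal)
```

The point of the sketch is that the whole loop is a closed computation: control passes from prover to successor without any action on an external (or simulated) environment.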
A simulated universe gives precise meaning to “actions” and “utility functions”, as I explained some time ago. It seems more elegant to give the agent a quined description of itself within the simulated universe, and a utility function over states of that same universe, than to allow only actions like “output a provably better version of myself and then call it”.
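A minimal sketch of that framing (under my own assumptions; none of these names come from the actual proposal): the universe is a pure step function on states, the agent is simply part of the state, and utility is defined over universe states rather than over the agent’s outputs:

```python
def evaluate_universe(initial_state, step, utility, horizon):
    """Run a deterministic simulated universe and score its final state.

    The agent is embedded somewhere inside `state` (a quined description
    of itself would live there too); its "actions" are just whatever
    `step` computes, and `utility` is evaluated over states of the
    universe itself, not over anything the agent outputs.
    """
    state = initial_state
    for _ in range(horizon):
        state = step(state)
    return utility(state)
```

Nothing here privileges actions like “output a better prover”: any change the embedded agent makes to the state is an action, and the utility function judges the resulting universe.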
From the FAI wikipedia page:

One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question.
Cousin_it’s approach may be enough to avoid that.