If you want to build a smart machine, decision theory seems sooo not the problem.
Deep Blue just maximised its expected success. That worked just fine for beating humans.
We have decision theories. The main problem is implementing approximations to them with limited spacetime.
IMO, this is probably all to do with craziness about provability, originating from paranoia.
Obsessions with the irrelevant are potentially damaging, since excessive caution carries risks of its own.