I confess that I lost track of the reasoning about which-order-logic-can-do-what-and-why somewhere in the last post or so.
Me too.
I’m also not clear on how and why it is important for understanding this “Highly Advanced Epistemology 101 for Beginners”.
It’s the buildup to the “open problems in FAI”. Large parts of the internals of an AI look like systems for reasoning in rigorous ways about math, models, etc.
If that were the reasoning, it’d be nice if he came out and explained why he believes that to be the case. Because just about any A(G)I researcher would take issue with that statement...
Maybe we need a handy summary table of the which’s, what’s, and why’s...