I confess that I lost track of the reasoning about which-order-logic-can-do-what-and-why somewhere in the last post or so. I’m also not clear how and why it is important in understanding this “Highly Advanced Epistemology 101 for Beginners”. I’m wondering whether (or how) the ability to “properly talk about unbounded finite times, global connectedness, particular infinite cardinalities, or true spatial continuity” is essential or even important for (F)AI research.
> I confess that I lost track of the reasoning about which-order-logic-can-do-what-and-why somewhere in the last post or so.
Me too.
> I’m also not clear how and why it is important in understanding this “Highly Advanced Epistemology 101 for Beginners”.
It’s the buildup to the “open problems in FAI”. Large parts of the internals of an AI look like systems for reasoning in rigorous ways about math, models, etc.
> It’s the buildup to the “open problems in FAI”. Large parts of the internals of an AI look like systems for reasoning in rigorous ways about math, models, etc.
If that were the reasoning, it’d be nice if he came out and explained why he believes that to be the case. Because just about any A(G)I researcher would take issue with that statement...
Maybe we need a handy summary table of the whiches, whats, and whys...
I assume we need to ask some of these questions in order to decide if, or in what sense, an AGI needs second-order logic.
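For what it’s worth, the standard textbook example behind the “which-order-logic-can-do-what” question is induction over the natural numbers, which is directly relevant to “properly talking about unbounded finite times”: first-order Peano arithmetic only has an induction *schema* (one axiom per definable formula), and famously admits nonstandard models containing “infinite” number-like elements, whereas the single second-order induction axiom quantifies over all properties at once and pins down the standard naturals up to isomorphism. A minimal sketch of the two (the formula φ and property variable P are just the usual placeholders, not anything from this thread):

```latex
% First-order induction: an axiom schema, one instance per formula \varphi
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \to \varphi(n+1))\bigr) \to \forall n\,\varphi(n)

% Second-order induction: one axiom, quantifying over ALL properties P
\forall P\,\Bigl[\bigl(P(0) \land \forall n\,(P(n) \to P(n+1))\bigr) \to \forall n\,P(n)\Bigr]
```

The schema only covers properties you can write down as formulas, and there are countably many of those; the second-order axiom covers every subset of the domain, which is why it excludes the nonstandard models and why questions like “does an AGI need second-order logic?” come down to whether it matters that the weaker system can’t rule them out.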