It’s the buildup to the “open problems in FAI”. Large parts of the internals of an AI look like systems for reasoning in rigorous ways about math, models, etc.
If that were the reasoning, it’d be nice if he came out and explained why he believes that to be the case. Because just about any A(G)I researcher would take issue with that statement...