In particular, Yudkowsky’s claim that a superintelligence is efficient with respect to humanity on all cognitive tasks is, IMO, flat-out infeasible/unattainable (insofar as we include human-aligned technology when evaluating humanity’s capabilities).
To respond to a footnote:
I agree, in a trivial sense: one can always construct trivial tasks that stump an AI because the AI, by definition, cannot solve the problem, like being a closet.
But that’s the only case where I expect impossibility/infeasibility for AI.
In particular, I suspect that any attempt to extend this to non-trivial domains will fail.