Just to clarify: your post’s bottom line is that AIs won’t be omnipotent, and this matters for AI because many common real-life problems are NP-hard. But it also doesn’t really matter (for us?), because there are ways around NP-hardness: being clever and solving a different problem instead, scaling hardware and writing more efficient programs, or (referencing James) just settling for a good-enough solution rather than an optimal one?
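To make the last point concrete, here is a toy illustration (my own example, not from the post): for the NP-hard 0/1 knapsack problem, an exact answer needs exponential-time search, while a simple greedy heuristic runs fast and often lands close to optimal, which is the "good-enough instead of optimal" move.

```python
from itertools import combinations

def optimal_value(items, capacity):
    """Exact 0/1 knapsack by brute-force subset search: O(2^n)."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for w, _ in combo) <= capacity:
                best = max(best, sum(v for _, v in combo))
    return best

def greedy_value(items, capacity):
    """Good-enough heuristic: grab items by value/weight ratio. Fast, not optimal."""
    total, remaining = 0, capacity
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= remaining:
            remaining -= w
            total += v
    return total

items = [(3, 60), (4, 40), (5, 50), (2, 30)]  # (weight, value) pairs
print(optimal_value(items, 10))  # exact optimum: 140
print(greedy_value(items, 10))   # greedy gets 130, close but not optimal
```

On this instance the greedy answer is within about 7% of optimal while doing linear work after a sort, versus checking all 2^n subsets for the exact answer.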