I mostly agree with Gwern; they're right that there are ways around complexity classes: through cleverness and solving a different problem, or by scaling hardware and writing programs more efficiently.
They conclude their essay by saying:
at most, they demonstrate that neither humans nor AIs are omnipotent
and I think that’s basically what I was trying to get at. Complexity provides a soft limit in some circumstances, and it’s helpful to understand which points in the world impose that limit and which don’t.
Just to clarify, your post's bottom line is that AIs won't be omnipotent, and this matters because a lot of common real-life problems are NP-hard; but also that this doesn't really matter (for us?) because there are ways around NP-hardness: through cleverness and solving a different problem, by scaling hardware and writing programs more efficiently, or (referencing James) by just finding a good-enough solution instead of an optimal one?
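That last escape route, good-enough instead of optimal, is easy to make concrete. As a minimal sketch (my own illustration, not an example from the post or from Gwern's essay): minimum vertex cover is NP-hard to solve exactly, yet a textbook greedy heuristic returns a cover at most twice the optimal size, in time linear in the number of edges.

```python
def approx_vertex_cover(edges):
    """Return a vertex cover at most 2x the size of an optimal one.

    Greedy strategy: scan the edges; whenever an edge has neither
    endpoint covered yet, add both endpoints. The chosen edges form a
    maximal matching, and any optimal cover must include at least one
    endpoint of each matched edge, which gives the factor-2 guarantee.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Toy graph: an optimal cover is {0, 3} (size 2); the greedy answer
# below has size 4, within the guaranteed factor of 2.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(approx_vertex_cover(edges))
```

The factor of 2 is a worst-case bound; in practice the heuristic often does better, which is the "soft limit" point in miniature: the exact problem stays hard, but the answer you actually need is frequently cheap.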