If the UGC is true, then some classes of constraint satisfaction problems are hard to approximate in the worst case. There’s:
- no reason to believe that “satisfying human values” in general falls into one of these classes;
- no reason to expect that our specific instance of human values would be worst-case (this in particular is unlikely, since nobody is choosing human values adversarially with this problem in mind);
- no particularly compelling support for the UGC being true.
In addition, given that you haven’t made any concrete appeal to complexity theory, it seems unfair to rely on the mere mention of mathematics to lend support to your argument.
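For concreteness, a CSP instance is just a set of variables over finite domains plus constraints to satisfy; the worst-case hardness statements at issue are about instances where, as far as we know, nothing fundamentally beats search over an exponentially large assignment space. A minimal brute-force sketch (all names are mine, for illustration only):

```python
from itertools import product

def solve_csp(domains, constraints):
    """Brute-force a CSP: variables with finite domains plus constraints.

    Each constraint is (scope, predicate). The search space is the product
    of all domains, which grows exponentially in the number of variables;
    worst-case hardness results say that for some CSP classes no algorithm
    does fundamentally better than a search like this.
    """
    names = list(domains)
    for values in product(*(domains[v] for v in names)):
        assignment = dict(zip(names, values))
        if all(pred(*(assignment[v] for v in scope))
               for scope, pred in constraints):
            return assignment
    return None  # unsatisfiable

# Toy instance: 2-coloring constraints along a path of three variables.
domains = {"a": [0, 1], "b": [0, 1], "c": [0, 1]}
constraints = [(("a", "b"), lambda x, y: x != y),
               (("b", "c"), lambda x, y: x != y)]
print(solve_csp(domains, constraints))  # {'a': 0, 'b': 1, 'c': 0}
```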
What do you mean by classes? I’m pretty sure the UGC applies to all sorts of constraint satisfaction problems, not just a certain kind of them.
I agree. But what are the odds it’s best case, either? I’m not saying that we’re doomed and have no chance of evaluating things. I’m saying that this is a potential difficulty I hope someone looks into further.
Some CSPs already have efficient exact or approximate algorithms. (For example, we can approximate Max-Cut to within a factor of 0.878, or test whether a graph has a 2-coloring, in polynomial time.) The UGC itself is normally stated about a single class of CSPs, the unique games (label-cover instances whose constraints are bijections), though it is known to have implications for other classes as well.
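To make the easy case concrete: a graph has a 2-coloring iff it is bipartite, which breadth-first search decides in linear time. A sketch (the function name is mine):

```python
from collections import deque

def is_two_colorable(n, edges):
    """Decide whether an n-vertex graph admits a proper 2-coloring.

    A graph is 2-colorable iff it is bipartite, i.e. contains no odd
    cycle; BFS checks this in O(n + |edges|) time.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    color = [None] * n
    for start in range(n):           # handle disconnected graphs
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]   # alternate colors
                    queue.append(v)
                elif color[v] == color[u]:    # odd cycle found
                    return False
    return True

# A 4-cycle is bipartite; a triangle is not.
print(is_two_colorable(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_two_colorable(3, [(0, 1), (1, 2), (2, 0)]))          # False
```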
The impression I’ve always gotten was that average-case instances of many NP-complete problems are believed to be tractable. JoshuaZ points out that this may not be true, but it seems from some skimming that results along that direction are less like “a typical instance is hard” and more like “we can generate random hard instances semi-efficiently”.
This is interesting from a cryptographic point of view: it would be nice to have a one-way function whose hardness depended on P vs. NP rather than on a weaker assumption. But if we encounter a constraint satisfaction problem in the real world, it is not going to be drawn even from an adversarially chosen random distribution.
It is, in fact, likely that such a problem will have some additional exploitable structure that will make it easier than the typical instance (which we already don’t know is hard). An example of this is metric TSP. This would be the “best case” you’re referring to.
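To illustrate the metric TSP example: the triangle inequality is exactly the exploitable structure, since it guarantees that a tour obtained by shortcutting a walk around a minimum spanning tree costs at most twice the optimum. A sketch of the classic double-tree 2-approximation, assuming Euclidean points (names are mine):

```python
import math

def mst_preorder_tour(points):
    """2-approximate metric TSP via the double-tree heuristic.

    The triangle inequality is what makes the shortcut preorder tour
    cost at most 2x the optimum; general TSP admits no constant-factor
    approximation unless P = NP.
    """
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm for the minimum spanning tree.
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v], parent[v] = dist(u, v), u

    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)

    # Preorder walk of the MST, shortcutting repeated vertices.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(0)  # close the cycle

    return tour, sum(dist(tour[i], tour[i + 1]) for i in range(n))

# Four corners of a unit square: the optimal tour has length 4.
tour, cost = mst_preorder_tour([(0, 0), (0, 1), (1, 1), (1, 0)])
print(cost)  # 4.0 (optimal here; at most 2x the optimum in general)
```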
The reason the UGC matters is that one can use it to show that for many problems even approximating the optimum is hard. This, together with (more widely believed) conjectures, suggests that for many of those problems the average instance is almost as hard as the hardest instances. (I agree, however, with most of the rest of your comment.)
Can you elaborate on how we go from “approximating is hard” to “the average instance is hard”? (Or point me to somewhere I can read up on it?)
This is pushing the limits of my knowledge of the subject. Formalizing this is pretty difficult, and making such claims mathematically precise is currently open. See e.g. here(pdf), and there’s been more technical work like this(pdf). Right now, the main reasons for believing it are essentially empirical: for most natural NP-complete problems, the average cases look to be about as hard as the worst cases. Given that, one would expect the same to hold for the approximation problems. One direction for formalization is to use variants of the exponential time hypothesis, but in this context one then needs to distinguish between “the average instance is hard” and “random instances are hard.”
It is possible to use padding arguments to construct NP-complete problems where most random instances are fairly easy, but the set of hard instances of a given size still has to grow faster than any polynomial in the length of the input; otherwise one would have NP in P/poly (by hard-coding the solutions to the rare hard instances), and that is strongly believed not to happen.