If the UGC is true, then some classes of constraint satisfaction problems are hard to approximate in the worst case.
What do you mean by classes? I’m pretty sure the UGC applies to all sorts of constraint satisfaction problems, not just a certain kind of them.
no reason to expect that our specific instance of human values would be worst-case (this in particular is unlikely, since nobody is choosing human values adversarially with this problem in mind);
I agree. But what are the odds it’s best case, either? I’m not saying that we’re doomed and have no chance of evaluating things. I’m saying that this is a potential difficulty I hope someone looks into further.
What do you mean by classes? I’m pretty sure the UGC applies to all sorts of constraint satisfaction problems, not just a certain kind of them.
Some CSPs already have efficient exact or approximate algorithms. (For example, we can approximate Max-Cut to within a factor of 0.878, or test whether a graph has a 2-coloring, in polynomial time.) The UGC is normally stated about a single class of CSPs (unique games: systems of two-variable constraints in which each constraint is a bijection between the variables' values), though I’m sure it has implications for some other classes as well.
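To make the 2-coloring example concrete, here is a sketch of the standard linear-time bipartiteness check via BFS (function and variable names are mine, not from the thread): a graph is 2-colorable exactly when it contains no odd cycle, and alternating colors along a BFS either succeeds or exposes such a cycle.

```python
from collections import deque

def is_two_colorable(n, edges):
    """Return True iff the graph on vertices 0..n-1 is 2-colorable (bipartite).

    Runs in O(n + m) time: BFS propagates alternating colors, and any
    edge joining two same-colored vertices witnesses an odd cycle.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):           # handle disconnected graphs
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False      # odd cycle found
    return True

print(is_two_colorable(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True: 4-cycle
print(is_two_colorable(3, [(0, 1), (1, 2), (2, 0)]))          # False: triangle
```

By contrast, the 0.878 guarantee for Max-Cut (Goemans–Williamson) needs semidefinite programming, which is why it is not sketched here; the point is only that some CSPs are exactly easy and others are approximately easy.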
I agree. But what are the odds it’s best case, either? I’m not saying that we’re doomed and have no chance of evaluating things. I’m saying that this is a potential difficulty I hope someone looks into further.
The impression I’ve always gotten was that average-case instances of many NP-complete problems are believed to be tractable. JoshuaZ points out that this may not be true, but it seems from some skimming that results in that direction are less like “a typical instance is hard” and more like “we can generate random hard instances semi-efficiently”.
This is interesting from a cryptographic point of view: it would be nice to have a one-way function whose hardness rested on P vs. NP rather than on a stronger assumption, such as the hardness of factoring. But if we encounter a constraint satisfaction problem in the real world, it is not going to come even from an adversarially chosen random distribution.
It is, in fact, likely that such a problem will have some additional exploitable structure that makes it easier than the typical instance (which, again, we don’t even know to be hard). Metric TSP is an example: the triangle inequality makes constant-factor approximation possible, while general TSP admits no constant-factor approximation at all unless P = NP. This would be the “best case” you’re referring to.
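To illustrate how that extra structure gets exploited, here is a sketch of the classic MST-doubling 2-approximation for metric TSP (names are mine): build a minimum spanning tree, then shortcut a preorder walk of it. The shortcuts are only safe because the triangle inequality guarantees they never increase the cost, so the tour costs at most twice the MST weight, hence at most twice the optimum. (Christofides' algorithm refines this to a factor of 1.5.)

```python
import math

def mst_prim(dist, n):
    """Prim's algorithm on the complete graph with edge weights dist(i, j).
    Returns the MST as an adjacency list."""
    in_tree = [False] * n
    best = [math.inf] * n
    parent = [-1] * n
    best[0] = 0.0
    adj = [[] for _ in range(n)]
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v] = dist(u, v)
                parent[v] = u
    return adj

def tsp_2approx(points):
    """2-approximate tour for Euclidean (hence metric) TSP:
    preorder walk of an MST, with repeated vertices shortcut away."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    adj = mst_prim(dist, n)
    tour, seen = [], [False] * n
    stack = [0]
    while stack:                 # iterative DFS gives the preorder walk
        u = stack.pop()
        if seen[u]:
            continue             # shortcutting: skip already-visited cities
        seen[u] = True
        tour.append(u)
        stack.extend(adj[u])
    return tour

points = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = tsp_2approx(points)       # visits each of the 4 cities exactly once
```

The same instance fed to an exact solver would need exponential time in the worst case; the triangle inequality is precisely the kind of “additional exploitable structure” the comment is describing.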