A “cheat” is a solution to a problem that is invariant to a wide range of scenarios for how the hard parts could be solved individually.
ML itself is a cheat. Even if we don’t understand the particulars of the information-processing task, we can just bonk it with an ML algorithm and it spits out a solution for us.
But to have any hope of finding an adequate cheat, you need a good grasp of at least where the hard parts are, even if you’re unsure how they could be tackled individually. And constraining your expectation over what the possible subsolutions should look like expands the range of cheats you could apply, because now they only need to be invariant to a smaller space of possible scenarios.[1]
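That monotonicity claim can be made concrete with a toy sketch (my own construction, not from the original discussion): model scenarios as integers and candidate cheats as predicates, where a cheat is admissible only if it works in every scenario it must be invariant to. Shrinking the scenario set can only grow the admissible set.

```python
# Toy model: a "cheat" is admissible iff it works in every scenario
# it must be invariant to. Constraining the scenario set (here, to
# even-numbered scenarios) can only enlarge the admissible set.

all_scenarios = set(range(10))
constrained = {s for s in all_scenarios if s % 2 == 0}  # narrowed expectation

candidates = {
    "works_everywhere": lambda s: True,
    "works_on_evens": lambda s: s % 2 == 0,
    "works_below_5": lambda s: s < 5,
}

def admissible(scenarios):
    """Return the names of candidates that succeed in every scenario."""
    return {name for name, works in candidates.items()
            if all(works(s) for s in scenarios)}

print(admissible(all_scenarios))  # {'works_everywhere'}
print(admissible(constrained))    # {'works_everywhere', 'works_on_evens'}
```

The admissible set under the constrained scenario space is a strict superset of the one under the full space, which is the sense in which narrowing your expectation expands the range of usable cheats.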
Insofar as you’re saying that we can’t hope to find remotely adequate cheats unless we start with a rough understanding of what we even need to cheat over, I agree. I don’t think you’re saying that we shouldn’t be looking for cheats in the first place, but it could be read that way. Yes, searching for cheats has the problem that it doesn’t build on itself as well as directly attacking the hard parts does, but, realistically, I think the solution has to look like some kind of cheat.
There’s a funny dynamic here: expanding the range of plausible solutions you can search through (e.g. by constraining your expectation of what they need to be invariant to) might make it harder to locate any particular region of the search space. If effort spent constraining expectations only enlarges the search space, then it makes sense to first confirm that no fully invariant solutions exist at the top layer before iterating and searching the broader range.