Project idea[1]: a more holistic analysis of complexity. Roughly:
- Many problems are worst-case NP-hard
- We nonetheless routinely solve particular instances of those problems (at least approximately)
- So a measure of complexity that is more useful for the real world will need to account for (a rough code sketch follows this list):
  - Empirical probability distributions over inputs
  - Weighting problems by their economic (or other measures of) importance[2]
  - Approximate solutions
  - Probabilistic solutions
  - Etc.
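As a toy sketch of what such a measure could look like (the subset-sum stand-in problem, the greedy solver, and the importance weights are all illustrative assumptions, not part of the idea itself): complexity as the importance-weighted expected work of a concrete solver over an empirical distribution of instances, counting only runs whose answers fall within an approximation tolerance.

```python
import random

def greedy_subset_sum(values, target):
    """Approximate solver: take items largest-first while they still fit.
    Returns (achieved_sum, work), where work counts items examined."""
    total, work = 0, 0
    for v in sorted(values, reverse=True):
        work += 1
        if total + v <= target:
            total += v
    return total, work

def holistic_cost(instance_sampler, importance, solver, tolerance, n=2000, seed=0):
    """Importance-weighted expected work over an empirical input distribution,
    counting only runs whose answer is within the approximation tolerance."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        values, target = instance_sampler(rng)
        achieved, work = solver(values, target)
        if achieved >= (1 - tolerance) * target:   # accept approximate solutions
            weight = importance(values, target)
            num += weight * work
            den += weight
    return num / den if den else float("nan")

def sample_instance(rng):
    # Stand-in for an empirical distribution: 10-20 item values, target = half their sum.
    values = [rng.randint(1, 100) for _ in range(rng.randint(10, 20))]
    return values, sum(values) // 2

if __name__ == "__main__":
    cost = holistic_cost(
        instance_sampler=sample_instance,
        importance=lambda values, target: target,  # assumed stand-in for economic weight
        solver=greedy_subset_sum,
        tolerance=0.05,                            # accept answers within 5% of the target
    )
    print(f"importance-weighted expected work: {cost:.1f} items examined")
```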
These all complicate the analysis; you’d probably want a framework for determining complexity that natively handles probabilistic/approximate solutions. My current idea is to expand the notion of problem size into two quantities: “input size in bits” and “work done by optimisation, in bits, to attain a particular solution” (i.e. how many bits of selection are needed to get a solution at least as good as a particular target solution). A rough sketch of the latter follows.
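One minimal way to make “bits of selection” concrete (a toy sketch on my part, assuming a small Euclidean TSP instance and an arbitrary target tour): estimate the fraction p of uniformly sampled candidate solutions that are at least as good as the target, and report -log2(p) as the optimisation work in bits.

```python
import math
import random

def random_points(n, rng):
    # Toy NP-hard problem: Euclidean TSP on n random points in the unit square.
    return [(rng.random(), rng.random()) for _ in range(n)]

def tour_length(order, pts):
    return sum(
        math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])
        for a, b in zip(order, order[1:] + order[:1])
    )

def bits_of_selection(pts, target_len, n_samples=100_000, seed=0):
    """Estimate -log2 P(a uniformly random tour is at least as short as target_len)."""
    rng = random.Random(seed)
    order = list(range(len(pts)))
    hits = 0
    for _ in range(n_samples):
        rng.shuffle(order)
        if tour_length(order, pts) <= target_len:
            hits += 1
    if hits == 0:
        return float("inf")  # target too demanding for this sampling budget
    return -math.log2(hits / n_samples)

if __name__ == "__main__":
    rng = random.Random(1)
    pts = random_points(9, rng)
    baseline = tour_length(list(range(len(pts))), pts)  # an arbitrary reference tour
    target = 0.8 * baseline
    print(f"bits of selection to reach a tour of length <= {target:.3f}: "
          f"{bits_of_selection(pts, target):.2f}")
```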
I think that even with all these considerations, one can still define coherent notions of “complexity”, and that such notions will be much more useful for measuring the complexity of problems in the real world.
This is something that feels relevant to deconfusing myself in a manner useful for alignment theory. I might take it up as a school project if I get permission (my mentor thinks it’s overly ambitious, but I might persuade him otherwise or stubbornly insist).
This is not really an issue for theoretical computer science, but my main aim with this project is to eventually estimate the complexity of intelligence. Not all computational problems are created equal, or equally important for effective action in the real world, so when determining the expected complexity of intelligence, we’ll want to weight tasks according to some sensible measure of importance.