I don’t find the “probability of inclusion in final solution” model very useful, compared to “probability of use in future work” (similarly for their expected-value versions), because:

1. I doubt that central problems are a good model for science or problem solving in general (or even in the navigation analogy).
2. I see value in impermanent improvements (e.g. the current status of HIV/AIDS in rich countries) and in future-discounting our value estimates.
3. Even if a good description of a field as a central problem plus satellite problems exists, we are unlikely to estimate it correctly a priori, or to estimate the relevance of a given solution to it. By comparison, predicting how useful a solution is to “nearby” work is easier (with the caveat that islands or cliques of only internally useful problems and solutions can arise, and do in practice).
Given my model, I think 20% generalizability is worth a person’s time. Given yours, I’d say 1% is enough.
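To make the 20% vs. 1% contrast concrete, here is a minimal sketch of how I’m reading the two expected-value versions; the function names, payoff numbers, and cost normalization are illustrative assumptions of mine, not anything pinned down in the discussion above.

```python
# Minimal sketch contrasting the two framings. All numbers and names here are
# illustrative assumptions, not figures from the discussion.

def ev_use_in_future_work(p_use: float, value_per_use: float, expected_uses: float) -> float:
    """'Probability of use in future work' framing: value accrues each time
    nearby work builds on the result, whether or not a final solution ever does."""
    return p_use * expected_uses * value_per_use

def ev_inclusion_in_final_solution(p_inclusion: float, value_of_final_solution: float) -> float:
    """'Probability of inclusion in final solution' framing: value accrues only
    if the result ends up inside the eventual full solution."""
    return p_inclusion * value_of_final_solution

# Normalize the cost of one person's time on the project to 1 unit.
COST = 1.0

# Under my framing, ~20% generalizability clears the bar given modest per-use value.
print(ev_use_in_future_work(p_use=0.20, value_per_use=0.5, expected_uses=12) >= COST)         # True (EV = 1.2)

# Under the final-solution framing, the final payoff is large enough that ~1% suffices.
print(ev_inclusion_in_final_solution(p_inclusion=0.01, value_of_final_solution=200) >= COST)  # True (EV = 2.0)
```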
How much would you say (3) supports (1) on your model? I’m still pretty new to AIS and am updating based on your model.
I agree that marginal improvements are good for fields like medicine, and perhaps for AIS too. E.g. I can imagine self-other overlap scaling to near-ASI, though I’m doubtful about its stability under reflection. I’d put 35% on us finding a semi-robust solution sufficient to not kill everyone.
I think the distribution of success probability for typical optimal-from-our-perspective solutions is very wide under both of the ways we describe generalizability; within that, we should weight generalizability more heavily than my understanding of your model does.
Earlier:

“Designing only best-worst-case subproblem solutions while waiting for Alice would be like restricting strategies in game to ones agnostic to the opponent’s moves”
Is this saying people should coordinate in case valuable solutions aren’t in the a priori generalizable space?
Thank you, that was very informative.