(i kinda made a mess out of that last paragraph; i’ve edited in a much more readable version of it)
This essay seemed straightforward until this last paragraph, because the claim that "the problem and the solution have a lot of research in common" seems to directly contradict "research involved in the problem won't particularly be helpful to research involved in the solution".
okay, yeah i didn’t explain that super well. what i was trying to say was: we have found them to have a lot in common so far, in retrospect, but we shouldn’t have expected that in advance and we shouldn’t necessarily expect that in the future.
that’s a low-probability miracle at this point
this isn’t to say that they’ll have nothing in common; hopefully we can reuse current ML tech for at least parts of FAS. but i think that low-probability miracle is still our best bet, and a bottleneck to saving the world — i think other solutions either need to get there too eventually, or are even harder to turn into FAS. (i say that with not super strong confidence, however)