I think it’s so hard that it’s not worth trying to build an FAI unless one were smarter than human, or had a lot of (subjective) time to work on the problem. I’d like to see if anyone has good arguments to the contrary.
Not worth trying in view of which tradeoffs? The obvious candidates are the opportunity cost of not working more on dominating WBE/intelligence improvement tech (call both “intelligence tech”), and the potential increase in UFAI risk that would hurt a possible future FAI project after an intelligence tech shift. Both of these matter only to the extent that the probability of winning on the second round is comparable to the probability of winning on the current round. The current round is at a disadvantage in that we only have so much time and human intelligence. The next round has to be reached before a catastrophe, and a sane FAI project has to dominate it. Both seem rather unlikely, and since I don’t see why the second round is any better than the first, trading the first round’s chances for the second’s doesn’t seem like a clearly good move. (This needs to be explored in more detail; our recent conversations at least painted a clearer picture for me.)
The argument to the contrary is that people have created some very impressive pieces of theory on timescales of decades, so not seeing how something can be done is only weak evidence that it can’t be done in several decades, at least with low probability. The picture will probably be clearer in about 50 years, when less time is left until an intelligence tech shift (assuming no disruptions), but by then it will probably be too late to start working on the problem (and have any chance of winning on this round).