Smaller operations could be chained, parallelized (with limited thinking time and capacity per unit), used to check on each other in tandem with random human monitoring and processing, and otherwise leveraged to minimize the human bottleneck.
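To make that concrete, here is a minimal toy sketch in Python of the kind of scheme being described (the function names, the per-unit budget, and the 5% audit rate are hypothetical illustrations, not details from this discussion): work is split into small units that run under a capped budget, each output is cross-checked by a second limited unit, and a random fraction of outputs is escalated to a human reviewer.

```python
import random

AUDIT_RATE = 0.05    # fraction of outputs escalated to human review (hypothetical)
STEP_BUDGET = 1_000  # per-unit "thinking time" cap, in arbitrary units (hypothetical)

def run_unit(subtask, budget):
    """One small, capacity-limited AI operation (stubbed for illustration)."""
    spent = min(budget, STEP_BUDGET)
    return {"subtask": subtask, "answer": f"result({subtask})", "spent": spent}

def cross_check(result):
    """A second limited unit verifies the first unit's output (stubbed check)."""
    return result["spent"] <= STEP_BUDGET

def solve(task, n_subtasks=20, seed=0):
    """Chain small units, cross-check each output, and randomly
    escalate a fraction of results to a human monitor."""
    rng = random.Random(seed)
    results, audited = [], 0
    for i in range(n_subtasks):
        out = run_unit(f"{task}[{i}]", STEP_BUDGET)
        if not cross_check(out):
            raise RuntimeError(f"cross-check failed on {task}[{i}]")
        if rng.random() < AUDIT_RATE:  # random human monitoring
            audited += 1               # a real scheme would block on human sign-off here
        results.append(out)
    return results, audited

results, audited = solve("50-year problem")
print(f"{len(results)} units completed, {audited} human-audited")
```

The relevant property of such a design is that the expected human workload scales with the audit rate rather than with the total number of units, which is one way of reading "minimize the human bottleneck."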
This strikes me as quite risky: the amount of human monitoring would have to be minimal in order to solve a 50-year problem in one month, and earlier experience with slower, less capable AIs seems unlikely to adequately prepare human designers to come up with fully robust control schemes, especially on a time scale of months. Can you say a bit more about the conditions you envision under which this proposal would be expected to make a positive impact? It seems to me like it might be a very narrow range of conditions. For example, if the degree of international peace and cooperation is very high, then a better alternative may be an international agreement to develop WBE tech while delaying AGI, or an international team that takes as much time as needed to build FAI while delaying other forms of AGI.
I tend to think that such high degrees of global coordination are implausible, and therefore put most of my hope in scenarios where some group manages to obtain a large tech lead over the rest of the world and is thereby granted a measure of strategic initiative in choosing how best to navigate the intelligence explosion. Your proposal might be useful in such a scenario, if other seemingly safer alternatives (like going for WBE, or having genetically enhanced humans build FAI with minimal AGI assistance) are out of reach due to time or resource constraints. It’s still unclear to me why you called your point “strategy-swallowing” though, or what that phrase means exactly. Can you please explain?
I certainly didn’t say that would be risk-free, but it interacts with other drag factors on very high estimates of risk. In the full-length discussion of it, I pair it with discussion of historical lags in tech development between leader and follower in technological arms races (longer than one month) and factors related to corporate and international espionage, raise the possibility of global coordination (or at least between the leader and the next closest follower), and so on.
It also interacts with technical achievements in producing ‘domesticity’ short of exact unity of will.
It’s still unclear to me why you called your point “strategy-swallowing” though, or what that phrase means exactly.
When strategy A can, to a large extent, capture the impacts of strategy B.
I certainly didn’t say that would be risk-free, but it interacts with other drag factors on very high estimates of risk.
If you’re making the point as part of an argument against “either Eliezer’s FAI plan succeeds, or the world dies”, then OK, that makes sense. ETA: But it seems like it would be very easy to take “if humans can do it, then not-very-superintelligent AIs can” out of context, so I’d suggest some other way of making this point.
When strategy A can, to a large extent, capture the impacts of strategy B.
Sorry, I’m still not getting it. What does “capture the impacts” mean here?