A non-exhaustive list of some reasons why I strongly disagree with this combination of views:
AI which is not vastly superhuman can be restrained from crime, because humans can be so restrained; with AI, designers additionally have the ability to alter the mind’s parameters (desires, intuitions, inhibitions, capability for action, duration of extended thought, etc.), test copies in detail, read out its internal states, and so on, making the problem vastly easier (although control may need to be tight if one is holding back an intelligence explosion while this is going on)
If 10-50 humans can solve AI safety (and build AGI!) in less than 50 years, then 100-500 not very superhuman AIs at 1200x speedup should be able to do so in less than a month (see the arithmetic sketch after this list)
There are a variety of mechanisms by which humans could monitor, test, and verify the work conducted by such systems
The AIs can also work on incremental improvements to the control mechanisms being used initially, with steady progress allowing greater AI capabilities to develop better safety measures, until one approaches perfect safety
If a small group can solve all the relevant problems over a few decades, then probably a large portion of the AI community (and beyond) can solve the problems in a fraction of the time if mobilized
As AI becomes visibly closer such mobilization becomes more likely
Developments in other fields may make things much easier: better forecasting, cognitive enhancement, brain emulations coming first, global peace/governance
The broad shape of AI risk is known and considered much more widely than MIRI: people like Bill Gates and Peter Norvig consider it, but think that acting on it now is premature; if they saw AGI as close, or were creating it themselves, they would attend to the control problems
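A back-of-the-envelope check of the speedup arithmetic in the second point of the list above (an illustrative sketch; the 50-year, 1200x, and headcount figures come from that point, everything else is unit conversion):

```python
# Rough arithmetic behind "100-500 AIs at 1200x speedup finish a 50-year project in under a month".
human_years = 50        # serial calendar time assumed for the 10-50 person human team
speedup = 1200          # subjective-time speedup assumed for each AI worker

wall_clock_months = human_years * 12 / speedup   # 600 serial months / 1200 = 0.5 months
print(wall_clock_months)                         # 0.5

# The 100-500 AIs also outnumber the 10-50 humans by roughly 10x, so any parallelizable
# work compresses further; the serial term alone already fits in about half a month.
```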
Paul Christiano, and now you, have started using the phrase “AI control problems”. I’ve gone along with it in my discussions with Paul, but before many people start adopting it maybe we ought to talk about whether it makes sense to frame the problem that way (as opposed to “Friendly AI”). I see a number of problems with it:
Control != Safe or Friendly. An AI can be perfectly controlled by a human and be extremely dangerous, because most humans aren’t very altruistic or rational.
The framing implicitly suggests (and you also explicitly suggest) that the control problem can be solved incrementally. But I think we have reason to believe this is not the case, that in short “safety for superintelligent AIs” = “solving philosophy/metaphilosophy” which can’t be done by “incremental improvements to the control mechanisms being used initially”.
“Control” suggests that the problem falls in the realm of engineering (i.e., belongs to the reference class of “control problems” in engineering, such as “aircraft flight control”), whereas, again, I think the real problem is one of philosophy (plus lots of engineering as well of course, but philosophy is where most of the difficulty lies). This makes a big difference in trying to predict the success of various potential attempts to solve the problem, and I’m concerned that people will underestimate the difficulty of the problem or overestimate the degree to which it’s parallelizable or generally amenable to scaling with financial/human resources, if the problem becomes known as “AI control”.
Do you disagree with this, on either the terminological issue (“AI control” suggests “incremental engineering problem”) or the substantive issue (the actual problem we face is more like philosophy than engineering)? If the latter, I’m surprised not to have seen you talk about your views on this topic earlier, unless you did and I missed it?
Thanks for those thoughts. Nick Bostrom uses the term in his book, and it’s convenient for separating out pre-existing problems with “we don’t know what to do with our society long term, nor is it engineered to achieve that” and the particular issues raised by AI.
But I think we have reason to believe this is not the case, that in short “safety for superintelligent AIs” = “solving philosophy/metaphilosophy” which can’t be done by “incremental improvements to the control mechanisms being used initially”.
In the situation I mentioned, the AIs are not vastly superintelligent initially (and capabilities can vary along multiple dimensions; e.g. one can have many compartmentalized copies of an AI system that collectively deliver a huge number of worker-years without any one of them possessing extraordinary capabilities).
What is your take on the strategy-swallowing point: if humans can do it, then not very superintelligent AIs can?
“Control” suggests that the problem falls in the realm of engineering (i.e., belongs to the reference class of “control problems” in engineering, such as “aircraft flight control”)...I’m concerned that people will underestimate the difficulty of the problem or overestimate the degree to which it’s parallelizable or generally amenable to scaling with financial/human resources, if the problem becomes known as “AI control”.
There is an ambiguity there. I’ll mention it to Nick. But, e.g. Friendliness just sounds silly. I use “safe” too, but safety can be achieved just by limiting capabilities, which doesn’t reflect the desire to realize the benefits.
What is your take on the strategy-swallowing point: if humans can do it, then not very superintelligent AIs can?
It’s easy to imagine AIXI-like Bayesian EU maximizers that are powerful optimizers but incapable of solving philosophical problems like consciousness, decision theory, and foundations of mathematics, which seem to be necessary in order to build an FAI. It’s possible that that’s wrong, that one can’t actually get to “not very superintelligent AIs” unless they possess the same level of philosophical ability that humans have, but it certainly doesn’t seem safe to assume this.
BTW, what does “strategy-swallowing” mean? Just “strategically relevant”, or more than that?
But, e.g. Friendliness just sounds silly. I use “safe” too, but safety can be achieved just by limiting capabilities, which doesn’t reflect the desire to realize the benefits.
I suggested “optimal AI” to Luke earlier, but he didn’t like that. Here are some more options to replace “Friendly AI” with: human-optimal AI, normative AI (rename what I called “normative AI” in this post to something else), AI normativity. It would be interesting and useful to know what options Eliezer considered and discarded before settling on “Friendly AI”, and what options Nick considered and discarded before settling on “AI control”.
(I wonder why Nick doesn’t like to blog. It seems like he’d want to run at least some of the more novel or potentially controversial ideas in his book by a wider audience, before committing them permanently to print.)
It’s easy to imagine AIXI-like Bayesian EU maximizers that are powerful optimizers but incapable of solving philosophical problems like consciousness, decision theory, and foundations of mathematics, which seem to be necessary in order to build an FAI. It’s possible that that’s wrong, that one can’t actually get to “not very superintelligent AIs” unless they possess the same level of philosophical ability that humans have, but it certainly doesn’t seem safe to assume this.
Such systems, hemmed in and restrained, could certainly work on better AI designs, and predict human philosophical judgments. Predicting human philosophical judgments accurately and reporting those predictions is close enough.
Nick considered and discarded before settling on “AI control”.
“Control problem.”
It seems like he’d want to run at least some of the more novel or potentially controversial ideas in his book by a wider audience, before committing them permanently to print.)
He circulates them to reviewers, in wider circles as the book becomes more developed. And blogging half-finished ideas on the internet is exactly what one shouldn’t do if one is worried about committing controversial ideas to print.
And blogging half-finished ideas on the internet is exactly what one shouldn’t do if one is worried about committing controversial ideas to print.
In case this is why you don’t tend to talk about your ideas in public either, except in terse (and sometimes cryptic) comments or in fully polished papers, I wanted to note that I’ve never had a cause to regret blogging (or posting to mailing lists) any of my half-finished ideas. As long as your signal to noise ratio is fairly high, people will remember the stuff you get right and forget the stuff you get wrong. The problem I see with committing ideas to print (as in physical books) is that books don’t come with comments attached pointing out all the parts that are wrong or questionable.
Such systems, hemmed in and restrained, could certainly work on better AI designs, and predict human philosophical judgments. Predicting human philosophical judgments accurately and reporting those predictions is close enough.
If such a system is powerful enough to predict human philosophical judgments using its general intelligence, without specifically having been programmed with a correct solution for metaphilosophy, it seems very likely that it would already be strongly superintelligent in many other fields, and hence highly dangerous.
(Since you seem to state this confidently but don’t give much detail, I wonder if you’ve discussed the idea elsewhere at greater length. For example I’m assuming that you’d ask the AI to answer questions like “What would Eliezer conclude about second-order logic after thinking undisturbed about it for 100 years?” but maybe you have something else in mind?)
He circulates them to reviewers, in wider circles as the book becomes more developed. And blogging half-finished ideas on the internet is exactly what one shouldn’t do if one is worried about committing controversial ideas to print.
I guess I actually meant “potentially wrong” rather than “controversial”, and I was suggesting that he blog about them after privately circulating to reviewers, but before publishing in print.
For example I’m assuming that you’d ask the AI to answer questions like “What would Eliezer conclude about second-order logic after thinking undisturbed about it for 100 years?” but maybe you have something else in mind?)
The thought is to work with less individually capable systems (with shorter time horizons, etc.) on much more bite-sized and tractable questions, like: “find a machine-checkable proof of this lemma” or “I am going to read one of these 10 papers, selected at random, to try to shed light on my problem; score each with the predicted rating I will give the paper’s usefulness after reading it.” I discussed this in a presentation at the FHI (focused on WBE, where the issue of unbalanced abilities relative to humans does not apply), and the concepts will be discussed in Nick’s book.
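As a toy illustration of the second task type, here is what a random spot-check of the AI’s predicted ratings might look like (hypothetical function names; a sketch of the checking idea, not anything taken from the presentation):

```python
import random

def paper_scoring_spot_check(papers, ai_predict_rating, human_read_and_rate, tolerance=1):
    """Sketch of the paper-scoring task as a verification protocol.

    The AI predicts the usefulness rating the human will give each paper; the human
    then reads one paper chosen at random and rates it, so the AI cannot know in
    advance which of its predictions will be audited.
    """
    predictions = {paper: ai_predict_rating(paper) for paper in papers}  # AI scores all papers
    audited = random.choice(papers)                                      # human reads one at random
    actual = human_read_and_rate(audited)
    return abs(predictions[audited] - actual) <= tolerance               # spot-check passes or fails
```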
Based on the two examples you give, which seem to suggest a workflow with a substantial portion still being done by humans (perhaps even the majority of the work in the case of the more philosophical parts of the problem), I don’t see how you’d arrive at this earlier conclusion:
If 10-50 humans can solve AI safety (and build AGI!) in less than 50 years, then 100-500 not very superhuman AIs at 1200x speedup should be able to do so in less than a month
Do you have any materials from the FHI presentation or any other writeup that you can share, that might shed more light? If not, I guess I can wait for the book...
It’s hard to discuss your specific proposal without understanding it in more detail, but in general I worry that the kind of AI you suggest would be much better at helping to improve AI capability than at helping to solve Friendliness, since solving technical problems is likely to be more of a strength for such an AI than predicting human philosophical judgments, and unless humanity develops much better coordination abilities than it has now (so that everyone can agree or be forced to refrain from trying to develop strongly superintelligent AIs until the Friendliness problem is solved), such an AI isn’t likely to ultimately contribute to a positive outcome.
Yes, the range of follow-up examples there was a bit too narrow, I was starting from the other end and working back. Smaller operations could be chained, parallelized (with limited thinking time and capacity per unit), used to check on each other in tandem with random human monitoring and processing, and otherwise leveraged to minimize the human bottleneck element.
solve Friendliness, since solving technical problems is likely to be more of a strength for such an AI than predicting human philosophical judgments,
A strong skew of abilities away from those directly useful for Friendliness development makes things worse, but leaves a lot of options. Solving technical problems can let you work to, e.g.:
Create AIs with ability distributions directed more towards “philosophical” problems
Create AIs with simple sensory utility functions that are easier to ‘domesticate’ (short time horizons, satiability, dependency on irreplaceable cryptographic rewards that only the human creators can provide, etc; see the toy sketch after this list)
Solve the technical problems of making a working brain emulation model
Create software to better detect and block unintended behavior
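A toy sketch of the ‘domesticated’ reward signal mentioned in the second item above (all names, parameters, and the HMAC scheme are illustrative assumptions, not anything specified in the discussion):

```python
import hmac, hashlib

OPERATOR_KEY = b"secret-held-only-by-the-human-creators"  # hypothetical: reward tokens cannot be forged

def domesticated_reward(observation: bytes, token: str, step: int, total_so_far: float,
                        horizon: int = 100, satiation_cap: float = 10.0) -> float:
    """Short-horizon, satiable reward that only operator-issued cryptographic tokens can trigger."""
    if step > horizon:                                   # short time horizon: nothing counts after it
        return 0.0
    expected = hmac.new(OPERATOR_KEY, observation, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token):         # reward depends on an unforgeable operator token
        return 0.0
    return max(0.0, min(1.0, satiation_cap - total_so_far))  # satiable: total reward is capped
```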
coordination
Yes, that’s the biggest challenge for such bootstrapping approaches, which depends on the speedup in safety development one gets out of early models, the degree of international peace and cooperation, and so forth.
Smaller operations could be chained, parallelized (with limited thinking time and capacity per unit), used to check on each other in tandem with random human monitoring and processing, and otherwise leveraged to minimize the human bottleneck element.
This strikes me as quite risky, as the amount of human monitoring has to be really minimal in order to solve a 50-year problem in 1 month, and earlier experiences with slower and less capable AIs seem unlikely to adequately prepare the human designers to come up with fully robust control schemes, especially if you are talking about a time scale of months. Can you say a bit more about the conditions you envision where this proposal would be expected to make a positive impact? It seems to me like it might be a very narrow range of conditions. For example if the degree of international peace and cooperation is very high, then a better alternative may be an international agreement to develop WBE tech while delaying AGI, or an international team to take as much time as needed to build FAI while delaying other forms of AGI.
I tend to think that such high degrees of global coordination are implausible, and therefore put most of my hope in scenarios where some group manages to obtain a large tech lead over the rest of the world and is thereby granted a measure of strategic initiative in choosing how best to navigate the intelligence explosion. Your proposal might be useful in such a scenario, if other seemingly safer alternatives (like going for WBE, or having genetically enhanced humans build FAI with minimal AGI assistance) are out of reach due to time or resource constraints. It’s still unclear to me why you called your point “strategy-swallowing” though, or what that phrase means exactly. Can you please explain?
I certainly didn’t say that would be risk-free, but it interacts with other drag factors on very high estimates of risk. In the full-length discussion of it, I pair it with discussion of historical lags in tech development between leader and follower in technological arms races (longer than one month) and factors related to corporate and international espionage, raise the possibility of global coordination (or at least between the leader and next closest follower), and so on.
It also interacts with technical achievements in producing ‘domesticity’ short of exact unity of will.
It’s still unclear to me why you called your point “strategy-swallowing” though, or what that phrase means exactly.
When strategy A to a large extent can capture the impacts of strategy B.
I certainly didn’t say that would be risk-free, but it interacts with other drag factors on very high estimates of risk.
If you’re making the point as part of an argument against “either Eliezer’s FAI plan succeeds, or the world dies” then ok, that makes sense. ETA: But it seems like it would be very easy to take “if humans can do it, then not very superintelligent AIs can” out of context, so I’d suggest some other way of making this point.
When strategy A to a large extent can capture the impacts of strategy.
Sorry, I’m still not getting it. What does “impacts of strategy” mean here?
it’s convenient for separating out pre-existing problems with “we don’t know what to do with our society long term, nor is it engineered to achieve that” and the particular issues raised by AI.
I don’t think that separation is a good idea. Not knowing what to do with our society long term is a relatively tolerable problem until an upcoming change raises a significant prospect of locking in some particular vision of society’s future. (Wei Dai raises similar points in your exchange of replies, but I thought this framing might still be helpful.)
If we are talking about goal-definition-evaluating AI (and Paul was probably thinking in the context of some sort of indirect normativity), “control” seems like a reasonable fit. The primary philosophical issue for that part of the problem is decision theory.
(I agree that it’s a bad term for referring to FAI itself, if we don’t presuppose a method of solution that is not Friendliness-specific.)
What do you think is MIRI’s probability of having been valuable, conditioned on a nice intergalactic future being true?
More than 10%, definitely. Maybe 50%?
A non-exhaustive list of some reasons why I strongly disagree with this combination of views
Not that it should be used to dismiss any of your arguments, but reading your other comments in this thread I thought you must be playing devil’s advocate. Your phrasing here seems to preclude that possibility.
If you are so strongly convinced that while AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome were to be eventually achieved, why are you a research fellow there?
I’m puzzled. Let’s consider an edge case: even if MIRI’s factual research turned out to be strictly non-contributing to an eventual solution, there’s no reasonable doubt that it has raised awareness of the issue significantly (in relative terms).
Would the current situation with the CSER or FHI be unchanged or better if MIRI had never existed? Do you think those have a good chance of being valuable in bringing about a good outcome? Answering ‘no’ to the former and ‘yes’ to the latter would transitively imply that MIRI is valuable as well.
I.e. that alone (never mind actual research contributions) would make it valuable in hindsight, given an eventual positive outcome. Yet you’re strongly opposed to that view?
The “combination of views” includes both high probability of doom, and quite high probability of MIRI making the counterfactual difference given survival. The points I listed address both.
If you are so strongly convinced that while AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome were to be eventually achieved, why are you a research fellow there?
I think MIRI’s expected impact is positive and worthwhile. I’m glad that it exists, and that it and Eliezer specifically have made the contributions they have relative to a world in which they never existed. A small share of the value of the AI safety cause can be quite great. That is quite consistent with thinking that “medium probability” is a big overestimate for MIRI making the counterfactual difference, or that civilization is almost certainly doomed from AI risk otherwise.
Lots of interventions are worthwhile even if a given organization working on them is unlikely to make the counterfactual difference. Most research labs working on malaria vaccines won’t invent one; most political activists won’t achieve big increases in foreign aid or immigration levels or swing an election; most counterproliferation expenditures won’t avert nuclear war; asteroid tracking was known ex ante to be far more likely to discover we were safe than to find an asteroid on its way in time to be stopped by a space mission.
The threshold for an x-risk charity of moderate scale to be worth funding is not a 10% chance of literally counterfactually saving the world from existential catastrophe. Annual world GDP is $80,000,000,000,000, and wealth including human capital and the like will be in the quadrillions of dollars. A 10% chance of averting x-risk would be worth trillions of present dollars.
We’ve spent tens of billions of dollars on nuclear and bio risks, and even $100,000,000+ on asteroids (averting dinosaur-killer risk on the order of 1 in 100,000,000 per annum). At that exchange rate, again, a 10% x-risk impact would be worth trillions of dollars, and governments and philanthropists have shown that they are ready to spend on x-risk or GCR opportunities far, far less likely to make a counterfactual difference than 10%.
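As a rough illustration of the exchange rate implied by those asteroid figures (a back-of-the-envelope sketch using only the numbers quoted above):

```python
# Implied valuation from spending ~$100,000,000 against a ~1 in 100,000,000 per annum risk.
asteroid_spend = 1e8      # dollars spent on asteroid tracking (figure from the text)
annual_risk = 1e-8        # approximate annual probability of a dinosaur-killer impact (from the text)

implied_value = asteroid_spend / annual_risk   # ~$1e16 per extinction-level catastrophe averted
ten_percent_share = 0.10 * implied_value       # ~$1e15 for a 10% chance of averting x-risk

print(f"{implied_value:.0e}, {ten_percent_share:.0e}")   # 1e+16, 1e+15
# At this exchange rate a 10% x-risk impact is worth on the order of a quadrillion dollars,
# so "worth trillions" is a conservative reading of the same ratio.
```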
I see. We just used different thresholds for “valuable”: you used “high probability of MIRI making the counterfactual difference given survival”, while for me just e.g. speeding Norvig/Gates/whoever a couple of years along the path until they devote efforts to FAI would be valuable, even if it were unlikely to Make The Difference (tm).
Whoever would turn out to have solved the problem, it’s unlikely that their AI safety evaluation process (“Should I do this thing?”) would work in a strict vacuum, i.e. whoever will one day have evaluated the topic and made up their mind to Save The World will be highly likely to have stumbled upon MIRI’s foundational work. Given that at least some of the steps in solving the problem are likely to be quite serial (sequential) in nature, the expected scenario would be that MIRI’s legacy would at least provide some speed-up; a contribution which, again, I’d call valuable, even if it were unlikely to make or break the future.
If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn’t expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.
I used that threshold because the numbers being thrown around in the thread were along those lines, and are needed for the “medium probability” referred to in the OP. So counterfactual impact of MIRI never having existed on x-risk is the main measure under discussion here. I erred in quoting your sentence in a way that might have made that hard to interpret.
If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn’t expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.
That’s right, and one reason that I think MIRI’s existence has reduced expected x-risk, although by less than a 10% probability.
Sorry, hard to tell from the thread which combination of views. Eliezer’s?
The view presented by Furcas, of probable doom, and “[m]ore than 10%, definitely. Maybe 50%” probability that MIRI will be valuable given the avoidance of doom, which in the context of existential risk seems to mean averting the risk.