In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
No, UFAI destroying civilization is the thing that is being prevented. Also, the number of attempts at once doesn’t change the odds of the first one being Friendly, if all the attempts are the same quality. If any given project is more likely than 50% (or perhaps just more likely than average) to produce FAI, it should be supported. Otherwise, it should be suppressed.
Also, the number of attempts at once doesn’t change the odds of the first one being Friendly, if all the attempts are the same quality.
First, the odds of the first one being Friendly are not especially important unless you assume FOOM is the only possible case.
Second, the number of attempts does change the odds of the first one being Friendly, unless you believe that hurried projects are as likely to be Friendly as slow, cautious projects.
My intuition is that the really high expected utility of a positive FOOM and the really low expected utility of a bad one make Friendliness important even if it gets only a fairly low probability. But it’s true that if all the AIs developed within, say, 5 years of the first one get a substantial influence, then the situation changes.
No, UFAI destroying civilization is the thing that is being prevented.
No, UFAI destroying all life is the thing that is being prevented.
The post suggests that guaranteeing continued life (humans and other animals) with low tech may be better than keeping our high tech but risking total extinction.
Where did that come from?
If FAI is as much better than what we have now as UFAI is worse, then only projects that are more likely to produce FAI than UFAI should be encouraged. So the cutoff is 50%, conditional on the project succeeding. A project more likely to produce UFAI than FAI has negative expected payoff; a project more likely to produce FAI has positive expected payoff. If the damage from a failure is not as bad as the gain from success, then the cutoff value is lower than 50%. If the damage from failure is worse, then it’s higher. Is that any more clear?
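In symbols, as a minimal sketch (writing $U_F$ for the FAI payoff, $U_U < 0$ for the UFAI payoff, and $p$ for the probability that the project produces FAI rather than UFAI, given that it builds an AI at all), supporting the project has positive expected payoff exactly when

$$p \cdot U_F + (1 - p) \cdot U_U > 0 \quad\Longleftrightarrow\quad p > \frac{-U_U}{U_F - U_U}.$$

When the gain and the damage are equal in magnitude ($U_F = -U_U$), the right-hand side is 1/2, which is where the 50% figure comes from.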
Your argument compares each project to the possibility ‘no AI built’. I think it would be better to compare each of them to the possibility ‘one of the other projects builds an AI’, which means you should make your cut-off point the average project (or a weighted average, with weights based on the probability of being first).
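As a sketch of that correction (notation introduced here, under the simplifying assumptions that supporting a project effectively decides which one finishes first and that the payoffs are symmetric as above): write $p_i$ for the probability that project $i$ produces FAI conditional on finishing first, and $w_i$ for the probability that project $i$ is in fact first. Then supporting project $i$ beats the counterfactual only when

$$p_i > \sum_j w_j \, p_j,$$

i.e. when it is Friendlier than the first-to-finish-weighted average of the field, not merely better than a coin flip.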
That’s a good point and makes my last two comments kind of moot. Is that why the grandparent was voted down?
The notion of “cutoff value” for a decision doesn’t make sense. You maximize expected utility, no matter what the absolute value. Also, “what we have now” is not an option on the table, which is exactly the problem.
By “cutoff value,” I mean the likelihood of a project’s resulting in FAI that makes it utility-maximizing to support the project. If UFAI has −1000 utility and FAI has 1000 utility, you should only support a project that is more likely to produce FAI than UFAI. If UFAI has −4000 utility and FAI only 1000, then a project with a 51% chance of being Friendly is a bad bet, and you should only support one with a greater than 80% chance of success.
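Checking the arithmetic on that second case:

$$p \cdot 1000 + (1 - p) \cdot (-4000) > 0 \;\Longrightarrow\; 5000\,p > 4000 \;\Longrightarrow\; p > 0.8,$$

which is the 80% figure.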