If FAI is as much better than what we have now as UFAI is worse, then only projects that are more likely to produce FAI than UFAI should be encouraged. So the cutoff is 50%, conditional on the project succeeding at all. A project more likely to produce UFAI than FAI has negative expected payoff; a project more likely to produce FAI has positive expected payoff. If the damage from a failure is not as bad as the gain from success, the cutoff value is lower than 50%; if the damage from failure is worse, it is higher.
Is that any more clear?
Your argument compares each project to the possibility 'no AI built'. I think it would be better to compare each of them to the possibility 'one of the other projects builds an AI', which means your cutoff point should be the average project (or a weighted average, with weights based on each project's probability of being first).
The notion of “cutoff value” for a decision doesn’t make sense. You maximize expected utility, no matter what the absolute value. Also, “what we have now” is not an option on the table, which is exactly the problem.
By “cutoff value,” I mean the likelihood of a project's resulting in FAI above which it is utility-maximizing to support the project. If UFAI has −1000 utility and FAI has 1000 utility, you should support only a project more likely to produce FAI than UFAI. If UFAI has −4000 utility and FAI only 1000, then a project with a 51% chance of being friendly is a bad bet, and you should support only one with a greater than 80% chance of success (since 1000p − 4000(1 − p) > 0 requires p > 0.8).
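The cutoff described above reduces to a one-line calculation. This is a minimal sketch under the simple two-outcome model from the comment (success yields the FAI utility, failure the UFAI utility); the function name and framing are my own, not from the thread:

```python
def cutoff_probability(u_fai: float, u_ufai: float) -> float:
    """Smallest success probability p at which expected utility
    p * u_fai + (1 - p) * u_ufai crosses zero, assuming u_fai > 0 > u_ufai."""
    return -u_ufai / (u_fai - u_ufai)

print(cutoff_probability(1000, -1000))  # symmetric stakes -> 0.5
print(cutoff_probability(1000, -4000))  # failure 4x worse -> 0.8
```

With symmetric stakes the break-even point is 50%; when failure is four times as bad as success is good, only projects above an 80% chance of friendliness have positive expected utility.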
That’s a good point and makes my last two comments kind of moot. Is that why the grandparent was voted down?