Yudkowsky may think that the plan ‘Avert all creation of superintelligence in the near and medium term — augment human intelligence’ has a <5% chance of success, but that your plan has a <<1% chance. Evidently, you and he disagree not only on conclusions but also on models.