Why is it a narrow target? Humans fall into this basin all the time—loads of human ideologies exist that self-identify as prohuman, but justify atrocities for the sake of the greater good.
As for RSI mechanisms: I disagree. I think the relationship is massively sublinear, but that RSI will happen nevertheless, and the best economic models we have of AI R&D automation (e.g. Davidson's model) seem to indicate that it could go either way, but that more likely than not we'll get to superintelligence really quickly after full AI R&D automation.
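To make the sublinearity point concrete, here's a toy sketch (my own illustration, not Davidson's takeoffspeeds.com model): even if returns to research effort are massively sublinear, capability still compounds once the research effort is itself supplied by AI whose capacity scales with capability — sublinearity changes the speed of the feedback loop, not whether it runs.

```python
# Toy model of recursive self-improvement with sublinear research returns.
# NOT Davidson's takeoffspeeds.com model -- a minimal illustrative sketch.
# Assumption: progress rate ~ (research effort)**alpha, with effort
# proportional to current capability once AI R&D is fully automated.

def simulate(alpha: float, steps: int = 50, dt: float = 0.1) -> list[float]:
    """Euler-integrate dC/dt = C**alpha for capability C.

    alpha < 1 means sublinear returns to research effort;
    alpha > 1 means superlinear returns.
    """
    c = 1.0
    trajectory = [c]
    for _ in range(steps):
        c += dt * c ** alpha
        trajectory.append(c)
    return trajectory

sublinear = simulate(alpha=0.5)    # strongly sublinear returns
superlinear = simulate(alpha=1.1)  # mildly superlinear returns

# Sublinear returns still yield sustained, compounding (polynomial) growth,
# so RSI happens either way; superlinear returns give much faster
# (hyperbolic) growth. The exponent sets the speed, not the existence,
# of the feedback loop.
```

The design point: whether takeoff is fast or slow hinges on the returns exponent and on how much of the research input is automated, which is roughly the axis Davidson's model explores with empirically grounded parameters.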
AI goals can maybe be broader than human goals, or than human goals subject to the constraint that lots of people (in an ideology) endorse them at once.
I will look into this. takeoffspeeds.com?
Yep, takeoffspeeds.com, though actually IMO there are better models now that aren't public and aren't as polished/complete. (By Tom Davidson, and by my team.)