Well before AGI is super-intelligent, weaker AGI and stronger narrow AI will likely cause hugely disruptive socio-economic upheaval. This isn’t being discussed much...
Right at the cusp of change, humans will have tremendous ability to exercise choices about where this all ends up.
What will the masses do when most people’s real economic value falls well below subsistence?
Often the choices the masses make in a crisis produce more crisis. I think it’s wisest to avoid entirely any situation where the masses are making choices in crisis.
I have noticed it too: http://lesswrong.com/lw/cxi/peter_thiels_agi_discussion_in_his_startups_class/6s1k