To expand on the idea of meta-systems and their capability: Similarly to discussing brain efficiency, we could ask about the efficiency of our civilization (in the sense of being able to point its capability to a unified goal), among all possible ways of organising civilisations. If our civilisation is very inefficient, AI could figure out a better design and foom that way.
Primarily, I think the question of our civilization’s efficiency is unclear. My intuition is that our civilization is quite inefficient, with the following points serving as weak evidence:
1. Civilization hasn’t been around that long, and has therefore not been optimised much.
2. Point (1) gets even more pronounced as you go from “designs for cooperation among a small group” to “designs for cooperation among millions”, or even billions. (Because fewer of these were running in parallel, and for a shorter time.)
3. The fact that civilization runs on humans, who are selfish etc., might severely limit the space of designs that have been tried.
As a lower bound, it seems that something like Yudkowsky’s ideas about dath ilan might work. (Not to be mistaken for “we can get there from here”, “this works for humans”, or “none of Yudkowsky’s ideas have holes in them”.)
None of this contradicts your arguments, but it adds uncertainty and should make us more cautious about AI. (Not that I interpret the post as advocating against caution.)