is there any example of successful succession? if there isn’t, i think one should be tempted to conclude that creative destruction (and thus disruptive adaptation rather than continuous improvement) is most likely the norm for social systems (it certainly seems to be in other evolutionary environments).
Uriel Fiori
the father of NRx is actually Mencius Moldbug (I see people (co-)attributing it to Land, but in fact he just did a lot of reinterpretation of some of Moldbug’s themes)
really can’t help because I happen to think Moloch isn’t only inevitable but positively good (and not only better than alternatives but actually the best possible world type of good)
Yes, if the paperclipper is thought to be ever more intelligent, its end-goal could be anything—and it’s likely it would come to see its own capability improvement as the primary goal (“the better I am, the more paperclips are produced”), etc.
Unless the ones with goals have more power, and can establish a stable monopoly on power (they do, and they might)
more power than the ones optimizing for increasing their power? i find that doubtful.
well, any answer to the threads in the two links above would already be really interesting. his new book on Bitcoin is really good too: http://www.uf-blog.net/crypto-current-000/
What are more options for No Safe AI?
let it go rampant over the world
I guess Brexit is something along those lines, ain’t it?
unleash it and see what happens
There probably are arguments in favour of Land’s older stuff, but since not even he is interested in making them, I won’t either.
What escapes me is why you would review his thought and completely overlook his more recent material, which engages with a whole array of subjects that LW has as well. Most prominently, a first treatment of Land’s thought in this space should deal with this: http://www.xenosystems.net/against-orthogonality/ (more here: http://www.xenosystems.net/stupid-monsters/), which is neither obscure nor irrelevant.