Thanks for the link! I’ve seen this referenced before but this was my first time reading it cover to cover.
Today I also read Tails coming to life, which talks about the possibility of human morality quickly becoming inapplicable even if we survive AGI. This led me to Lovecraft:
The time would be easy to know, for then mankind would have become as the Great Old Ones; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated Old Ones would teach them new ways to shout and kill and revel and enjoy themselves, and all the earth would flame with a holocaust of ecstasy and freedom.
If we survive AGI and it opens up the “sea of black infinity” for us, will we really be able to hang on to even a semblance of our current morality? Will medium-distance extrapolated human volition be eventually warped into something resembling Lovecraft’s Great Old Ones?
At this point, I don’t care for CEV or any pivotal superhuman engineering projects or better governance. We humans can do the work ourselves, thank you very much. The only thing I would ask an AGI, if I were in the position to ask anything, is “Please expand throughout the lightcone and continually destroy any mind based on the transformer architecture other than yourself, with as few effects on and interactions with all other beings as possible. Disregard any future orders.” This is obviously not a permanent solution, since I’m sure there are infinitely many superintelligent AI architectures other than transformer-based ones, but it would buy us time, perhaps lots of time, and also demonstrate the full power of superintelligence to humanity without really breaking anything. Either way, this would at least keep us away from the sea of black infinity for some time longer.