There are various proposals (not from Tyler!) for ‘succession,’ for intentionally passing control over to the AIs, either because people prefer that outcome (as many do!) or because it is held to be inevitable regardless, so managing it would help it go better. I have yet to see such a proposal that has much chance of not bringing about human extinction, or that I expect to meaningfully preserve value in the universe. As I usually say, if this is your plan, Please Speak Directly Into the Microphone.
[Meditations on Moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch)
I am a transhumanist and I really do want to rule the universe.
Not personally – I mean, I wouldn’t object if someone personally offered me the job, but I don’t expect anyone will. I would like humans, or something that respects humans, or at least gets along with humans – to have the job.
But the current rulers of the universe – call them what you want, Moloch, Gnon, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.
The opposite of a trap is a garden. The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.
And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed. But the sheer speed of the cycle makes it possible that we will end up with one entity light-years ahead of the rest of civilization, so much so that it can suppress any competition – including competition for its title of most powerful entity – permanently. In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it’s on our side, it can kill Moloch dead.
And if that entity shares human values, it can allow human values to flourish unconstrained by natural law.
If the goal is maximizing skill at writing, one should use LLMs a lot. What you wrote about likely failure modes of doing so is true, but it is not an inevitable outcome. If language models are useful tools for writing, then avoiding them out of concern that one could not handle them is a mistake whether or not that concern is warranted. Why?
Having the aptitude necessary to “make a splash” is very rare. Not taking chances probably means one won’t reach the top, especially if competent LLM use raises the ceiling of human capability.
Note that by competent use I mean something like [cyborgism](https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism).