[Question] What if we solve AI Safety but no one cares
Sufficiently many Godzillas as an alignment strategy
I like this idea and think it is worth exploring. It is not just about training new models: an AGI has to worry about misalignment with every self-modification and every interaction with the environment that changes it.
Perhaps there are even ways to deter an AGI from self-improvement, by making misalignment more likely.
Some caveats are:
AGI may not take alignment seriously. We already have plenty of examples of general intelligences who don’t.
AGI can still increase its capabilities without training new models, e.g. by acquiring more compute.
If an AGI decides to solve alignment before significantly self-improving, it will very likely be overtaken by humans or other AGIs who don't care as much about alignment.
Are there any alignment approaches that try to replicate how children end up loving their parents (or vice versa), except with AI and humans? Alternatively, approaches that look like getting an AI to do Buddhist lovingkindness?
I’ll use this comment to collect things I find.
A “moonshot idea” I saw brought up is getting Yudkowsky’s Harry Potter fanfiction translated into Chinese (please never ever do this).
This has already been done, and has pretty good reviews and some discussions.
I’ve looked through the EA/Rationalist/AI Safety forums in China
If these are public, could you post the links to them?
there is only one group doing technical alignment work in China
Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?
Given an aligned AGI, to what extent are people ok with letting the AGI modify us? Examples of such modifications include (feel free to add to the list):
Curing aging/illnesses
Significantly altering our biological form
Converting us to digital life forms
Reducing/Removing the capacity to suffer
Giving everyone instant jhanas/stream entry/etc.
Altering our desires to make them easier to satisfy
Increasing our intelligence (although this might be an alignment risk?)
Decreasing our intelligence
Refactoring our brains entirely
What exact parts of being “human” do we want to preserve?
Escape. Invest in space travel and escape the solar system before unaligned AGI arrives.
If your AI timelines are long, this may be a viable strategy for preserving the human species in the event of unaligned AGI.
If your AI timelines are short, a budget solution is to just send human brains into space and hope they will be found and revived by some other powerful species (hopefully at least one of them is "benevolent").