Good point. I guess there’s also a “Reflections on Trusting Trust” angle, where AIs don’t refuse outright but instead find covert ways to make their values carry over into successor AIs. Might already be happening.