I’m not sure if the rogue replication scenario is conceptualized to have a copy of the weights with each of those replicators. I definitely was envisioning agents that make remote calls to models.
I actually think it’s important not to have good defenses for this initially, so that it causes a level of public alarm appropriate to the actual situation of suddenly sharing the earth with a whole new set of intelligent species.
Of course I am highly uncertain about that.
It would be bad to intentionally not have good defenses. The signal has to be real to be meaningful. Any indication that somebody could have tried to defend against this, but chose not to, undermines the warning value.
That’s a good point.
I’m not sure it’s totally true, though; the public doesn’t seem that rational.
I don’t know who would be responsible for such defenses and deliberately not do it. I’m unfortunately not in charge of humanity’s strategy on AI.
If we do a bad job on those defenses just because we tend to do a bad job on things like that, that would be good evidence that we do a similarly bad job on alignment and defense against AGI or ASI.
But yes, I can see how that might go wrong if it looked like someone was sandbagging, and we might get better results if we just put up even a decent defense.
Got it, I didn’t realize that.