As part of the onboarding process for each new employee, someone sits down with him or her and says “you need to understand that [Company]’s default plan is to pause AI development at some point in the future. When we do that, the value of your equity might tank.”
An AGI/ASI pause doesn’t have to be a total AI pause. You can keep developing better protein-folding prediction AI, better self-driving car AI, etc. All the narrow-ish sorts of AI that are extremely unlikely to super-intelligently blow up in your face. Maybe there are insights gleaned from the lab’s past AGI-ward research that are applicable to narrow AI. Maybe you could also work on developing better products with existing tech. You just want to pause plausibly AGI-ward capabilities research.
(It’s very likely that some of the research I would now consider “extremely unlikely to super-intelligently blow up in your face” would actually have a good chance of super-intelligently blowing up in your face. So figuring out which research is safe in that regard should itself be an area of research.)
(There’s also the caveat that work on those narrow applications can turn up pieces useful for building AGI.)
This sort of pivot might make the prospect of pausing more palatable to other companies. Plausibly that’s what you had in mind all along, but I think it’s better for this to be very explicit.
Totally. You can pause frontier capability development without pausing applications, commercialization, or non-general capability development.
But, realistically, a company saying “we’ve crossed the line; we think it’s irresponsible to scale further”, especially if other companies don’t respond in kind, will cause the stock price to fall?
To my mind, the market appears to be pricing in some probability of significant advancements. Being told “actually, we’re not pursuing those advancements right now” might mean that money was in the wrong place. If that money came to believe it would not be financially beneficial to see those advancements anyway, e.g. because the resulting AIs would become unteachable faster than they became aligned to that money, that might also mean it’s in the wrong place. The latter seems necessary for a pause: you need to convince the market, which means convincing a lot of investors to lose a lot of money and become very sad. That money was very excited.