Has anyone spelled out the arguments for how it’s supposed to help us, even incrementally, if one AI lab (rather than all of them) drops out of the AI race?
An AI lab dropping out helps in two ways:
1. Timelines get longer, because the smart and accomplished AI capabilities engineers formerly employed by this lab are no longer working on pushing for SOTA models/no longer have access to tons of compute/are no longer developing new algorithms to improve performance even holding compute constant. So there is less aggregate brainpower, money, and compute dedicated to making AI more powerful, meaning the rate of AI capability increase is slowed. With longer timelines, there is more time for AI safety research to develop past its pre-paradigmatic stage, for outreach efforts to mainstream institutions to start paying dividends in terms of shifting public opinion at the highest echelons, for AI governance strategies to be adopted by top international actors, and for moonshots like uploading or intelligence augmentation to become more realistic targets.
2. Race dynamics become less problematic, because there is one less competitor other top labs have to worry about, so they don’t need to pump out top models quite as quickly to remain relevant/retain tons of funding from investors/ensure they are the ones who personally end up with a ton of power when more capable AI is developed.
I believe these arguments, frequently employed by LW users and alignment researchers, are indeed valid. But I believe their impact will be quite small, or at the very least meaningfully smaller than what other people on this site likely envision.
And since I believe the evaporative cooling effects you’re mentioning are also real (and quite important), I indeed conclude pushing Anthropic to shut down is bad and counterproductive.
the smart and accomplished AI capabilities engineers formerly employed by this lab are no longer working on pushing for SOTA models/no longer have access to tons of compute/are no longer developing new algorithms to improve performance
For that to be the case, instead of the engineers simply moving to another company, we would have to suggest other tasks for them. There are indeed other very questionable technologies being shipped (for example, social media with automatic recommendation algorithms), but someone would have to connect the engineers to those tasks.