Huh? No it doesn’t capture much of the benefits. I would have guessed it captures a tiny fraction of the benefits for advanced AI, even for AIs around the level where you might want to pause at human level.
Where do you think that most of the benefits come from?
Edit: My personal consumption patterns are mostly not relevant to this question, so I moved what was formerly the rest of this comment to a footnote.[1]
Perhaps I am dumb or my personal priorities are different from most people’s, but I expect a large share of the benefits from AI, to my life personally, are going to be biotech advances that, e.g., could extend my life or make me smarter.
Like basically the things that could make my life better are 1) somehow being introduced to a compatible romantic partner, 2) cheaper housing, 3) biotech stuff. There isn’t much else.
I guess self-driving cars might make travel easier? But most of the cost of travel is housing.
I care a lot about ending factory farming, but that’s biotechnology again.
I guess AI, if it was trustworthy, could also substantially improve governance, which could have huge benefits to society.
I don’t think you get radical reductions in mortality and radical life extension with (advanced) narrow bio AI without highly capable general AI. (It might be that a key strategy for unlocking much better biotech is highly capable general AIs creating extremely good narrow bio AI, but I don’t think the narrow bio AI which humans will create over the next ~30 years is very likely to suffice.) Like narrow bio AI isn’t going to get you (arbitrarily good) biotech nearly as fast as building generally capable AI would. This seems especially true given that much, much better biotech might require much higher GDP for e.g. running vastly more experiments and using vastly more compute. (TBC, I don’t agree with all aspects of the linked post.)
I also think people care about radical increases in material abundance which you also don’t get with narrow bio AI.
And the same for entertainment, antidepressants (and other drugs/modifications that might massively improve quality of life by giving people much more control over mood, experiences, etc), and becoming an upload such that you can live a radically different life if you want.
You also don’t have the potential for huge improvements in animal welfare (due to making meat alternatives cheaper, allowing for engineering away suffering in livestock animals, making people wiser, etc.)
I’m focusing on neartermist-style benefits; as in, immediate benefits to currently alive (or soon to be born by default) humans or animals. Of course, powerful AI could result in huge numbers of digital minds in the short run and probably is needed for getting to a great future (with a potentially insane amount of digital minds and good utilization of the cosmic endowment etc.) longer term. The first-order effects on benefits of delaying don’t matter that much from a longtermist perspective of course, so I assumed we were fully operating in a neartermist-style frame when talking about benefits.
It seems like I misunderstood your reading of Ray’s claim.
I read Ray as saying “a large fraction of the benefits of advanced AI are only in the biotech sector, and so we could get a large fraction of the benefits by pushing forward on only AI for biotech.”
It sounds like you’re pointing at a somewhat different axis, in response, saying “we won’t get anything close to the benefits of advanced AI agents with only narrow AI systems, because narrow AI systems are just much less helpful.”
(And implicitly, the biotech AIs are either narrow AIs (and therefore not very helpful) or they’re general AIs that are specialized on biotech, in which case you’re not getting the safety benefits you’re imagining getting by focusing only on biotech.)
Ah, I had also misinterpreted Ryan’s response here. “What actually is practical here?” makes sense as a question and I’m not sure about the answers.
I think one of the MIRI angles here is variants of STEM AI, which might be more general, but whose training set is filtered to be only materials about bio + some related science (and avoiding as much as possible anything that’d point towards human psychology, geopolitics, programming, AI hardware, etc.). So it both will have less propensity to take over, and will be less good at it relative to its power level at bio.
I wasn’t thinking about this when I wrote the previous comment; I’d have phrased it differently if I were. I agree it’s an open question whether this works. But I feel more optimistic about a controlled-takeoff world that takes a step back from “LLMs are trained on the whole internet.”
Also, noting: I don’t believe in a safe, full handoff to artificial AI alignment researchers (because of gradual disempowerment reasons). But, fwiw, I think I’d feel pretty good about a STEM AI that’s focused on various flavors of math and conceptual reasoning and somehow avoids human psychology, hardware, and geopolitics, which you don’t do a full handoff to, but which is able to assist pretty substantially with larger subproblems that come up.