If the question is ‘what’s one experiment that would drop your p(doom) to under 1%?’, then I can’t think of an experiment that would provide that many bits of evidence without it also being one where getting the good news seems absurd, or one that would be super dangerous to run.
Not quite an experiment, but to give an explicit test: if we get to the point where an AI can write non-trivial scientific papers in physics and math, and we then aren’t all dead within 6 months, I’ll be convinced that p(doom) < 0.01, and that something was very deeply wrong with my model of the world.
If that evidence would update you that far, then your space of doom hypotheses seems far too narrow. There is so much that we don’t know about strong AI. A failure to be rapidly killed only seems to rule out some of the highest-risk hypotheses, while leaving plenty of hypotheses in which doom is still highly likely, just slower.
If we get to that point of AI capabilities, we will likely be able to make 50 years of scientific progress in a matter of months for domains that are not too constrained by physical experimentation (just run more compute for the LLMs), and I’d expect AI safety to be one of those domains. So either we die quickly thereafter, or we’ve solved AI safety. Getting LLMs to drive scientific progress basically telescopes the future.
Are you assuming that there will be a sudden jump in AI scientific research capability from subhuman to strongly superhuman? It is one possibility, sure. Another is that the first AIs capable of writing research papers won’t be superhumanly good at it, and won’t advance research very far or even in a useful direction. It seems to me quite likely that this state of affairs will persist for at least six months.
Do you give the latter scenario less than 0.01 probability? That seems extremely confident to me.
I don’t think we need superhuman capability here for stuff to get crazy; pure volume of papers could substitute for it. If you can write a mediocre but logically correct paper with $50 of compute instead of $10k of graduate-student salary, that accelerates the pace of progress by a factor of 200, which seems like enough to enable a whole bunch of other advances that will feed into AI research and make the models even better.
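A minimal back-of-envelope sketch of that factor-of-200 claim, using the illustrative dollar figures above (they’re assumptions, not measurements of anything):

```python
# Back-of-envelope for the cost-ratio claim above. The dollar figures are the
# illustrative ones from the comment, not real measurements.
cost_per_human_paper_usd = 10_000  # rough grad-student cost for one mediocre paper
cost_per_llm_paper_usd = 50        # assumed compute cost for a comparable LLM paper

# Holding spending fixed, papers produced per dollar scale by this ratio.
speedup = cost_per_human_paper_usd / cost_per_llm_paper_usd
print(f"Roughly {speedup:.0f}x more papers for the same budget")  # -> 200x
```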
So you’re now strongly expecting to die in less than 6 months? (Assuming that the tweet is not completely false)
That’s not a math or physics paper, and it includes a bit more “handholding”, in the form of an explicit database, than something that would really make me update. The style of scientific papers is obviously very easy for current LLMs to copy; what I’m trying to get at is that if LLMs can start to make genuinely novel contributions at a slightly below-human level and learn from the mediocre articles they write, pure volume of papers can make up for quality.
“Non-trivial” is a pretty soft word to include in this sort of prediction, in my opinion.
I think I’d disagree if you had said “a purely AI-written paper resolves an open Millennium Prize problem”, but as written I’m saying to myself “hrm, I don’t know how to engage with this in a way that will actually pin down the prediction”.
I think it’s well enough established that long-form, internally coherent content is within the capabilities of a sufficiently large language model. I think the bottleneck on it being scary (or rather, on it being not long before The End) is the LLM being responsible for the inputs to the research.
Fair point: “non-trivial” is too subjective. The intuition I meant to convey is that if we get to the point where LLMs can do the sort of pure-thinking research in math and physics at a level where the papers build on top of one another in a coherent way, then I’d expect us to be close to the end.
Said another way, if theoretical physicists and mathematicians get automated, then we ought to be fairly close to the end. If, in addition to that, the physical research itself gets automated, such that LLMs write their own code to run experiments (or drive the robotic arms that manipulate real stuff) and publish the results, then we’re *really* close to the end.