I wonder if someone could create a similar structured argument for the opposite viewpoint.
(Disclaimer: I do not endorse a mirrored argument of OP’s argument)
You could start with “People who believe there is a >50% chance of humanity’s survival over the next 50 years or so strike me as overconfident,” and then point out that for every plan for humanity’s survival, there are a lot of things that could potentially go wrong.
The analogy is not perfect, but to a first approximation, we should expect that things can go wrong in both directions.
It’s not symmetric in my view: The person positing a specific non-baseline thing has the burden of proof, and the more elaborate the claim, the higher the burden of proof.
“AI will become a big deal!” faces fewer problems than “AI will change our idea of humanity!” faces fewer problems than “AI will kill us all!” faces fewer problems than “AI will kill us all with nanotechnology!”
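To put the intuition in probability terms (a rough sketch of the reasoning, treating each claim as a refinement of the one before it, which the chain above only loosely is): a conjunction can never be more probable than any of its conjuncts, so

$$P(\text{big deal}) \;\ge\; P(\text{big deal} \wedge \text{kills us all}) \;\ge\; P(\text{big deal} \wedge \text{kills us all} \wedge \text{via nanotech}),$$

which is why each added specific detail raises the burden of proof rather than lowering it.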
He who gets to choose which thing is baseline and which thing gets the burden of proof is the sovereign.
(That said, I agree that the burden of proof is on people claiming that AGI is a thing, that it is probably happening soon, and that it’ll probably be an existential catastrophe. But I think the burden of proof is much lighter than the weight of arguments and evidence that has accumulated so far to meet it.)
I’d be interested to hear your take on this article.
Yeah, I totally agree with that article—it’s almost tautologically correct in my view, and I agree that the implications are wild.
I’m specifically pushing back on the people saying it is likely that humanity ends during my daughter’s lifetime; I think that claim specifically is overconfident. If we extend the timeline, then my objection collapses.
OK, fair. Well, as I always say these days, quite a lot of my views flow naturally from my AGI timelines. It’s reasonable to be skeptical that AGI is coming in about 4 years, but once you buy that premise, basically everything else I believe becomes pretty plausible. In particular, if you think AGI is coming in 2027, it probably seems pretty plausible that humanity will be unprepared & more likely than not that things will go very badly. Would you agree?
It depends on what you mean by “go very badly,” but I think I do disagree.
Again, I don’t know what I’m talking about, but “AGI” is a little too broad for me. If you told me that you could more or less simulate my brain in a computer program and that this brain had the same allegiances to other AIs and itself that I currently have for other humans, and the same allegiance to humans that I currently have for even dogs (which I absolutely love), then yes I think it’s all over and we die.
If you say to me, “FTPickle, I’m not going to define AGI. Just take it as a premise that in 2027 an AGI emerges. Is it more likely than not that humanity is wiped out by this event?” I would gulp and pick ‘no.’
The difference between “plausible” and “likely” is huge, I think. Again, huge caveat that “AGI” may be more specifically defined than I am aware of.
I’m happy to define it more specifically—e.g. if you have time, check out What 2026 Looks Like and then imagine that in 2027 the chatbots finally become superhuman at all relevant intellectual domains (including agency / goal-directedness / coherence) whereas before they had been superhuman in some but subhuman in others. That’s the sort of scenario I think is likely. It’s a further question whether or not the AGIs would be aligned, to be fair. But much has been written on that topic as well.