So I think what you are saying is that an ultra-BS argument is one that you know is obviously wrong.
Yep, pretty much. Part of the technique is knowing the ins and outs of our own argument. As I use ultra-BS prominently in debate, I need to be able to rebut the argument when I'm inevitably forced to argue the other side. I thus draw the distinction between ultra-BS and mere speculation along these lines: if it's not obviously wrong (to me, anyways), it's speculation. I can thus say that extended Chinese real economic stagnation over the next 10 years is educated speculation, while imminent Chinese economic collapse is ultra-BS.
If you don't know, you cannot justify a policy of preemptive nuclear war over AI. That's kinda my point. I'm not even trying to say, at the object level, whether or not ASI actually will be a threat humans need to be willing to go to nuclear war over. I am saying the evidence right now does not support that conclusion. (It doesn't support the conclusion that ASI is safe either, but it doesn't justify the most extreme policy action.)
So, this is where I withdraw into acknowledging my limits. I don’t believe I have read sufficient ASI literature to fully understand this point, so I’m not too comfortable offering any object level predictions or narrative assessments. I can agree that many ASI arguments follow the same narrative format as ultra-BS, and there are likely many bad ASI arguments which can be revealed as wrong through careful (or even cursory) research. However, I’m not sufficiently educated on the subject to actually evaluate the narrative, thus the unsatisfactory response of ‘I’m not sure, sorry’.
However, if your understanding of ASI is correct, and there indeed is insufficient provable evidence, then yes, I can agree ASI policies cannot be argued for with provable evidence. Note again, however, that this would essentially be me taking your word for everything, which I’m not comfortable doing.
Currently, my priors on ASI ruin are limited, and I’ll likely need to do more specific research on the topic.
So in this particular scenario, those concerned about ASI doom aren't asking for a small or reasonable policy action proportional to today's uncertainty. They are asking for AI pauses and preemptive nuclear war.
Pause: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Nuclear war: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
(1) AI pauses will cost an enormous amount of money, some of which is tax revenue.
(2) Preemptive nuclear war is potential suicide. It's asking a country to risk the deaths of approximately 50% of its population in the near term, and to lose all its supply chains, turning it into a broken third-world country separated by radioactive craters at all the transit and food supply hubs, which would likely kill a large fraction of its remaining citizens.
To justify (1), you would need some level of evidence that the threat exists. To justify (2), I would expect you to need beyond-a-shadow-of-a-doubt evidence that the threat exists.
So for (1), convincing evidence might be that a hostile weak ASI has to exist in the lab before the threat can be claimed to be real. For (2), researchers would need to have produced strong ASIs in an isolated lab, demonstrated that they were hostile, and tried thousands of times to make a safe ASI with a 100% failure rate.
I think we could argue about the exact level of evidence needed, or briefly establish plausible ways that (1) and (2) could fail to show a threat, but in general I would say the onus is on AI doom advocates to prove the threat is real, not on advocates for "business as usual" technology development to prove it is not. I think this last part is the dark arts scam: that, and other hidden assumptions that get treated as certainty. (A lot of the hidden assumptions are in the technical details of how an ASI is assumed to work, by someone with less detailed technical knowledge, versus the way actual ML systems work today.)
Another part of the scam is calling this whole framework "rational". If your evidence on a topic is uncertain and you can't prove your point, certainty is unjustified, and it's not a valid "agree to disagree" opinion. See: https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem
So with this said, it seems like all I would need to do is cite a source showing that ASI doesn't exist yet, and cite a reason, any reason at all, that could plausibly mean ASI is unable to be a threat. I don't have to prove the reason is anything but plausible.
It does bother me that my proposal for proving ASI might not be a threat is suspiciously similar to how tobacco companies delayed any action to ban cigarettes essentially forever, starting with shoddy science suggesting that maybe cigarettes weren't the reason people were dying. Or how fossil fuel advocates have pulled the same scam, amplifying any doubts over climate change and thus delaying meaningful action for decades. (Meaningful action being to research alternatives, which did succeed, but also to price carbon, which https://www.barrons.com/articles/europe-carbon-tax-emissions-climate-policy-1653e360 doesn't even start until 2026, 50 years after the discovery of climate change.)
These historical examples lead to a conclusion as well; I will see if you realize what this means for AI.
Thanks for the update! I think this is probably something important to take into consideration when evaluating ASI arguments.
That said, I think we're starting to stray from the original topic of the Dark Arts, as we're focusing more on ASI specifically rather than the Dark Arts element of it. In the interest of maintaining discussion focus on this post, would you agree to continuing AGI discussion in private messages?
Sure. Feel free to PM.
And I was trying to focus on the dark arts part of the arguments. Note I don't make any arguments about ASI in the above, just state that fairly weak evidence should suffice to justify not doing anything drastic about it at this time, because the drastic actions have high measurable costs. It's not provable at present to state that "ASI could find a way to take over the planet with limited resources", because we don't have an ASI or know the intelligence ROI on a given amount of flops, but it is provable to state that "an AI pause of 6 months would cost tens of billions, possibly hundreds of billions of dollars, and would reduce the relative power of the pausing countries internationally". It's also provable to state the damage of a nuclear exchange.
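The provable-cost vs. unprovable-benefit asymmetry above can be sketched as a back-of-envelope calculation. Every figure here is a hypothetical assumption for illustration only, not an estimate from any source cited in this thread:

```python
# Sketch of the asymmetry argued above: the costs of the drastic actions are
# estimable today, while the benefit (averted ASI risk) is not computable.
# All figures are hypothetical assumptions chosen purely for illustration.

annual_ai_revenue = 100e9   # hypothetical: yearly revenue at stake (USD)
pause_months = 6            # pause length proposed in the open letter

# Measurable cost of a pause: revenue forgone during the pause window.
pause_cost = annual_ai_revenue * (pause_months / 12)

# Unmeasurable benefit: the probability of ASI ruin averted is unknown
# today, so the expected benefit cannot be computed -- that is the point.
p_asi_ruin_averted = None

print(f"Provable pause cost: ${pause_cost / 1e9:.0f}B")
print(f"Provable benefit: not computable (p = {p_asi_ruin_averted})")
```

The exact revenue figure is beside the point; the contrast is that one side of the ledger is a number you can argue about, while the other is a variable nobody can currently fill in.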
Look how it's voted down to −10 on agreement: others feel very strongly about this issue.