[rhetorical pose]
We shouldn’t balance the risks and opportunities of AI. Enthusiasts for AI are biased. They underestimate the difficulties. They would not be so enthusiastic if they grasped how disappointing progress is likely to be. Detractors of AI are also biased. They underestimate the difficulties too. You will have a hard time convincing them of the difficulties, because you would be trying to persuade them that they had been frightened of shadows.
So there are few opportunities which are likely to be altogether lost if we hang back through unnecessary fear.
[/rhetorical]
Well, I happen to believe the two paragraphs above. But distinct from the question of whether I am right is the question of whether the phrase “We need to balance the risks and opportunities of AI.” means something or is merely an applause light.
I think it is trivially true that we need to balance the actual risks and actual opportunities of AI. There is room for disagreement about whether we need to balance the perceived risks and perceived opportunities. If perceptions are accurate we should, but there is scope to say, for example, that the common perception is wrong and a rogue AI will in fact be quite stupid and easily unplugged. This opens the way to a decoding of language in which
o We need to balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities correctly and
o We shouldn’t balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.
One question that I dwell on is “how do intelligent and well-intentioned people fall to quarrelling?”. The idea of an Applause Light is illuminating, but I think it is also quite tangled. There is an ambiguity over whether a given phrase is an Applause Light or a Policy Proposal. I suspect that the core problem is that it is awfully tempting to exploit this ambiguity rhetorically, deliberately coding one’s policy proposals in language that also functions as an Applause Light so that they come across as obviously correct.
The fun starts when one does this subconsciously and someone else thinks it is deliberate and takes offence. Once this happens there is little chance of discovering the actual disagreement (which might be about the accuracy of risk assessments), for the conversation will be derailed into meta-conversations about empty phrases and rhetoric.
o We need to balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities correctly and
o We shouldn’t balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.
I don’t get that at all. If “We shouldn’t balance the risks and opportunities of AI” means they are being assessed incorrectly, isn’t correcting that assessment itself part of balancing the risks and opportunities of AI? I don’t see how you can get that reading out of the statement. If the assessments are being done incorrectly, then in the discussion of the risks and opportunities you say “No, you’re doing it wrong, you need to look at it like this blah blah blah”.
When you say “We shouldn’t balance the risks and opportunities of AI” it means to stop making an assessment altogether. It says nothing about continuing to go forward with the project or not. It doesn’t say “Stop the project! This is all wrong!” That would fall under balancing the risks and opportunities—an assessment that came against AI.
That’s foolishness, which is why no one would ever utter the phrase in the first place. That makes the prior phrase an applause light, because it is obvious to anyone involved that such an assessment is necessary. You’re only saying it because you know people will nod their heads in agreement and possibly clap.
It would make sense in the context of a strong bias toward a specific outcome, e.g. religious indignation toward an idea.
A person believing that thinking machines are an abomination would tell you to stop assessing and forget the whole idea.
A person believing that AI is the only thing that could possibly rescue us from imminent catastrophe might well tell you to stop analyzing the risks and get on with building the AI before it’s too late.
Either speaker would hold a substantive position that you don’t need to balance the risks and opportunities any further, without claiming that there is some error in your assessment.
Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does the catastrophe we need AI for outweigh the potential dangers of a poorly built AI?
It must still be considered. You may not have time to consider it thoroughly (as time is now a factor to consider), and that must be part of your assessment, but you still have to weigh the new risks against the potential reward.
Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?
It still must be considered. A few years in purgatory for a creation that saves mankind from the invading squid monsters may very much be worth doing.
Consider the atomic bomb before the first live tests. There were real concerns that splitting the atom could trigger an unstoppable chain of events that would set the very air on fire, destroying the whole world in a single moment. I can’t imagine a scenario more dire, or one that argues more strongly for ceasing all argument.
Yet they did the math anyway, considered the risks (tiny chance of blowing up the world) vs the reward (ending the war that is guaranteed to kill millions more people), and decided it was worth it to continue.
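The “did the math” step amounts to a bare expected-value comparison. As a minimal sketch only: the numbers below are invented placeholders, not the actual wartime estimates, and a real analysis would weigh far more than two terms.

```python
# Toy expected-value comparison of proceeding vs. halting.
# All numbers are illustrative assumptions, not historical estimates.
p_ignite = 1e-6       # assumed tiny chance of igniting the atmosphere
cost_ignite = 8e9     # stand-in cost: everyone dies (rough world population)
lives_saved = 1e6     # assumed lives saved by ending the war sooner

# Expected value of proceeding: the reward minus the risk-weighted cost.
ev_proceed = lives_saved - p_ignite * cost_ignite
ev_halt = 0.0         # halting forgoes both the risk and the reward

print(ev_proceed > ev_halt)  # prints True: with these numbers, proceeding wins
```

With these made-up inputs the risk term (8,000 expected deaths) is dwarfed by the reward term, which is the shape of the argument in the paragraph above; flip the assumed probability high enough and the same arithmetic says to halt.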
I still see no rational case for ever halting argument, except when the time for assessment simply runs out (if you don’t act before X, the world blows up; obviously you must finish your assessment before X or it was all pointless). You may weigh the risks against the opportunities, decide the risks are too great, and decide not to continue. However, you cannot rationally cease all argument without consideration just because an argument is particularly strong or dire. To do so is irrational.
Of course you can cease argument without consideration—if you deem the risks of continuing consideration to outweigh the benefits of weighing them. For instance, if you have 1 minute to try something that would save your life, and you require at least 5 minutes to properly assess anything further, you generally can’t afford to weigh whether the idea would result in a worse situation somehow—beyond whatever assessment you have already made. At that point, the time for assessment is over.
For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement “We need to balance the risks and opportunities of AI”, just that one can sincerely say it, and even argue for it. This was a response to your saying that “no one would ever utter the phrase in the first place”, which strikes me as false.
Never underestimate the power of human stupidity ;)
You’re right, in that regard I was certainly mistaken.
Upvoted for the “oops” moment.