I understand your argument and it has merit, but I think the reality of the situation is more nuanced.
Humanity has long built buildings and bridges without access to formal engineering methods for predicting the risk of collapse. We might regard it as unethical to build such a structure now without using the best practically available engineering knowledge, but we do not regard it as having been unethical to build buildings and bridges historically, when modern engineering materials and methods did not exist. Builders did their best, more or less, with the resources they had access to at the time.
AI is a domain where the current state-of-the-art safety methods are in fact being applied by the major companies, as far as I know (and I'm completely open to being corrected on this). In this respect, safety standards in the AI field are comparable to those of other fields. The case for existential risk is approximately as qualitative and hand-wavy as the case for safety, and I think both arguments need to be taken seriously, because they are the best we currently have. It is disappointing to see the cavalier attitude with which pro-AI pundits dismiss safety concerns, and obnoxious to see the overconfident rhetoric deployed by some in the safety world when they tweet about their p(doom). It is a weird and important time in technology, and I would like to see greater open-mindedness and thoughtfulness about how to make progress on all of these important issues.