Nature: “Stop talking about tomorrow’s AI doomsday when AI poses risks today”

Link post

Overall, a headline that seems counterproductive and needlessly divisive.

I worry very much that coverage like this has the potential to bring political polarization to AI risk. It would be extremely damaging to the prospects for regulation if one party in the US Congress decided that AI risk was something only their outgroup is concerned about, for nefarious reasons.

But in the spirit of charity, here are perhaps the strongest points of a weak article:

> the spectre of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it. This works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry. An actual arms race to produce next-generation AI-powered military technology is already under way, increasing the risk of catastrophic conflict

and

> governments must establish appropriate legal and regulatory frameworks, as well as applying laws that already exist

and

> Researchers must play their part by building a culture of responsible AI from the bottom up. In April, the big machine-learning meeting NeurIPS (Neural Information Processing Systems) announced its adoption of a code of ethics for meeting submissions. This includes an expectation that research involving human participants has been approved by an ethical or institutional review board (IRB)

This would be great if ethical or institutional review boards were willing to restrict research that might be dangerous, but it would require a substantial change in their approach to regulating AI research.

> All researchers and institutions should follow this approach, and also ensure that IRBs — or peer-review panels in cases in which no IRB exists — have the expertise to examine potentially risky AI research.

Should people worried about AI existential risk be trying to create resources for IRBs to recognize harmful AI research?

Some ominous commentary from Tyler Cowen:

> Many of you focused on AGI existential risk do not much like or agree with my criticisms of that position, or perhaps you do not understand my stance, as I have seen stated a few times on Twitter. But I am telling you—I take you far more seriously than does most of the mainstream. I keep on saying—publish, publish, peer review, peer review—a high mark of respect....
>
> As it stands, contra that earlier tweet from Rob Wiblin (does anyone have a cite?), you have utterly and completely lost the mainstream debate, whether you admit it or not, whether you see this or not. (Given the large number of rationality community types who do not like to travel, it is no surprise this point is not better known internally.) You have lost the debate within scientific communities, within policymaker circles, and in international diplomacy, if it is not too much of an oxymoron to call it that.

I don’t really know what he is talking about, because it does not seem like we’re losing the debate right now.