Regulation and AI Risk

Regulation and AI risk is the debate over whether regulation could be used to reduce the risks of Unfriendly AI, and what forms of regulation would be appropriate.

Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, and there is the potential for an AI arms race between nations. Partially because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development; rather, it should concentrate on providing funding to research projects intended to create safe AGI. Kaushal & Nolan (2015) point out that regulations on AGI development would give a speed advantage to any project willing to skirt them, and instead propose government funding (possibly in the form of an “AI Manhattan Project”) for AGI projects meeting particular criteria.

While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could be a cause for world leaders to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, say, nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010). On the other hand, Scherer (2015) argues that artificial intelligence could nevertheless be susceptible to regulation due to the increasing prominence of governmental entities and large corporations in AI research and development.

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment: a technological achievement that makes the possibility of AGI evident to the public and to policymakers. They note that after such a moment, full human-level AGI might not take very long to develop, while the negotiations required to enact new kinds of arms control treaties would take considerably longer.


[Linkpost] Chinese government’s guidelines on AI
RomanS · 10 Dec 2021 21:10 UTC · 61 points · 14 comments · 1 min read · LW link

How major governments can help with the most important century
HoldenKarnofsky · 24 Feb 2023 18:20 UTC · 28 points · 0 comments · 4 min read · LW link
(www.cold-takes.com)

2019 AI Alignment Literature Review and Charity Comparison
Larks · 19 Dec 2019 3:00 UTC · 130 points · 18 comments · 62 min read · LW link

Self-regulation of safety in AI research
Gordon Seidoh Worley · 25 Feb 2018 23:17 UTC · 12 points · 6 comments · 2 min read · LW link

Antitrust-Compliant AI Industry Self-Regulation
Cullen_OKeefe · 7 Jul 2020 20:53 UTC · 9 points · 3 comments · 1 min read · LW link
(cullenokeefe.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
Palus Astra · 16 Apr 2020 0:50 UTC · 58 points · 27 comments · 89 min read · LW link

Learning societal values from law as part of an AGI alignment strategy
John Nay · 21 Oct 2022 2:03 UTC · 3 points · 18 comments · 54 min read · LW link

Let’s think about slowing down AI
KatjaGrace · 22 Dec 2022 17:40 UTC · 492 points · 178 comments · 38 min read · LW link
(aiimpacts.org)

AGI in sight: our look at the game board
18 Feb 2023 22:17 UTC · 210 points · 131 comments · 6 min read · LW link
(andreamiotti.substack.com)

[Linkpost] Scott Alexander reacts to OpenAI’s latest post
Akash · 11 Mar 2023 22:24 UTC · 27 points · 0 comments · 5 min read · LW link
(astralcodexten.substack.com)

Mauhn Releases AI Safety Documentation
Berg Severens · 3 Jul 2021 21:23 UTC · 4 points · 0 comments · 1 min read · LW link

Hardcode the AGI to need our approval indefinitely?
MichaelStJules · 11 Nov 2021 7:04 UTC · 2 points · 2 comments · 1 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe
Yitz · 10 Apr 2022 21:02 UTC · 92 points · 142 comments · 2 min read · LW link

The Regulatory Option: A response to near 0% survival odds
Matthew Lowenstein · 11 Apr 2022 22:00 UTC · 48 points · 21 comments · 6 min read · LW link

Four Societal Interventions to Improve our AGI Position
Rafael Cosman · 6 Jul 2022 18:32 UTC · −6 points · 2 comments · 6 min read · LW link
(rafaelcosman.com)

A Critique of AI Alignment Pessimism
ExCeph · 19 Jul 2022 2:28 UTC · 8 points · 1 comment · 9 min read · LW link

[Question] Would “Manhattan Project” style be beneficial or deleterious for AI Alignment?
Just Learning · 4 Aug 2022 19:12 UTC · 5 points · 1 comment · 1 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’
ukc10014 · 14 Sep 2022 20:37 UTC · 8 points · 0 comments · 16 min read · LW link

Leveraging Legal Informatics to Align AI
John Nay · 18 Sep 2022 20:39 UTC · 11 points · 0 comments · 3 min read · LW link
(forum.effectivealtruism.org)

Cryptocurrency Exploits Show the Importance of Proactive Policies for AI X-Risk
eSpencer · 20 Sep 2022 17:53 UTC · 1 point · 0 comments · 4 min read · LW link

[Job]: AI Standards Development Research Assistant
Tony Barrett · 14 Oct 2022 20:27 UTC · 2 points · 0 comments · 2 min read · LW link

The Slippery Slope from DALLE-2 to Deepfake Anarchy
scasper · 5 Nov 2022 14:53 UTC · 17 points · 9 comments · 11 min read · LW link

[Question] Is there any policy for a fair treatment of AIs whose friendliness is in doubt?
nahoj · 18 Nov 2022 19:01 UTC · 15 points · 9 comments · 1 min read · LW link

Issues with uneven AI resource distribution
User_Luke · 24 Dec 2022 1:18 UTC · 3 points · 9 comments · 5 min read · LW link
(temporal.substack.com)

Who Aligns the Alignment Researchers?
Ben Smith · 5 Mar 2023 23:22 UTC · 40 points · 0 comments · 11 min read · LW link