
Future Fund Worldview Prize

Last edit: 23 Sep 2022 23:17 UTC by interstice

How to Train Your AGI Dragon

Oren Montano · 21 Sep 2022 22:28 UTC
−1 points
3 comments · 5 min read · LW link

“Cotton Gin” AI Risk

423175 · 24 Sep 2022 21:26 UTC
7 points
3 comments · 2 min read · LW link

AI coöperation is more possible than you think

423175 · 24 Sep 2022 21:26 UTC
7 points
0 comments · 2 min read · LW link

P(misalignment x-risk|AGI) is small #[Future Fund worldview prize]

Dibbu Dibbu · 24 Sep 2022 23:54 UTC
−18 points
0 comments · 4 min read · LW link

You are Underestimating The Likelihood That Convergent Instrumental Subgoals Lead to Aligned AGI

Mark Neyer · 26 Sep 2022 14:22 UTC
3 points
6 comments · 3 min read · LW link

Loss of Alignment is not the High-Order Bit for AI Risk

yieldthought · 26 Sep 2022 21:16 UTC
14 points
18 comments · 2 min read · LW link

Why I think strong general AI is coming soon

porby · 28 Sep 2022 5:40 UTC
325 points
139 comments · 34 min read · LW link · 1 review

Will Values and Competition Decouple?

interstice · 28 Sep 2022 16:27 UTC
15 points
11 comments · 17 min read · LW link

AGI by 2050 probability less than 1%

fumin · 1 Oct 2022 19:45 UTC
−10 points
4 comments · 9 min read · LW link
(docs.google.com)

Frontline of AGI Alignment

SD Marlow · 4 Oct 2022 3:47 UTC
−10 points
0 comments · 1 min read · LW link
(robothouse.substack.com)

Charitable Reads of Anti-AGI-X-Risk Arguments, Part 1

sstich · 5 Oct 2022 5:03 UTC
3 points
4 comments · 3 min read · LW link

AI Timelines via Cumulative Optimization Power: Less Long, More Short

jacob_cannell · 6 Oct 2022 0:21 UTC
139 points
33 comments · 6 min read · LW link

The probability that Artificial General Intelligence will be developed by 2043 is extremely low.

cveres · 6 Oct 2022 18:05 UTC
−13 points
8 comments · 1 min read · LW link

The Lebowski Theorem — Charitable Reads of Anti-AGI-X-Risk Arguments, Part 2

sstich · 8 Oct 2022 22:39 UTC
1 point
10 comments · 7 min read · LW link

Don’t expect AGI anytime soon

cveres · 10 Oct 2022 22:38 UTC
−14 points
6 comments · 1 min read · LW link

Updates and Clarifications

SD Marlow · 11 Oct 2022 5:34 UTC
−5 points
1 comment · 1 min read · LW link

My argument against AGI

cveres · 12 Oct 2022 6:33 UTC
7 points
5 comments · 1 min read · LW link

A strange twist on the road to AGI

cveres · 12 Oct 2022 23:27 UTC
−8 points
0 comments · 1 min read · LW link

Counterarguments to the basic AI x-risk case

KatjaGrace · 14 Oct 2022 13:00 UTC
368 points
124 comments · 34 min read · LW link · 1 review
(aiimpacts.org)

“AGI soon, but Narrow works Better”

AnthonyRepetto · 14 Oct 2022 21:35 UTC
1 point
9 comments · 2 min read · LW link

“Originality is nothing but judicious imitation”—Voltaire

Vestozia · 23 Oct 2022 19:00 UTC
0 points
0 comments · 13 min read · LW link

AGI in our lifetimes is wishful thinking

niknoble · 24 Oct 2022 11:53 UTC
0 points
25 comments · 8 min read · LW link

What does it take to defend the world against out-of-control AGIs?

Steven Byrnes · 25 Oct 2022 14:47 UTC
187 points
46 comments · 30 min read · LW link · 1 review

Why some people believe in AGI, but I don’t.

cveres · 26 Oct 2022 3:09 UTC
−15 points
6 comments · 1 min read · LW link

Worldview iPeople—Future Fund’s AI Worldview Prize

Toni MUENDEL · 28 Oct 2022 1:53 UTC
−22 points
4 comments · 9 min read · LW link

All life’s helpers’ beliefs

Tehdastehdas · 28 Oct 2022 5:47 UTC
−12 points
1 comment · 5 min read · LW link

AI as a Civilizational Risk Part 1/6: Historical Priors

PashaKamyshev · 29 Oct 2022 21:59 UTC
2 points
2 comments · 7 min read · LW link

AI as a Civilizational Risk Part 2/6: Behavioral Modification

PashaKamyshev · 30 Oct 2022 16:57 UTC
9 points
0 comments · 10 min read · LW link

AI as a Civilizational Risk Part 3/6: Anti-economy and Signal Pollution

PashaKamyshev · 31 Oct 2022 17:03 UTC
7 points
4 comments · 14 min read · LW link

AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification

PashaKamyshev · 1 Nov 2022 20:50 UTC
7 points
1 comment · 8 min read · LW link

AI X-risk >35% mostly based on a recent peer-reviewed argument

michaelcohen · 2 Nov 2022 14:26 UTC
37 points
31 comments · 46 min read · LW link

AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk

PashaKamyshev · 3 Nov 2022 2:19 UTC
2 points
0 comments · 7 min read · LW link

Why do we post our AI safety plans on the Internet?

Peter S. Park · 3 Nov 2022 16:02 UTC
4 points
4 comments · 11 min read · LW link

AI as a Civilizational Risk Part 6/6: What can be done

PashaKamyshev · 3 Nov 2022 19:48 UTC
2 points
4 comments · 4 min read · LW link

Review of the Challenge

SD Marlow · 5 Nov 2022 6:38 UTC
−14 points
5 comments · 2 min read · LW link

When can a mimic surprise you? Why generative models handle seemingly ill-posed problems

David Johnston · 5 Nov 2022 13:19 UTC
8 points
4 comments · 16 min read · LW link

Loss of control of AI is not a likely source of AI x-risk

squek · 7 Nov 2022 18:44 UTC
−6 points
0 comments · 5 min read · LW link

How likely are malign priors over objectives? [aborted WIP]

David Johnston · 11 Nov 2022 5:36 UTC
−1 points
0 comments · 8 min read · LW link

AI will change the world, but won’t take it over by playing “3-dimensional chess”.

22 Nov 2022 18:57 UTC
133 points
98 comments · 24 min read · LW link

AGI Impossible due to Energy Constrains

TheKlaus · 30 Nov 2022 18:48 UTC
−11 points
13 comments · 1 min read · LW link

A Fallibilist Wordview

Toni MUENDEL · 7 Dec 2022 20:59 UTC
−13 points
2 comments · 13 min read · LW link

AGI is here, but nobody wants it. Why should we even care?

MGow · 20 Dec 2022 19:14 UTC
−22 points
0 comments · 17 min read · LW link

Issues with uneven AI resource distribution

User_Luke · 24 Dec 2022 1:18 UTC
3 points
9 comments · 5 min read · LW link
(temporal.substack.com)

Transformative AGI by 2043 is <1% likely

Ted Sanders · 6 Jun 2023 17:36 UTC
34 points
115 comments · 5 min read · LW link
(arxiv.org)