Future Fund Worldview Prize
Tag · Last edit: 23 Sep 2022 23:17 UTC by interstice
How to Train Your AGI Dragon · Oren Montano · 21 Sep 2022 22:28 UTC · −1 points · 3 comments · 5 min read · LW link
“Cotton Gin” AI Risk · 423175 · 24 Sep 2022 21:26 UTC · 7 points · 3 comments · 2 min read · LW link
AI coöperation is more possible than you think · 423175 · 24 Sep 2022 21:26 UTC · 7 points · 0 comments · 2 min read · LW link
P(misalignment x-risk|AGI) is small #[Future Fund worldview prize] · Dibbu Dibbu · 24 Sep 2022 23:54 UTC · −18 points · 0 comments · 4 min read · LW link
You are Underestimating The Likelihood That Convergent Instrumental Subgoals Lead to Aligned AGI · Mark Neyer · 26 Sep 2022 14:22 UTC · 3 points · 6 comments · 3 min read · LW link
Loss of Alignment is not the High-Order Bit for AI Risk · yieldthought · 26 Sep 2022 21:16 UTC · 14 points · 18 comments · 2 min read · LW link
Why I think strong general AI is coming soon · porby · 28 Sep 2022 5:40 UTC · 325 points · 139 comments · 34 min read · LW link · 1 review
Will Values and Competition Decouple? · interstice · 28 Sep 2022 16:27 UTC · 15 points · 11 comments · 17 min read · LW link
AGI by 2050 probability less than 1% · fumin · 1 Oct 2022 19:45 UTC · −10 points · 4 comments · 9 min read · LW link (docs.google.com)
Frontline of AGI Alignment · SD Marlow · 4 Oct 2022 3:47 UTC · −10 points · 0 comments · 1 min read · LW link (robothouse.substack.com)
Charitable Reads of Anti-AGI-X-Risk Arguments, Part 1 · sstich · 5 Oct 2022 5:03 UTC · 3 points · 4 comments · 3 min read · LW link
AI Timelines via Cumulative Optimization Power: Less Long, More Short · jacob_cannell · 6 Oct 2022 0:21 UTC · 139 points · 33 comments · 6 min read · LW link
The probability that Artificial General Intelligence will be developed by 2043 is extremely low. · cveres · 6 Oct 2022 18:05 UTC · −13 points · 8 comments · 1 min read · LW link
The Lebowski Theorem — Charitable Reads of Anti-AGI-X-Risk Arguments, Part 2 · sstich · 8 Oct 2022 22:39 UTC · 1 point · 10 comments · 7 min read · LW link
Don’t expect AGI anytime soon · cveres · 10 Oct 2022 22:38 UTC · −14 points · 6 comments · 1 min read · LW link
Updates and Clarifications · SD Marlow · 11 Oct 2022 5:34 UTC · −5 points · 1 comment · 1 min read · LW link
My argument against AGI · cveres · 12 Oct 2022 6:33 UTC · 7 points · 5 comments · 1 min read · LW link
A strange twist on the road to AGI · cveres · 12 Oct 2022 23:27 UTC · −8 points · 0 comments · 1 min read · LW link
Counterarguments to the basic AI x-risk case · KatjaGrace · 14 Oct 2022 13:00 UTC · 368 points · 124 comments · 34 min read · LW link (aiimpacts.org) · 1 review
“AGI soon, but Narrow works Better” · AnthonyRepetto · 14 Oct 2022 21:35 UTC · 1 point · 9 comments · 2 min read · LW link
“Originality is nothing but judicious imitation”—Voltaire · Vestozia · 23 Oct 2022 19:00 UTC · 0 points · 0 comments · 13 min read · LW link
AGI in our lifetimes is wishful thinking · niknoble · 24 Oct 2022 11:53 UTC · 0 points · 25 comments · 8 min read · LW link
What does it take to defend the world against out-of-control AGIs? · Steven Byrnes · 25 Oct 2022 14:47 UTC · 187 points · 46 comments · 30 min read · LW link · 1 review
Why some people believe in AGI, but I don’t. · cveres · 26 Oct 2022 3:09 UTC · −15 points · 6 comments · 1 min read · LW link
Worldview iPeople—Future Fund’s AI Worldview Prize · Toni MUENDEL · 28 Oct 2022 1:53 UTC · −22 points · 4 comments · 9 min read · LW link
All life’s helpers’ beliefs · Tehdastehdas · 28 Oct 2022 5:47 UTC · −12 points · 1 comment · 5 min read · LW link
AI as a Civilizational Risk Part 1/6: Historical Priors · PashaKamyshev · 29 Oct 2022 21:59 UTC · 2 points · 2 comments · 7 min read · LW link
AI as a Civilizational Risk Part 2/6: Behavioral Modification · PashaKamyshev · 30 Oct 2022 16:57 UTC · 9 points · 0 comments · 10 min read · LW link
AI as a Civilizational Risk Part 3/6: Anti-economy and Signal Pollution · PashaKamyshev · 31 Oct 2022 17:03 UTC · 7 points · 4 comments · 14 min read · LW link
AI as a Civilizational Risk Part 4/6: Bioweapons and Philosophy of Modification · PashaKamyshev · 1 Nov 2022 20:50 UTC · 7 points · 1 comment · 8 min read · LW link
AI X-risk >35% mostly based on a recent peer-reviewed argument · michaelcohen · 2 Nov 2022 14:26 UTC · 37 points · 31 comments · 46 min read · LW link
AI as a Civilizational Risk Part 5/6: Relationship between C-risk and X-risk · PashaKamyshev · 3 Nov 2022 2:19 UTC · 2 points · 0 comments · 7 min read · LW link
Why do we post our AI safety plans on the Internet? · Peter S. Park · 3 Nov 2022 16:02 UTC · 4 points · 4 comments · 11 min read · LW link
AI as a Civilizational Risk Part 6/6: What can be done · PashaKamyshev · 3 Nov 2022 19:48 UTC · 2 points · 4 comments · 4 min read · LW link
Review of the Challenge · SD Marlow · 5 Nov 2022 6:38 UTC · −14 points · 5 comments · 2 min read · LW link
When can a mimic surprise you? Why generative models handle seemingly ill-posed problems · David Johnston · 5 Nov 2022 13:19 UTC · 8 points · 4 comments · 16 min read · LW link
Loss of control of AI is not a likely source of AI x-risk · squek · 7 Nov 2022 18:44 UTC · −6 points · 0 comments · 5 min read · LW link
How likely are malign priors over objectives? [aborted WIP] · David Johnston · 11 Nov 2022 5:36 UTC · −1 points · 0 comments · 8 min read · LW link
AI will change the world, but won’t take it over by playing “3-dimensional chess”. · boazbarak and benedelman · 22 Nov 2022 18:57 UTC · 133 points · 98 comments · 24 min read · LW link
AGI Impossible due to Energy Constrains · TheKlaus · 30 Nov 2022 18:48 UTC · −11 points · 13 comments · 1 min read · LW link
A Fallibilist Wordview · Toni MUENDEL · 7 Dec 2022 20:59 UTC · −13 points · 2 comments · 13 min read · LW link
AGI is here, but nobody wants it. Why should we even care? · MGow · 20 Dec 2022 19:14 UTC · −22 points · 0 comments · 17 min read · LW link
Issues with uneven AI resource distribution · User_Luke · 24 Dec 2022 1:18 UTC · 3 points · 9 comments · 5 min read · LW link (temporal.substack.com)
Transformative AGI by 2043 is <1% likely · Ted Sanders · 6 Jun 2023 17:36 UTC · 34 points · 115 comments · 5 min read · LW link (arxiv.org)