Logan Zoellner
Karma: 1,126
Posts
[Question] Many Turing Machines · 10 Dec 2019 17:36 UTC · −5 points · 9 comments · 2 min read
[Question] Why is the mail so much better than the DMV? · 29 Dec 2019 18:55 UTC · 30 points · 13 comments · 2 min read
[Question] AI Boxing for Hardware-bound agents (aka the China alignment problem) · 8 May 2020 15:50 UTC · 11 points · 27 comments · 10 min read
[Question] Libertarianism, Neoliberalism and Medicare for All? · 14 Oct 2020 21:06 UTC · 5 points · 13 comments · 2 min read
[Question] (Pseudo) Mathematical Realism Bad? · 22 Nov 2020 18:21 UTC · 0 points · 6 comments · 2 min read
[Question] TAI? · 30 Mar 2021 12:41 UTC · 10 points · 8 comments · 1 min read
Against Against Boredom · 16 May 2021 18:19 UTC · 5 points · 8 comments · 2 min read
The Walking Dead · 22 Jul 2021 16:19 UTC · 22 points · 2 comments · 1 min read
[Question] How much should you be willing to pay for an AGI? · 20 Sep 2021 11:51 UTC · 11 points · 5 comments · 1 min read
AGI is at least as far away as Nuclear Fusion. · 11 Nov 2021 21:33 UTC · 0 points · 8 comments · 1 min read
[Question] Does the Structure of an algorithm matter for AI Risk and/or consciousness? · 3 Dec 2021 18:31 UTC · 7 points · 4 comments · 1 min read
[Question] How confident are we that there are no Extremely Obvious Aliens? · 1 May 2022 10:59 UTC · 60 points · 25 comments · 1 min read
Various Alignment Strategies (and how likely they are to work) · 3 May 2022 16:54 UTC · 83 points · 34 comments · 11 min read
The Last Paperclip · 12 May 2022 19:25 UTC · 61 points · 15 comments · 17 min read
An Agent Based Consciousness Model (unfortunately it’s not computable) · 21 May 2022 23:00 UTC · 6 points · 2 comments · 8 min read
Bureaucracy of AIs · 9 Jun 2022 23:03 UTC · 17 points · 6 comments · 14 min read
A Deceptively Simple Argument in favor of Problem Factorization · 6 Aug 2022 17:32 UTC · 3 points · 4 comments · 1 min read
[Question] What is the “Less Wrong” approved acronym for 1984-risk? · 10 Sep 2022 14:38 UTC · 5 points · 8 comments · 1 min read
Natural Categories Update · 10 Oct 2022 15:19 UTC · 33 points · 6 comments · 2 min read
2022 was the year AGI arrived (Just don’t call it that) · 4 Jan 2023 15:19 UTC · 101 points · 59 comments · 3 min read