[Question] Is there a Schelling point for group house room listings?
NoSignalNoNoise · 23 Jul 2024 3:03 UTC · 4 points · 0 comments · 1 min read

[Question] Would a scope-insensitive AGI be less likely to incapacitate humanity?
Jim Buhler · 21 Jul 2024 14:15 UTC · 2 points · 3 comments · 1 min read

[Question] Have people given up on iterated distillation and amplification?
Chris_Leong · 19 Jul 2024 12:23 UTC · 19 points · 1 comment · 1 min read

[Question] What are the actual arguments in favor of computationalism as a theory of identity?
sunwillrise · 18 Jul 2024 18:44 UTC · 12 points · 24 comments · 5 min read

[Question] Me & My Clone
SimonBaars · 18 Jul 2024 16:25 UTC · 25 points · 19 comments · 1 min read

[Question] Should we exclude alignment research from LLM training datasets?
Ben Millwood · 18 Jul 2024 10:27 UTC · 1 point · 0 comments · 1 min read

[Question] Opinions on Eureka Labs
jmh · 17 Jul 2024 0:16 UTC · 6 points · 3 comments · 1 min read

[Question] Seeking feedback on a critique of the paperclip maximizer thought experiment
bio neural · 15 Jul 2024 18:39 UTC · 3 points · 9 comments · 1 min read

[Question] What Other Lines of Work are Safe from AI Automation?
RogerDearnaley · 11 Jul 2024 10:01 UTC · 27 points · 33 comments · 5 min read

[Question] Pondering how good or bad things will be in the AGI future
Sherrinford · 9 Jul 2024 22:46 UTC · 11 points · 9 comments · 2 min read

[Question] If AI starts to end the world, is suicide a good idea?
IlluminateReality · 9 Jul 2024 21:53 UTC · 0 points · 8 comments · 1 min read

[Question] How bad would AI progress need to be for us to think general technological progress is also bad?
Jim Buhler · 9 Jul 2024 10:43 UTC · 9 points · 5 comments · 1 min read

[Question] Can agents coordinate on randomness without outside sources?
Mikhail Samin · 6 Jul 2024 13:43 UTC · 5 points · 16 comments · 1 min read

[Question] What progress have we made on automated auditing?
LawrenceC · 6 Jul 2024 1:49 UTC · 37 points · 1 comment · 1 min read

[Question] Are there any plans to launch a paperback version of “Rationality: From AI to Zombies”?
m_arj · 5 Jul 2024 11:14 UTC · 2 points · 1 comment · 1 min read

[Question] What percent of the sun would a Dyson Sphere cover?
Raemon · 3 Jul 2024 17:27 UTC · 24 points · 26 comments · 1 min read

[Question] Isomorphisms don’t preserve subjective experience… right?
notfnofn · 3 Jul 2024 14:22 UTC · 4 points · 26 comments · 1 min read

[Question] Why Can’t Sub-AGI Solve AI Alignment? Or: Why Would Sub-AGI AI Not be Aligned?
MrThink · 2 Jul 2024 20:13 UTC · 4 points · 23 comments · 1 min read

[Question] Why haven’t there been assassination attempts against high profile AI accelerationists like sam altman yet?
louisTrem · 2 Jul 2024 18:16 UTC · −13 points · 4 comments · 2 min read

[Question] Self-censoring on AI x-risk discussions?
Decaeneus · 1 Jul 2024 18:24 UTC · 17 points · 2 comments · 1 min read