Questions
[Question] Seeking Suggestions for 2026 S-Process Recommenders · Ethan Ashkie · 16 Mar 2026 20:31 UTC · 4 points · 0 comments · 1 min read
[Question] Is there any way to recover lost work? · Horosphere · 13 Mar 2026 18:25 UTC · 3 points · 1 comment · 1 min read
[Question] AI for Agent Foundations etc.? · Valentine · 12 Mar 2026 7:20 UTC · 16 points · 2 comments · 1 min read
[Question] How Hard a Problem is Alignment? · RogerDearnaley · 11 Mar 2026 16:47 UTC · 21 points · 15 comments · 3 min read
[Question] When has forecasting been useful for you? · sanyer · 7 Mar 2026 19:50 UTC · 14 points · 3 comments · 1 min read
[Question] How can I arrest and reverse stress, anxiety and depression induced cognitive decline? · ConformalInfinity · 5 Mar 2026 15:35 UTC · 4 points · 4 comments · 1 min read
[Question] LLM coherentization as an obvious low-hanging fruit to try? · Épiphanie Gédéon · 4 Mar 2026 0:59 UTC · 25 points · 2 comments · 2 min read
[Question] Question: Why is the goal of AI safety not ‘moral machines’? · Mordechai Rorvig · 3 Mar 2026 18:16 UTC · 9 points · 15 comments · 1 min read
[Question] Can LLM chat be less prolix? · jbash · 2 Mar 2026 19:54 UTC · 21 points · 9 comments · 2 min read
[Question] If ‘bad guys’ don’t pause, do you? · Remmelt · 2 Mar 2026 7:24 UTC · 24 points · 3 comments · 1 min read
[Question] Best short introductions to AI safety & alignment for bright college students? · geoffreymiller · 27 Feb 2026 18:04 UTC · 7 points · 0 comments · 1 min read
[Question] What was the most effective team you’ve ever been on, and what made it excellent? · Eli Tyre · 24 Feb 2026 20:18 UTC · 77 points · 7 comments · 2 min read
[Question] Why did you buy Bitcoin? · NoSignalNoNoise · 17 Feb 2026 5:20 UTC · 11 points · 1 comment · 1 min read
[Question] What’s Your P(WEIRD)? · RogerDearnaley · 16 Feb 2026 18:19 UTC · 26 points · 18 comments · 9 min read
[Question] What concrete mechanisms could lead to AI models having open-ended goals? · Jemal Young · 11 Feb 2026 9:08 UTC · 10 points · 4 comments · 1 min read
[Question] Should we consider Meta to be a criminal enterprise? · ChristianKl · 10 Feb 2026 2:10 UTC · 42 points · 23 comments · 1 min read
[Question] OK, what’s the difference between coherence and representation theorems? · Algon · 10 Feb 2026 0:45 UTC · 15 points · 7 comments · 2 min read
[Question] What should I try to do this year? · abstractapplic · 7 Feb 2026 22:06 UTC · 35 points · 4 comments · 1 min read
[Question] If all humans were turned into high-fidelity mind uploads tomorrow, would we be self-sustaining? · Erich_Grunewald · 6 Feb 2026 8:35 UTC · 11 points · 2 comments · 1 min read
[Question] Goodfire and Training on Interpretability · Satya Benson · 6 Feb 2026 1:45 UTC · 32 points · 5 comments · 1 min read