tailcalled
Karma: 7,970
Posts, page 3
Autogynephilia discourse is so absurdly bad on all sides · Jul 23, 2023, 1:12 PM · 44 points · 24 comments · 2 min read · LW link
Boundary Placement Rebellion · Jul 20, 2023, 5:40 PM · 54 points · 21 comments · 12 min read · LW link
Prospera-dump · Jul 18, 2023, 9:36 PM · 11 points · 16 comments · 1 min read · LW link
[Question] Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice? · Jul 12, 2023, 10:08 PM · 42 points · 6 comments · 1 min read · LW link
I think Michael Bailey’s dismissal of my autogynephilia questions for Scott Alexander and Aella makes very little sense · Jul 10, 2023, 5:39 PM · 46 points · 45 comments · 2 min read · LW link
[Question] What in your opinion is the biggest open problem in AI alignment? · Jul 3, 2023, 4:34 PM · 39 points · 35 comments · 1 min read · LW link
Which personality traits are real? Stress-testing the lexical hypothesis · Jun 21, 2023, 7:46 PM · 65 points · 5 comments · 9 min read · LW link · 1 review
Book Review: Autoheterosexuality · Jun 12, 2023, 8:11 PM · 27 points · 9 comments · 24 min read · LW link
[Question] How accurate is data about past earth temperatures? · Jun 9, 2023, 9:29 PM · 10 points · 2 comments · 1 min read · LW link
[Market] Will AI xrisk seem to be handled seriously by the end of 2026? · May 25, 2023, 6:51 PM · 15 points · 2 comments · 1 min read · LW link · (manifold.markets)
Horizontal vs vertical generality · Apr 29, 2023, 7:14 PM · 10 points · 9 comments · 1 min read · LW link
Core of AI projections from first principles: Attempt 1 · Apr 11, 2023, 5:24 PM · 21 points · 3 comments · 3 min read · LW link
[Question] Is this true? @tyler_m_john: [If we had started using CFCs earlier, we would have ended most life on the planet] · Apr 10, 2023, 2:22 PM · 73 points · 15 comments · 1 min read · LW link · (twitter.com)
[Question] Is this true? paulg: [One special thing about AI risk is that people who understand AI well are more worried than people who understand it poorly] · Apr 1, 2023, 11:59 AM · 25 points · 5 comments · 1 min read · LW link
[Question] What does the economy do? · Mar 24, 2023, 10:49 AM · 9 points · 20 comments · 1 min read · LW link
[Question] Are robotics bottlenecked on hardware or software? · Mar 21, 2023, 7:26 AM · 14 points · 13 comments · 1 min read · LW link
What problems do African-Americans face? An initial investigation using Standpoint Epistemology and Surveys · Mar 12, 2023, 11:42 AM · 34 points · 26 comments · 15 min read · LW link
[Question] What do you think is wrong with rationalist culture? · Mar 10, 2023, 1:17 PM · 16 points · 77 comments · 1 min read · LW link
[Question] What are MIRI’s big achievements in AI alignment? · Mar 7, 2023, 9:30 PM · 29 points · 7 comments · 1 min read · LW link
Coordination explosion before intelligence explosion...? · Mar 5, 2023, 8:48 PM · 47 points · 9 comments · 2 min read · LW link