tailcalled
Karma: 7,918

Posts (Page 1)
Knocking Down My AI Optimist Strawman
Feb 8, 2025, 10:52 AM · 31 points · 3 comments · 6 min read · LW link

My Mental Model of AI Optimist Opinions
Jan 29, 2025, 6:44 PM · 12 points · 7 comments · 1 min read · LW link

Evolution’s selection target depends on your weighting
Nov 19, 2024, 6:24 PM · 23 points · 22 comments · 1 min read · LW link

Empathy/Systemizing Quotient is a poor/biased model for the autism/sex link
Nov 4, 2024, 9:11 PM · 43 points · 0 comments · 7 min read · LW link

Binary encoding as a simple explicit construction for superposition
Oct 12, 2024, 9:18 PM · 12 points · 0 comments · 1 min read · LW link

Rationalist Gnosticism
Oct 10, 2024, 9:06 AM · 11 points · 10 comments · 3 min read · LW link

RLHF is the worst possible thing done when facing the alignment problem
Sep 19, 2024, 6:56 PM · 32 points · 10 comments · 6 min read · LW link

[Question] Does life actually locally *increase* entropy?
Sep 16, 2024, 8:30 PM · 10 points · 27 comments · 1 min read · LW link

Why I’m bearish on mechanistic interpretability: the shards are not in the network
Sep 13, 2024, 5:09 PM · 22 points · 40 comments · 1 min read · LW link

In defense of technological unemployment as the main AI concern
Aug 27, 2024, 5:58 PM · 44 points · 36 comments · 1 min read · LW link

The causal backbone conjecture
Aug 17, 2024, 6:50 PM · 26 points · 0 comments · 2 min read · LW link

Rationalists are missing a core piece for agent-like structure (energy vs information overload)
Aug 17, 2024, 9:57 AM · 62 points · 9 comments · 4 min read · LW link

[LDSL#6] When is quantification needed, and when is it hard?
Aug 13, 2024, 8:39 PM · 32 points · 0 comments · 2 min read · LW link

[LDSL#5] Comparison and magnitude/diminishment
Aug 12, 2024, 6:47 PM · 24 points · 0 comments · 2 min read · LW link

[LDSL#4] Root cause analysis versus effect size estimation
Aug 11, 2024, 4:12 PM · 29 points · 0 comments · 2 min read · LW link

[LDSL#3] Information-orientation is in tension with magnitude-orientation
Aug 10, 2024, 9:58 PM · 33 points · 2 comments · 3 min read · LW link

[LDSL#2] Latent variable models, network models, and linear diffusion of sparse lognormals
Aug 9, 2024, 7:57 PM · 26 points · 2 comments · 3 min read · LW link

[LDSL#1] Performance optimization as a metaphor for life
Aug 8, 2024, 4:16 PM · 31 points · 6 comments · 5 min read · LW link

[LDSL#0] Some epistemological conundrums
Aug 7, 2024, 7:52 PM · 54 points · 11 comments · 10 min read · LW link

Yann LeCun: We only design machines that minimize costs [therefore they are safe]
Jun 15, 2024, 5:25 PM · 19 points · 8 comments · 1 min read · LW link (twitter.com)