catubc (Karma: 159)
The case for stopping AI safety research
catubc · May 23, 2024, 3:55 PM · 53 points · 38 comments · 1 min read · LW link
Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety
catubc · May 31, 2023, 9:18 PM · 26 points · 4 comments · 11 min read · LW link
Red-teaming AI-safety concepts that rely on science metaphors
catubc · Mar 16, 2023, 6:52 AM · 5 points · 4 comments · 5 min read · LW link
AGIs may value intrinsic rewards more than extrinsic ones
catubc · Nov 17, 2022, 9:49 PM · 8 points · 6 comments · 4 min read · LW link
LLMs may capture key components of human agency
catubc · Nov 17, 2022, 8:14 PM · 27 points · 0 comments · 4 min read · LW link
Agency engineering: is AI-alignment “to human intent” enough?
catubc · Sep 2, 2022, 6:14 PM · 9 points · 10 comments · 6 min read · LW link