Matthew_Opitz (Karma: 383)
Proxi-Antipodes: A Geometrical Intuition For The Difficulty Of Aligning AI With Multitudinous Human Values · Jun 9, 2023, 9:21 PM · 7 points · 0 comments · 5 min read · LW link
DELBERTing as an Adversarial Strategy · May 12, 2023, 8:09 PM · 8 points · 3 comments · 5 min read · LW link
The Academic Field Pyramid—any point to encouraging broad but shallow AI risk engagement? · May 11, 2023, 1:32 AM · 20 points · 1 comment · 6 min read · LW link
Even if human & AI alignment are just as easy, we are screwed · Apr 13, 2023, 5:32 PM · 35 points · 5 comments · 5 min read · LW link
Bing AI Generating Voynich Manuscript Continuations—It does not know how it knows · Apr 10, 2023, 8:22 PM · 15 points · 6 comments · 13 min read · LW link
Matthew_Opitz’s Shortform · Apr 5, 2023, 7:42 PM · 3 points · 2 comments · LW link
“NRx” vs. “Prog” Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) · Sep 4, 2014, 4:58 PM · 5 points · 340 comments · 11 min read · LW link