Davidmanheim (Karma: 4,594)
Posts
A Personal (Interim) COVID-19 Postmortem · Davidmanheim · 25 Jun 2020 18:10 UTC · 163 points · 41 comments · 9 min read · LW link · 1 review
A Dozen Ways to Get More Dakka · Davidmanheim · 8 Apr 2024 4:45 UTC · 127 points · 10 comments · 3 min read · LW link
Modelling Transformative AI Risks (MTAIR) Project: Introduction · Davidmanheim and Aryeh Englander · 16 Aug 2021 7:12 UTC · 91 points · 0 comments · 9 min read · LW link
Public Call for Interest in Mathematical Alignment · Davidmanheim · 22 Nov 2023 13:22 UTC · 89 points · 9 comments · 1 min read · LW link
Far-UVC Light Update: No, LEDs are not around the corner (tweetstorm) · Davidmanheim · 2 Nov 2022 12:57 UTC · 71 points · 27 comments · 4 min read · LW link · (twitter.com)
Systems that cannot be unsafe cannot be safe · Davidmanheim · 2 May 2023 8:53 UTC · 62 points · 27 comments · 2 min read · LW link
Resolutions to the Challenge of Resolving Forecasts · Davidmanheim · 11 Mar 2021 19:08 UTC · 58 points · 13 comments · 5 min read · LW link
AI Is Not Software · Davidmanheim · 2 Jan 2024 7:58 UTC · 56 points · 29 comments · 5 min read · LW link
Multitudinous outside views · Davidmanheim · 18 Aug 2020 6:21 UTC · 55 points · 13 comments · 3 min read · LW link
Safe Stasis Fallacy · Davidmanheim · 5 Feb 2024 10:54 UTC · 54 points · 2 comments · 1 min read · LW link
Update more slowly! · Davidmanheim · 13 Jul 2020 7:10 UTC · 51 points · 4 comments · 2 min read · LW link
Misnaming and Other Issues with OpenAI’s “Human Level” Superintelligence Hierarchy · Davidmanheim · 15 Jul 2024 5:50 UTC · 48 points · 2 comments · 3 min read · LW link
“Safety Culture for AI” is important, but isn’t going to be easy · Davidmanheim · 26 Jun 2023 12:52 UTC · 47 points · 2 comments · 2 min read · LW link · (forum.effectivealtruism.org)
Technologies and Terminology: AI isn’t Software, it’s… Deepware? · Davidmanheim and abramdemski · 13 Feb 2024 13:37 UTC · 40 points · 10 comments · 8 min read · LW link
The Upper Limit of Value · Davidmanheim · 27 Jan 2021 14:13 UTC · 40 points · 25 comments · 3 min read · LW link · 1 review
Values Weren’t Complex, Once. · Davidmanheim · 25 Nov 2018 9:17 UTC · 36 points · 13 comments · 2 min read · LW link
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics · Davidmanheim · 9 Mar 2020 6:59 UTC · 34 points · 1 comment · 2 min read · LW link
Systematizing Epistemics: Principles for Resolving Forecasts · Davidmanheim · 29 Mar 2021 20:46 UTC · 33 points · 8 comments · 11 min read · LW link
“LLMs Don’t Have a Coherent Model of the World”—What it Means, Why it Matters · Davidmanheim · 1 Jun 2023 7:46 UTC · 31 points · 2 comments · 7 min read · LW link
Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects—Clarifying Thoughts, Part 1) · Davidmanheim · 2 Jul 2019 15:36 UTC · 31 points · 5 comments · 4 min read · LW link