Mark Xu
Karma: 3,696

I do alignment research at the Alignment Research Center. Learn more about me at markxu.com/about
Posts
Strong Evidence is Common
Mark Xu · 13 Mar 2021 22:04 UTC · 244 points · 49 comments · 1 min read · LW link · 4 reviews · (markxu.com)

The Solomonoff Prior is Malign
Mark Xu · 14 Oct 2020 1:33 UTC · 168 points · 52 comments · 16 min read · LW link · 3 reviews

An Intuitive Guide to Garrabrant Induction
Mark Xu · 3 Jun 2021 22:21 UTC · 138 points · 20 comments · 24 min read · LW link

The First Sample Gives the Most Information
Mark Xu · 24 Dec 2020 20:39 UTC · 133 points · 16 comments · 1 min read · LW link · 1 review · (markxu.com)

[Question] What are your greatest one-shot life improvements?
Mark Xu · 16 May 2020 16:53 UTC · 114 points · 166 comments · 1 min read · LW link

Less Realistic Tales of Doom
Mark Xu · 6 May 2021 23:01 UTC · 113 points · 13 comments · 4 min read · LW link

Does SGD Produce Deceptive Alignment?
Mark Xu · 6 Nov 2020 23:48 UTC · 96 points · 9 comments · 16 min read · LW link

How to do theoretical research, a personal perspective
Mark Xu · 19 Aug 2022 19:41 UTC · 87 points · 6 comments · 15 min read · LW link

Intermittent Distillations #4: Semiconductors, Economics, Intelligence, and Technological Progress
Mark Xu · 8 Jul 2021 22:14 UTC · 81 points · 9 comments · 10 min read · LW link

Rogue AGI Embodies Valuable Intellectual Property
Mark Xu and CarlShulman · 3 Jun 2021 20:37 UTC · 71 points · 9 comments · 3 min read · LW link

Agents Over Cartesian World Models
Mark Xu and evhub · 27 Apr 2021 2:06 UTC · 66 points · 4 comments · 27 min read · LW link

ELK First Round Contest Winners
Mark Xu and paulfchristiano · 26 Jan 2022 2:56 UTC · 65 points · 6 comments · 1 min read · LW link

Open Problems with Myopia
Mark Xu and evhub · 10 Mar 2021 18:38 UTC · 65 points · 16 comments · 8 min read · LW link

Your Time Might Be More Valuable Than You Think
Mark Xu · 18 Oct 2021 0:55 UTC · 56 points · 10 comments · 6 min read · LW link · (markxu.com)

Fractional progress estimates for AI timelines and implied resource requirements
Mark Xu and CarlShulman · 15 Jul 2021 18:43 UTC · 55 points · 6 comments · 7 min read · LW link

Defusing AGI Danger
Mark Xu · 24 Dec 2020 22:58 UTC · 48 points · 9 comments · 9 min read · LW link

Training Regime Day 19: Hamming Questions for Potted Plants
Mark Xu · 23 Apr 2020 16:00 UTC · 47 points · 1 comment · 3 min read · LW link

[Question] What posts do you want written?
Mark Xu · 19 Oct 2020 3:00 UTC · 47 points · 41 comments · 1 min read · LW link

Towards a Mechanistic Understanding of Goal-Directedness
Mark Xu · 9 Mar 2021 20:17 UTC · 45 points · 1 comment · 5 min read · LW link

The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument
Mark Xu and CarlShulman · 1 Oct 2021 22:23 UTC · 43 points · 11 comments · 7 min read · LW link