Christopher King
Karma: 856 · @theking@mathstodon.xyz
The Bellman equation does not apply to bounded rationality
Christopher King · 26 Jun 2025 23:01 UTC · 17 points · 2 comments · 1 min read · LW link
Futarchy using a sealed-bid auction to avoid liquidity problems
Christopher King · 16 Jun 2025 1:34 UTC · 21 points · 6 comments · 8 min read · LW link
The Way You Go Depends A Good Deal On Where You Want To Get: FEP minimizes surprise about actions using preferences about the future as *evidence*
Christopher King · 27 Apr 2025 21:55 UTC · 10 points · 5 comments · 5 min read · LW link
METR’s preliminary evaluation of o3 and o4-mini
Christopher King · 16 Apr 2025 20:23 UTC · 14 points · 7 comments · 1 min read · LW link (metr.github.io)
[Question] How far along Metr’s law can AI start automating or helping with alignment research?
Christopher King · 20 Mar 2025 15:58 UTC · 20 points · 21 comments · 1 min read · LW link
No, the Polymarket price does not mean we can immediately conclude what the probability of a bird flu pandemic is. We also need to know the interest rate!
Christopher King · 28 Dec 2024 16:05 UTC · 9 points · 11 comments · 1 min read · LW link
How I saved 1 human life (in expectation) without overthinking it
Christopher King · 22 Dec 2024 20:53 UTC · 19 points · 0 comments · 4 min read · LW link
Christopher King’s Shortform
Christopher King · 18 Dec 2024 21:02 UTC · 5 points · 1 comment · 1 min read · LW link
LDT (and everything else) can be irrational
Christopher King · 6 Nov 2024 4:05 UTC · 16 points · 17 comments · 2 min read · LW link
Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired
Christopher King · 9 Aug 2023 0:50 UTC · 1 point · 5 comments · 4 min read · LW link
Necromancy’s unintended consequences.
Christopher King · 9 Aug 2023 0:08 UTC · −9 points · 2 comments · 2 min read · LW link
How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.
Christopher King · 11 Jul 2023 19:27 UTC · 17 points · 11 comments · 2 min read · LW link
Challenge proposal: smallest possible self-hardening backdoor for RLHF
Christopher King · 29 Jun 2023 16:56 UTC · 7 points · 0 comments · 2 min read · LW link
Anthropically Blind: the anthropic shadow is reflectively inconsistent
Christopher King · 29 Jun 2023 2:36 UTC · 43 points · 40 comments · 10 min read · LW link
Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn’t require knowing Occam’s razor
Christopher King · 18 Jun 2023 1:52 UTC · 39 points · 28 comments · 4 min read · LW link
Demystifying Born’s rule
Christopher King · 14 Jun 2023 3:16 UTC · 5 points · 26 comments · 3 min read · LW link
Current AI harms are also sci-fi
Christopher King · 8 Jun 2023 17:49 UTC · 26 points · 3 comments · 1 min read · LW link
Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program
Christopher King · 2 Jun 2023 21:54 UTC · 7 points · 4 comments · 16 min read · LW link
The unspoken but ridiculous assumption of AI doom: the hidden doom assumption
Christopher King · 1 Jun 2023 17:01 UTC · −9 points · 1 comment · 3 min read · LW link
[Question] What projects and efforts are there to promote AI safety research?
Christopher King · 24 May 2023 0:33 UTC · 4 points · 0 comments · 1 min read · LW link