habryka (Oliver Habryka)
Karma: 31,783
Running Lightcone Infrastructure, which runs LessWrong. You can reach me at habryka@lesswrong.com.
Posts
- Goal oriented cognition in “a single forward pass” · dxu and habryka · 22 Apr 2024 5:03 UTC · 18 points · 11 comments · 26 min read
- Express interest in an “FHI of the West” · habryka · 18 Apr 2024 3:32 UTC · 251 points · 37 comments · 3 min read
- Structured Transparency: a framework for addressing use/mis-use trade-offs when sharing information · habryka · 11 Apr 2024 18:35 UTC · 23 points · 0 comments · 2 min read · (arxiv.org)
- LessWrong’s (first) album: I Have Been A Good Bing · habryka and kave · 1 Apr 2024 7:33 UTC · 518 points · 153 comments · 11 min read
- How useful is “AI Control” as a framing on AI X-Risk? · habryka and ryan_greenblatt · 14 Mar 2024 18:06 UTC · 67 points · 4 comments · 34 min read
- Open Thread Spring 2024 · habryka · 11 Mar 2024 19:17 UTC · 22 points · 74 comments · 1 min read
- [Question] Is a random box of gas predictable after 20 seconds? · Thomas Kwa and habryka · 24 Jan 2024 23:00 UTC · 37 points · 35 comments · 1 min read
- [Question] Will quantum randomness affect the 2028 election? · Thomas Kwa and habryka · 24 Jan 2024 22:54 UTC · 63 points · 48 comments · 1 min read
- Vote in the LessWrong review! (LW 2022 Review voting phase) · habryka · 17 Jan 2024 7:22 UTC · 26 points · 9 comments · 2 min read
- AI Impacts 2023 Expert Survey on Progress in AI · habryka · 5 Jan 2024 19:42 UTC · 28 points · 1 comment · 7 min read · (wiki.aiimpacts.org)
- Originality vs. Correctness · alkjash and habryka · 6 Dec 2023 18:51 UTC · 60 points · 16 comments · 25 min read
- The LessWrong 2022 Review · habryka · 5 Dec 2023 4:00 UTC · 115 points · 43 comments · 4 min read
- Open Thread – Winter 2023/2024 · habryka · 4 Dec 2023 22:59 UTC · 35 points · 160 comments · 1 min read
- Complex systems research as a field (and its relevance to AI Alignment) · Nora_Ammann and habryka · 1 Dec 2023 22:10 UTC · 64 points · 9 comments · 19 min read
- How useful is mechanistic interpretability? · ryan_greenblatt, Neel Nanda, Buck and habryka · 1 Dec 2023 2:54 UTC · 155 points · 53 comments · 25 min read
- My techno-optimism [By Vitalik Buterin] · habryka · 27 Nov 2023 23:53 UTC · 102 points · 16 comments · 2 min read · (www.lesswrong.com)
- “Epistemic range of motion” and LessWrong moderation · habryka and Gabriel Alfour · 27 Nov 2023 21:58 UTC · 60 points · 3 comments · 12 min read
- Debate helps supervise human experts [Paper] · habryka · 17 Nov 2023 5:25 UTC · 29 points · 6 comments · 1 min read · (github.com)
- How much to update on recent AI governance moves? · habryka and So8res · 16 Nov 2023 23:46 UTC · 109 points · 4 comments · 29 min read
- AI Timelines · habryka, Daniel Kokotajlo, Ajeya Cotra and Ege Erdil · 10 Nov 2023 5:28 UTC · 252 points · 74 comments · 51 min read