Research Taste (Tag)
Last edit: 19 Aug 2022 20:05 UTC by Raemon

Research taste is the set of intuitions that guides researchers toward productive lines of inquiry.
Tips for Empirical Alignment Research
Ethan Perez · 29 Feb 2024 6:04 UTC · 152 points · 4 comments · 22 min read

Touch reality as soon as possible (when doing machine learning research)
LawrenceC · 3 Jan 2023 19:11 UTC · 113 points · 8 comments · 8 min read

Thomas Kwa’s research journal
Thomas Kwa and Adrià Garriga-alonso · 23 Nov 2023 5:11 UTC · 79 points · 1 comment · 6 min read

How I select alignment research projects
Ethan Perez, Henry Sleight and Mikita Balesni · 10 Apr 2024 4:33 UTC · 35 points · 4 comments · 24 min read

How to do conceptual research: Case study interview with Caspar Oesterheld
Chi Nguyen · 14 May 2024 15:09 UTC · 48 points · 5 comments · 9 min read

Some (problematic) aesthetics of what constitutes good work in academia
Steven Byrnes · 11 Mar 2024 17:47 UTC · 147 points · 12 comments · 12 min read

Difficulty classes for alignment properties
Jozdien · 20 Feb 2024 9:08 UTC · 34 points · 5 comments · 2 min read

Advice I found helpful in 2022
Akash · 28 Jan 2023 19:48 UTC · 36 points · 5 comments · 2 min read

Which ML skills are useful for finding a new AIS research agenda?
Yonatan Cale · 9 Feb 2023 13:09 UTC · 16 points · 1 comment · 1 min read

Qualities that alignment mentors value in junior researchers
Akash · 14 Feb 2023 23:27 UTC · 88 points · 14 comments · 3 min read

A model of research skill
L Rudolf L · 8 Jan 2024 0:13 UTC · 55 points · 6 comments · 12 min read · (www.strataoftheworld.com)

[Question] Build knowledge base first, or backchain?
Nicholas / Heather Kross · 17 Jul 2023 3:44 UTC · 11 points · 5 comments · 1 min read

How to do theoretical research, a personal perspective
Mark Xu · 19 Aug 2022 19:41 UTC · 91 points · 6 comments · 15 min read

How to become an AI safety researcher
peterbarnett · 15 Apr 2022 11:41 UTC · 23 points · 0 comments · 14 min read

How I Formed My Own Views About AI Safety
Neel Nanda · 27 Feb 2022 18:50 UTC · 64 points · 6 comments · 13 min read · (www.neelnanda.io)

Nuclear Espionage and AI Governance
GAA · 4 Oct 2021 23:04 UTC · 32 points · 5 comments · 24 min read

My research methodology
paulfchristiano · 22 Mar 2021 21:20 UTC · 159 points · 38 comments · 16 min read · 1 review · (ai-alignment.com)

11 heuristics for choosing (alignment) research projects
Akash and danesherbs · 27 Jan 2023 0:36 UTC · 50 points · 5 comments · 1 min read

Attributes of successful professors
electroswing · 13 Apr 2023 20:38 UTC · 13 points · 8 comments · 5 min read

Research Principles for 6 Months of AI Alignment Studies
Shoshannah Tekofsky · 2 Dec 2022 22:55 UTC · 23 points · 3 comments · 6 min read

ML Safety Research Advice—GabeM
Gabe M · 23 Jul 2024 1:45 UTC · 28 points · 2 comments · 14 min read · (open.substack.com)

Lessons After a Couple Months of Trying to Do ML Research
KevinRoWang · 22 Mar 2022 23:45 UTC · 70 points · 8 comments · 6 min read

Questions about Value Lock-in, Paternalism, and Empowerment
Sam F. Brown · 16 Nov 2022 15:33 UTC · 13 points · 2 comments · 12 min read · (sambrown.eu)