Research Taste
Tag
Last edit: 26 Jul 2025 19:31 UTC by Erin Robertson

Research Taste is the intuition that guides researchers towards productive lines of inquiry.
- Tips for Empirical Alignment Research, by Ethan Perez (29 Feb 2024 6:04 UTC; 182 points, 4 comments, 23 min read)
- Touch reality as soon as possible (when doing machine learning research), by LawrenceC (3 Jan 2023 19:11 UTC; 118 points, 9 comments, 8 min read; 1 review)
- Thomas Kwa’s research journal, by Thomas Kwa and Adrià Garriga-alonso (23 Nov 2023 5:11 UTC; 79 points, 1 comment, 6 min read)
- How I select alignment research projects, by Ethan Perez, Henry Sleight and Mikita Balesni (10 Apr 2024 4:33 UTC; 36 points, 4 comments, 24 min read)
- Advice I found helpful in 2022, by Orpheus16 (28 Jan 2023 19:48 UTC; 36 points, 5 comments, 2 min read)
- How to become an AI safety researcher, by peterbarnett (15 Apr 2022 11:41 UTC; 25 points, 0 comments, 14 min read)
- Nuclear Espionage and AI Governance, by Guive (4 Oct 2021 23:04 UTC; 34 points, 5 comments, 24 min read)
- Which ML skills are useful for finding a new AIS research agenda?, by Yonatan Cale (9 Feb 2023 13:09 UTC; 16 points, 1 comment, 1 min read)
- How to do conceptual research: Case study interview with Caspar Oesterheld, by Chi Nguyen (14 May 2024 15:09 UTC; 48 points, 5 comments, 9 min read)
- Some (problematic) aesthetics of what constitutes good work in academia, by Steven Byrnes (11 Mar 2024 17:47 UTC; 149 points, 12 comments, 12 min read)
- How to do theoretical research, a personal perspective, by Mark Xu (19 Aug 2022 19:41 UTC; 91 points, 6 comments, 15 min read)
- How I Formed My Own Views About AI Safety, by Neel Nanda (27 Feb 2022 18:50 UTC; 66 points, 6 comments, 13 min read) (www.neelnanda.io)
- A model of research skill, by L Rudolf L (8 Jan 2024 0:13 UTC; 61 points, 6 comments, 12 min read) (www.strataoftheworld.com)
- 11 heuristics for choosing (alignment) research projects, by Orpheus16 and danesherbs (27 Jan 2023 0:36 UTC; 50 points, 5 comments, 1 min read)
- Difficulty classes for alignment properties, by Jozdien (20 Feb 2024 9:08 UTC; 34 points, 5 comments, 2 min read)
- [Question] Build knowledge base first, or backchain?, by Nicholas Kross (17 Jul 2023 3:44 UTC; 11 points, 5 comments, 1 min read)
- My research methodology, by paulfchristiano (22 Mar 2021 21:20 UTC; 161 points, 38 comments, 16 min read; 1 review) (ai-alignment.com)
- Qualities that alignment mentors value in junior researchers, by Orpheus16 (14 Feb 2023 23:27 UTC; 88 points, 14 comments, 3 min read)
- Academia as a happy place?, by jow and pchvykov (24 Apr 2025 14:03 UTC; 9 points, 0 comments, 19 min read)
- The Road to Evil Is Paved with Good Objectives: Framework to Classify and Fix Misalignments, by Shivam (30 Jan 2025 2:44 UTC; 1 point, 0 comments, 11 min read)
- Lessons After a Couple Months of Trying to Do ML Research, by RowanWang (22 Mar 2022 23:45 UTC; 71 points, 8 comments, 6 min read)
- Tips On Empirical Research Slides, by James Chua, John Hughes, Ethan Perez and Owain_Evans (8 Jan 2025 5:06 UTC; 96 points, 4 comments, 6 min read)
- You Can’t Skip Exploration: Why understanding experimentation and taste is key to understanding AI, by Oliver Sourbut (21 May 2025 16:08 UTC; 20 points, 0 comments, 11 min read) (www.oliversourbut.net)
- Research Principles for 6 Months of AI Alignment Studies, by Shoshannah Tekofsky (2 Dec 2022 22:55 UTC; 23 points, 3 comments, 6 min read)
- ML Safety Research Advice, by Gabe M (23 Jul 2024 1:45 UTC; 31 points, 2 comments, 14 min read) (open.substack.com)
- The Alignment Mapping Program: Forging Independent Thinkers in AI Safety: A Pilot Retrospective, by Alvin Ånestrand, Jonas Hallgren and Utilop (10 Jan 2025 16:22 UTC; 31 points, 0 comments, 4 min read)
- Questions about Value Lock-in, Paternalism, and Empowerment, by Sam F. Brown (16 Nov 2022 15:33 UTC; 13 points, 2 comments, 12 min read) (sambrown.eu)