
Research Taste

Last edit: 19 Aug 2022 20:05 UTC by Raemon

Research Taste refers to the intuitions that guide researchers toward productive lines of inquiry.

Tips for Empirical Alignment Research

Ethan Perez, 29 Feb 2024 6:04 UTC
152 points
4 comments, 22 min read

Touch reality as soon as possible (when doing machine learning research)

LawrenceC, 3 Jan 2023 19:11 UTC
113 points
8 comments, 8 min read

Thomas Kwa’s research journal

23 Nov 2023 5:11 UTC
79 points
1 comment, 6 min read

How I select alignment research projects

10 Apr 2024 4:33 UTC
35 points
4 comments, 24 min read

How to do conceptual research: Case study interview with Caspar Oesterheld

Chi Nguyen, 14 May 2024 15:09 UTC
48 points
5 comments, 9 min read

Some (problematic) aesthetics of what constitutes good work in academia

Steven Byrnes, 11 Mar 2024 17:47 UTC
147 points
12 comments, 12 min read

Difficulty classes for alignment properties

Jozdien, 20 Feb 2024 9:08 UTC
34 points
5 comments, 2 min read

Advice I found helpful in 2022

Akash, 28 Jan 2023 19:48 UTC
36 points
5 comments, 2 min read

Which ML skills are useful for finding a new AIS research agenda?

Yonatan Cale, 9 Feb 2023 13:09 UTC
16 points
1 comment, 1 min read

Qualities that alignment mentors value in junior researchers

Akash, 14 Feb 2023 23:27 UTC
88 points
14 comments, 3 min read

A model of research skill

L Rudolf L, 8 Jan 2024 0:13 UTC
55 points
6 comments, 12 min read
(www.strataoftheworld.com)

[Question] Build knowledge base first, or backchain?

Nicholas / Heather Kross, 17 Jul 2023 3:44 UTC
11 points
5 comments, 1 min read

How to do theoretical research, a personal perspective

Mark Xu, 19 Aug 2022 19:41 UTC
91 points
6 comments, 15 min read

How to become an AI safety researcher

peterbarnett, 15 Apr 2022 11:41 UTC
23 points
0 comments, 14 min read

How I Formed My Own Views About AI Safety

Neel Nanda, 27 Feb 2022 18:50 UTC
64 points
6 comments, 13 min read
(www.neelnanda.io)

Nuclear Espionage and AI Governance

GAA, 4 Oct 2021 23:04 UTC
32 points
5 comments, 24 min read

My research methodology

paulfchristiano, 22 Mar 2021 21:20 UTC
159 points
38 comments, 16 min read, 1 review
(ai-alignment.com)

11 heuristics for choosing (alignment) research projects

27 Jan 2023 0:36 UTC
50 points
5 comments, 1 min read

Attributes of successful professors

electroswing, 13 Apr 2023 20:38 UTC
13 points
8 comments, 5 min read

Research Principles for 6 Months of AI Alignment Studies

Shoshannah Tekofsky, 2 Dec 2022 22:55 UTC
23 points
3 comments, 6 min read

ML Safety Research Advice - GabeM

Gabe M, 23 Jul 2024 1:45 UTC
28 points
2 comments, 14 min read
(open.substack.com)

Lessons After a Couple Months of Trying to Do ML Research

KevinRoWang, 22 Mar 2022 23:45 UTC
70 points
8 comments, 6 min read

Questions about Value Lock-in, Paternalism, and Empowerment

Sam F. Brown, 16 Nov 2022 15:33 UTC
13 points
2 comments, 12 min read
(sambrown.eu)