Eleni Angelou

Karma: 82

On taking AI risk seriously

Eleni Angelou · 13 Mar 2023 5:50 UTC
5 points
0 comments · 1 min read · LW link
(www.nytimes.com)

Everything’s normal until it’s not

Eleni Angelou · 10 Mar 2023 2:02 UTC
7 points
0 comments · 3 min read · LW link

Questions about AI that bother me

Eleni Angelou · 5 Feb 2023 5:04 UTC
13 points
6 comments · 2 min read · LW link

[Question] Should AI writers be prohibited in education?

Eleni Angelou · 17 Jan 2023 0:42 UTC
6 points
2 comments · 1 min read · LW link

Progress and research disruptiveness

Eleni Angelou · 12 Jan 2023 3:51 UTC
3 points
2 comments · 1 min read · LW link
(www.nature.com)

AI Safety Camp: Machine Learning for Scientific Discovery

Eleni Angelou · 6 Jan 2023 3:21 UTC
2 points
0 comments · 1 min read · LW link

[Question] Book recommendations for the history of ML?

Eleni Angelou · 28 Dec 2022 23:50 UTC
2 points
2 comments · 1 min read · LW link

Why I think that teaching philosophy is high impact

Eleni Angelou · 19 Dec 2022 3:11 UTC
5 points
0 comments · 2 min read · LW link

My summary of “Pragmatic AI Safety”

Eleni Angelou · 5 Nov 2022 12:54 UTC
2 points
0 comments · 5 min read · LW link

Against the weirdness heuristic

Eleni Angelou · 2 Oct 2022 19:41 UTC
17 points
3 comments · 2 min read · LW link

There is no royal road to alignment

Eleni Angelou · 18 Sep 2022 3:33 UTC
4 points
2 comments · 3 min read · LW link

It’s (not) how you use it

Eleni Angelou · 7 Sep 2022 17:15 UTC
8 points
1 comment · 2 min read · LW link

Three scenarios of pseudo-alignment

Eleni Angelou · 3 Sep 2022 12:47 UTC
9 points
0 comments · 3 min read · LW link

Alignment is hard. Communicating that might be harder

Eleni Angelou · 1 Sep 2022 16:57 UTC
7 points
8 comments · 3 min read · LW link

Who ordered alignment’s apple?

Eleni Angelou · 28 Aug 2022 4:05 UTC
6 points
3 comments · 3 min read · LW link

Alignment’s phlogiston

Eleni Angelou · 18 Aug 2022 22:27 UTC
10 points
2 comments · 2 min read · LW link