
AnnaSalamon

Karma: 17,263

Believing In

AnnaSalamon · 8 Feb 2024 7:06 UTC
202 points
49 comments · 13 min read · LW link

[Question] Which parts of the existing internet are already likely to be in (GPT-5/other soon-to-be-trained LLMs)’s training corpus?

AnnaSalamon · 29 Mar 2023 5:17 UTC
49 points
2 comments · 1 min read · LW link

[Question] Are there specific books that it might slightly help alignment to have on the internet?

AnnaSalamon · 29 Mar 2023 5:08 UTC
78 points
25 comments · 1 min read · LW link

What should you change in response to an “emergency”? And AI risk

AnnaSalamon · 18 Jul 2022 1:11 UTC
328 points
60 comments · 6 min read · LW link · 1 review

Comment reply: my low-quality thoughts on why CFAR didn’t get farther with a “real/efficacious art of rationality”

AnnaSalamon · 9 Jun 2022 2:12 UTC
253 points
62 comments · 17 min read · LW link · 1 review

Narrative Syncing

AnnaSalamon · 1 May 2022 1:48 UTC
117 points
48 comments · 7 min read · LW link · 1 review

The feeling of breaking an Overton window

AnnaSalamon · 17 Feb 2021 5:31 UTC
128 points
29 comments · 1 min read · LW link · 1 review

“PR” is corrosive; “reputation” is not.

AnnaSalamon · 14 Feb 2021 3:32 UTC
308 points
93 comments · 2 min read · LW link · 3 reviews

[Question] Where do (did?) stable, cooperative institutions come from?

AnnaSalamon · 3 Nov 2020 22:14 UTC
150 points
72 comments · 4 min read · LW link

Reality-Revealing and Reality-Masking Puzzles

AnnaSalamon · 16 Jan 2020 16:15 UTC
258 points
57 comments · 13 min read · LW link · 1 review

We run the Center for Applied Rationality, AMA

AnnaSalamon · 19 Dec 2019 16:34 UTC
108 points
324 comments · 1 min read · LW link

AnnaSalamon’s Shortform

AnnaSalamon · 25 Jul 2019 5:24 UTC
20 points
12 comments · 1 min read · LW link

“Flinching away from truth” is often about *protecting* the epistemology

AnnaSalamon · 20 Dec 2016 18:39 UTC
222 points
58 comments · 7 min read · LW link

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

AnnaSalamon · 12 Dec 2016 19:39 UTC
64 points
38 comments · 5 min read · LW link

CFAR’s new mission statement (on our website)

AnnaSalamon · 10 Dec 2016 8:37 UTC
15 points
14 comments · 1 min read · LW link
(www.rationality.org)

CFAR’s new focus, and AI Safety

AnnaSalamon · 3 Dec 2016 18:09 UTC
51 points
88 comments · 3 min read · LW link

On the importance of Less Wrong, or another single conversational locus

AnnaSalamon · 27 Nov 2016 17:13 UTC
173 points
365 comments · 4 min read · LW link

Several free CFAR summer programs on rationality and AI safety

AnnaSalamon · 14 Apr 2016 2:35 UTC
30 points
14 comments · 2 min read · LW link

Consider having sparse insides

AnnaSalamon · 1 Apr 2016 0:07 UTC
26 points
25 comments · 1 min read · LW link

The correct response to uncertainty is *not* half-speed

AnnaSalamon · 15 Jan 2016 22:55 UTC
258 points
45 comments · 3 min read · LW link