
riceissa (Issa Rice)

Karma: 1,998

I am Issa Rice. https://issarice.com/

[Question] Why is capnometry biofeedback not more widely known?

riceissa · 21 Dec 2023 2:42 UTC
20 points
21 comments · 4 min read · LW link

Idea: medical hypotheses app for mysterious chronic illnesses

riceissa · 19 May 2023 20:49 UTC
63 points
8 comments · 3 min read · LW link

Exposition as science: some ideas for how to make progress

riceissa · 8 Jul 2022 1:29 UTC
21 points
1 comment · 8 min read · LW link

How to get people to produce more great exposition? Some strategies and their assumptions

riceissa · 25 May 2022 22:30 UTC
26 points
10 comments · 3 min read · LW link

A scheme for sampling durable goods first-hand before making a purchase

riceissa · 17 Feb 2022 23:36 UTC
29 points
7 comments · 2 min read · LW link

Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety

27 Jan 2022 13:13 UTC
27 points
0 comments · 1 min read · LW link
(arxiv.org)

Analogies and General Priors on Intelligence

20 Aug 2021 21:03 UTC
57 points
12 comments · 14 min read · LW link

riceissa’s Shortform

riceissa · 27 Mar 2021 4:51 UTC
6 points
41 comments · 1 min read · LW link

Timeline of AI safety

riceissa · 7 Feb 2021 22:29 UTC
73 points
6 comments · 2 min read · LW link
(timelines.issarice.com)

Discovery fiction for the Pythagorean theorem

riceissa · 19 Jan 2021 2:09 UTC
16 points
1 comment · 4 min read · LW link

Gems from the Wiki: Do The Math, Then Burn The Math and Go With Your Gut

17 Sep 2020 22:41 UTC
53 points
3 comments · 3 min read · LW link
(www.lesswrong.com)

Plausible cases for HRAD work, and locating the crux in the “realism about rationality” debate

riceissa · 22 Jun 2020 1:10 UTC
85 points
15 comments · 10 min read · LW link

[Question] Source code size vs learned model size in ML and in humans?

riceissa · 20 May 2020 8:47 UTC
11 points
6 comments · 1 min read · LW link

[Question] How does iterated amplification exceed human abilities?

riceissa · 2 May 2020 23:44 UTC
19 points
9 comments · 2 min read · LW link

[Question] What are some exercises for building/generating intuitions about key disagreements in AI alignment?

riceissa · 16 Mar 2020 7:41 UTC
18 points
2 comments · 1 min read · LW link

[Question] What does Solomonoff induction say about brain duplication/consciousness?

riceissa · 2 Mar 2020 23:07 UTC
10 points
13 comments · 2 min read · LW link

[Question] Is it harder to become a MIRI mathematician in 2019 compared to in 2013?

riceissa · 29 Oct 2019 3:28 UTC
65 points
3 comments · 3 min read · LW link

Deliberation as a method to find the “actual preferences” of humans

riceissa · 22 Oct 2019 9:23 UTC
23 points
5 comments · 9 min read · LW link

[Question] What are the differences between all the iterative/recursive approaches to AI alignment?

riceissa · 21 Sep 2019 2:09 UTC
30 points
14 comments · 2 min read · LW link

Inversion of theorems into definitions when generalizing

riceissa · 4 Aug 2019 17:44 UTC
25 points
3 comments · 5 min read · LW link