# riceissa (Issa Rice)

Karma: 1,181

I am Issa Rice. https://issarice.com/

• Does this analysis take into account the fact that young people are most likely to die in ways that are unlikely to result in successful cryopreservation? If not, I’m wondering what the numbers look like if you re-run the simulation after taking this into account. As a young person myself, if I die in the next decade I think it is most likely to be from injury or suicide (neither of which seems likely to lead to successful cryopreservation), and this is one of the main reasons I have been cryocrastinating. See also this discussion.

• “Consume rationalist and effective altruist content” makes sense, but some more specific advice would be helpful, like what material to introduce, when, and how to encourage their interest if they’re not immediately interested. Have any parents done this, and can they share their experience?

I don’t have kids (yet) and I’m planning to delay any potential detailed research until I do have kids, so I don’t have specific advice. You could talk to James Miller and his son. Bryan Caplan seems to also be doing well in terms of keeping his sons’ views similar to his own; he does homeschool, but maybe you could learn something from looking at what he does anyway. There are a few other rationalist parents, but I haven’t seen any detailed info on what they do in terms of introducing rationality/EA stuff. Duncan Sabien has also thought a lot about teaching children, including designing a rationality camp for kids.

I can also give my own data point: Before discovering LessWrong (age 13-15?), I consumed a bunch of traditional rationality content like Feynman, popular science, online philosophy lectures, and lower quality online discourse like the xkcd forums. I discovered LessWrong when I was 14-16 (I don’t remember the exact date) and read a bunch of posts in an unstructured way (e.g. I think I read about half of the Sequences but not in order), and concurrently read things like GEB and started learning how to write mathematical proofs. That was enough to get me to stick around, and led to me discovering EA, getting much deeper into rationality, AI safety, LessWrongian philosophy, etc. I feel like I could have started much earlier though (maybe 9-10?) and that it was only because of my bad environment (in particular, having nobody tell me that LessWrong/Overcoming Bias existed) and poor English ability (I moved to the US when I was 10 and couldn’t read/write English at the level of my peers until age 16 or so) that I had to start when I did.

• Do you think that having your kids consume rationalist and effective altruist content and/or doing homeschooling/unschooling are insufficient for protecting your kids against mind viruses? If so, I want to understand why you think so (maybe you’re imagining some sort of AI-powered memetic warfare?).

Eliezer has a Facebook post where he talks about how being socialized by old science fiction was helpful for him.

For myself, I think the biggest factors that helped me become/stay sane were spending a lot of time on the internet (which led to me discovering LessWrong, effective altruism, Cognito Mentoring) and not talking to other kids (I didn’t have any friends from US public school during grades 4 to 11).

• If randomness/noise is a factor, there is also regression to the mean when the luck disappears in the following rounds.
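As a toy illustration of this point (purely hypothetical numbers: each round's score is modeled as skill plus independent Gaussian noise), the top scorers from one round reliably come back down toward their true skill level in the next:

```python
import random

random.seed(0)

# Each player has a fixed "skill"; each round's observed score is
# skill plus fresh, independent noise.
n = 10_000
skills = [random.gauss(0, 1) for _ in range(n)]
round1 = [s + random.gauss(0, 1) for s in skills]
round2 = [s + random.gauss(0, 1) for s in skills]

# Select the top 10% of round-1 performers (high skill *and* good luck).
top = sorted(range(n), key=lambda i: round1[i], reverse=True)[: n // 10]

mean_r1 = sum(round1[i] for i in top) / len(top)
mean_r2 = sum(round2[i] for i in top) / len(top)

# The luck doesn't repeat, so the group's round-2 average falls back
# toward its (still above-average) mean skill.
print(f"top group, round 1: {mean_r1:.2f}; round 2: {mean_r2:.2f}")
```

The selected group remains above average in round 2, since its skill is real, but the luck component of its round-1 scores doesn't carry over.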

• People I followed on Twitter for their credible takes on COVID-19 now sound insane. Sigh...

Are you saying that you initially followed people for their good thoughts on COVID-19, but (a) they have now switched to talking about other topics (George Floyd protests?) and their thoughts on these other topics are much worse, (b) their thoughts on COVID-19 became worse over time, (c) they made some COVID-19-related predictions/statements that now look obviously wrong, so that what they previously said sounds obviously wrong, or (d) something else?

• I’m not sure exactly what you’re trying to learn here, or what debate you’re trying to resolve. (Do you have a reference?)

I’m not entirely sure what I’m trying to learn here (which is part of what I was trying to express with the final paragraph of my question); this just seemed like a natural question to ask as I started thinking more about AI takeoff.

In “I Heart CYC”, Robin Hanson writes: “So we need to explicitly code knowledge by hand until we have enough to build systems effective at asking questions, reading, and learning for themselves. Prior AI researchers were too comfortable starting every project over from scratch; they needed to join to create larger integrated knowledge bases.”

It sounds like he expects early AGI systems to have lots of hand-coded knowledge, i.e. the minimum number of bits needed to specify a seed AI is large compared to what Eliezer Yudkowsky expects. (I wish people gave numbers for this so it’s clear whether there really is a disagreement.) It also sounds like Robin Hanson expects progress in AI capabilities to come from piling on more hand-coded content.

If ML source code is small and isn’t growing in size, that seems like evidence against Hanson’s view.

If ML source code is much smaller than the human genome, I can do a better job of visualizing the kind of AI development trajectory that Robin Hanson expects, where we stick in a bunch of content and share content among AI systems. If ML source code is already quite large, then it’s harder for me to visualize this (in this case, it seems like we don’t know what we’re doing, and progress will come from better understanding).

If the human genome is small, I think that makes a discontinuity in capabilities more likely. When I try to visualize where progress comes from in this case, it seems like it would come from a small number of insights. We can take some extreme cases: if we knew that the code for a seed AGI could fit in a 500-line Python program (I don’t know if anybody expects this), a FOOM seems more likely (there’s just less surface area for making lots of small improvements). Whereas if I knew that the smallest program for a seed AGI required gigabytes of source code, I feel like progress would come in smaller pieces.

If an algorithm uses data structures that are specifically suited to doing Task X, and a different set of data structures that are suited to Task Y, would you call that two units of content or two units of architecture?

I’m not sure. The content/architecture split doesn’t seem clean to me, and I haven’t seen anyone give a clear definition. Specialized data structures seem like a good example of something that’s in between.

• I’m confused about the tradeoff you’re describing. Why is the first bullet point “Generating better ground truth data”? It would make more sense to me if it said instead something like “Generating large amounts of non-ground-truth data”. In other words, the thing that amplification seems to be providing is access to more data (even if that data isn’t the ground truth that is provided by the original human).

Also, in the second bullet point, by “increasing the amount of data that you train on” I think you mean increasing the amount of data from the original human (rather than data coming from the amplified system), but I want to confirm.

Aside from that, I think my main confusion now is pedagogical (rather than technical). I don’t understand why the IDA post and paper don’t emphasize the efficiency of training. The post even says “Resource and time cost during training is a more open question; I haven’t explored the assumptions that would have to hold for the IDA training process to be practically feasible or resource-competitive with other AI projects”, which makes it sound like the efficiency of training isn’t important.

• And I’ve seen Eliezer make the claim a few times. But I can’t find an article describing the idea. Does anyone have a link?

Eliezer talks about this in Do Earths with slower economic growth have a better chance at FAI?, e.g.

Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI.

• The addition of the distillation step is an extra confounder, but we hope that it doesn’t distort anything too much—its purpose is to improve speed without affecting anything else (though in practice it will reduce capabilities somewhat).

I think this is the crux of my confusion, so I would appreciate it if you could elaborate on this. (Everything else in your answer makes sense to me.) In Evans et al., during the distillation step, the model learns to solve the difficult tasks directly by using example solutions from the amplification step. But if it can do that, then why can’t it also learn directly from examples provided by the human?

To use your analogy, I have no doubt that a team of Rohins or a single Rohin thinking for days can answer any question that I can (given a single day). But with distillation you’re saying there’s a robot that can learn to answer any question I can (given a single day) by first observing the team of Rohins for long enough. If the robot can do that, why can’t the robot also learn to do the same thing by observing me for long enough?

• I want to highlight a potential ambiguity, which is that “Newton’s approximation” is sometimes used to mean Newton’s method for finding roots, but the “Newton’s approximation” I had in mind is the one given in Tao’s Analysis I, Proposition 10.1.7, which is a way of restating the definition of the derivative. (Here is the statement in Tao’s notes in case you don’t have access to the book.)
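To spell out the disambiguation (from memory, so worth checking against the book): with f : X → R and x₀ a limit point of X, the proposition says that f is differentiable at x₀ with derivative L if and only if

```latex
% "Newton's approximation" in Tao's sense (Analysis I, Prop. 10.1.7),
% stated here from memory rather than quoted from the book:
\forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x \in X :\quad
|x - x_0| \le \delta
\;\Longrightarrow\;
\bigl| f(x) - \bigl( f(x_0) + L\,(x - x_0) \bigr) \bigr|
\le \varepsilon \, |x - x_0|
```

That is, the linear approximation f(x₀) + L(x − x₀) is accurate to better than linear order near x₀, which is why it is just a restatement of the definition of the derivative rather than a root-finding method.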

• What is the plan going forward for interviews? Are you planning to interview people who are more pessimistic?

• In the first categorization scheme, I’m also not exactly sure what nihilism is referring to. Do you know? Is it just referring to Error Theory (and maybe incoherentism)?

Yes, Huemer writes: “Nihilism (a.k.a. ‘the error theory’) holds that evaluative statements are generally false.”

Usually non-cognitivism would fall within nihilism, no?

I’m not sure how the term “nihilism” is typically used in philosophical writing, but if we take nihilism=error theory then it looks like non-cognitivism wouldn’t fall within nihilism (just like non-cognitivism doesn’t fall within error theory in your flowchart).

I actually don’t think either of these diagrams place Nihilism correctly.

For the first diagram, Huemer writes “if we say ‘good’ purports to refer to a property, some things have that property, and the property does not depend on observers, then we have moral realism.” So for Huemer, nihilism fails the middle condition, and is therefore classified as anti-realist. For the second diagram, see the quote below about dualism vs monism.

• I’m not super well acquainted with the monism/dualism distinction, but in the common conception don’t they both generally assume that morality is real, at least in some semi-robust sense?

Huemer writes:

Here, dualism is the idea that there are two fundamentally different kinds of facts (or properties) in the world: evaluative facts (properties) and non-evaluative facts (properties). Only the intuitionists embrace this.

Everyone else is a monist: they say there is only one fundamental kind of fact in the world, and it is the non-evaluative kind; there aren’t any value facts over and above the other facts. This implies that either there are no value facts at all (eliminativism), or value facts are entirely explicable in terms of non-evaluative facts (reductionism).

• Michael Huemer gives two taxonomies of metaethical views in section 1.4 of his book Ethical Intuitionism:

As the preceding section suggests, metaethical theories are traditionally divided first into realist and anti-realist views, and then into two forms of realism and three forms of anti-realism:

                       Naturalism
                      /
               Realism
              /       \
             /         Intuitionism
            /
            \
             \               Subjectivism
              \             /
               Anti-Realism—Non-Cognitivism
                            \
                             Nihilism


This is not the most illuminating way of classifying positions. It implies that the most fundamental division in metaethics is between realists and anti-realists over the question of objectivity. The dispute between naturalism and intuitionism is then seen as relatively minor, with the naturalists being much closer to the intuitionists than they are, say, to the subjectivists. That isn’t how I see things. As I see it, the most fundamental division in metaethics is between the intuitionists, on the one hand, and everyone else, on the other. I would classify the positions as follows:

             Dualism—Intuitionism
            /
           /                      Subjectivism
          /                      /
          \          Reductionism
           \        /            \
            \      /              Naturalism
             Monism
                   \              Non-Cognitivism
                    \            /
                     Eliminativism
                                 \
                                  Nihilism