# Buck (Buck Shlegeris)

Karma: 512
• I’ve now made a Guesstimate here. I suspect that it is very bad and dumb; please make your own that is better than mine. I’m probably not going to fix problems with mine. Some people like Daniel Filan are confused by what my model means; I am like 50-50 on whether my model is really dumb or just confusing to read.

Also don’t understand this part. “4x as many mild cases as severe cases” is compatible with what I assumed (10%-20% of all cases end up severe or critical) but where does 3% come from?

Yeah my text was wrong here; I meant that I think you get 4x as many unnoticed infections as confirmed infections, then 10-20% of confirmed cases end up severe or critical.

• Oh yeah I’m totally wrong there. I don’t have time to correct this now. Some helpful onlooker should make a Guesstimate for all this.

• Epistemic status: I don’t really know what I’m talking about. I am not at all an expert here (though I have been talking to some of my more expert friends about this).

EDIT: I now have a Guesstimate model here, but its results don’t really make sense. I encourage others to make their own.

Here’s my model: To get such a large death toll, there would need to be lots of people who need oxygen all at once and who can’t get it. So we need to multiply the proportion of people who might be infected all at once by the fatality rate for such people. I’m going to use point estimates here and note that they look way lower than yours; this should probably be a Guesstimate model.

Fatality rate

This comment suggests maybe 85% fatality of confirmed cases if they don’t have a ventilator, and 75% without oxygen. EDIT: This is totally wrong, see replies. I will fix it later. Idk what it does to the bottom line.

But there are plausibly way more mild cases than confirmed cases. In places with aggressive testing, like the Diamond Princess and South Korea, you see much lower fatality rates, which suggests that lots of cases are mild and therefore don’t get confirmed. So plausibly there are 4x as many mild cases as confirmed cases. This gets us to like a 3% fatality rate (again assuming no supplemental oxygen, which I don’t think is clear; I expect someone else to be able to make progress on forecasting this if they want).
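As a sanity check, here is a minimal sketch of one way a figure near 3% could fall out of these point estimates. All the specific numbers are assumptions pulled from the surrounding comments (which the author flags as possibly wrong), not a definitive reconstruction:

```python
# Hedged sketch: every number here is an illustrative point estimate
# from the surrounding comments, not a vetted epidemiological figure.
severe_fraction_of_confirmed = 0.15  # 10-20% of confirmed cases severe/critical
fatality_if_no_oxygen = 0.85         # assumed fatality for those cases without oxygen
mild_per_confirmed = 4               # 4x as many unnoticed cases as confirmed

# Fatality among confirmed cases, then dilute by the unnoticed cases.
fatality_among_confirmed = severe_fraction_of_confirmed * fatality_if_no_oxygen
overall_fatality = fatality_among_confirmed / (1 + mild_per_confirmed)
print(f"{overall_fatality:.1%}")  # 2.5%, in the rough vicinity of the 3% in the text
```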

How many people get it at once

(If we assume that like 1000 people in the US currently have it, and the doubling time is 5 days, then peak time is like 3 months away.)
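The “3 months” figure can be checked with doubling arithmetic; a quick sketch, assuming roughly 330 million people in the US and treating “peak” loosely as the point where the epidemic has run through most of the population:

```python
import math

current_cases = 1_000          # assumed current US case count
us_population = 330_000_000    # rough US population
doubling_time_days = 5

# Number of doublings needed to go from 1000 cases to the whole
# population, converted into days at one doubling per 5 days.
doublings = math.log2(us_population / current_cases)
days = doublings * doubling_time_days
print(round(days))  # 92, i.e. about 3 months
```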

To get to overall 2.5% fatality, you need more than 80% of living humans to get it, in a big clump such that they don’t have oxygen access. This probably won’t happen (20%), because of arguments like the following:

• This doesn’t seem to have happened in China, so it seems possible to prevent.

• China is probably unusually good at handling this, but even if only China does this

• Flu is spread out over a few months, and it’s more transmissible than this, and not everyone gets it. (Maybe that’s because of immunity to flu from previous flus?)

• If the fatality rate looks on the high end, people will try harder to not get it.

Other factors that discount it

• The warm weather might make it get a lot less bad. (10% hail mary?)

• Effective countermeasures might be invented in the next few months. E.g. we might notice that some existing antiviral is helpful. People are testing a bunch of these, and there are some that might be effective. (20% hail mary?)

Conclusion

This overall adds up to like 20% * (1-0.1-0.2) = 14% chance of 2.5% mortality, based on multiplications of point estimates which I’m sure are invalid.

• Just for the record, I think that this estimate is pretty high and I’d be pretty surprised if it were true; I’ve talked to a few biosecurity friends about this and they thought it was too high. I’m worried that this answer has been highly upvoted even though lots of people think it’s wrong. I’d be excited for more commenters to give their bottom-line predictions about this, so that it’s easier to see the spread.

• (I’m unsure whether I should write this comment referring to the author of this post in second or third person; I think I’m going to go with third person, though it feels a bit awkward. Arthur reviewed this comment before I posted it.)

Here are a couple of clarifications about things in this post, which might be relevant for people who are using it to learn about the MIRI recruiting process. Note that I’m the MIRI recruiter Arthur describes working with.

General comments:

I think Arthur is a really smart, good programmer. Arthur doesn’t have as much background with AI safety stuff as many people who I consider as candidates for MIRI work, but it seemed worth spending effort on bringing Arthur to AIRCS etc. because it would be really cool if it worked out.

In this post, Arthur reports a variety of people as saying things that I think are somewhat misinterpreted, and I disagree with several of the things he describes them as saying.

I still don’t understand that: what’s the point of inviting me if the test fails? It would appear more cost-efficient to wait until after the test to decide whether they want me to come or not. (I don’t think I ever asked this out loud; I was already happy to have a free trip to California.)

I thought it was very likely Arthur would do well on the two-day project (he did).

I do not wish to disclose how much I have been paid, but I’ll state that two hours at that rate was more than a day at the French PhD rate. I didn’t even ask to be paid; I hadn’t even thought that being paid for a job interview was possible.

It’s considered good practice to pay people to do work for trials; we paid Arthur a rate which is lower than you’d pay a Bay Area software engineer as a contractor, and I was getting Arthur to do somewhat unusually difficult (though unusually interesting) work.

I assume that if EA cares about animal suffering in itself, then using throwaways is less of a direct suffering factor.

Yep

So Anna Salamon gave us a rule: We don’t speak of AI safety to people who do not express the desire to hear about it. When I asked for more information, she specified that it is okay to mention the words “AI safety”, but not to give any details until the other person is sure they want to hear about it. In practice, this means it is okay to share a book/post on AI safety, but we should warn the person to read it only if they feel ready. Which leads to a related problem: some people have never experienced an existential crisis or anxiety attack in their life, so it’s all too possible they can’t really “be ready”.

I think this is a substantial misunderstanding of what Anna said. I don’t think she was trying to propose a rule that people should follow, and she definitely wasn’t explaining a rule of the AIRCS workshop or something; I think she was doing something a lot more like sharing her thoughts on how people should relate to AI risk. I might come back and edit this comment later to say more.

That means that, during circles, I was asked to be as honest as possible about my feelings while also being considered for an internship. This is extremely awkward.

For the record, I think that “being asked to be as honest as possible” is a pretty bad description of what circling is, though I’m sad that it came across this way to Arthur (I’ve already talked to him about this).

But just because they do not think of AIRCS as a job interview does not mean AIRCS is not a job interview. Case in point: half a week after the workshop, the recruiter told me that “After discussing some more, we decided that we don’t want to move forward with you right now”. So the workshop really was what led them to decide not to hire me.

For the record, the workshop indeed made the difference about whether we wanted to make Arthur an offer right then. I think this is totally reasonable: Arthur is a smart guy, but not that involved with the AI safety community. My best guess before the AIRCS workshop was that he wouldn’t be a good fit at MIRI immediately because of his insufficient background in AI safety, and at the AIRCS workshop I felt that this guess turned out to be right and the gamble hadn’t paid off (though I told Arthur, truthfully, that I hoped he’d keep in contact).

During a trip to the beach, I finally had the courage to tell the recruiter that AIRCS is quite complex for me to navigate, when it’s both a CFAR workshop and a job interview.

:( This is indeed awkward and I wish I knew how to do it better. My main strategy is to be as upfront and accurate with people as I can; AFAICT, my level of transparency with applicants is quite unusual. This often isn’t sufficient to make everything okay.

First: they could mention to people coming to AIRCS for a future job interview that some things will be awkward for them, but that they have the same workshop as everyone else so they’ll have to deal with it.

I think I do mention this (and am somewhat surprised that it was a surprise for Arthur).

Furthermore, I do understand why it’s generally a bad idea to tell unknown people in your buildings that they won’t have the job.

I wasn’t worried about Arthur destroying the AIRCS venue; I needed to confer with my coworkers before making a decision.

I do not believe that my first advice will be listened to. During a discussion on the last night, near the fire, the recruiter was talking with some other MIRI staff and participants. At some point they mentioned MIRI’s recruiting process. I think they were mentioning that they loved recruiting because it leads them to work with extremely interesting people, but that it’s hard to find them. Given that my goal was explicitly to be recruited, and that I didn’t have any answer yet, it was extremely awkward for me. I can’t state explicitly why; after all, I didn’t have to add anything to their remark. But even if I can’t explain why I think that, I still firmly believe that it’s the kind of thing a recruiter should avoid saying near their potential hire.

I don’t quite understand what Arthur’s complaint is here, though I agree that it’s awkward having people be at events with people who are considering hiring them.

MIRI here is an exception. I can see so many reasons not to hire me that the outcome was unsurprising. The process, and their considering me in the first place, was.

Arthur is really smart and it seemed worth getting him more involved in all this stuff.

• For the record, parts of that ratanon post seem extremely inaccurate to me; for example, the claim that MIRI people are deferring to Dario Amodei on timelines is not even remotely reasonable. So I wouldn’t take it that seriously.

• 5 Dec 2019 1:21 UTC

This policy itself is still a multilayer perceptron, which has no internal state, so we believe that in some cases the agent uses its arms to store information.

• Given a policy π we can directly search for an input on which it behaves a certain way.

(I’m sure this point is obvious to Paul, but it wasn’t to me.)

We can search for inputs on which a policy behaves badly, which is really helpful for verifying the worst case of a certain policy. But we can’t search for a policy which has a good worst case, because that would require using the black box inside the function passed to the black box, which we can’t do. I think you can also say this as “the black box is an NP oracle, not a Σ₂ oracle”.

This still means that we can build a system which in the worst case does nothing, rather than in the worst case being dangerous: we do whatever thing to get some policy, then we search for an input on which it behaves badly, and if one exists we don’t run the policy.
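A toy sketch of this do-nothing-in-the-worst-case scheme, with brute-force search over a small discrete input space standing in for the search black box (all the names here are made up for illustration):

```python
import itertools

def behaves_badly(policy, observation):
    # Assumed stand-in for "the policy does something dangerous here":
    # we arbitrarily call a negative output "bad".
    return policy(observation) < 0

def find_bad_input(policy, input_space):
    # Brute force plays the role of the search oracle over inputs.
    for obs in input_space:
        if behaves_badly(policy, obs):
            return obs
    return None

def safe_deploy(policy, input_space):
    # If any input makes the policy behave badly, refuse to run it
    # (worst case: do nothing). Otherwise, its worst case is verified.
    if find_bad_input(policy, input_space) is not None:
        return None
    return policy

inputs = list(itertools.product([0, 1], repeat=4))
good_policy = lambda obs: sum(obs)      # output is never negative
bad_policy = lambda obs: sum(obs) - 3   # negative on some inputs

print(safe_deploy(good_policy, inputs) is good_policy)  # True
print(safe_deploy(bad_policy, inputs) is None)          # True
```

Note that `safe_deploy` only ever checks a given policy; it never searches over policies, which matches the asymmetry described above.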

• [I’m not sure how good this is; it was interesting to me to think about; idk if it’s useful; I wrote it quickly.]

Over the last year, I internalized Bayes’ Theorem much more than I previously had; this led me to noticing that when I applied it in my life it tended to have counterintuitive results; after thinking about it for a while, I concluded that my intuitions were right and I was using Bayes wrong. (I’m going to call Bayes’ Theorem “Bayes” from now on.)

Before I can tell you about that, I need to make sure you’re thinking about Bayes in terms of ratios rather than fractions. Bayes is enormously easier to understand and use when described in terms of ratios. For example: Suppose that 1% of women have a particular type of breast cancer, and a mammogram is 20 times more likely to return a positive result if you do have breast cancer, and you want to know the probability that you have breast cancer given that positive result. The prior odds are 1:99 and the likelihood ratio is 20:1, so the posterior odds are 1:99 × 20 = 20:99, so you have a 20/(20+99) probability of having breast cancer.
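The ratio arithmetic above can be written out directly; a minimal sketch of the odds-form update for this example:

```python
# Odds-form Bayes: posterior odds = prior odds x likelihood ratio.
prior_odds = (1, 99)        # 1% base rate of this cancer
likelihood_ratio = (20, 1)  # positive result is 20x more likely given cancer

posterior_odds = (prior_odds[0] * likelihood_ratio[0],
                  prior_odds[1] * likelihood_ratio[1])
p = posterior_odds[0] / sum(posterior_odds)  # convert odds to a probability
print(posterior_odds, round(p, 3))  # (20, 99) 0.168
```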

I think that this is absurdly easier than using the fraction formulation. I think that teaching the fraction formulation is the single biggest didactic mistake that I am aware of in any field.

Anyway, a year or so ago I got into the habit of calculating things using Bayes whenever they came up in my life, and I quickly noticed that Bayes seemed surprisingly aggressive to me.

For example, the first time I went to the Hot Tubs of Berkeley, a hot tub rental place near my house, I saw a friend of mine there. I wondered how regularly he went there. Consider the hypotheses “he goes here three times a week” and “he goes here once a month”. The likelihood ratio is about 12x in favor of the former hypothesis. So if I was previously ten to one against the three-times-a-week hypothesis compared to the once-a-month hypothesis, I’d now be 12:10 = 6:5 in favor of it. This felt surprisingly high to me.
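For what it’s worth, the 12x figure follows from treating visits as uniformly random nights; a quick sketch of that calculation:

```python
# Probability of seeing my friend on a given night under each hypothesis,
# modeling visits as uniformly random nights (the assumption questioned below).
p_three_per_week = 3 / 7
p_once_per_month = 1 / 30

likelihood_ratio = p_three_per_week / p_once_per_month
print(round(likelihood_ratio, 1))  # 12.9, "about 12x"

# Prior odds of 1:10 against become roughly 12:10 = 6:5 in favor.
posterior_odds = likelihood_ratio / 10
print(round(posterior_odds, 2))  # 1.29
```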

(I have a more general habit of thinking about whether the results of calculations feel intuitively too low or too high to me; this has resulted in me noticing amusing inconsistencies in my numerical intuitions. For example, my intuitions say that $3.50 for ten photo prints is cheap, but 35c per print is kind of expensive.)

Another example: A while ago I walked through six cars of a train, which felt like an unusually long way to walk. But I realized that I’m 6x more likely to see someone who walks 6 cars than someone who walks 1.

In all these cases, Bayes’ Theorem suggested that I update further in the direction of the hypothesis favored by the likelihood ratio than I intuitively wanted to. After considering this a bit more, I came to the conclusion that my intuitions were directionally right; I was calculating the likelihood ratios in a biased way, and I was also bumping up against an inconsistency in how I estimated priors and how I estimated likelihood ratios. If you want, you might enjoy trying to guess what mistake I think I was making, before I spoil it for you.

Here’s the main mistake I think I was making. Remember the two hypotheses about my friend going to the hot tub place 3x a week vs once a month? I said that the likelihood ratio favored the first by 12x. I calculated this by assuming that in both cases, my friend visited the hot tub place on random nights. But in reality, when I’m asking whether my friend goes to the hot tub place 3x every week, I’m asking about the total probability of all hypotheses in which he visits the hot tub place 3x per week. There are a variety of such hypotheses, and when I construct them, I notice that some of them place a higher probability on me seeing my friend than the random-night hypothesis does.

For example, it was a Saturday night when I saw my friend there and started thinking about this. It seems kind of plausible that my friend goes once a month and 50% of the times he visits are on a Saturday night. If my friend went to the hot tub place three times a week on average, no more than a third of those visits could be on a Saturday night. I think there’s a general phenomenon where when I make a hypothesis class like “going once a month”, I neglect to think about the specific hypotheses in the class which make the observed data more likely. The hypothesis class offers a tempting way to calculate the likelihood, but it’s in fact a trap.

There’s a general rule here, something like: when you see something happen that a hypothesis class thought was unlikely, you update a lot towards hypotheses in that class which gave it unusually high likelihood. And this next part is something that I’ve noticed, rather than something that follows from the math, but it seems like most of the time when I make up hypothesis classes, something like this happens, where I initially calculate the likelihood to be lower than it is, and the likelihoods of different hypothesis classes are closer than they would otherwise be.

(I suspect that the concept of a maximum entropy hypothesis is relevant. For every hypothesis class, there’s a maximum entropy (aka maxent) hypothesis, which is the hypothesis which is maximally uncertain subject to the constraints of the hypothesis class. E.g. the maximum entropy hypothesis for the class “my friend visits the hot tub place three times a month on average” is the hypothesis where the probability of my friend visiting the hot tub place on any given day is equal and uncorrelated. In my experience, in real-world cases, hypothesis classes tend to contain non-maxent hypotheses which fit the data much better. In general, for a statistical problem, these hypotheses don’t do better than the maxent hypothesis; I don’t know why they tend to do better in problems I think about.)

Another thing causing my posteriors to be excessively biased towards low-prior high-likelihood hypotheses is that priors tend to be more subjective to estimate than likelihoods are. I think I’m probably underconfident in assigning extremely high or low probabilities to hypotheses, and this means that when I see something that looks like moderate evidence of an extremely unlikely event, the likelihood ratio is more extreme than the prior, leading me to have a counterintuitively high posterior on the low-prior hypothesis. I could get around this by being more confident in my probability estimates at the 98% or 99% level, but it takes a really long time to become calibrated on those.

• Minor point: I think asteroid strikes are probably very highly correlated between Everett branches (though maybe the timing of spotting an asteroid on a collision course is variable).

• A couple weeks ago I spent an hour talking over video chat with Daniel Cantu, a UCLA neuroscience postdoc who I hired on Wyzant.com to spend an hour answering a variety of questions about neuroscience I had. (Thanks Daniel for reviewing this blog post for me!) The most interesting thing I learned is that I had quite substantially misunderstood the connection between convolutional neural nets and the human visual system. People claim that these are somewhat bio-inspired, and that if you look at early layers of the visual cortex you’ll find that it operates kind of like the early layers of a CNN, and so on. The claim that the visual system works like a CNN didn’t quite make sense to me though.
According to my extremely rough understanding, biological neurons operate kind of like the artificial neurons in a fully connected neural net layer: they have some input connections and a nonlinearity and some output connections, and they have some kind of mechanism for Hebbian learning or backpropagation or something. But that story doesn’t seem to have a mechanism for how neurons do weight tying, which to me is the key feature of CNNs.

Daniel claimed that indeed human brains don’t have weight tying, and we achieve the efficiency gains over dense neural nets by two other mechanisms instead.

Firstly, the early layers of the visual cortex are set up to recognize particular low-level visual features like edges and motion, but this is largely genetically encoded rather than learned with weight sharing. One way that we know this is that mice develop a lot of these features before their eyes open. These low-level features can be reinforced by positive signals from later layers, like other neurons, but these updates aren’t done with weight tying. So the weight sharing and learning here is done at the genetic level.

Secondly, he thinks that we get around the need for weight sharing at later levels by not trying to be able to recognize complicated details with different neurons. Our vision is way more detailed in the center of our field of view than around the edges, and if we need to look at something closely we move our eyes over it. He claims that this gets around the need to have weight tying, because we only need to be able to recognize images centered in one place.

I was pretty skeptical of this claim at first. I pointed out that I can in fact read letters that are a variety of distances from the center of my visual field; his guess is that I learned to read all of these separately. I’m also kind of confused by how this story fits in with the fact that humans seem to relatively quickly learn to adapt to inversion goggles. I would love to check what some other people who know neuroscience think about this.

I found this pretty mindblowing. I’ve heard people use CNNs as an example of how understanding brains helped us figure out how to do ML stuff better; people use this as an argument for why future AI advances will need to be based on improved neuroscience. This argument seems basically completely wrong if the story I presented here is correct.

• I think that an extremely effective way to get a better feel for a new subject is to pay an online tutor to answer your questions about it for an hour. It turns out that there are a bunch of grad students on Wyzant who mostly work tutoring high school math or whatever but who are very happy to spend an hour answering your weird questions. For example, a few weeks ago I had a session with a first-year Harvard synthetic biology PhD. Before the session, I spent a ten-minute timer writing down things that I currently didn’t get about biology. (This is an exercise worth doing even if you’re not going to have a tutor, IMO.) We spent the time talking about some mix of the questions I’d prepared, various tangents that came up during those explanations, and his sense of the field overall. I came away with a whole bunch of my minor misconceptions fixed, a few pointers to topics I wanted to learn more about, and a way better sense of what the field feels like and what the important problems and recent developments are.

There are a few reasons that having a paid tutor is a way better way of learning about a field than trying to meet people who happen to be in that field.
I really like that I’m paying them, so I can aggressively direct the conversation to wherever my curiosity is, whether it’s about their work or some minor point or whatever. I don’t need to worry about them getting bored with me, so I can just keep asking questions until I get something.

Conversational moves I particularly like:

• “I’m going to try to give the thirty-second explanation of how gene expression is controlled in animals; you should tell me the most important things I’m wrong about.”

• “Why don’t people talk about X?”

• “What should I read to learn more about X, based on what you know about me from this conversation?”

All of the above are way faster with a live human than with the internet.

I think that doing this for an hour or two weekly will make me substantially more knowledgeable over the next year.

Various other notes on online tutors:

• Online language tutors are super cheap; I had a Japanese tutor who was like $10 an hour. They’re a great way to practice conversation. They’re also super fun, IMO.

• Sadly, tutors from well-paid fields like programming or ML are way more expensive.

• If you wanted to save money, you could gamble more on less credentialed tutors, who are often $20-$40 an hour.

If you end up doing this, I’d love to hear about your experience.

# Buck’s Shortform

18 Aug 2019 7:22 UTC
12 points