Link Retrospective for 2020 Q1

Link post

Below is a list of interesting content I came across in the first quarter of 2020. I subdivided the links by the type of medium they use, namely auditory, textual, and visual. The links are in no particular order.

1. Auditory

D. McRaney. How a Divisive Photograph of a Perceptually Ambiguous Dress Led Two Researchers to Build the Nuclear Bomb of Cognitive Science out of Socks and Crocs. You Are Not So Smart. 2020:

...the science behind The Dress, why some people see it as black and blue, and others see it as white and gold. But it’s also about how the scientific investigation of The Dress led to the scientific investigation of socks and Crocs, and how the scientific investigation of socks and Crocs may be, as one researcher told me, the nuclear bomb of cognitive neuroscience.

When facing a novel and uncertain situation, the brain secretly disambiguates the ambiguous without letting you know it was ever uncertain in the first place, leading people who disambiguate differently to seem iNsAnE.

C. Connor. Psychoacoustics. YouTube. 2020:

00:00 Psychoacoustics is the study of the perception of sound. These videos attempt to gather all of the various interesting phenomena that fall into this category in one condensed series, including many neat illusions. We will also cover a few fascinating geeky topics relating to hearing.

MIT. fifteen.ai. 2020:

This is a text-to-speech tool that you can use to generate 44.1 kHz voices of various characters. The voices are generated in real time using multiple audio synthesis algorithms and customized deep neural networks trained on very little available data (between 15 and 120 minutes of clean dialogue for each character). This project demonstrates a significant reduction in the amount of audio required to realistically clone voices while retaining their affective prosodies.

2. Textual

AI Impacts. Interviews on Plausibility of AI Safety by Default. AI Impacts Blog. 2020:

AI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions.

Before taking into account other researchers’ opinions, Shah guesses an extremely rough ~90% chance that even without any additional intervention from current longtermists, advanced AI systems will not cause human extinction by adversarially optimizing against humans.

Christiano is more optimistic about the likely social consequences of advanced AI than some others in AI safety, in particular researchers at the Machine Intelligence Research Institute (MIRI).

Gleave thinks there’s a ~10% chance that AI safety is very hard in the way that MIRI would argue, a ~20-30% chance that AI safety will almost certainly be solved by default, and a remaining ~60-70% chance that what we’re working on actually has some impact.

Hanson thinks that now is the wrong time to put a lot of effort into addressing AI risk.

DeepMind. Outperforming the Human Atari Benchmark. DeepMind Blog. 2020:

The Atari57 suite of games is a long-standing benchmark to gauge agent performance across a wide range of tasks. We’ve developed Agent57, the first deep reinforcement learning agent to obtain a score that is above the human baseline on all 57 Atari 2600 games. Agent57 combines an algorithm for efficient exploration with a meta-controller that adapts the exploration and long vs. short-term behaviour of the agent.

Agent57 is built on the following observation: what if an agent can learn when it’s better to exploit, and when it’s better to explore? We introduced the notion of a meta-controller that adapts the exploration-exploitation trade-off, as well as a time horizon that can be adjusted for games requiring longer temporal credit assignment. With this change, Agent57 is able to get the best of both worlds: above human-level performance on both easy games and hard games.
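The meta-controller idea, learning which exploration setting to use rather than fixing it in advance, can be caricatured as a multi-armed bandit choosing among policy configurations. The sketch below is a toy illustration under my own assumptions, not DeepMind’s implementation: the arm settings, the stand-in environment, and the reward values are all made up.

```python
import math
import random

# Toy meta-controller: a UCB bandit picks among "arms", each pairing an
# exploration weight with a discount factor (time horizon). Illustrative
# sketch only -- not Agent57's actual algorithm or parameters.
arms = [(0.3, 0.99), (0.1, 0.997), (0.0, 0.9999)]  # (exploration_weight, discount)

counts = [0] * len(arms)
values = [0.0] * len(arms)

def select_arm(t):
    for j, c in enumerate(counts):
        if c == 0:
            return j  # try every arm once before using the UCB rule
    # UCB: estimated value plus an uncertainty bonus for rarely-tried arms
    return max(range(len(arms)),
               key=lambda j: values[j] + math.sqrt(2 * math.log(t) / counts[j]))

def update(j, episode_return):
    counts[j] += 1
    values[j] += (episode_return - values[j]) / counts[j]  # running mean

random.seed(0)
for t in range(1, 201):
    j = select_arm(t)
    # Stand-in environment: pretend arm 1's settings give the best return.
    reward = random.gauss([0.2, 1.0, 0.5][j], 0.1)
    update(j, reward)

best = max(range(len(arms)), key=lambda j: values[j])
```

After 200 simulated episodes, the bandit concentrates its pulls on the configuration that empirically earned the most, which is the behaviour the blog post describes: exploring heavily early, then settling on what works per game.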

T. Weiss et al. Perceptual Convergence of Multi-Component Mixtures in Olfaction Implies an Olfactory White. PNAS. 2012:

In vision, two mixtures, each containing an independent set of many different wavelengths, may produce a common color percept termed “white.” In audition, two mixtures, each containing an independent set of many different frequencies, may produce a common perceptual hum termed “white noise.” Visual and auditory whites emerge upon two conditions: when the mixture components span stimulus space, and when they are of equal intensity.

We conclude that a common olfactory percept, “olfactory white,” is associated with mixtures of ∼30 or more equal-intensity components that span stimulus space, implying that olfactory representations are of features of molecules rather than of molecular identity.
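The statistical intuition behind this convergence can be sketched with a toy model: if a mixture’s percept is (crudely) the average of its components’ features, then two independent mixtures look more alike as each includes more components spanning the space. The 10-dimensional feature space and the averaging rule below are my assumptions for illustration, not the paper’s analysis.

```python
import random

random.seed(42)
DIMS = 10  # toy feature space; the paper's odor space is much richer

def mixture_profile(n_components):
    # Each component is a random point in feature space; model the mixture
    # percept as the mean of its components' feature vectors.
    profile = [0.0] * DIMS
    for _ in range(n_components):
        comp = [random.random() for _ in range(DIMS)]
        for d in range(DIMS):
            profile[d] += comp[d] / n_components
    return profile

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def avg_distance(n_components, trials=30):
    # Average distance between two independent mixtures of the same size
    total = 0.0
    for _ in range(trials):
        total += distance(mixture_profile(n_components), mixture_profile(n_components))
    return total / trials

small = avg_distance(3)    # sparse mixtures: profiles stay far apart
large = avg_distance(40)   # dense mixtures: profiles converge toward "white"
```

By the law of large numbers, both dense mixtures’ profiles approach the same mean of the feature space, so their mutual distance shrinks roughly as one over the square root of the component count.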

3. Visual

3Blue1Brown. Simulating an Epidemic. YouTube. 2020:

01:12 These simulations represent what’s called an “SIR model,” meaning the population is broken up into three categories: those who are susceptible to the given disease, those who are infectious, and those who have recovered from the infection.

04:30 The first key takeaway to tuck away in your mind is just how sensitive this growth is to each parameter in our control. It’s not hard to imagine changing your daily habits in ways that multiply the number of people you interact with or that cut your probability of catching an infection in half.

09:00 A second key takeaway here is that changes in how many people slip through the tests cause disproportionately large changes to the total number of people infected.

21:22 After making all these, what I came away with more than anything was a deeper appreciation for disease control done right; for the inordinate value of early widespread testing and the ability to isolate cases; for the therapeutics that treat these cases; and most importantly, for how easy it is to underestimate all that value when times are good.
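The SIR dynamics underlying these simulations fit in a few lines: susceptibles become infectious in proportion to S×I contacts, and infectious people recover at a fixed rate. The sketch below uses illustrative parameter values of my choosing, not the ones from the video (which simulates individual agents rather than aggregate compartments).

```python
# Minimal discrete-time SIR model. beta controls infections per contact,
# gamma the recovery rate; fractions of the population, summing to 1.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200, dt=1.0):
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(steps):
        new_infections = beta * s * i * dt   # S meets I and gets infected
        new_recoveries = gamma * i * dt      # I recovers at rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
peak_infected = max(i for _, i, _ in history)
```

The sensitivity the video stresses shows up directly: halving `beta` (fewer or safer contacts) drops the epidemic below its growth threshold whenever beta/gamma falls under 1, which is why small habit changes move the outcome so much.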

3Blue1Brown. Bayes Theorem, and Making Probability Intuitive. YouTube. 2019:

00:00 The goal is for you to come away from this video understanding one of the most important formulas in all of probability, Bayes’ theorem. This formula is central to scientific discovery, it’s a core tool in machine learning and AI, and it’s even been used for treasure hunting, when in the 80s a small team led by Tommy Thompson used Bayesian search tactics to help uncover a ship that had sunk a century and a half earlier carrying what, in today’s terms, amounts to $700,000,000 worth of gold. So it’s a formula worth understanding.

08:44 This is sort of the distilled version of thinking with a representative sample, where we think with areas instead of counts, which is more flexible and easier to sketch on the fly. Rather than bringing to mind some specific number of examples, think of the space of all possibilities as a 1x1 square. Any event occupies some subset of this space, and the probability of that event can be thought of as the area of that subset.
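The area picture is Bayes’ theorem computed geometrically: slice the unit square by the prior, take the evidence area inside each slice, and the posterior is the share of total evidence area that lies in the hypothesis slice. The numbers below are hypothetical, chosen only to make the arithmetic visible.

```python
# Bayes' theorem as areas of a 1x1 square (hypothetical numbers, not the video's).
p_h = 0.1           # prior: width of the "hypothesis true" column
p_e_given_h = 0.8   # likelihood: height of the evidence region in that column
p_e_given_not_h = 0.2  # height of the evidence region in the other column

# Total evidence area = area in the H column + area in the not-H column.
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h

# Posterior = fraction of the evidence area lying in the H column.
p_h_given_e = p_h * p_e_given_h / p_e
```

Even with an 80% likelihood, the posterior lands near 31% here, because the not-H column is nine times wider; that base-rate effect is exactly what the area sketch makes hard to forget.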

Tania Lombrozo. Learning By Thinking. Edge. 2017:

Sometimes you think you understand something, and when you try to explain it to somebody else, you realize that maybe you gained some new insight that you didn’t have before. Maybe you realize you didn’t understand it as well as you thought you did. What I think is interesting about this process is that it’s a process of learning by thinking. When you’re explaining to yourself or to somebody else without them providing feedback, insofar as you gain new insight or understanding, it isn’t driven by that new information that they’ve provided. In some way, you’ve rearranged what was already in your head in order to get new insight.

Sometimes what we want to do is be persuasive. Sometimes what we want to do is come up with a convenient way for solving a particular type of problem. Again, it might be wrong some of the time, but it’s going to be much easier to implement in other cases. There are all sorts of different epistemic and social goals that we might have. Increasingly, I’m thinking that maybe explanation doesn’t have just one goal; it probably has multiple goals. Whatever it is, it’s probably not just the thing that Bayesian inference tracks. It’s probably tracking some of these other things.

Veritasium. Parallel Worlds Probably Exist. YouTube. 2020:

00:56 So how are we to reconcile the spread-out wavefunction, evolving smoothly under the Schrödinger equation, with this point-like particle detection?

02:23 ...the outcomes of experiments. So the way quantum mechanics came to be understood, and the way I learned it, is that there are two sets of rules: when you’re not looking, the wavefunction simply evolves according to the Schrödinger equation; but when you are looking, when you make a measurement, the wavefunction collapses suddenly and irreversibly, and the probability of measuring any particular outcome is given by the amplitude of the wavefunction associated with that outcome, squared. Now, Schrödinger himself hated this formulation.

04:54 In this video I want to show that there is a better way to think about Schrödinger’s cat, in fact a better way to think about quantum mechanics entirely, that I’d argue is more logical and consistent. To get there we have to examine the three essential components of Schrödinger’s cat: superposition, entanglement, and measurement, to see if any of them is flawed.

11:58 The implication is that the founders of quantum theory may have got it exactly backwards: the wavefunction is the complete picture of reality, and our measurement is just a tiny fraction of it.
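The measurement rule quoted at 02:23, probability equals amplitude squared (the Born rule), is easy to make concrete. The two-outcome amplitudes below are a toy example of my own, standing in for something like the cat’s “alive”/“dead” branches.

```python
import math

# Born rule sketch: probabilities are the squared magnitudes of the
# normalized complex amplitudes. Toy two-outcome superposition.
amplitudes = [complex(1, 0), complex(0, 1)]

# Normalize so the total probability is 1.
norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
probs = [abs(a / norm) ** 2 for a in amplitudes]
```

Note that the phase difference between the two amplitudes (one real, one imaginary) does not change the probabilities at all; only the magnitudes matter, which is why equal-magnitude branches come out 50/50.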

How To Make Everything. Creating My Own Alphabet From Scratch. YouTube. 2020:

03:03 Well, writing in essence can be described as a system in which the attempt is made to put language in some sort of written form.

03:59 Developed soon after that was writing that can convey words, the logogram, or writing that can also convey syllables; we call those syllabograms. And then these early writing systems, both in Mesopotamia and Egypt, had a third category of signs, and those were determinatives.

SmarterEveryDay. How Rockets Are Made. YouTube. 2020:

02:51 This is Vulcan, and this rocket has never flown. Never flown, not yet. And you’re going to see the first flight vehicle hardware in the factory being fabricated when we go in there today.

51:54 Yes, our specialty is the higher-energy, more difficult orbits, things like Mars 2020, an interplanetary mission. -Right, and that’s... -Literally right there, yeah, but we don’t call it Mars 2020 here. -What do you call it? -We call it Mars 2020...20. -Why would you do that? -Because it’s our 20th trip to Mars.

VegSource. When Supplements Harm. YouTube. 2020:

Today we look at research showing the potential problems of raising your B12 blood levels too high, which could include death.

If you adhere to a vegan diet, B12 supplementation is prudent. I also recommend having yourself tested for vitamin B12 deficiency every few years. The most appropriate test for evaluating B12 status is the urine test for methylmalonic acid (MMA). Elevated MMA is currently the best tool for detecting vitamin B12 deficiency, and is considered to be superior to testing for serum B12 directly. An alternative and less costly screening blood test is homocysteine.

TwoMinutePapers. This Neural Network Turns Videos Into 60 FPS. YouTube. 2020:

00:12 It almost always happens that I encounter paper videos that have anything from 24 to 30 frames per second. In this case, I put them in my video editor that has a 60 fps timeline, so half or even more of these frames will not provide any new information. As we try to slow down the videos for some nice slow-motion action, this ratio is even worse, creating an extremely choppy output video because we have huge gaps between these frames.

02:18 The design of this neural network tries to produce four different kinds of data to fill in these images [...] optical flows [...] depth map [...] contextual extraction [...] interpolation kernels...

03:18 All it needs is just the two neighboring images.
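For contrast with the learned approach, the crudest way to fill a missing frame from “just the two neighboring images” is pixel-wise linear blending, which ignores motion entirely; the paper’s flow-, depth-, and kernel-based machinery exists precisely because this baseline produces ghosting on anything that moves. The tiny 2x2 grayscale “frames” below are made-up stand-ins.

```python
# Naive baseline for frame interpolation: blend the two neighboring frames
# pixel by pixel. A learned method replaces this with motion-aware warping.
def blend(frame_a, frame_b, t):
    """Interpolate at time t in [0, 1] between two same-sized grayscale frames."""
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

f0 = [[0.0, 0.0], [0.0, 0.0]]  # frame at the earlier timestamp
f1 = [[1.0, 1.0], [1.0, 1.0]]  # frame at the later timestamp
mid = blend(f0, f1, 0.5)       # the inserted 60 fps frame between 30 fps frames
```

Blending is exact only when nothing moves between the two inputs; when objects move, the midpoint frame shows both positions semi-transparently instead of the object halfway along its path, which is the artifact optical-flow-based interpolation is designed to remove.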