Karma: 369

I am the co-founder of, and a researcher at, the quantitative long-term strategy organization Convergence (see here for our growing list of publications). Over the last eleven years I have worked with MIRI, CFAR, EA Global, Founders Fund, and Leverage, doing work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. I have an MS degree in computer science and BS degrees in computer science, mathematics, and physics.

Evaluating expertise: a clear box model

JustinShovelain · 15 Oct 2020 14:18 UTC
32 points
3 comments · 5 min read · LW link

Good and bad ways to think about downside risks

11 Jun 2020 1:38 UTC
15 points
11 comments · 11 min read · LW link

COVID-19: An opportunity to help by modelling testing and tracing to inform the UK government

JustinShovelain · 17 Apr 2020 17:21 UTC
14 points
2 comments · 2 min read · LW link

[Question] Testing and contact tracing impact assessment model?

JustinShovelain · 9 Apr 2020 17:42 UTC
6 points
3 comments · 1 min read · LW link

COVID-19: List of ideas to reduce the direct harm from the virus, with an emphasis on unusual ideas

JustinShovelain · 9 Apr 2020 11:33 UTC
30 points
13 comments · 7 min read · LW link

Memetic downside risks: How ideas can evolve and cause harm

25 Feb 2020 19:47 UTC
14 points
3 comments · 15 min read · LW link

Information hazards: Why you should care and what you can do

23 Feb 2020 20:47 UTC
15 points
4 comments · 15 min read · LW link

Mapping downside risks and information hazards

20 Feb 2020 14:46 UTC
14 points
0 comments · 9 min read · LW link

Using vector fields to visualise preferences and make them consistent

28 Jan 2020 19:44 UTC
38 points
32 comments · 11 min read · LW link

AI alignment concepts: philosophical breakers, stoppers, and distorters

JustinShovelain · 24 Jan 2020 19:23 UTC
20 points
3 comments · 3 min read · LW link

Safety regulators: A tool for mitigating technological risk

JustinShovelain · 21 Jan 2020 13:07 UTC
13 points
4 comments · 4 min read · LW link

FAI Research Constraints and AGI Side Effects

JustinShovelain · 3 Jun 2015 19:25 UTC
26 points
59 comments · 7 min read · LW link

Minneapolis Meetup: Saturday May 28, 3:00PM

JustinShovelain · 23 May 2011 23:55 UTC
4 points
0 comments · 1 min read · LW link

Minneapolis Meetup: Saturday May 14, 3:00PM

JustinShovelain · 13 May 2011 21:14 UTC
8 points
5 comments · 1 min read · LW link

Sequential Organization of Thinking: “Six Thinking Hats”

JustinShovelain · 18 Mar 2010 5:22 UTC
30 points
14 comments · 3 min read · LW link

Coffee: When it helps, when it hurts

JustinShovelain · 10 Mar 2010 6:14 UTC
51 points
109 comments · 1 min read · LW link

Meetup: Bay Area: Sunday, March 7th, 7pm

JustinShovelain · 2 Mar 2010 21:18 UTC
8 points
44 comments · 1 min read · LW link

Intuitive supergoal uncertainty

JustinShovelain · 4 Dec 2009 5:21 UTC
8 points
27 comments · 5 min read · LW link

Minneapolis Meetup: Survey of interest

JustinShovelain · 18 Sep 2009 18:52 UTC
8 points
8 comments · 1 min read · LW link

Causes of disagreements

JustinShovelain · 16 Jul 2009 21:51 UTC
27 points
20 comments · 4 min read · LW link