
Future of Humanity Institute (FHI)


The Future of Humanity Institute is part of the Faculty of Philosophy and the Oxford Martin School at the University of Oxford. Founded in 2005, its director is Nick Bostrom.

As described on its website, FHI brings together a wide range of researchers, prominent in their original fields, who have chosen to focus on global questions about the progress and future of humanity.

FHI is affiliated with LessWrong and Overcoming Bias. Its past activities include holding the Global Catastrophic Risks Conference in 2008 and publishing a book, also titled Global Catastrophic Risks.

See also

External links

FHI paper published in Science: interventions against COVID-19

SoerenMind · 16 Dec 2020 21:19 UTC
119 points
0 comments · 3 min read · LW link

Timeline of Future of Humanity Institute

riceissa · 18 Mar 2018 18:45 UTC
11 points
0 comments · 1 min read · LW link
(timelines.issarice.com)

SIAI vs. FHI achievements, 2008-2010

Kaj_Sotala · 25 Sep 2011 11:42 UTC
40 points
62 comments · 4 min read · LW link

Google Deepmind and FHI collaborate to present research at UAI 2016

Stuart_Armstrong · 9 Jun 2016 18:08 UTC
39 points
11 comments · 2 min read · LW link

Donating to MIRI vs. FHI vs. CEA vs. CFAR

ChrisHallquist · 27 Dec 2013 3:43 UTC
31 points
45 comments · 1 min read · LW link

FHI is hiring researchers!

Stuart_Armstrong · 23 Dec 2015 22:46 UTC
20 points
2 comments · 2 min read · LW link

Has SIAI/FHI considered putting up prizes for contributions to important problems?

jsalvatier · 3 Jul 2011 17:26 UTC
18 points
9 comments · 1 min read · LW link

The Future of Humanity Institute is hiring a project manager

crmflynn · 26 Jan 2017 18:19 UTC
10 points
0 comments · 1 min read · LW link

Could you be Prof Nick Bostrom’s sidekick?

RobertWiblin · 5 Dec 2014 1:09 UTC
78 points
47 comments · 1 min read · LW link

Nick Bostrom’s TED talk on Superintelligence is now online

chaosmage · 27 Apr 2015 15:15 UTC
34 points
10 comments · 1 min read · LW link

2018 AI Alignment Literature Review and Charity Comparison

Larks · 18 Dec 2018 4:46 UTC
190 points
26 comments · 62 min read · LW link · 1 review

2019 AI Alignment Literature Review and Charity Comparison

Larks · 19 Dec 2019 3:00 UTC
130 points
18 comments · 62 min read · LW link

FHI paper on COVID-19 government countermeasures

jacobjacob · 4 Jun 2020 21:06 UTC
47 points
5 comments · 1 min read · LW link
(doi.org)

FHI Research Scholars Programme

habryka · 29 Jun 2018 2:31 UTC
28 points
2 comments · 1 min read · LW link
(www.fhi.ox.ac.uk)

Room for more funding at the Future of Humanity Institute

John_Maxwell · 16 Nov 2012 20:45 UTC
26 points
15 comments · 1 min read · LW link

Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare

Kaj_Sotala · 24 Nov 2020 10:36 UTC
83 points
20 comments · 2 min read · LW link
(www.liebertpub.com)

Thoughts on AGI safety from the top

jylin04 · 2 Feb 2022 20:06 UTC
36 points
3 comments · 32 min read · LW link

The Future of Humanity Institute could make use of your money

danieldewey · 26 Sep 2014 22:53 UTC
78 points
25 comments · 1 min read · LW link

Claims & Assumptions made in Eternity in Six Hours

Ruby · 8 May 2019 23:11 UTC
50 points
7 comments · 3 min read · LW link

A Visualization of Nick Bostrom’s Superintelligence

[deleted] · 23 Jul 2014 0:24 UTC
62 points
28 comments · 3 min read · LW link

2017 AI Safety Literature Review and Charity Comparison

Larks · 24 Dec 2017 18:52 UTC
41 points
5 comments · 23 min read · LW link

Deciphering China’s AI Dream

Qiaochu_Yuan · 18 Mar 2018 3:26 UTC
12 points
2 comments · 1 min read · LW link
(www.fhi.ox.ac.uk)

Dissolving the Fermi Paradox, and what reflection it provides

Jan_Kulveit · 30 Jun 2018 16:35 UTC
28 points
22 comments · 1 min read · LW link
(arxiv.org)

Opportunities for individual donors in AI safety

Alex Flint · 31 Mar 2018 18:37 UTC
30 points
3 comments · 11 min read · LW link

AI Safety Research Camp - Project Proposal

David_Kristoffersson · 2 Feb 2018 4:25 UTC
29 points
11 comments · 8 min read · LW link

The Singularity Wars

JoshuaFox · 14 Feb 2013 9:44 UTC
82 points
25 comments · 3 min read · LW link

The Vulnerable World Hypothesis (by Bostrom)

Ben Pace · 6 Nov 2018 20:05 UTC
50 points
17 comments · 4 min read · LW link
(nickbostrom.com)