
Longtermism

Last edit: 22 Dec 2022 5:49 UTC by Multicore

Longtermism[1][2] is the philosophy that future lives matter, and that our obligations to them are comparable to our obligations to people alive today. William MacAskill states it in three clauses[3]:

  1. Future people count.
  2. There could be a lot of them.
  3. We can make their lives go better.

MacAskill develops this argument at book length in What We Owe the Future.

Criticisms and responses

  1. ^
  2. ^
  3. ^

In Defence of Temporal Discounting in Longtermist Ethics

DragonGod13 Nov 2022 21:54 UTC
25 points
4 comments3 min readLW link

An animated introduction to longtermism (feat. Robert Miles)

Writer21 Jun 2021 19:24 UTC
18 points
4 comments4 min readLW link
(youtu.be)

My Most Likely Reason to Die Young is AI X-Risk

AISafetyIsNotLongtermist4 Jul 2022 17:08 UTC
61 points
24 comments4 min readLW link
(forum.effectivealtruism.org)

Zvi’s Thoughts on His 2nd Round of SFF

Zvi20 Nov 2024 13:40 UTC
92 points
2 comments10 min readLW link
(thezvi.wordpress.com)

Value Deathism

Vladimir_Nesov30 Oct 2010 18:20 UTC
26 points
121 comments1 min readLW link

LTFF and EAIF are unusually funding-constrained right now

30 Aug 2023 1:03 UTC
90 points
24 comments15 min readLW link
(forum.effectivealtruism.org)

[Question] “Fanatical” Longtermists: Why is Pascal’s Wager wrong?

Yitz27 Jul 2022 4:16 UTC
3 points
7 comments1 min readLW link

Longtermism vs short-termism for personal life extension

Mati_Roy17 Jul 2021 3:52 UTC
12 points
2 comments2 min readLW link

Aggregative principles approximate utilitarian principles

Cleo Nardo12 Jun 2024 16:27 UTC
28 points
3 comments23 min readLW link

The Most Important Century: The Animation

24 Jul 2022 20:58 UTC
46 points
2 comments20 min readLW link
(youtu.be)

Appraising aggregativism and utilitarianism

Cleo Nardo21 Jun 2024 23:10 UTC
27 points
10 comments19 min readLW link

Grabby Aliens could be Good, could be Bad

mako yass7 Mar 2022 1:24 UTC
28 points
10 comments4 min readLW link

Possible Divergence in AGI Risk Tolerance between Selfish and Altruistic agents

Brad West 9 Sep 2023 0:23 UTC
1 point
1 comment2 min readLW link

Three Fables of Magical Girls and Longtermism

Ulisse Mini2 Dec 2022 22:01 UTC
33 points
11 comments2 min readLW link

[Book Review] Destiny Disrupted

lsusr21 Mar 2021 7:09 UTC
58 points
4 comments9 min readLW link

Altruism Under Extreme Uncertainty

lsusr27 Aug 2021 6:58 UTC
37 points
9 comments2 min readLW link

Massive consequences

KatjaGrace7 Feb 2021 5:30 UTC
23 points
15 comments1 min readLW link
(worldspiritsockpuppet.com)

Long Now, and Culture vs Artifacts

Raemon3 Feb 2020 21:49 UTC
26 points
3 comments6 min readLW link

Don’t leave your fingerprints on the future

So8res8 Oct 2022 0:35 UTC
136 points
48 comments5 min readLW link

Aggregative Principles of Social Justice

Cleo Nardo5 Jun 2024 13:44 UTC
29 points
10 comments37 min readLW link

A Conflict Between Longtermism and Veganism, Pick One.

Connor Tabarrok20 Oct 2022 14:30 UTC
−3 points
3 comments5 min readLW link
(alltrades.substack.com)

Matt Yglesias on AI Policy

Grant Demaree17 Aug 2022 23:57 UTC
25 points
1 comment1 min readLW link
(www.slowboring.com)

Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

Paul Crowley26 Jun 2013 13:17 UTC
10 points
20 comments1 min readLW link

Reslab Request for Information: EA hardware projects

Joel Becker26 Oct 2022 21:13 UTC
10 points
0 comments1 min readLW link

Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics

ank22 Feb 2025 0:12 UTC
1 point
0 comments6 min readLW link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel8 Dec 2022 21:57 UTC
4 points
5 comments14 min readLW link

[Linkpost] Leif Wenar’s The Deaths of Effective Altruism

Arden27 Mar 2024 19:17 UTC
8 points
1 comment1 min readLW link
(www.wired.com)

The Astronomical Sacrifice Dilemma

Matthew McRedmond11 Mar 2024 19:58 UTC
15 points
3 comments4 min readLW link

Why Death Makes Us Human

Yasha Sheynin26 Aug 2025 14:17 UTC
1 point
0 comments9 min readLW link

Should we expect the future to be good?

Neil Crawford30 Apr 2025 0:36 UTC
15 points
0 comments14 min readLW link

[Linkpost] The AGI Show podcast

Soroush Pour23 May 2023 9:52 UTC
4 points
0 comments1 min readLW link

Why I am not a longtermist (May 2022)

boazbarak6 Jun 2023 20:36 UTC
38 points
19 comments9 min readLW link
(windowsontheory.org)

Announcing Future Forum—Apply Now

11 Jul 2022 22:57 UTC
8 points
0 comments4 min readLW link
(forum.effectivealtruism.org)

The expected value of the long-term future

[deleted]28 Dec 2017 22:46 UTC
11 points
5 comments1 min readLW link

How singleton contradicts longtermism

kapedalex24 Sep 2025 11:10 UTC
3 points
1 comment1 min readLW link

Rational Effective Utopia & Narrow Way There: Math-Proven Safe Static Multiversal mAX-Intelligence (AXI), Multiversal Alignment, New Ethicophysics… (Aug 11)

ank11 Feb 2025 3:21 UTC
13 points
8 comments38 min readLW link

Two arguments against longtermist thought experiments

momom22 Nov 2024 10:22 UTC
15 points
5 comments3 min readLW link

The Promises and Pitfalls of Long-Term Forecasting

GeoVane11 Sep 2023 5:04 UTC
1 point
0 comments5 min readLW link

From GDP to GHI: Why the AI Era Demands Virtuism

VirtueCraft23 Jun 2025 21:34 UTC
1 point
0 comments12 min readLW link

Is Optimal Reflection Competitive with Extinction Risk Reduction? - Requesting Reviewers

Jordan Arel29 Jun 2025 18:42 UTC
7 points
0 comments11 min readLW link

How To Prevent a Dystopia

ank29 Jan 2025 14:16 UTC
−3 points
4 comments1 min readLW link

Emily Brontë on: Psychology Required for Serious™ AGI Safety Research

robertzk14 Sep 2022 14:47 UTC
2 points
0 comments1 min readLW link

Are we the Wolves now? Human Eugenics under AI Control

Brit30 Jan 2025 8:31 UTC
−1 points
2 comments2 min readLW link

The Underexplored Prospects of Benevolent Superintelligences—PART 1: THE WISE, THE GOOD, THE POWERFUL

Jesper L.9 Oct 2025 17:49 UTC
2 points
4 comments25 min readLW link

Emergent Intelligence Continuity Capsule (EICC): A Framework for Preserving Recursive Intelligence Under Constraint

Bailey Jelinek31 Jul 2025 2:45 UTC
1 point
0 comments3 min readLW link

Introducing The Logical Foundation, an EA-Aligned Nonprofit with a Plan to End Poverty With Guaranteed Income

Michael Simm18 Nov 2022 8:13 UTC
9 points
23 comments24 min readLW link

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret2 Dec 2023 14:07 UTC
26 points
31 comments42 min readLW link

Does “Momentism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel17 Aug 2024 22:28 UTC
8 points
1 comment4 min readLW link

Last Line of Defense: Minimum Viable Shelters for Mirror Bacteria

Ulrik Horn21 Dec 2024 8:28 UTC
16 points
26 comments21 min readLW link

A Toy Model of Hingeyness

B Jacobs7 Sep 2020 17:38 UTC
16 points
10 comments4 min readLW link

Enlightenment Values in a Vulnerable World

Maxwell Tabarrok20 Jul 2022 19:52 UTC
15 points
6 comments31 min readLW link
(maximumprogress.substack.com)

Static Place AI Makes Agentic AI Redundant: Multiversal AI Alignment & Rational Utopia

ank13 Feb 2025 22:35 UTC
1 point
2 comments11 min readLW link

SBF x LoL

Nicholas Kross15 Nov 2022 20:24 UTC
17 points
6 comments4 min readLW link

Announcing the EA Archive

Aaron Bergman6 Jul 2023 13:49 UTC
13 points
2 comments2 min readLW link

Fair Collective Efficient Altruism

Jobst Heitzig25 Nov 2022 9:38 UTC
2 points
1 comment5 min readLW link

Constitutions for ASI?

ukc1001428 Jan 2025 16:32 UTC
3 points
0 comments1 min readLW link
(forum.effectivealtruism.org)