Taking a simplified model

dominicq · Nov 16, 2021, 10:21 PM
9 points
8 comments · 1 min read · LW link

The Greedy Doctor Problem

Jan · Nov 16, 2021, 10:06 PM
6 points
10 comments · 12 min read · LW link
(universalprior.substack.com)

Equity premium puzzles

Nov 16, 2021, 8:50 PM
20 points
4 comments · 12 min read · LW link
(www.metaculus.com)

Why I am no longer driven

dominicq · Nov 16, 2021, 8:43 PM
71 points
16 comments · 4 min read · LW link

Super intelligent AIs that don’t require alignment

Yair Halberstadt · Nov 16, 2021, 7:55 PM
10 points
2 comments · 6 min read · LW link

Why Save The Drowning Child: Ethics Vs Theory

Raymond Douglas · Nov 16, 2021, 7:07 PM
17 points
12 comments · 4 min read · LW link

Two Stupid AI Alignment Ideas

aphyer · Nov 16, 2021, 4:13 PM
27 points
3 comments · 4 min read · LW link

[linkpost] Project Blueprint: ‘Measuring and then maximally reversing the quantified biological age of my organs’

matteodimaio · Nov 16, 2021, 2:48 AM
2 points
0 comments · 1 min read · LW link

A positive case for how we might succeed at prosaic AI alignment

evhub · Nov 16, 2021, 1:49 AM
81 points
46 comments · 6 min read · LW link

Quantilizer ≡ Optimizer with a Bounded Amount of Output

itaibn0 · Nov 16, 2021, 1:03 AM
11 points
4 comments · 2 min read · LW link

D&D.Sci Dungeoncrawling: The Crown of Command Evaluation & Ruleset

aphyer · Nov 16, 2021, 12:29 AM
29 points
12 comments · 9 min read · LW link

Streaming Science on Twitch

A Ray · Nov 15, 2021, 10:24 PM
21 points
1 comment · 3 min read · LW link

Ngo and Yudkowsky on alignment difficulty

Nov 15, 2021, 8:31 PM
259 points
151 comments · 99 min read · LW link · 1 review

Dan Luu on Persistent Bad Decision Making (but maybe it’s noble?)

Elizabeth · Nov 15, 2021, 8:05 PM
17 points
3 comments · 1 min read · LW link
(danluu.com)

The poetry of progress

jasoncrawford · Nov 15, 2021, 7:24 PM
51 points
6 comments · 4 min read · LW link
(rootsofprogress.org)

[Question] Worst Commonsense Concepts?

abramdemski · Nov 15, 2021, 6:22 PM
75 points
34 comments · 3 min read · LW link

My understanding of the alignment problem

danieldewey · Nov 15, 2021, 6:13 PM
43 points
3 comments · 3 min read · LW link

“Summarizing Books with Human Feedback” (recursive GPT-3)

gwern · Nov 15, 2021, 5:41 PM
24 points
4 comments · LW link
(openai.com)

How Humanity Lost Control and Humans Lost Liberty: From Our Brave New World to Analogia (Sequence Introduction)

Justin Bullock · Nov 15, 2021, 2:22 PM
8 points
4 comments · 3 min read · LW link

Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

lsusr · Nov 15, 2021, 10:02 AM
20 points
8 comments · 15 min read · LW link

What the future will look like

avantika.mehra · Nov 15, 2021, 5:14 AM
7 points
1 comment · 3 min read · LW link

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Zvi · Nov 15, 2021, 3:50 AM
197 points
49 comments · 16 min read · LW link
(thezvi.wordpress.com)

An Emergency Fund for Effective Altruists (second version)

bice · Nov 14, 2021, 6:28 PM
12 points
4 comments · 2 min read · LW link

Televised sports exist to gamble with testosterone levels using prediction skill

Lucent · Nov 14, 2021, 6:24 PM
22 points
3 comments · 1 min read · LW link

Improving on the Karma System

Raelifin · Nov 14, 2021, 6:01 PM
106 points
36 comments · 19 min read · LW link

[Linkpost] Paul Graham 101

Gunnar_Zarncke · Nov 14, 2021, 4:52 PM
12 points
4 comments · 1 min read · LW link

My current uncertainties regarding AI, alignment, and the end of the world

dominicq · Nov 14, 2021, 2:08 PM
2 points
3 comments · 2 min read · LW link

Education on My Homeworld

lsusr · Nov 14, 2021, 10:16 AM
37 points
19 comments · 5 min read · LW link

What would we do if alignment were futile?

Grant Demaree · Nov 14, 2021, 8:09 AM
75 points
39 comments · 3 min read · LW link

A pharmaceutical stock pricing mystery

DirectedEvolution · Nov 14, 2021, 1:19 AM
14 points
2 comments · 3 min read · LW link

You are probably underestimating how good self-love can be

Charlie Rogers-Smith · Nov 14, 2021, 12:41 AM
168 points
19 comments · 12 min read · LW link · 1 review

Coordination Skills I Wish I Had For the Pandemic

Raemon · Nov 13, 2021, 11:32 PM
96 points
9 comments · 6 min read · LW link · 1 review

Sci-Hub sued in India

Connor_Flexman · Nov 13, 2021, 11:12 PM
131 points
19 comments · 7 min read · LW link

[Question] What’s the likelihood of only sub exponential growth for AGI?

M. Y. Zuo · Nov 13, 2021, 10:46 PM
5 points
22 comments · 1 min read · LW link

Comments on Carlsmith’s “Is power-seeking AI an existential risk?”

So8res · Nov 13, 2021, 4:29 AM
139 points
15 comments · 40 min read · LW link · 1 review

A FLI postdoctoral grant application: AI alignment via causal analysis and design of agents

PabloAMC · Nov 13, 2021, 1:44 AM
4 points
0 comments · 7 min read · LW link

[Question] Is Functional Decision Theory still an active area of research?

Grant Demaree · Nov 13, 2021, 12:30 AM
8 points
3 comments · 1 min read · LW link

Average probabilities, not log odds

AlexMennen · Nov 12, 2021, 9:39 PM
27 points
20 comments · 5 min read · LW link

[linkpost] Crypto Cities

mike_hawke · Nov 12, 2021, 9:26 PM
25 points
10 comments · 1 min read · LW link
(vitalik.ca)

A Defense of Functional Decision Theory

Heighn · Nov 12, 2021, 8:59 PM
21 points
221 comments · 10 min read · LW link

Why I’m excited about Redwood Research’s current project

paulfchristiano · Nov 12, 2021, 7:26 PM
114 points
6 comments · 7 min read · LW link

Stop button: towards a causal solution

tailcalled · Nov 12, 2021, 7:09 PM
25 points
37 comments · 9 min read · LW link

RandomWalkNFT: A Game Theory Exercise

Annapurna · Nov 12, 2021, 7:05 PM
7 points
10 comments · 2 min read · LW link

Preprint is out! 100,000 lumens to treat seasonal affective disorder

Fabienne · Nov 12, 2021, 5:59 PM
170 points
10 comments · 1 min read · LW link

ALERT⚠️ Not enough gud vibes 😎

Pee Doom · Nov 12, 2021, 11:25 AM
10 points
3 comments · 1 min read · LW link

Avoiding Negative Externalities—a theory with specific examples—Part 1

M. Y. Zuo · Nov 12, 2021, 4:09 AM
2 points
4 comments · 6 min read · LW link

It’s Ok to Dance Again

jefftk · Nov 12, 2021, 2:50 AM
8 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Measuring and Forecasting Risks from AI

jsteinhardt · Nov 12, 2021, 2:30 AM
24 points
0 comments · 3 min read · LW link
(bounded-regret.ghost.io)

AGI is at least as far away as Nuclear Fusion.

Logan Zoellner · Nov 11, 2021, 9:33 PM
0 points
8 comments · 1 min read · LW link

A Brief Introduction to Container Logistics

Vitor · Nov 11, 2021, 3:58 PM
267 points
22 comments · 11 min read · LW link · 1 review