Streaming Science on Twitch

A Ray · Nov 15, 2021, 10:24 PM
21 points
1 comment · 3 min read · LW link

Ngo and Yudkowsky on alignment difficulty

Nov 15, 2021, 8:31 PM
259 points
151 comments · 99 min read · LW link · 1 review

Dan Luu on Persistent Bad Decision Making (but maybe it’s noble?)

Elizabeth · Nov 15, 2021, 8:05 PM
17 points
3 comments · 1 min read · LW link
(danluu.com)

The poetry of progress

jasoncrawford · Nov 15, 2021, 7:24 PM
51 points
6 comments · 4 min read · LW link
(rootsofprogress.org)

[Question] Worst Commonsense Concepts?

abramdemski · Nov 15, 2021, 6:22 PM
75 points
34 comments · 3 min read · LW link

My understanding of the alignment problem

danieldewey · Nov 15, 2021, 6:13 PM
43 points
3 comments · 3 min read · LW link

“Summarizing Books with Human Feedback” (recursive GPT-3)

gwern · Nov 15, 2021, 5:41 PM
24 points
4 comments · LW link
(openai.com)

How Humanity Lost Control and Humans Lost Liberty: From Our Brave New World to Analogia (Sequence Introduction)

Justin Bullock · Nov 15, 2021, 2:22 PM
8 points
4 comments · 3 min read · LW link

Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

lsusr · Nov 15, 2021, 10:02 AM
20 points
8 comments · 15 min read · LW link

What the future will look like

avantika.mehra · Nov 15, 2021, 5:14 AM
7 points
1 comment · 3 min read · LW link

Attempted Gears Analysis of AGI Intervention Discussion With Eliezer

Zvi · Nov 15, 2021, 3:50 AM
197 points
49 comments · 16 min read · LW link
(thezvi.wordpress.com)

An Emergency Fund for Effective Altruists (second version)

bice · Nov 14, 2021, 6:28 PM
12 points
4 comments · 2 min read · LW link

Televised sports exist to gamble with testosterone levels using prediction skill

Lucent · Nov 14, 2021, 6:24 PM
22 points
3 comments · 1 min read · LW link

Improving on the Karma System

Raelifin · Nov 14, 2021, 6:01 PM
106 points
36 comments · 19 min read · LW link

[Linkpost] Paul Graham 101

Gunnar_Zarncke · Nov 14, 2021, 4:52 PM
12 points
4 comments · 1 min read · LW link

My current uncertainties regarding AI, alignment, and the end of the world

dominicq · Nov 14, 2021, 2:08 PM
2 points
3 comments · 2 min read · LW link

Education on My Homeworld

lsusr · Nov 14, 2021, 10:16 AM
37 points
19 comments · 5 min read · LW link

What would we do if alignment were futile?

Grant Demaree · Nov 14, 2021, 8:09 AM
75 points
39 comments · 3 min read · LW link

A pharmaceutical stock pricing mystery

DirectedEvolution · Nov 14, 2021, 1:19 AM
14 points
2 comments · 3 min read · LW link

You are probably underestimating how good self-love can be

Charlie Rogers-Smith · Nov 14, 2021, 12:41 AM
168 points
19 comments · 12 min read · LW link · 1 review

Coordination Skills I Wish I Had For the Pandemic

Raemon · Nov 13, 2021, 11:32 PM
96 points
9 comments · 6 min read · LW link · 1 review

Sci-Hub sued in India

Connor_Flexman · Nov 13, 2021, 11:12 PM
131 points
19 comments · 7 min read · LW link

[Question] What’s the likelihood of only sub exponential growth for AGI?

M. Y. Zuo · Nov 13, 2021, 10:46 PM
5 points
22 comments · 1 min read · LW link

Comments on Carlsmith’s “Is power-seeking AI an existential risk?”

So8res · Nov 13, 2021, 4:29 AM
139 points
15 comments · 40 min read · LW link · 1 review

A FLI postdoctoral grant application: AI alignment via causal analysis and design of agents

PabloAMC · Nov 13, 2021, 1:44 AM
4 points
0 comments · 7 min read · LW link

[Question] Is Functional Decision Theory still an active area of research?

Grant Demaree · Nov 13, 2021, 12:30 AM
8 points
3 comments · 1 min read · LW link

Average probabilities, not log odds

AlexMennen · Nov 12, 2021, 9:39 PM
27 points
20 comments · 5 min read · LW link

[linkpost] Crypto Cities

mike_hawke · Nov 12, 2021, 9:26 PM
25 points
10 comments · 1 min read · LW link
(vitalik.ca)

A Defense of Functional Decision Theory

Heighn · Nov 12, 2021, 8:59 PM
21 points
221 comments · 10 min read · LW link

Why I’m excited about Redwood Research’s current project

paulfchristiano · Nov 12, 2021, 7:26 PM
114 points
6 comments · 7 min read · LW link

Stop button: towards a causal solution

tailcalled · Nov 12, 2021, 7:09 PM
25 points
37 comments · 9 min read · LW link

RandomWalkNFT: A Game Theory Exercise

Annapurna · Nov 12, 2021, 7:05 PM
7 points
10 comments · 2 min read · LW link

Preprint is out! 100,000 lumens to treat seasonal affective disorder

Fabienne · Nov 12, 2021, 5:59 PM
170 points
10 comments · 1 min read · LW link

ALERT⚠️ Not enough gud vibes 😎

Pee Doom · Nov 12, 2021, 11:25 AM
10 points
3 comments · 1 min read · LW link

Avoiding Negative Externalities—a theory with specific examples—Part 1

M. Y. Zuo · Nov 12, 2021, 4:09 AM
2 points
4 comments · 6 min read · LW link

It’s Ok to Dance Again

jefftk · Nov 12, 2021, 2:50 AM
8 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Measuring and Forecasting Risks from AI

jsteinhardt · Nov 12, 2021, 2:30 AM
24 points
0 comments · 3 min read · LW link
(bounded-regret.ghost.io)

AGI is at least as far away as Nuclear Fusion.

Logan Zoellner · Nov 11, 2021, 9:33 PM
0 points
8 comments · 1 min read · LW link

A Brief Introduction to Container Logistics

Vitor · Nov 11, 2021, 3:58 PM
267 points
22 comments · 11 min read · LW link · 1 review

Effective Altruism Virtual Programs Dec-Jan 2022

Yi-Yang · Nov 11, 2021, 3:50 PM
3 points
0 comments · 1 min read · LW link

Covid 11/11: Winter and Effective Treatments Are Coming

Zvi · Nov 11, 2021, 2:50 PM
65 points
19 comments · 12 min read · LW link
(thezvi.wordpress.com)

Using blinders to help you see things for what they are

Adam Zerner · Nov 11, 2021, 7:07 AM
13 points
2 comments · 2 min read · LW link

Hardcode the AGI to need our approval indefinitely?

MichaelStJules · Nov 11, 2021, 7:04 AM
2 points
2 comments · 1 min read · LW link

Discussion with Eliezer Yudkowsky on AGI interventions

Nov 11, 2021, 3:01 AM
328 points
253 comments · 34 min read · LW link · 1 review

Relaxation-Based Search, From Everyday Life To Unfamiliar Territory

johnswentworth · Nov 10, 2021, 9:47 PM
60 points
3 comments · 8 min read · LW link

[Question] Self-education best practices

Sean McAneny · Nov 10, 2021, 5:12 PM
12 points
5 comments · 1 min read · LW link

[Question] What exactly is GPT-3’s base objective?

Daniel Kokotajlo · Nov 10, 2021, 12:57 AM
60 points
14 comments · 2 min read · LW link

Robin Hanson’s Grabby Aliens model explained—part 2

Writer · Nov 9, 2021, 5:43 PM
13 points
4 comments · 13 min read · LW link
(youtu.be)

Come for the productivity, stay for the philosophy

lionhearted (Sebastian Marshall) · Nov 9, 2021, 1:10 PM
23 points
6 comments · 1 min read · LW link

Erase button

Astor · Nov 9, 2021, 9:39 AM
3 points
6 comments · 1 min read · LW link