AGI is at least as far away as Nuclear Fusion.

Logan Zoellner · Nov 11, 2021, 9:33 PM
0 points
8 comments · 1 min read · LW link

A Brief Introduction to Container Logistics

Vitor · Nov 11, 2021, 3:58 PM
267 points
22 comments · 11 min read · LW link · 1 review

Effective Altruism Virtual Programs Dec-Jan 2022

Yi-Yang · Nov 11, 2021, 3:50 PM
3 points
0 comments · 1 min read · LW link

Covid 11/11: Winter and Effective Treatments Are Coming

Zvi · Nov 11, 2021, 2:50 PM
65 points
19 comments · 12 min read · LW link
(thezvi.wordpress.com)

Using blinders to help you see things for what they are

Adam Zerner · Nov 11, 2021, 7:07 AM
13 points
2 comments · 2 min read · LW link

Hardcode the AGI to need our approval indefinitely?

MichaelStJules · Nov 11, 2021, 7:04 AM
2 points
2 comments · 1 min read · LW link

Discussion with Eliezer Yudkowsky on AGI interventions

Nov 11, 2021, 3:01 AM
328 points
253 comments · 34 min read · LW link · 1 review

Relaxation-Based Search, From Everyday Life To Unfamiliar Territory

johnswentworth · Nov 10, 2021, 9:47 PM
60 points
3 comments · 8 min read · LW link

[Question] Self-education best practices

Sean McAneny · Nov 10, 2021, 5:12 PM
12 points
5 comments · 1 min read · LW link

[Question] What exactly is GPT-3's base objective?

Daniel Kokotajlo · Nov 10, 2021, 12:57 AM
60 points
14 comments · 2 min read · LW link

Robin Hanson's Grabby Aliens model explained—part 2

Writer · Nov 9, 2021, 5:43 PM
13 points
4 comments · 13 min read · LW link
(youtu.be)

Come for the productivity, stay for the philosophy

lionhearted (Sebastian Marshall) · Nov 9, 2021, 1:10 PM
23 points
6 comments · 1 min read · LW link

Erase button

Astor · Nov 9, 2021, 9:39 AM
3 points
6 comments · 1 min read · LW link

Arguments in parallel vs arguments in series

Sunny from QAD · Nov 9, 2021, 8:31 AM
11 points
8 comments · 2 min read · LW link

Where did the 5 micron number come from? Nowhere good. [Wired.com]

Elizabeth · Nov 9, 2021, 7:14 AM
108 points
8 comments · 1 min read · LW link · 1 review
(www.wired.com)

In Defence of Optimizing Routine Tasks

leogao · Nov 9, 2021, 5:09 AM
47 points
6 comments · 3 min read · LW link · 1 review

[Question] Is there a clearly laid-out write-up of the case to drop Covid precautions?

NoSignalNoNoise · Nov 9, 2021, 2:46 AM
46 points
3 comments · 1 min read · LW link

Possible research directions to improve the mechanistic explanation of neural networks

delton137 · Nov 9, 2021, 2:36 AM
31 points
8 comments · 9 min read · LW link

ACX/SSC/LW Meetup

Sean Aubin · Nov 9, 2021, 2:27 AM
11 points
0 comments · 1 min read · LW link

How do we become confident in the safety of a machine learning system?

evhub · Nov 8, 2021, 10:49 PM
134 points
5 comments · 31 min read · LW link

Steelman solitaire: how to take playing devil's advocate to the next level

KatWoods · Nov 8, 2021, 8:49 PM
63 points
4 comments · 5 min read · LW link

Excerpts from Veyne's "Did the Greeks Believe in Their Myths?"

Rob Bensinger · Nov 8, 2021, 8:23 PM
24 points
1 comment · 16 min read · LW link

Worth checking your stock trading skills

at_the_zoo · Nov 8, 2021, 7:19 PM
48 points
37 comments · 3 min read · LW link

What are red flags for Neural Network suffering?

Marius Hobbhahn · Nov 8, 2021, 12:51 PM
29 points
15 comments · 12 min read · LW link

Transcript for Geoff Anders and Anna Salamon's Oct. 23 conversation

Rob Bensinger · Nov 8, 2021, 2:19 AM
83 points
97 comments · 58 min read · LW link

[Question] How much of the supply-chain issues are due to monetary policy?

ChristianKl · Nov 7, 2021, 9:21 PM
11 points
3 comments · 1 min read · LW link

There Meat Come A Scandal...

Nicholas / Heather Kross · Nov 7, 2021, 8:52 PM
31 points
7 comments · 3 min read · LW link
(www.thinkingmuchbetter.com)

D&D.Sci Dungeoncrawling: The Crown of Command

aphyer · Nov 7, 2021, 6:39 PM
36 points
27 comments · 4 min read · LW link

ACX Atlanta November Meetup—November 13th

Steve French · Nov 7, 2021, 6:37 PM
1 point
0 comments · 1 min read · LW link

You Don't Need Anthropics To Do Science

dadadarren · Nov 7, 2021, 3:07 PM
6 points
4 comments · 2 min read · LW link
(www.sleepingbeautyproblem.com)

Highlighting New Comments

jefftk · Nov 7, 2021, 12:50 PM
14 points
1 comment · 1 min read · LW link
(www.jefftk.com)

Using Brain-Computer Interfaces to get more data for AI alignment

Robbo · Nov 7, 2021, 12:00 AM
43 points
10 comments · 7 min read · LW link

South Bay LW Pilot Meetup (Sunnyvale)

IS · Nov 6, 2021, 8:20 PM
19 points
0 comments · 1 min read · LW link

App and book recommendations for people who want to be happier and more productive

KatWoods · Nov 6, 2021, 5:40 PM
142 points
43 comments · 8 min read · LW link

Chu are you?

Adele Lopez · Nov 6, 2021, 5:39 PM
60 points
10 comments · 9 min read · LW link
(adelelopez.com)

Substack Ho?

Zvi · Nov 6, 2021, 4:50 PM
27 points
17 comments · 4 min read · LW link
(thezvi.wordpress.com)

CFAR, responsibility and bureaucracy

ChristianKl · Nov 6, 2021, 2:53 PM
22 points
1 comment · 8 min read · LW link

Speaking of Stag Hunts

Duncan Sabien (Inactive) · Nov 6, 2021, 8:20 AM
191 points
373 comments · 18 min read · LW link

Concentration of Force

Duncan Sabien (Inactive) · Nov 6, 2021, 8:20 AM
245 points
23 comments · 12 min read · LW link · 1 review

Study Guide

johnswentworth · Nov 6, 2021, 1:23 AM
303 points
50 comments · 16 min read · LW link

Nightclubs in Heaven?

J Bostock · Nov 5, 2021, 11:28 PM
10 points
3 comments · 2 min read · LW link

Comments on OpenPhil's Interpretability RFP

paulfchristiano · Nov 5, 2021, 10:36 PM
91 points
5 comments · 7 min read · LW link

How should we compare neural network representations?

jsteinhardt · Nov 5, 2021, 10:10 PM
24 points
0 comments · 3 min read · LW link
(bounded-regret.ghost.io)

Drug addicts and deceptively aligned agents—a comparative analysis

Jan · Nov 5, 2021, 9:42 PM
42 points
2 comments · 12 min read · LW link
(universalprior.substack.com)

Modeling the impact of safety agendas

Ben Cottier · Nov 5, 2021, 7:46 PM
51 points
6 comments · 10 min read · LW link

[Question] Summary of the sequences / Lesson plans for rationality

Space L Clottey · Nov 5, 2021, 5:22 PM
5 points
4 comments · 1 min read · LW link

[External Event] 2022 IEEE International Conference on Assured Autonomy (ICAA) - submission deadline extended

Aryeh Englander · Nov 5, 2021, 3:29 PM
13 points
0 comments · 3 min read · LW link

Disagreeables and Assessors: Two Intellectual Archetypes

ozziegooen · Nov 5, 2021, 9:05 AM
46 points
10 comments · 3 min read · LW link
(forum.effectivealtruism.org)

Y2K: Successful Practice for AI Alignment

Darmani · Nov 5, 2021, 6:09 AM
49 points
5 comments · 6 min read · LW link

[Question] How does one learn to create models?

Conor · Nov 5, 2021, 2:57 AM
3 points
1 comment · 1 min read · LW link