Why artificial optimism?

jessicata · Jul 15, 2019, 9:41 PM
67 points
29 comments · 4 min read · LW link
(unstableontology.com)

Offering public comment in the Federal rulemaking process

ryan_b · Jul 15, 2019, 8:31 PM
18 points
0 comments · 1 min read · LW link

Integrity and accountability are core parts of rationality

habryka · Jul 15, 2019, 8:22 PM
171 points
68 comments · 6 min read · LW link · 1 review

Jeff Hawkins on neuromorphic AGI within 20 years

Steven Byrnes · Jul 15, 2019, 7:16 PM
170 points
24 comments · 12 min read · LW link

Commentary On “The Abolition of Man”

Vaniver · Jul 15, 2019, 6:56 PM
64 points
13 comments · 5 min read · LW link

Overcoming Akrasia/Procrastination—Volunteers Wanted

Matt Goldenberg · Jul 15, 2019, 6:29 PM
14 points
10 comments · 1 min read · LW link

How Should We Critique Research? A Decision Perspective

gwern · Jul 14, 2019, 10:51 PM
46 points
4 comments · LW link
(www.gwern.net)

Let’s Read: Superhuman AI for multiplayer poker

Yuxi_Liu · Jul 14, 2019, 6:22 AM
56 points
6 comments · 8 min read · LW link

Insights from Linear Algebra Done Right

Rafael Harth · Jul 13, 2019, 6:24 PM
54 points
18 comments · 9 min read · LW link

No nonsense version of the “racial algorithm bias”

Yuxi_Liu · Jul 13, 2019, 3:39 PM
115 points
20 comments · 2 min read · LW link

Reclaiming Eddie Willers

Swimmer963 (Miranda Dixon-Luinenburg) · Jul 13, 2019, 3:32 PM
71 points
20 comments · 2 min read · LW link

Job description for an independent AI alignment researcher

rmoehn · Jul 13, 2019, 9:47 AM
8 points
0 comments · 1 min read · LW link

Raw Post: Talking With My Brother

DirectedEvolution · Jul 13, 2019, 2:57 AM
23 points
6 comments · 5 min read · LW link

[Question] What are we predicting for Neuralink event?

Dr_Manhattan · Jul 12, 2019, 7:33 PM
32 points
15 comments · 1 min read · LW link

My Conversion from LW to Pragmatism: Steelman

hunterglennzero · Jul 12, 2019, 7:17 PM
3 points
1 comment · 7 min read · LW link

Largest open collection quotes about AI

teradimich · Jul 12, 2019, 5:18 PM
36 points
2 comments · 3 min read · LW link
(docs.google.com)

[Question] Bystander effect false?

Ben Pace · Jul 12, 2019, 6:30 AM
17 points
4 comments · 1 min read · LW link

[Question] If I knew how to make an omohundru optimizer, would I be able to do anything good with that knowledge?

mako yass · Jul 12, 2019, 1:40 AM
5 points
2 comments · 1 min read · LW link

[Question] How much background technical knowledge do LW readers have?

johnswentworth · Jul 11, 2019, 5:38 PM
30 points
22 comments · 1 min read · LW link

New SSC meetup group in Lisbon

tamkin&popkin · Jul 11, 2019, 12:19 PM
1 point
0 comments · 1 min read · LW link

[Question] Are we certain that gpt-2 and similar algorithms are not self-aware?

Ozyrus · Jul 11, 2019, 8:37 AM
0 points
12 comments · 1 min read · LW link

[Question] Modeling AI milestones to adjust AGI arrival estimates?

Ozyrus · Jul 11, 2019, 8:17 AM
10 points
3 comments · 1 min read · LW link

Please give your links speaking names!

rmoehn · Jul 11, 2019, 7:47 AM
44 points
22 comments · 1 min read · LW link

AI Alignment “Scaffolding” Project Ideas (Request for Advice)

DirectedEvolution · Jul 11, 2019, 4:39 AM
9 points
1 comment · 1 min read · LW link

The AI Timelines Scam

jessicata · Jul 11, 2019, 2:52 AM
117 points
111 comments · 7 min read · LW link · 3 reviews
(unstableontology.com)

Magic is Dead, Give me Attention

Hazard · Jul 10, 2019, 8:15 PM
40 points
13 comments · 5 min read · LW link

[Question] How can guesstimates work?

Bird Concept · Jul 10, 2019, 7:33 PM
24 points
9 comments · 1 min read · LW link

Types of Boltzmann Brains

avturchin · Jul 10, 2019, 8:22 AM
8 points
0 comments · 1 min read · LW link
(philpapers.org)

Schism Begets Schism

Davis_Kingsley · Jul 10, 2019, 3:09 AM
24 points
25 comments · 3 min read · LW link

[Question] Do bond yield curve inversions really indicate there is likely to be a recession?

Ben Goldhaber · Jul 10, 2019, 1:23 AM
20 points
8 comments · 1 min read · LW link

[Question] Would you join the Society of the Free & Easy?

David Gross · Jul 10, 2019, 1:15 AM
18 points
1 comment · 3 min read · LW link

Diversify Your Friendship Portfolio

Davis_Kingsley · Jul 9, 2019, 11:06 PM
74 points
13 comments · 2 min read · LW link

The I Ching Series (2/10): How should I prioritize my career-building projects?

DirectedEvolution · Jul 9, 2019, 10:55 PM
14 points
6 comments · 3 min read · LW link

[Question] Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea?

Eli Tyre · Jul 9, 2019, 9:57 PM
20 points
4 comments · 1 min read · LW link

Outline of NIST draft plan for AI standards

ryan_b · Jul 9, 2019, 5:30 PM
21 points
1 comment · 7 min read · LW link

[Question] How can I help research Friendly AI?

avichapman · Jul 9, 2019, 12:15 AM
22 points
3 comments · 1 min read · LW link

The Results of My First LessWrong-inspired I Ching Divination

DirectedEvolution · Jul 8, 2019, 9:26 PM
21 points
3 comments · 6 min read · LW link

“Rationalizing” and “Sitting Bolt Upright in Alarm.”

Raemon · Jul 8, 2019, 8:34 PM
45 points
56 comments · 4 min read · LW link

Some Comments on Stuart Armstrong’s “Research Agenda v0.9”

Charlie Steiner · Jul 8, 2019, 7:03 PM
21 points
12 comments · 4 min read · LW link

[AN #59] How arguments for AI risk have changed over time

Rohin Shah · Jul 8, 2019, 5:20 PM
43 points
4 comments · 7 min read · LW link
(mailchi.mp)

NIST: draft plan for AI standards development

ryan_b · Jul 8, 2019, 2:13 PM
16 points
1 comment · 1 min read · LW link
(www.nist.gov)

Indifference: multiple changes, multiple agents

Stuart_Armstrong · Jul 8, 2019, 1:36 PM
15 points
5 comments · 8 min read · LW link

[Question] Can I automatically cross-post to LW via RSS?

lifelonglearner · Jul 8, 2019, 5:04 AM
9 points
5 comments · 1 min read · LW link

[Question] Is the sum individual informativeness of two independent variables no more than their joint informativeness?

Ronny Fernandez · Jul 8, 2019, 2:51 AM
10 points
3 comments · 1 min read · LW link

[Question] How does the organization “EthAGI” fit into the broader AI safety landscape?

Liam Donovan · Jul 8, 2019, 12:46 AM
4 points
2 comments · 1 min read · LW link

Religion as Goodhart

Shmi · Jul 8, 2019, 12:38 AM
21 points
6 comments · 2 min read · LW link

First application round of the EAF Fund

JesseClifton · Jul 8, 2019, 12:20 AM
20 points
0 comments · 3 min read · LW link
(forum.effectivealtruism.org)

[Question] LW authors: How many clusters of norms do you (personally) want?

Raemon · Jul 7, 2019, 8:27 PM
38 points
40 comments · 2 min read · LW link

How to make a giant whiteboard for $14 (plus nails)

eukaryote · Jul 7, 2019, 7:23 PM
29 points
1 comment · 1 min read · LW link
(eukaryotewritesblog.com)

Musings on Cumulative Cultural Evolution and AI

calebo · Jul 7, 2019, 4:46 PM
19 points
5 comments · 7 min read · LW link