LessLong Launch Party

Raemon · Aug 23, 2019, 10:18 PM
12 points
1 comment · 1 min read · LW link

[Question] Is there a simple parameter that controls human working memory capacity, which has been set tragically low?

Liron · Aug 23, 2019, 10:10 PM
17 points
8 comments · 1 min read · LW link

Optimization Provenance

Adele Lopez · Aug 23, 2019, 8:08 PM
38 points
5 comments · 5 min read · LW link

Troll Bridge

abramdemski · Aug 23, 2019, 6:36 PM
86 points
59 comments · 12 min read · LW link

Understanding understanding

mthq · Aug 23, 2019, 6:10 PM
24 points
1 comment · 2 min read · LW link

Actually updating

SaraHax · Aug 23, 2019, 5:46 PM
56 points
10 comments · 4 min read · LW link

When do utility functions constrain?

Hoagy · Aug 23, 2019, 5:19 PM
30 points
8 comments · 7 min read · LW link

Parables of Constraint and Actualization

Spencer Wyman · Aug 23, 2019, 4:56 PM
13 points
0 comments · 6 min read · LW link

Thoughts on Retrieving Knowledge from Neural Networks

Jaime Ruiz · Aug 23, 2019, 4:41 PM
11 points
2 comments · 5 min read · LW link

Algorithmic Similarity

LukasM · Aug 23, 2019, 4:39 PM
28 points
10 comments · 11 min read · LW link

Soft takeoff can still lead to decisive strategic advantage

Daniel Kokotajlo · Aug 23, 2019, 4:39 PM
122 points
47 comments · 8 min read · LW link · 4 reviews

Moscow LW meetup in “Nauchka” library

Alexander230 · Aug 23, 2019, 12:40 PM
3 points
0 comments · 1 min read · LW link

OpenGPT-2: We Replicated GPT-2 Because You Can Too

avturchin · Aug 23, 2019, 11:32 AM
18 points
0 comments · 1 min read · LW link
(medium.com)

Torture and Dust Specks and Joy—Oh my! or: Non-Archimedean Utility Functions as Pseudograded Vector Spaces

Louis_Brown · Aug 23, 2019, 11:11 AM
19 points
29 comments · 8 min read · LW link

Metalignment: Deconfusing metaethics for AI alignment.

Guillaume Corlouer · Aug 23, 2019, 10:25 AM
13 points
7 comments · 3 min read · LW link

[Question] A basic probability question

Shmi · Aug 23, 2019, 7:13 AM
11 points
3 comments · 1 min read · LW link

Towards an Intentional Research Agenda

romeostevensit · Aug 23, 2019, 5:27 AM
21 points
8 comments · 3 min read · LW link

[Question] Why are people so optimistic about superintelligence?

bipolo · Aug 23, 2019, 4:25 AM
6 points
3 comments · 1 min read · LW link

Vague Thoughts and Questions about Agent Structures

loriphos · Aug 23, 2019, 4:01 AM
9 points
3 comments · 2 min read · LW link

Formalising decision theory is hard

Lukas Finnveden · Aug 23, 2019, 3:27 AM
17 points
19 comments · 2 min read · LW link

Creating Environments to Design and Test Embedded Agents

lemonhope · Aug 23, 2019, 3:17 AM
13 points
5 comments · 8 min read · LW link

Tabooing ‘Agent’ for Prosaic Alignment

Hjalmar_Wijk · Aug 23, 2019, 2:55 AM
57 points
10 comments · 6 min read · LW link

Vaniver’s View on Factored Cognition

Vaniver · Aug 23, 2019, 2:54 AM
48 points
4 comments · 8 min read · LW link

Redefining Fast Takeoff

VojtaKovarik · Aug 23, 2019, 2:15 AM
10 points
1 comment · 1 min read · LW link

[Question] Does Agent-like Behavior Imply Agent-like Architecture?

Scott Garrabrant · Aug 23, 2019, 2:01 AM
69 points
8 comments · 1 min read · LW link

The Commitment Races problem

Daniel Kokotajlo · Aug 23, 2019, 1:58 AM
159 points
56 comments · 5 min read · LW link

Analysis of a Secret Hitler Scenario

jaek · Aug 23, 2019, 1:24 AM
16 points
6 comments · 4 min read · LW link

Thoughts from a Two Boxer

jaek · Aug 23, 2019, 12:24 AM
18 points
11 comments · 5 min read · LW link

Deconfuse Yourself about Agency

VojtaKovarik · Aug 23, 2019, 12:21 AM
15 points
9 comments · 4 min read · LW link

Logical Optimizers

Donald Hobson · Aug 22, 2019, 11:54 PM
11 points
4 comments · 3 min read · LW link

Towards a mechanistic understanding of corrigibility

evhub · Aug 22, 2019, 11:20 PM
47 points
26 comments · 4 min read · LW link

Response to Glen Weyl on Technocracy and the Rationalist Community

John_Maxwell · Aug 22, 2019, 11:14 PM
66 points
9 comments · 10 min read · LW link

[Question] Why so much variance in human intelligence?

Ben Pace · Aug 22, 2019, 10:36 PM
65 points
28 comments · 4 min read · LW link

Logical Counterfactuals and Proposition graphs, Part 1

Donald Hobson · Aug 22, 2019, 10:06 PM
20 points
0 comments · 3 min read · LW link

Time Travel, AI and Transparent Newcomb

johnswentworth · Aug 22, 2019, 10:04 PM
11 points
7 comments · 1 min read · LW link

Embedded Naive Bayes

johnswentworth · Aug 22, 2019, 9:40 PM
17 points
6 comments · 3 min read · LW link

Intentional Bucket Errors

Scott Garrabrant · Aug 22, 2019, 8:02 PM
55 points
6 comments · 3 min read · LW link

Computational Model: Causal Diagrams with Symmetry

johnswentworth · 22 Aug 2019 17:54 UTC
53 points
29 comments · 4 min read · LW link

[AN #62] Are adversarial examples caused by real but imperceptible features?

Rohin Shah · 22 Aug 2019 17:10 UTC
28 points
10 comments · 9 min read · LW link
(mailchi.mp)

Implications of Quantum Computing for Artificial Intelligence Alignment Research

22 Aug 2019 10:33 UTC
24 points
3 comments · 13 min read · LW link

Body Alignment & Balance. Our Midline Anatomy & the Median Plane.

leggi · 22 Aug 2019 10:24 UTC
15 points
6 comments · 4 min read · LW link

[Question] Simulation Argument: Why aren’t ancestor simulations outnumbered by transhumans?

maximkazhenkov · 22 Aug 2019 9:07 UTC
9 points
11 comments · 1 min read · LW link

Markets are Universal for Logical Induction

johnswentworth · 22 Aug 2019 6:44 UTC
77 points
2 comments · 5 min read · LW link

Announcement: Writing Day Today (Thursday)

Ben Pace · 22 Aug 2019 4:48 UTC
29 points
5 comments · 1 min read · LW link

Western Massachusetts SSC meetup #15

a_lieb · 22 Aug 2019 0:53 UTC
1 point
0 comments · 1 min read · LW link

Call for contributors to the Alignment Newsletter

Rohin Shah · 21 Aug 2019 18:21 UTC
39 points
0 comments · 4 min read · LW link

Two senses of “optimizer”

Joar Skalse · 21 Aug 2019 16:02 UTC
35 points
41 comments · 3 min read · LW link

Paradoxical Advice Thread

Hazard · 21 Aug 2019 14:50 UTC
13 points
10 comments · 1 min read · LW link

Three Levels of Motivation

DragonGod · 21 Aug 2019 9:24 UTC
15 points
1 comment · 7 min read · LW link

Odds are not easier

MrMind · 21 Aug 2019 8:34 UTC
9 points
6 comments · 1 min read · LW link