[Question] Which of our online writings was used to train GPT-3?

Mati_Roy · 30 Oct 2021 21:52 UTC
9 points
3 comments · 1 min read · LW link

Why the Problem of the Criterion Matters

Gordon Seidoh Worley · 30 Oct 2021 20:44 UTC
24 points
9 comments · 8 min read · LW link

Budapest Less Wrong / SSC

Timothy Underwood · 30 Oct 2021 18:27 UTC
6 points
0 comments · 1 min read · LW link

Quick general thoughts on suffering and consciousness

Rob Bensinger · 30 Oct 2021 18:05 UTC
37 points
46 comments · 21 min read · LW link

Must true AI sleep?

YimbyGeorge · 30 Oct 2021 16:47 UTC
0 points
1 comment · 1 min read · LW link

How Much is a Sweet?

jefftk · 30 Oct 2021 15:50 UTC
23 points
6 comments · 1 min read · LW link
(www.jefftk.com)

God Is Great

Mahdi Complex · 30 Oct 2021 13:03 UTC
−9 points
7 comments · 11 min read · LW link

We Live in a Post-Scarcity Society

lsusr · 30 Oct 2021 12:05 UTC
52 points
22 comments · 3 min read · LW link

Tell the Truth

lsusr · 30 Oct 2021 10:27 UTC
62 points
40 comments · 2 min read · LW link · 1 review

A Roadmap to a Post-Scarcity Economy

lorepieri · 30 Oct 2021 9:04 UTC
3 points
3 comments · 1 min read · LW link

Start with a Title

lsusr · 30 Oct 2021 8:59 UTC
14 points
4 comments · 1 min read · LW link

SSC/Lesswrong San Diego Meetup

CitizenTen · 30 Oct 2021 0:15 UTC
2 points
1 comment · 1 min read · LW link

Unlock the Door

lincolnquirk · 29 Oct 2021 23:45 UTC
13 points
5 comments · 2 min read · LW link

Naval Ravikant and Chris Dixon Didn’t Explain Any Web3 Use Cases

Liron · 29 Oct 2021 21:54 UTC
16 points
0 comments · 1 min read · LW link
(medium.com)

[TL;DR] “Training for the New Alpinism” by Steve House and Scott Johnston

lsusr · 29 Oct 2021 21:20 UTC
23 points
1 comment · 6 min read · LW link

True Stories of Algorithmic Improvement

johnswentworth · 29 Oct 2021 20:57 UTC
91 points
7 comments · 5 min read · LW link

Goodhart’s Imperius

[DEACTIVATED] Duncan Sabien · 29 Oct 2021 20:19 UTC
79 points
6 comments · 7 min read · LW link

A system of infinite ethics

Chantiel · 29 Oct 2021 19:37 UTC
2 points
60 comments · 8 min read · LW link

Stuart Russell and Melanie Mitchell on Munk Debates

Alex Flint · 29 Oct 2021 19:13 UTC
29 points
4 comments · 3 min read · LW link

A very crude deception eval is already passed

Beth Barnes · 29 Oct 2021 17:57 UTC
108 points
6 comments · 2 min read · LW link

On the Universal Distribution

Joe Carlsmith · 29 Oct 2021 17:50 UTC
35 points
4 comments · 32 min read · LW link

Google announces Pathways: new generation multitask AI Architecture

Ozyrus · 29 Oct 2021 11:55 UTC
6 points
1 comment · 1 min read · LW link
(blog.google)

I Really Don’t Understand Eliezer Yudkowsky’s Position on Consciousness

J Bostock · 29 Oct 2021 11:09 UTC
102 points
120 comments · 4 min read · LW link

Leadership

lsusr · 29 Oct 2021 7:29 UTC
30 points
4 comments · 1 min read · LW link

Truthful and honest AI

29 Oct 2021 7:28 UTC
42 points
1 comment · 13 min read · LW link

Interpretability

29 Oct 2021 7:28 UTC
60 points
13 comments · 12 min read · LW link

Techniques for enhancing human feedback

29 Oct 2021 7:27 UTC
22 points
0 comments · 2 min read · LW link

Measuring and forecasting risks

29 Oct 2021 7:27 UTC
20 points
0 comments · 12 min read · LW link

Request for proposals for projects in AI alignment that work with deep learning systems

29 Oct 2021 7:26 UTC
87 points
0 comments · 5 min read · LW link

My current thinking on money and low carb diets

Adam Zerner · 29 Oct 2021 6:50 UTC
11 points
17 comments · 10 min read · LW link

[Question] What are fiction stories related to AI alignment?

Mati_Roy · 29 Oct 2021 2:59 UTC
14 points
22 comments · 1 min read · LW link

[Question] How to generate idea/solutions to solve a problem?

warrenjordan · 29 Oct 2021 0:53 UTC
2 points
5 comments · 1 min read · LW link

Forecasting progress in language models

28 Oct 2021 20:40 UTC
62 points
6 comments · 11 min read · LW link
(www.metaculus.com)

[AN #168]: Four technical topics for which Open Phil is soliciting grant proposals

Rohin Shah · 28 Oct 2021 17:20 UTC
15 points
0 comments · 9 min read · LW link
(mailchi.mp)

Better and Worse Ways of Stating SIA

dadadarren · 28 Oct 2021 16:04 UTC
4 points
0 comments · 3 min read · LW link

Recommending Understand, a Game about Discerning the Rules

MondSemmel · 28 Oct 2021 14:53 UTC
96 points
53 comments · 4 min read · LW link

Covid 10/28: An Unexpected Victory

Zvi · 28 Oct 2021 14:50 UTC
29 points
37 comments · 9 min read · LW link
(thezvi.wordpress.com)

An Unexpected Victory: Container Stacking at the Port of Long Beach

Zvi · 28 Oct 2021 14:40 UTC
299 points
41 comments · 9 min read · LW link
(thezvi.wordpress.com)

Save the kid, ruin the suit; Acceptable utility exchange rates; Distributed utility calculations; Civic duties matter

spkoc · 28 Oct 2021 11:51 UTC
2 points
8 comments · 4 min read · LW link

Voting for people harms people

CraigMichael · 28 Oct 2021 8:29 UTC
13 points
6 comments · 2 min read · LW link

Selfishness, preference falsification, and AI alignment

jessicata · 28 Oct 2021 0:16 UTC
52 points
28 comments · 13 min read · LW link
(unstableontology.com)

Ruling Out Everything Else

[DEACTIVATED] Duncan Sabien · 27 Oct 2021 21:50 UTC
190 points
51 comments · 21 min read · LW link · 2 reviews

They don’t make ’em like they used to

jasoncrawford · 27 Oct 2021 19:44 UTC
39 points
84 comments · 2 min read · LW link
(rootsofprogress.org)

Hegel vs. GPT-3

Bezzi · 27 Oct 2021 5:55 UTC
9 points
21 comments · 2 min read · LW link

Everything Studies on Cynical Theories

DanielFilan · 27 Oct 2021 1:31 UTC
25 points
5 comments · 1 min read · LW link
(everythingstudies.com)

Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace

Henry Prowbell · 26 Oct 2021 18:54 UTC
52 points
8 comments · 9 min read · LW link

X-Risk, Anthropics, & Peter Thiel’s Investment Thesis

Jackson Wagner · 26 Oct 2021 18:50 UTC
21 points
1 comment · 19 min read · LW link

[Question] Would the world be a better place if we all agreed to form a world government next Monday?

idontwanttodie · 26 Oct 2021 18:14 UTC
−3 points
5 comments · 1 min read · LW link

Don’t Use the “God’s-Eye View” in Anthropic Problems.

dadadarren · 26 Oct 2021 13:47 UTC
6 points
1 comment · 2 min read · LW link

Impressive vs honest signaling

Adam Zerner · 26 Oct 2021 7:16 UTC
31 points
12 comments · 7 min read · LW link