
Public Discourse

Last edit: 15 Jul 2020 4:27 UTC by jacobjacob

Public discourse refers to our ability to have conversations in large groups, both as a society and in smaller communities, as well as conversations between a few well-defined participants (such as presidential debates) that take place publicly.

This tag is for understanding the nature of public discourse (How good is it? What makes it succeed or fail?) and for ways of improving it using technology or novel institutions.

See also: Conversation (topic)

[Question] What’s Your Best AI Safety “Quip”?
False Name · 26 Mar 2024 15:35 UTC · −9 points · 0 comments · 1 min read · LW link

Ten Modes of Culture War Discourse
jchan · 31 Jan 2024 13:58 UTC · 54 points · 15 comments · 15 min read · LW link

Analogy Bank for AI Safety
Rocket · 29 Jan 2024 2:35 UTC · 23 points · 0 comments · 7 min read · LW link

Why Improving Dialogue Feels So Hard
matto · 20 Jan 2024 21:26 UTC · 21 points · 8 comments · 3 min read · LW link

On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche
Zack_M_Davis · 9 Jan 2024 23:12 UTC · 39 points · 31 comments · 4 min read · LW link

[Question] Terminology: <something>-ware for ML?
Oliver Sourbut · 3 Jan 2024 11:42 UTC · 17 points · 27 comments · 1 min read · LW link

Stop talking about p(doom)
Isaac King · 1 Jan 2024 10:57 UTC · 37 points · 22 comments · 3 min read · LW link

Defense Against The Dark Arts: An Introduction
Lyrongolem · 25 Dec 2023 6:36 UTC · 25 points · 36 comments · 20 min read · LW link

The Dark Arts
19 Dec 2023 4:41 UTC · 131 points · 49 comments · 9 min read · LW link

“Model UN Solutions”
Arjun Panickssery · 8 Dec 2023 23:06 UTC · 36 points · 5 comments · 1 min read · LW link (open.substack.com)

Proposal for improving the global online discourse through personalised comment ordering on all websites
Roman Leventov · 6 Dec 2023 18:51 UTC · 35 points · 21 comments · 6 min read · LW link

Cis fragility
[deactivated] · 30 Nov 2023 4:14 UTC · −51 points · 9 comments · 3 min read · LW link

Sapience, understanding, and “AGI”
Seth Herd · 24 Nov 2023 15:13 UTC · 15 points · 3 comments · 6 min read · LW link

Propaganda or Science: A Look at Open Source AI and Bioterrorism Risk
1a3orn · 2 Nov 2023 18:20 UTC · 191 points · 79 comments · 23 min read · LW link

[Question] Snapshot of narratives and frames against regulating AI
Jan_Kulveit · 1 Nov 2023 16:30 UTC · 36 points · 19 comments · 3 min read · LW link

Careless talk on US-China AI competition? (and criticism of CAIS coverage)
Oliver Sourbut · 20 Sep 2023 12:46 UTC · 3 points · 0 comments · 10 min read · LW link (www.oliversourbut.net)

Actually, “personal attacks after object-level arguments” is a pretty good rule of epistemic conduct
Max H · 17 Sep 2023 20:25 UTC · 36 points · 15 comments · 7 min read · LW link

Book Review: Consciousness Explained (as the Great Catalyst)
Rafael Harth · 17 Sep 2023 15:30 UTC · 16 points · 12 comments · 22 min read · LW link

Contra Yudkowsky on Epistemic Conduct for Author Criticism
Zack_M_Davis · 13 Sep 2023 15:33 UTC · 69 points · 38 comments · 7 min read · LW link

Assume Bad Faith
Zack_M_Davis · 25 Aug 2023 17:36 UTC · 112 points · 52 comments · 7 min read · LW link

Memetic Judo #3: The Intelligence of Stochastic Parrots v.2
Max TK · 20 Aug 2023 15:18 UTC · 8 points · 33 comments · 6 min read · LW link

When discussing AI doom barriers propose specific plausible scenarios
anithite · 18 Aug 2023 4:06 UTC · 5 points · 0 comments · 3 min read · LW link

Memetic Judo #1: On Doomsday Prophets v.3
Max TK · 18 Aug 2023 0:14 UTC · 25 points · 17 comments · 3 min read · LW link

Memetic Judo #2: Incorporal Switches and Levers Compendium
Max TK · 14 Aug 2023 16:53 UTC · 19 points · 6 comments · 17 min read · LW link

A response to the Richards et al.’s “The Illusion of AI’s Existential Risk”
Harrison Fell · 26 Jul 2023 17:34 UTC · 1 point · 0 comments · 10 min read · LW link

Consciousness as intrinsically valued internal experience
Andrew_Critch · 10 Jul 2023 8:09 UTC · 186 points · 46 comments · 11 min read · LW link

Why it’s so hard to talk about Consciousness
Rafael Harth · 2 Jul 2023 15:56 UTC · 76 points · 151 comments · 9 min read · LW link

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?
Karl von Wendt · 25 Jun 2023 16:59 UTC · 107 points · 52 comments · 7 min read · LW link

Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.
Soroush Pour · 1 Jun 2023 13:38 UTC · 17 points · 0 comments · 5 min read · LW link (www.soroushjp.com)

[Question] What projects and efforts are there to promote AI safety research?
Christopher King · 24 May 2023 0:33 UTC · 4 points · 0 comments · 1 min read · LW link

Let’s build a fire alarm for AGI
chaosmage · 15 May 2023 9:16 UTC · −2 points · 0 comments · 2 min read · LW link

[Question] What new technology, for what institutions?
bhauth · 14 May 2023 17:33 UTC · 29 points · 6 comments · 3 min read · LW link

PCAST Working Group on Generative AI Invites Public Input
Christopher King · 13 May 2023 22:49 UTC · 7 points · 0 comments · 1 min read · LW link (terrytao.wordpress.com)

[SEE NEW EDITS] No, *You* Need to Write Clearer
NicholasKross · 29 Apr 2023 5:04 UTC · 254 points · 64 comments · 5 min read · LW link (www.thinkingmuchbetter.com)

Talking publicly about AI risk
Jan_Kulveit · 21 Apr 2023 11:28 UTC · 173 points · 8 comments · 6 min read · LW link

Request to AGI organizations: Share your views on pausing AI progress
11 Apr 2023 17:30 UTC · 141 points · 11 comments · 1 min read · LW link

A decade of lurking, a month of posting
Max H · 9 Apr 2023 0:21 UTC · 70 points · 4 comments · 5 min read · LW link

Guidelines for productive discussions
ambigram · 8 Apr 2023 6:00 UTC · 37 points · 0 comments · 5 min read · LW link

AI scares and changing public beliefs
Seth Herd · 6 Apr 2023 18:51 UTC · 45 points · 21 comments · 6 min read · LW link

Missing forecasting tools: from catalogs to a new kind of prediction market
MichaelLatowicki · 29 Mar 2023 9:55 UTC · 14 points · 0 comments · 5 min read · LW link

The Overton Window widens: Examples of AI risk in the media
Akash · 23 Mar 2023 17:10 UTC · 107 points · 24 comments · 6 min read · LW link

Capabilities Denial: The Danger of Underestimating AI
Christopher King · 21 Mar 2023 1:24 UTC · 6 points · 5 comments · 3 min read · LW link

“Publish or Perish” (a quick note on why you should try to make your work legible to existing academic communities)
David Scott Krueger (formerly: capybaralet) · 18 Mar 2023 19:01 UTC · 98 points · 48 comments · 1 min read · LW link

“Rationalist Discourse” Is Like “Physicist Motors”
Zack_M_Davis · 26 Feb 2023 5:58 UTC · 131 points · 152 comments · 9 min read · LW link

Spreading messages to help with the most important century
HoldenKarnofsky · 25 Jan 2023 18:20 UTC · 75 points · 4 comments · 18 min read · LW link (www.cold-takes.com)

Public-facing Censorship Is Safety Theater, Causing Reputational Damage
Yitz · 23 Sep 2022 5:08 UTC · 149 points · 42 comments · 6 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’
ukc10014 · 14 Sep 2022 20:37 UTC · 8 points · 0 comments · 16 min read · LW link

90% of anything should be bad (& the precision-recall tradeoff)
cartografie · 8 Sep 2022 1:20 UTC · 33 points · 22 comments · 6 min read · LW link

Pitching an Alignment Softball
mu_(negative) · 7 Jun 2022 4:10 UTC · 47 points · 13 comments · 10 min read · LW link

Proposal: Twitter dislike button
KatjaGrace · 17 May 2022 19:40 UTC · 13 points · 7 comments · 1 min read · LW link (worldspiritsockpuppet.com)

[Question] Convince me that humanity *isn’t* doomed by AGI
Yitz · 15 Apr 2022 17:26 UTC · 61 points · 49 comments · 1 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe
Yitz · 10 Apr 2022 21:02 UTC · 92 points · 141 comments · 2 min read · LW link

Even more curated conversations with brilliant rationalists
spencerg · 21 Mar 2022 23:49 UTC · 59 points · 0 comments · 15 min read · LW link

Request for comment on a novel reference work of understanding
ender · 12 Aug 2021 0:06 UTC · 3 points · 0 comments · 9 min read · LW link

For Better Commenting, Take an Oath of Reply.
DirectedEvolution · 31 May 2021 6:01 UTC · 42 points · 17 comments · 2 min read · LW link

Curated conversations with brilliant rationalists
spencerg · 28 May 2021 14:23 UTC · 153 points · 18 comments · 6 min read · LW link

Arguing from a Gap of Perspective
ideenrun · 1 May 2021 22:42 UTC · 6 points · 1 comment · 19 min read · LW link

Rediscovery, the Mind’s Curare
Erich_Grunewald · 10 Apr 2021 7:42 UTC · 3 points · 1 comment · 3 min read · LW link (www.erichgrunewald.com)

Idea selection
krbouchard · 1 Mar 2021 14:07 UTC · 1 point · 0 comments · 2 min read · LW link

How Strong is Our Connection to Truth?
anorangicc · 17 Feb 2021 0:10 UTC · 1 point · 5 comments · 3 min read · LW link

[Question] Which headlines and narratives are mostly clickbait?
Pontor · 25 Oct 2020 1:19 UTC · 5 points · 5 comments · 2 min read · LW link

Doing discourse better: Stuff I wish I knew
dynomight · 29 Sep 2020 14:34 UTC · 27 points · 11 comments · 1 min read · LW link (dyno-might.github.io)

Updating My LW Commenting Policy
curi · 18 Aug 2020 16:48 UTC · 7 points · 1 comment · 4 min read · LW link

Rationally Ending Discussions
curi · 12 Aug 2020 20:34 UTC · −7 points · 27 comments · 14 min read · LW link

A reply to Agnes Callard
Vaniver · 28 Jun 2020 3:25 UTC · 91 points · 36 comments · 3 min read · LW link

New York Times, Please Do Not Threaten The Safety of Scott Alexander By Revealing His True Name
Zvi · 23 Jun 2020 12:20 UTC · 153 points · 2 comments · 2 min read · LW link (thezvi.wordpress.com)

Creating better infrastructure for controversial discourse
Rudi C · 16 Jun 2020 15:17 UTC · 66 points · 11 comments · 2 min read · LW link

[Question] Have epistemic conditions always been this bad?
Wei Dai · 25 Jan 2020 4:42 UTC · 205 points · 106 comments · 4 min read · LW link · 1 review

[Question] Has there been a “memetic collapse”?
Eli Tyre · 28 Dec 2019 5:36 UTC · 32 points · 7 comments · 1 min read · LW link

Comment, Don’t Message
jefftk · 18 Nov 2019 16:00 UTC · 30 points · 5 comments · 2 min read · LW link (www.jefftk.com)

Politics is work and work needs breaks
KatjaGrace · 4 Nov 2019 17:10 UTC · 19 points · 0 comments · 2 min read · LW link (meteuphoric.com)

Speaking up publicly is heroic
jefftk · 2 Nov 2019 12:00 UTC · 43 points · 2 comments · 1 min read · LW link (www.jefftk.com)

Category Qualifications (w/ exercises)
Logan Riggs · 15 Sep 2019 16:28 UTC · 23 points · 22 comments · 5 min read · LW link

Partial summary of debate with Benquo and Jessicata [pt 1]
Raemon · 14 Aug 2019 20:02 UTC · 87 points · 63 comments · 22 min read · LW link · 3 reviews

Status 451 on Diagnosis: Russell Aphasia
Zack_M_Davis · 6 Aug 2019 4:43 UTC · 48 points · 1 comment · 1 min read · LW link (status451.com)

Drive-By Low-Effort Criticism
lionhearted (Sebastian Marshall) · 31 Jul 2019 11:51 UTC · 32 points · 61 comments · 2 min read · LW link

Appeal to Consequence, Value Tensions, And Robust Organizations
Matt Goldenberg · 19 Jul 2019 22:09 UTC · 45 points · 90 comments · 5 min read · LW link

Dialogue on Appeals to Consequences
jessicata · 18 Jul 2019 2:34 UTC · 33 points · 82 comments · 7 min read · LW link (unstableontology.com)

Schism Begets Schism
Davis_Kingsley · 10 Jul 2019 3:09 UTC · 24 points · 25 comments · 3 min read · LW link

Disincentives for participating on LW/AF
Wei Dai · 10 May 2019 19:46 UTC · 86 points · 42 comments · 2 min read · LW link

The Forces of Blandness and the Disagreeable Majority
sarahconstantin · 28 Apr 2019 19:44 UTC · 132 points · 27 comments · 3 min read · LW link · 2 reviews (srconstantin.wordpress.com)

[Question] What’s the best approach to curating a newsfeed to maximize useful contrasting POV?
bgold · 26 Apr 2019 17:29 UTC · 25 points · 3 comments · 1 min read · LW link

Has “politics is the mind-killer” been a mind-killer?
SonnieBailey · 17 Mar 2019 3:05 UTC · 31 points · 26 comments · 3 min read · LW link

You Get About Five Words
Raemon · 12 Mar 2019 20:30 UTC · 197 points · 76 comments · 1 min read · LW link · 6 reviews

The Case for a Bigger Audience
John_Maxwell · 9 Feb 2019 7:22 UTC · 68 points · 58 comments · 2 min read · LW link

Littlewood’s Law and the Global Media
gwern · 12 Jan 2019 17:46 UTC · 37 points · 3 comments · 1 min read · LW link (www.gwern.net)

[Question] Why is so much discussion happening in private Google Docs?
Wei Dai · 12 Jan 2019 2:19 UTC · 100 points · 22 comments · 1 min read · LW link

One Website To Rule Them All?
anna_macdonald · 11 Jan 2019 19:14 UTC · 30 points · 23 comments · 10 min read · LW link

[Question] Why Don’t Creators Switch to their Own Platforms?
Jacob Falkovich · 23 Dec 2018 4:46 UTC · 42 points · 17 comments · 1 min read · LW link

LW Update 2018-12-06 – Table of Contents and Q&A
Raemon · 8 Dec 2018 0:47 UTC · 55 points · 28 comments · 4 min read · LW link

Clickbait might not be destroying our general Intelligence
Donald Hobson · 19 Nov 2018 0:13 UTC · 25 points · 13 comments · 2 min read · LW link

Is Clickbait Destroying Our General Intelligence?
Eliezer Yudkowsky · 16 Nov 2018 23:06 UTC · 189 points · 61 comments · 5 min read · LW link · 2 reviews

“Now here’s why I’m punching you...”
philh · 16 Oct 2018 21:30 UTC · 28 points · 24 comments · 4 min read · LW link (reasonableapproximation.net)

What the Haters Hate
Jacob Falkovich · 1 Oct 2018 20:29 UTC · 29 points · 36 comments · 8 min read · LW link

On memetic weapons
ioannes · 1 Sep 2018 3:25 UTC · 42 points · 28 comments · 5 min read · LW link

Isolating Content can Create Affordances
Davis_Kingsley · 23 Aug 2018 8:28 UTC · 49 points · 12 comments · 1 min read · LW link

Trust Me I’m Lying: A Summary and Review
quanticle · 13 Aug 2018 2:55 UTC · 100 points · 11 comments · 7 min read · LW link (quanticle.net)

Local Validity as a Key to Sanity and Civilization
Eliezer Yudkowsky · 7 Apr 2018 4:25 UTC · 193 points · 67 comments · 13 min read · LW link · 5 reviews

Strengthening the foundations under the Overton Window without moving it
KatjaGrace · 14 Mar 2018 2:20 UTC · 12 points · 7 comments · 3 min read · LW link (meteuphoric.wordpress.com)

Models of moderation
habryka · 2 Feb 2018 23:29 UTC · 30 points · 33 comments · 7 min read · LW link

Arbital postmortem
alexei · 30 Jan 2018 13:48 UTC · 227 points · 110 comments · 19 min read · LW link

Niceness Stealth-Bombing
things_which_are_not_on_fire · 8 Jan 2018 22:16 UTC · 23 points · 4 comments · 3 min read · LW link

In the presence of disinformation, collective epistemology requires local modeling
jessicata · 15 Dec 2017 9:54 UTC · 77 points · 39 comments · 5 min read · LW link

Free Speech as Legal Right vs. Ethical Value
ozymandias · 28 Nov 2017 16:49 UTC · 14 points · 8 comments · 2 min read · LW link

Modesty and diversity: a concrete suggestion
[deleted] · 8 Nov 2017 20:42 UTC · 30 points · 6 comments · 1 min read · LW link

Defense against discourse
Benquo · 17 Oct 2017 9:10 UTC · 38 points · 15 comments · 6 min read · LW link (benjaminrosshoffman.com)

There’s No Fire Alarm for Artificial General Intelligence
Eliezer Yudkowsky · 13 Oct 2017 21:38 UTC · 142 points · 72 comments · 25 min read · LW link

Avoiding Selection Bias
the gears to ascension · 4 Oct 2017 19:10 UTC · 20 points · 17 comments · 1 min read · LW link

Moderator’s Dilemma: The Risks of Partial Intervention
Chris_Leong · 29 Sep 2017 1:47 UTC · 33 points · 17 comments · 4 min read · LW link

Wikipedia pageviews: still in decline
VipulNaik · 26 Sep 2017 23:03 UTC · 24 points · 19 comments · 3 min read · LW link

Combining Prediction Technologies to Help Moderate Discussions
Wei Dai · 8 Dec 2016 0:19 UTC · 21 points · 15 comments · 1 min read · LW link

Crowdsourcing moderation without sacrificing quality
paulfchristiano · 2 Dec 2016 21:47 UTC · 18 points · 26 comments · 1 min read · LW link (sideways-view.com)

On the importance of Less Wrong, or another single conversational locus
AnnaSalamon · 27 Nov 2016 17:13 UTC · 173 points · 365 comments · 4 min read · LW link

A Return to Discussion
sarahconstantin · 27 Nov 2016 13:59 UTC · 58 points · 32 comments · 6 min read · LW link

Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion
emr · 26 Mar 2015 0:14 UTC · 63 points · 71 comments · 1 min read · LW link

Don’t Be Afraid of Asking Personally Important Questions of Less Wrong
Evan_Gaensbauer · 17 Mar 2015 6:54 UTC · 78 points · 47 comments · 3 min read · LW link

Easy wins aren’t news
PhilGoetz · 19 Feb 2015 19:38 UTC · 60 points · 19 comments · 1 min read · LW link

Breaking the vicious cycle
XiXiDu · 23 Nov 2014 18:25 UTC · 68 points · 131 comments · 2 min read · LW link

Unpopular ideas attract poor advocates: Be charitable
[deleted] · 15 Sep 2014 19:30 UTC · 43 points · 61 comments · 2 min read · LW link

Politics is hard mode
Rob Bensinger · 21 Jul 2014 22:14 UTC · 57 points · 109 comments · 6 min read · LW link

Change Contexts to Improve Arguments
palladias · 8 Jul 2014 15:51 UTC · 42 points · 19 comments · 2 min read · LW link

False Friends and Tone Policing
palladias · 18 Jun 2014 18:20 UTC · 71 points · 49 comments · 3 min read · LW link

[LINK] Why I’m not on the Rationalist Masterlist
Apprentice · 6 Jan 2014 0:16 UTC · 40 points · 882 comments · 1 min read · LW link

Only You Can Prevent Your Mind From Getting Killed By Politics
ChrisHallquist · 26 Oct 2013 13:59 UTC · 61 points · 144 comments · 5 min read · LW link

Making Fun of Things is Easy
katydee · 27 Sep 2013 3:10 UTC · 47 points · 76 comments · 1 min read · LW link

The Paucity of Elites Online
JonahS · 31 May 2013 1:35 UTC · 40 points · 42 comments · 3 min read · LW link

Reasons for someone to “ignore” you
Wei Dai · 8 Oct 2012 19:50 UTC · 37 points · 57 comments · 3 min read · LW link

Taking “correlation does not imply causation” back from the internet
sixes_and_sevens · 3 Oct 2012 12:18 UTC · 62 points · 70 comments · 1 min read · LW link

In Defense of Tone Arguments
OrphanWilde · 19 Jul 2012 19:48 UTC · 32 points · 175 comments · 2 min read · LW link

Why Academic Papers Are A Terrible Discussion Forum
alyssavance · 20 Jun 2012 18:15 UTC · 45 points · 53 comments · 6 min read · LW link

When None Dare Urge Restraint, pt. 2
Jay_Schweikert · 30 May 2012 15:28 UTC · 84 points · 92 comments · 3 min read · LW link

“Politics is the mind-killer” is the mind-killer
thomblake · 26 Jan 2012 15:55 UTC · 58 points · 99 comments · 1 min read · LW link

Don’t Apply the Principle of Charity to Yourself
UnclGhost · 19 Nov 2011 19:26 UTC · 81 points · 23 comments · 2 min read · LW link

Find yourself a Worthy Opponent: a Chavruta
Raw_Power · 6 Jul 2011 10:59 UTC · 48 points · 74 comments · 3 min read · LW link

Offense versus harm minimization
Scott Alexander · 16 Apr 2011 1:06 UTC · 85 points · 429 comments · 9 min read · LW link

On Debates with Trolls
prase · 12 Apr 2011 8:46 UTC · 31 points · 247 comments · 3 min read · LW link

Defecting by Accident—A Flaw Common to Analytical People
lionhearted (Sebastian Marshall) · 1 Dec 2010 8:25 UTC · 119 points · 432 comments · 15 min read · LW link

Less Wrong Should Confront Wrongness Wherever it Appears
jimrandomh · 21 Sep 2010 1:40 UTC · 32 points · 163 comments · 3 min read · LW link

Why I’m Staying On Bloggingheads.tv
Eliezer Yudkowsky · 7 Sep 2009 20:15 UTC · 31 points · 101 comments · 2 min read · LW link

A social norm against unjustified opinions?
Kaj_Sotala · 29 May 2009 11:25 UTC · 16 points · 161 comments · 1 min read · LW link

Well-Kept Gardens Die By Pacifism
Eliezer Yudkowsky · 21 Apr 2009 2:44 UTC · 220 points · 324 comments · 5 min read · LW link

Collective Apathy and the Internet
Eliezer Yudkowsky · 14 Apr 2009 0:02 UTC · 49 points · 34 comments · 2 min read · LW link

You’re Calling *Who* A Cult Leader?
Eliezer Yudkowsky · 22 Mar 2009 6:57 UTC · 67 points · 121 comments · 5 min read · LW link

Raising the Sanity Waterline
Eliezer Yudkowsky · 12 Mar 2009 4:28 UTC · 236 points · 232 comments · 3 min read · LW link

...And Say No More Of It
Eliezer Yudkowsky · 9 Feb 2009 0:15 UTC · 41 points · 25 comments · 5 min read · LW link

Expecting Short Inferential Distances
Eliezer Yudkowsky · 22 Oct 2007 23:42 UTC · 337 points · 106 comments · 3 min read · LW link