Enemies vs Malefactors

So8res, Feb 28, 2023, 11:38 PM
226 points
69 comments, LW link, 4 reviews

Scarce Channels and Abstraction Coupling

johnswentworth, Feb 28, 2023, 11:26 PM
41 points
11 comments, 6 min read, LW link

On “prepping” personal households for AI doom scenarios

coryfklein, Feb 28, 2023, 10:17 PM
0 points
0 comments, 2 min read, LW link

Power-seeking can be probable and predictive for trained agents

Feb 28, 2023, 9:10 PM
56 points
22 comments, 9 min read, LW link
(arxiv.org)

Can AI experience nihilism?

chenbo, Feb 28, 2023, 6:58 PM
−14 points
1 comment, 2 min read, LW link

Help kill “teach a man to fish”

Tyler G Hall, Feb 28, 2023, 6:53 PM
24 points
3 comments, 1 min read, LW link

The burden of knowing

arisAlexis, Feb 28, 2023, 6:40 PM
5 points
0 comments, 2 min read, LW link

Interpreting Embedding Spaces by Conceptualization

Adi Simhi, Feb 28, 2023, 6:38 PM
3 points
0 comments, 1 min read, LW link
(arxiv.org)

A mostly critical review of infra-Bayesianism

David Matolcsi, Feb 28, 2023, 6:37 PM
108 points
9 comments, 29 min read, LW link

Performance guarantees in classical learning theory and infra-Bayesianism

David Matolcsi, Feb 28, 2023, 6:37 PM
9 points
4 comments, 31 min read, LW link

[Question] Ethical and incentive-compatible way to share finances with partner when you both work?

Chad Nauseam, Feb 28, 2023, 6:28 PM
14 points
3 comments, 2 min read, LW link
(www.reddit.com)

My Experience With Loving Kindness Meditation

maia, Feb 28, 2023, 6:18 PM
47 points
8 comments, 3 min read, LW link
(particularvirtue.blogspot.com)

What does Bing Chat tell us about AI risk?

HoldenKarnofsky, Feb 28, 2023, 5:40 PM
80 points
21 comments, 2 min read, LW link
(www.cold-takes.com)

Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it’s going

Yuli_Ban, Feb 28, 2023, 2:03 PM
26 points
4 comments, 15 min read, LW link

What is the fertility rate?

Thomas Sepulchre, Feb 28, 2023, 12:55 PM
31 points
1 comment, 5 min read, LW link

Evil autocomplete: Existential Risk and Next-Token Predictors

Yitz, Feb 28, 2023, 8:47 AM
9 points
3 comments, 5 min read, LW link

$20 Million in NSF Grants for Safety Research

Dan H, Feb 28, 2023, 4:44 AM
165 points
12 comments, 1 min read, LW link

Beneath My Epistemic Dignity

David Udell, Feb 28, 2023, 4:02 AM
6 points
3 comments, 2 min read, LW link

Transcript: Yudkowsky on Bankless follow-up Q&A

vonk, Feb 28, 2023, 3:46 AM
54 points
40 comments, 22 min read, LW link

Transcript: Testing ChatGPT’s Performance in Engineering

alxgoldstn, Feb 28, 2023, 2:16 AM
17 points
3 comments, 7 min read, LW link

Conversationism

scottviteri, Feb 28, 2023, 12:09 AM
50 points
1 comment, 12 min read, LW link

[S] D&D.Sci: All the D8a. Allllllll of it. Evaluation and Ruleset

aphyer, Feb 27, 2023, 11:15 PM
23 points
7 comments, 5 min read, LW link

The Birth and Death of Sydney — The Bayesian Conspiracy Podcast

moridinamael, Feb 27, 2023, 10:36 PM
23 points
0 comments, 1 min read, LW link
(www.thebayesianconspiracy.com)

A case for capabilities work on AI as net positive

Noosphere89, Feb 27, 2023, 9:12 PM
10 points
37 comments, 1 min read, LW link

Beginning to feel like a conspiracy theorist

Joulebit, Feb 27, 2023, 8:05 PM
12 points
25 comments, 1 min read, LW link

Some thoughts pointing to slower AI take-off

Bastiaan, Feb 27, 2023, 7:53 PM
8 points
2 comments, 4 min read, LW link

Prediction Thread: Make Predictions About How Different Factors Affect AGI X-Risk.

MrThink, Feb 27, 2023, 7:15 PM
15 points
8 comments, 1 min read, LW link

Counting-down vs. counting-up coherence

TsviBT, Feb 27, 2023, 2:59 PM
29 points
4 comments, 13 min read, LW link

Eliezer is still ridiculously optimistic about AI risk

johnlawrenceaspden, Feb 27, 2023, 2:21 PM
−10 points
34 comments, 2 min read, LW link

Milk EA, Casu Marzu EA

jefftk, Feb 27, 2023, 2:00 PM
18 points
0 comments, 2 min read, LW link
(www.jefftk.com)

Normie response to Normie AI Safety Skepticism

Giulio, Feb 27, 2023, 1:54 PM
10 points
1 comment, 2 min read, LW link
(www.giuliostarace.com)

Fertility Rate Roundup #1

Zvi, Feb 27, 2023, 1:30 PM
52 points
20 comments, 11 min read, LW link
(thezvi.wordpress.com)

Something Unfathomable: Unaligned Humanity and how we’re racing against death with death

Yuli_Ban, Feb 27, 2023, 11:37 AM
13 points
14 comments, 19 min read, LW link

The idea of an “aligned superintelligence” seems misguided

ssadler, Feb 27, 2023, 11:19 AM
6 points
7 comments, 3 min read, LW link
(ssadler.substack.com)

EA & LW Forum Weekly Summary (20th - 26th Feb 2023)

Zoe Williams, Feb 27, 2023, 3:46 AM
4 points
0 comments, LW link

[Simulators seminar sequence] #2 Semiotic physics—revamped

Feb 27, 2023, 12:25 AM
24 points
23 comments, 13 min read, LW link

A Somewhat Functional Definition of Philosophy

Richard Henage, Feb 27, 2023, 12:25 AM
1 point
0 comments, 1 min read, LW link

Respect Chesterton-Schelling Fences

Shmi, Feb 27, 2023, 12:09 AM
58 points
17 comments, 1 min read, LW link

Curiosity as a Solution to AGI Alignment

Harsha G., Feb 26, 2023, 11:36 PM
7 points
7 comments, 3 min read, LW link

Learning How to Learn (And 20+ Studies)

maxa, Feb 26, 2023, 10:46 PM
63 points
12 comments, 6 min read, LW link
(max2c.com)

Bayesian Scenario: Snipers & Soldiers

abstractapplic, Feb 26, 2023, 9:48 PM
23 points
8 comments, 1 min read, LW link
(h-b-p.github.io)

NYT: Lab Leak Most Likely Caused Pandemic, Energy Dept. Says

trevor, Feb 26, 2023, 9:21 PM
17 points
9 comments, 4 min read, LW link
(www.nytimes.com)

[Link Post] Cyber Digital Authoritarianism (National Intelligence Council Report)

Phosphorous, Feb 26, 2023, 8:51 PM
12 points
2 comments, 1 min read, LW link
(www.dni.gov)

Reflections on Zen and the Art of Motorcycle Maintenance

LoganStrohl, Feb 26, 2023, 8:46 PM
33 points
3 comments, 23 min read, LW link

Taboo “human-level intelligence”

Sherrinford, Feb 26, 2023, 8:42 PM
12 points
7 comments, 1 min read, LW link

[Link] Petition on brain preservation: Allow global access to high-quality brain preservation as an option rapidly after death

Mati_Roy, Feb 26, 2023, 3:56 PM
29 points
2 comments, 1 min read, LW link
(www.change.org)

Some thoughts on the cults LW had

Noosphere89, Feb 26, 2023, 3:46 PM
−4 points
28 comments, 1 min read, LW link

A library for safety research in conditioning on RLHF tasks

James Chua, Feb 26, 2023, 2:50 PM
10 points
2 comments, 1 min read, LW link

The Preference Fulfillment Hypothesis

Kaj_Sotala, Feb 26, 2023, 10:55 AM
66 points
62 comments, 11 min read, LW link

All of my grandparents were prodigies, I am extremely bored at Oxford University. Please let me intern/work for you!

politicalpersuasion, Feb 26, 2023, 7:50 AM
−17 points
7 comments, 3 min read, LW link