Enemies vs Malefactors

So8res · 28 Feb 2023 23:38 UTC
203 points
60 comments · 1 min read · LW link

Scarce Channels and Abstraction Coupling

johnswentworth · 28 Feb 2023 23:26 UTC
40 points
11 comments · 6 min read · LW link

On “prepping” personal households for AI doom scenarios

coryfklein · 28 Feb 2023 22:17 UTC
0 points
0 comments · 2 min read · LW link

Power-seeking can be probable and predictive for trained agents

28 Feb 2023 21:10 UTC
56 points
22 comments · 9 min read · LW link
(arxiv.org)

Can AI experience nihilism?

chenbo · 28 Feb 2023 18:58 UTC
−14 points
1 comment · 2 min read · LW link

Help kill “teach a man to fish”

Tyler G Hall · 28 Feb 2023 18:53 UTC
24 points
3 comments · 1 min read · LW link

The burden of knowing

arisAlexis · 28 Feb 2023 18:40 UTC
5 points
0 comments · 2 min read · LW link

Interpreting Embedding Spaces by Conceptualization

Adi Simhi · 28 Feb 2023 18:38 UTC
3 points
0 comments · 1 min read · LW link
(arxiv.org)

Quick note: blackout curtains/luminators might be *causing* your circadian problems

Adhiraj · 28 Feb 2023 18:38 UTC
2 points
0 comments · 1 min read · LW link

A mostly critical review of infra-Bayesianism

matolcsid · 28 Feb 2023 18:37 UTC
104 points
9 comments · 29 min read · LW link

Performance guarantees in classical learning theory and infra-Bayesianism

matolcsid · 28 Feb 2023 18:37 UTC
9 points
4 comments · 31 min read · LW link

[Question] Ethical and incentive-compatible way to share finances with partner when you both work?

Chad Nauseam · 28 Feb 2023 18:28 UTC
14 points
3 comments · 2 min read · LW link
(www.reddit.com)

My Experience With Loving Kindness Meditation

maia · 28 Feb 2023 18:18 UTC
47 points
8 comments · 3 min read · LW link
(particularvirtue.blogspot.com)

What does Bing Chat tell us about AI risk?

HoldenKarnofsky · 28 Feb 2023 17:40 UTC
80 points
21 comments · 2 min read · LW link
(www.cold-takes.com)

Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it’s going

Yuli_Ban · 28 Feb 2023 14:03 UTC
26 points
4 comments · 15 min read · LW link

What is the fertility rate?

Thomas Sepulchre · 28 Feb 2023 12:55 UTC
30 points
1 comment · 5 min read · LW link

Heuristics on bias to action versus status quo?

Farkas · 28 Feb 2023 12:45 UTC
4 points
0 comments · 2 min read · LW link

Evil autocomplete: Existential Risk and Next-Token Predictors

Yitz · 28 Feb 2023 8:47 UTC
9 points
3 comments · 5 min read · LW link

$20 Million in NSF Grants for Safety Research

Dan H · 28 Feb 2023 4:44 UTC
165 points
12 comments · 1 min read · LW link

Beneath My Epistemic Dignity

David Udell · 28 Feb 2023 4:02 UTC
6 points
3 comments · 2 min read · LW link

Transcript: Yudkowsky on Bankless follow-up Q&A

vonk · 28 Feb 2023 3:46 UTC
54 points
40 comments · 22 min read · LW link

Transcript: Testing ChatGPT’s Performance in Engineering

alxgoldstn · 28 Feb 2023 2:16 UTC
17 points
3 comments · 7 min read · LW link

Conversationism

scottviteri · 28 Feb 2023 0:09 UTC
50 points
1 comment · 12 min read · LW link

[S] D&D.Sci: All the D8a. Allllllll of it. Eval­u­a­tion and Ruleset

aphyer27 Feb 2023 23:15 UTC
22 points
7 comments5 min readLW link

The Birth and Death of Sydney — The Bayesian Conspiracy Podcast

moridinamael · 27 Feb 2023 22:36 UTC
23 points
0 comments · 1 min read · LW link
(www.thebayesianconspiracy.com)

A case for capabilities work on AI as net positive

Noosphere89 · 27 Feb 2023 21:12 UTC
10 points
37 comments · 1 min read · LW link

Beginning to feel like a conspiracy theorist

Joulebit · 27 Feb 2023 20:05 UTC
12 points
25 comments · 1 min read · LW link

Some thoughts pointing to slower AI take-off

Bastiaan · 27 Feb 2023 19:53 UTC
8 points
2 comments · 4 min read · LW link

Prediction Thread: Make Predictions About How Different Factors Affect AGI X-Risk.

MrThink · 27 Feb 2023 19:15 UTC
15 points
8 comments · 1 min read · LW link

Counting-down vs. counting-up coherence

TsviBT · 27 Feb 2023 14:59 UTC
26 points
4 comments · 13 min read · LW link

Eliezer is still ridiculously optimistic about AI risk

johnlawrenceaspden · 27 Feb 2023 14:21 UTC
−9 points
34 comments · 2 min read · LW link

Milk EA, Casu Marzu EA

jefftk · 27 Feb 2023 14:00 UTC
18 points
0 comments · 2 min read · LW link
(www.jefftk.com)

Normie response to Normie AI Safety Skepticism

Giulio · 27 Feb 2023 13:54 UTC
10 points
1 comment · 2 min read · LW link
(www.giuliostarace.com)

Fertility Rate Roundup #1

Zvi · 27 Feb 2023 13:30 UTC
51 points
19 comments · 11 min read · LW link
(thezvi.wordpress.com)

Something Unfathomable: Unaligned Humanity and how we’re racing against death with death

Yuli_Ban · 27 Feb 2023 11:37 UTC
13 points
14 comments · 19 min read · LW link

The idea of an “aligned superintelligence” seems misguided

ssadler · 27 Feb 2023 11:19 UTC
6 points
7 comments · 3 min read · LW link
(ssadler.substack.com)

EA & LW Fo­rum Weekly Sum­mary (20th − 26th Feb 2023)

Zoe Williams27 Feb 2023 3:46 UTC
4 points
0 comments1 min readLW link

[Simulators seminar sequence] #2 Semiotic physics—revamped

27 Feb 2023 0:25 UTC
23 points
23 comments · 13 min read · LW link

A Somewhat Functional Definition of Philosophy

Richard Henage · 27 Feb 2023 0:25 UTC
1 point
0 comments · 1 min read · LW link

Respect Chesterton-Schelling Fences

shminux · 27 Feb 2023 0:09 UTC
58 points
17 comments · 1 min read · LW link

Curiosity as a Solution to AGI Alignment

Harsha G. · 26 Feb 2023 23:36 UTC
7 points
7 comments · 3 min read · LW link

Learning How to Learn (And 20+ Studies)

maxa · 26 Feb 2023 22:46 UTC
58 points
12 comments · 6 min read · LW link
(max2c.com)

Bayesian Scenario: Snipers & Soldiers

abstractapplic · 26 Feb 2023 21:48 UTC
23 points
8 comments · 1 min read · LW link
(h-b-p.github.io)

NYT: Lab Leak Most Likely Caused Pandemic, Energy Dept. Says

trevor · 26 Feb 2023 21:21 UTC
17 points
9 comments · 4 min read · LW link
(www.nytimes.com)

[Link Post] Cyber Digital Authoritarianism (National Intelligence Council Report)

Phosphorous · 26 Feb 2023 20:51 UTC
12 points
2 comments · 1 min read · LW link
(www.dni.gov)

Reflections on Zen and the Art of Motorcycle Maintenance

LoganStrohl · 26 Feb 2023 20:46 UTC
33 points
3 comments · 23 min read · LW link

Taboo “human-level intelligence”

Sherrinford · 26 Feb 2023 20:42 UTC
12 points
7 comments · 1 min read · LW link

[Link] Petition on brain preservation: Allow global access to high-quality brain preservation as an option rapidly after death

Mati_Roy · 26 Feb 2023 15:56 UTC
29 points
2 comments · 1 min read · LW link
(www.change.org)

Some thoughts on the cults LW had

Noosphere89 · 26 Feb 2023 15:46 UTC
−7 points
28 comments · 1 min read · LW link

A library for safety research in conditioning on RLHF tasks

James Chua · 26 Feb 2023 14:50 UTC
10 points
2 comments · 1 min read · LW link