[Question] What is the most probable AI?

Zeruel017 · 20 Jun 2022 23:26 UTC
−2 points
0 comments · 3 min read · LW link

Evaluating a Corsi-Rosenthal Filter Cube

jefftk · 20 Jun 2022 19:40 UTC
13 points
4 comments · 1 min read · LW link
(www.jefftk.com)

Survey re AIS/LTism office in NYC

RyanCarey · 20 Jun 2022 19:21 UTC
7 points
0 comments · 1 min read · LW link

Is This Thing Sentient, Y/N?

Thane Ruthenis · 20 Jun 2022 18:37 UTC
4 points
10 comments · 7 min read · LW link

Steam

abramdemski · 20 Jun 2022 17:38 UTC
153 points
13 comments · 5 min read · LW link · 1 review

Parable: The Bomb that doesn’t Explode

Lone Pine · 20 Jun 2022 16:41 UTC
14 points
5 comments · 2 min read · LW link

On corrigibility and its basin

Donald Hobson · 20 Jun 2022 16:33 UTC
18 points
3 comments · 2 min read · LW link

Announcing the DWATV Discord

Zvi · 20 Jun 2022 15:50 UTC
20 points
9 comments · 1 min read · LW link
(thezvi.wordpress.com)

Key Papers in Language Model Safety

aog · 20 Jun 2022 15:00 UTC
40 points
1 comment · 22 min read · LW link

Relationship Advice Repository

Ruby · 20 Jun 2022 14:39 UTC
110 points
36 comments · 38 min read · LW link

Adaptation Executors and the Telos Margin

Plinthist · 20 Jun 2022 13:06 UTC
2 points
8 comments · 5 min read · LW link

Are we there yet?

theflowerpot · 20 Jun 2022 11:19 UTC
2 points
2 comments · 1 min read · LW link

Causal confusion as an argument against the scaling hypothesis

20 Jun 2022 10:54 UTC
86 points
30 comments · 15 min read · LW link

An AI defense-offense symmetry thesis

Chris van Merwijk · 20 Jun 2022 10:01 UTC
10 points
9 comments · 3 min read · LW link

Let’s See You Write That Corrigibility Tag

Eliezer Yudkowsky · 19 Jun 2022 21:11 UTC
125 points
70 comments · 1 min read · LW link

Half-baked alignment idea: training to generalize

Aaron Bergman · 19 Jun 2022 20:16 UTC
10 points
2 comments · 4 min read · LW link

Where I agree and disagree with Eliezer

paulfchristiano · 19 Jun 2022 19:15 UTC
907 points
224 comments · 18 min read · LW link · 2 reviews

[Question] AI misalignment risk from GPT-like systems?

fiso64 · 19 Jun 2022 17:35 UTC
10 points
8 comments · 1 min read · LW link

[Link-post] On Deference and Yudkowsky’s AI Risk Estimates

bmg · 19 Jun 2022 17:25 UTC
29 points
8 comments · 1 min read · LW link

Hebbian Learning Is More Common Than You Think

Aleksi Liimatainen · 19 Jun 2022 15:57 UTC
8 points
2 comments · 1 min read · LW link

The Malthusian Trap: An Extremely Short Introduction

Davis Kedrosky · 19 Jun 2022 15:25 UTC
5 points
0 comments · 6 min read · LW link
(daviskedrosky.substack.com)

Parliaments without the Parties

Yair Halberstadt · 19 Jun 2022 14:06 UTC
18 points
18 comments · 2 min read · LW link

Lamda is not an LLM

Kevin · 19 Jun 2022 11:13 UTC
7 points
10 comments · 1 min read · LW link
(www.wired.com)

[Linkpost] The importance of stupidity in scientific research

Pattern · 19 Jun 2022 5:17 UTC
17 points
1 comment · 1 min read · LW link
(journals.biologists.com)

ETH is probably undervalued right now

mukashi · 19 Jun 2022 2:20 UTC
−7 points
22 comments · 1 min read · LW link

Juneberry Cake

jefftk · 19 Jun 2022 1:40 UTC
29 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Agent level parallelism

Johannes C. Mayer · 18 Jun 2022 20:56 UTC
5 points
5 comments · 1 min read · LW link

What are our outs to play to?

Hastings · 18 Jun 2022 19:32 UTC
7 points
0 comments · 2 min read · LW link

[Question] What’s the information value of government hearings?

Kenny · 18 Jun 2022 17:13 UTC
6 points
4 comments · 2 min read · LW link

The best ‘free solo’ (rock climbing) video

Kenny · 18 Jun 2022 15:29 UTC
14 points
4 comments · 2 min read · LW link

[Question] What’s the name of this fallacy/reasoning antipattern?

David Gross · 18 Jun 2022 14:04 UTC
9 points
6 comments · 1 min read · LW link

“Brain enthusiasts” in AI Safety

18 Jun 2022 9:59 UTC
64 points
5 comments · 10 min read · LW link
(universalprior.substack.com)

To what extent have ideas and scientific discoveries gotten harder to find?

lsusr · 18 Jun 2022 7:15 UTC
33 points
10 comments · 6 min read · LW link

[Question] What’s the goal in life?

Konstantin Weitz · 18 Jun 2022 6:09 UTC
5 points
6 comments · 1 min read · LW link

Can DALL-E understand simple geometry?

Isaac King · 18 Jun 2022 4:37 UTC
25 points
2 comments · 1 min read · LW link

Scott Aaronson is joining OpenAI to work on AI safety

peterbarnett · 18 Jun 2022 4:06 UTC
117 points
31 comments · 1 min read · LW link
(scottaaronson.blog)

[Question] Why don’t we think we’re in the simplest universe with intelligent life?

ADifferentAnonymous · 18 Jun 2022 3:05 UTC
30 points
33 comments · 1 min read · LW link

Do yourself a FAVAR: security mindset

lemonhope · 18 Jun 2022 2:08 UTC
20 points
2 comments · 2 min read · LW link

Forecasting Fusion Power

Daniel Kokotajlo · 18 Jun 2022 0:04 UTC
29 points
8 comments · 1 min read · LW link
(astralcodexten.substack.com)

Pivotal outcomes and pivotal processes

Andrew_Critch · 17 Jun 2022 23:43 UTC
97 points
31 comments · 4 min read · LW link

Quantifying General Intelligence

JasonBrown · 17 Jun 2022 21:57 UTC
9 points
6 comments · 13 min read · LW link

Apply for Productivity Coaching and AI Alignment Mentorship

Nick · 17 Jun 2022 21:36 UTC
12 points
1 comment · 1 min read · LW link

Things That Make Me Enjoy Giving Career Advice

Neel Nanda · 17 Jun 2022 20:49 UTC
16 points
0 comments · 9 min read · LW link
(www.neelnanda.io)

The Unified Theory of Normative Ethics

Thane Ruthenis · 17 Jun 2022 19:55 UTC
8 points
0 comments · 6 min read · LW link

1689: Uncovering the World New Institutionalism Created

Davis Kedrosky · 17 Jun 2022 19:32 UTC
7 points
0 comments · 9 min read · LW link
(daviskedrosky.substack.com)

[Question] Is there an unified way to make sense of ai failure modes?

walking_mushroom · 17 Jun 2022 18:00 UTC
3 points
1 comment · 1 min read · LW link

In defense of flailing, with foreword by Bill Burr

lc · 17 Jun 2022 16:40 UTC
88 points
6 comments · 4 min read · LW link

An Approach to Land Value Taxation

harsimony · 17 Jun 2022 15:53 UTC
4 points
12 comments · 4 min read · LW link
(harsimony.wordpress.com)

Value extrapolation vs Wireheading

Stuart_Armstrong · 17 Jun 2022 15:02 UTC
16 points
1 comment · 1 min read · LW link

#SAT with Tensor Networks

Adam Jermyn · 17 Jun 2022 13:20 UTC
4 points
0 comments · 2 min read · LW link