Rationalism and social rationalism

philosophybear · Mar 10, 2023, 11:20 PM
17 points
5 comments · 10 min read · LW link
(philosophybear.substack.com)

Meetup Tip: Nametags

Screwtape · Mar 10, 2023, 9:00 PM
16 points
2 comments · 3 min read · LW link

[Question] Is ChatGPT (or other LLMs) more ‘sentient’/‘conscious’/etc. than a baby without a brain?

M. Y. Zuo · Mar 10, 2023, 7:00 PM
−5 points
2 comments · 1 min read · LW link

The humanity’s biggest mistake

RomanS · Mar 10, 2023, 4:30 PM
0 points
1 comment · 2 min read · LW link

Operationalizing timelines

Zach Stein-Perlman · Mar 10, 2023, 4:30 PM
7 points
1 comment · 3 min read · LW link

[Question] What do you think is wrong with rationalist culture?

tailcalled · Mar 10, 2023, 1:17 PM
16 points
77 comments · 1 min read · LW link

Dice Decision Making

Bart Bussmann · Mar 10, 2023, 1:01 PM
20 points
14 comments · 3 min read · LW link

Stop calling it “jailbreaking” ChatGPT

Templarrr · Mar 10, 2023, 11:41 AM
7 points
9 comments · 2 min read · LW link

Long-term memory for LLM via self-replicating prompt

avturchin · Mar 10, 2023, 10:28 AM
20 points
3 comments · 2 min read · LW link

Thoughts on the OpenAI alignment plan: will AI research assistants be net-positive for AI existential risk?

Jeffrey Ladish · Mar 10, 2023, 8:21 AM
58 points
3 comments · 9 min read · LW link

Reflections On The Feasibility Of Scalable-Oversight

Felix Hofstätter · Mar 10, 2023, 7:54 AM
11 points
0 comments · 12 min read · LW link

Japan AI Alignment Conference

Mar 10, 2023, 6:56 AM
64 points
7 comments · 1 min read · LW link
(www.conjecture.dev)

Everything’s normal until it’s not

Eleni Angelou · Mar 10, 2023, 2:02 AM
7 points
0 comments · 3 min read · LW link

Acolytes, reformers, and atheists

lc · Mar 10, 2023, 12:48 AM
9 points
0 comments · 4 min read · LW link

The hot mess theory of AI misalignment: More intelligent agents behave less coherently

Jonathan Yan · Mar 10, 2023, 12:20 AM
48 points
22 comments · 1 min read · LW link
(sohl-dickstein.github.io)

Why Not Just Outsource Alignment Research To An AI?

johnswentworth · Mar 9, 2023, 9:49 PM
151 points
50 comments · 9 min read · LW link · 1 review

What’s Not Our Problem

Jacob Falkovich · Mar 9, 2023, 8:07 PM
22 points
6 comments · 9 min read · LW link

Questions about Conjecture’s CoEm proposal

Mar 9, 2023, 7:32 PM
51 points
4 comments · 2 min read · LW link

What Jason has been reading, March 2023

jasoncrawford · Mar 9, 2023, 6:46 PM
12 points
0 comments · 6 min read · LW link
(rootsofprogress.org)

[Question] “Provide C++ code for a function that outputs a Fibonacci sequence of n terms, where n is provided as a parameter to the function”

Thembeka99 · Mar 9, 2023, 6:37 PM
−21 points
2 comments · 1 min read · LW link

Anthropic: Core Views on AI Safety: When, Why, What, and How

jonmenaster · Mar 9, 2023, 5:34 PM
17 points
1 comment · 22 min read · LW link
(www.anthropic.com)

Why do we assume there is a “real” shoggoth behind the LLM? Why not masks all the way down?

Robert_AIZI · Mar 9, 2023, 5:28 PM
63 points
48 comments · 2 min read · LW link

Anthropic’s Core Views on AI Safety

Zac Hatfield-Dodds · Mar 9, 2023, 4:55 PM
172 points
39 comments · 2 min read · LW link
(www.anthropic.com)

Some ML-Related Math I Now Understand Better

Fabien Roger · Mar 9, 2023, 4:35 PM
50 points
6 comments · 4 min read · LW link

The Translucent Thoughts Hypotheses and Their Implications

Fabien Roger · Mar 9, 2023, 4:30 PM
142 points
7 comments · 19 min read · LW link

IRL in General Environments

michaelcohen · Mar 9, 2023, 1:32 PM
8 points
20 comments · 1 min read · LW link

Utility uncertainty vs. expected information gain

michaelcohen · Mar 9, 2023, 1:32 PM
13 points
9 comments · 1 min read · LW link

Value Learning is only Asymptotically Safe

michaelcohen · Mar 9, 2023, 1:32 PM
5 points
19 comments · 1 min read · LW link

Impact Measure Testing with Honey Pots and Myopia

michaelcohen · Mar 9, 2023, 1:32 PM
13 points
9 comments · 1 min read · LW link

Just Imitate Humans?

michaelcohen · Mar 9, 2023, 1:31 PM
11 points
72 comments · 1 min read · LW link

Build a Causal Decision Theorist

michaelcohen · Mar 9, 2023, 1:31 PM
−2 points
14 comments · 4 min read · LW link

ChatGPT explores the semantic differential

Bill Benzon · Mar 9, 2023, 1:09 PM
7 points
2 comments · 7 min read · LW link

AI #3

Zvi · Mar 9, 2023, 12:20 PM
55 points
12 comments · 62 min read · LW link
(thezvi.wordpress.com)

The Scientific Approach To Anything and Everything

Rami Rustom · Mar 9, 2023, 11:27 AM
6 points
5 comments · 16 min read · LW link

Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public

otto.barten · Mar 9, 2023, 10:47 AM
14 points
6 comments · 4 min read · LW link

Speed running everyone through the bad alignment bingo. $5k bounty for a LW conversational agent

ArthurB · Mar 9, 2023, 9:26 AM
140 points
33 comments · 2 min read · LW link

Chomsky on ChatGPT (link)

mukashi · Mar 9, 2023, 7:00 AM
2 points
6 comments · 1 min read · LW link

How bad a future do ML researchers expect?

KatjaGrace · Mar 9, 2023, 4:50 AM
122 points
8 comments · 2 min read · LW link
(aiimpacts.org)

Challenge: construct a Gradient Hacker

Mar 9, 2023, 2:38 AM
39 points
10 comments · 1 min read · LW link

Basic Facts Beanbag

Screwtape · Mar 9, 2023, 12:05 AM
6 points
0 comments · 4 min read · LW link

A ranking scale for how severe the side effects of solutions to AI x-risk are

Christopher King · Mar 8, 2023, 10:53 PM
3 points
0 comments · 2 min read · LW link

Progress links and tweets, 2023-03-08

jasoncrawford · Mar 8, 2023, 8:37 PM
16 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

Project “MIRI as a Service”

RomanS · Mar 8, 2023, 7:22 PM
42 points
4 comments · 1 min read · LW link

2022 Survey Results

Screwtape · Mar 8, 2023, 7:16 PM
48 points
8 comments · 20 min read · LW link

Use the NATO Alphabet

Cedar · Mar 8, 2023, 7:14 PM
6 points
10 comments · 1 min read · LW link

LessWrong needs a sage mechanic

lc · Mar 8, 2023, 6:57 PM
34 points
5 comments · 1 min read · LW link

[Question] Mathematical models of Ethics

Victors · Mar 8, 2023, 5:40 PM
4 points
2 comments · 1 min read · LW link

Against LLM Reductionism

Erich_Grunewald · Mar 8, 2023, 3:52 PM
140 points
17 comments · 18 min read · LW link
(www.erichgrunewald.com)

Agency, LLMs and AI Safety—A First Pass

Giulio · Mar 8, 2023, 3:42 PM
2 points
0 comments · 4 min read · LW link
(www.giuliostarace.com)

Why Uncontrollable AI Looks More Likely Than Ever

Mar 8, 2023, 3:41 PM
18 points
0 comments · 4 min read · LW link
(time.com)