Ronny and Nate discuss what sorts of minds humanity is likely to find by Machine Learning

Dec 19, 2023, 11:39 PM
42 points
30 comments · 25 min read · LW link

[Question] What are the best Siderea posts?

mike_hawke · Dec 19, 2023, 11:07 PM
17 points
2 comments · 1 min read · LW link

Meaning & Agency

abramdemski · Dec 19, 2023, 10:27 PM
91 points
17 comments · 14 min read · LW link

s/acc: Safe Accelerationism Manifesto

lorepieri · Dec 19, 2023, 10:19 PM
−4 points
5 comments · 2 min read · LW link
(lorenzopieri.com)

Don’t Share Information Exfohazardous on Others’ AI-Risk Models

Thane Ruthenis · Dec 19, 2023, 8:09 PM
68 points
11 comments · 1 min read · LW link

Paper: Tell, Don’t Show - Declarative facts influence how LLMs generalize

Dec 19, 2023, 7:14 PM
45 points
4 comments · 6 min read · LW link
(arxiv.org)

Interview: Applications w/ Alice Rigg

jacobhaimes · Dec 19, 2023, 7:03 PM
12 points
0 comments · 1 min read · LW link
(into-ai-safety.github.io)

How does a toy 2 digit subtraction transformer predict the sign of the output?

Evan Anders · Dec 19, 2023, 6:56 PM
14 points
0 comments · 8 min read · LW link
(evanhanders.blog)

Incremental AI Risks from Proxy-Simulations

kmenou · Dec 19, 2023, 6:56 PM
2 points
0 comments · 1 min read · LW link
(individual.utoronto.ca)

Goal-Completeness is like Turing-Completeness for AGI

Liron · Dec 19, 2023, 6:12 PM
51 points
26 comments · 3 min read · LW link

SociaLLM: proposal for a language model design for personalised apps, social science, and AI safety research

Roman Leventov · Dec 19, 2023, 4:49 PM
17 points
5 comments · 3 min read · LW link

Chording “The Next Right Thing”

jefftk · Dec 19, 2023, 3:40 PM
11 points
0 comments · 2 min read · LW link
(www.jefftk.com)

Monthly Roundup #13: December 2023

Zvi · Dec 19, 2023, 3:10 PM
32 points
5 comments · 26 min read · LW link
(thezvi.wordpress.com)

Effective Aspersions: How the Nonlinear Investigation Went Wrong

TracingWoodgrains · Dec 19, 2023, 12:00 PM
188 points
172 comments · LW link · 2 reviews

A Universal Emergent Decomposition of Retrieval Tasks in Language Models

Dec 19, 2023, 11:52 AM
84 points
3 comments · 10 min read · LW link
(arxiv.org)

Assessment of AI safety agendas: think about the downside risk

Roman Leventov · Dec 19, 2023, 9:00 AM
13 points
1 comment · 1 min read · LW link

Constellations are Younger than Continents

Jeffrey Heninger · Dec 19, 2023, 6:12 AM
264 points
21 comments · 2 min read · LW link

The Dark Arts

Dec 19, 2023, 4:41 AM
134 points
49 comments · 9 min read · LW link

When scientists consider whether their research will end the world

Harlan · Dec 19, 2023, 3:47 AM
30 points
4 comments · 11 min read · LW link
(blog.aiimpacts.org)

Is the far future inevitably zero sum?

Srdjan Miletic · Dec 19, 2023, 1:45 AM
8 points
2 comments · 2 min read · LW link
(dissent.blog)

The ‘Neglected Approaches’ Approach: AE Studio’s Alignment Agenda

Dec 18, 2023, 8:35 PM
177 points
23 comments · 12 min read · LW link · 1 review

The Shortest Path Between Scylla and Charybdis

Thane Ruthenis · Dec 18, 2023, 8:08 PM
50 points
8 comments · 5 min read · LW link

OpenAI: Preparedness framework

Zach Stein-Perlman · Dec 18, 2023, 6:30 PM
70 points
23 comments · 4 min read · LW link
(openai.com)

[Valence series] 5. “Valence Disorders” in Mental Health & Personality

Steven Byrnes · Dec 18, 2023, 3:26 PM
45 points
12 comments · 13 min read · LW link

Discussion: Challenges with Unsupervised LLM Knowledge Discovery

Dec 18, 2023, 11:58 AM
147 points
21 comments · 10 min read · LW link

Interpreting the Learning of Deceit

RogerDearnaley · Dec 18, 2023, 8:12 AM
30 points
14 comments · 9 min read · LW link

Talk: “AI Would Be A Lot Less Alarming If We Understood Agents”

johnswentworth · Dec 17, 2023, 11:46 PM
58 points
3 comments · 1 min read · LW link
(www.youtube.com)

∀: a story

Richard_Ngo · Dec 17, 2023, 10:42 PM
38 points
1 comment · 8 min read · LW link
(www.narrativeark.xyz)

Reviving a 2015 MacBook

jefftk · Dec 17, 2023, 9:00 PM
11 points
0 comments · 1 min read · LW link
(www.jefftk.com)

A Common-Sense Case For Mutually-Misaligned AGIs Allying Against Humans

Thane Ruthenis · Dec 17, 2023, 8:28 PM
29 points
7 comments · 11 min read · LW link

The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument

Štěpán Los · Dec 17, 2023, 7:11 PM
−6 points
9 comments · 17 min read · LW link

What makes teaching math special

Viliam · Dec 17, 2023, 2:15 PM
45 points
27 comments · 11 min read · LW link

The predictive power of dissipative adaptation

dr_s · Dec 17, 2023, 2:01 PM
56 points
14 comments · 19 min read · LW link

Linkpost: Francesca v Harvard

Linch · Dec 17, 2023, 6:18 AM
5 points
5 comments · 2 min read · LW link
(www.francesca-v-harvard.org)

Lessons from massaging myself, others, dogs, and cats

Chipmonk · Dec 17, 2023, 4:28 AM
2 points
27 comments · 5 min read · LW link
(chipmonk.blog)

The Serendipity of Density

jefftk · Dec 17, 2023, 3:50 AM
40 points
4 comments · 1 min read · LW link
(www.jefftk.com)

Bounty: Diverse hard tasks for LLM agents

Dec 17, 2023, 1:04 AM
49 points
31 comments · 16 min read · LW link

2022 (and All Time) Posts by Pingback Count

Raemon · Dec 16, 2023, 9:17 PM
53 points
14 comments · 6 min read · LW link

“Humanity vs. AGI” Will Never Look Like “Humanity vs. AGI” to Humanity

Thane Ruthenis · Dec 16, 2023, 8:08 PM
191 points
34 comments · 5 min read · LW link

A visual analogy for text generation by LLMs?

Bill Benzon · Dec 16, 2023, 5:58 PM
3 points
0 comments · 1 min read · LW link

Upgrading the AI Safety Community

Dec 16, 2023, 3:34 PM
42 points
9 comments · 42 min read · LW link

cold aluminum for medicine

bhauth · Dec 16, 2023, 2:38 PM
42 points
4 comments · 4 min read · LW link
(www.bhauth.com)

Scalable Oversight and Weak-to-Strong Generalization: Compatible approaches to the same problem

Dec 16, 2023, 5:49 AM
76 points
4 comments · 6 min read · LW link · 1 review

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision

leogao · Dec 16, 2023, 5:39 AM
55 points
5 comments · 1 min read · LW link

Pope Francis shares thoughts on responsible AI development

corruptedCatapillar · Dec 16, 2023, 3:49 AM
15 points
4 comments · 1 min read · LW link
(www.vatican.va)

Current AIs Provide Nearly No Data Relevant to AGI Alignment

Thane Ruthenis · Dec 15, 2023, 8:16 PM
132 points
157 comments · 8 min read · LW link · 1 review

Agglomeration of ‘Ought’

DavidAndresBloom · Dec 15, 2023, 7:07 PM
1 point
1 comment · 11 min read · LW link

Predicting the future with the power of the Internet (and pissing off Rob Miles)

Writer · Dec 15, 2023, 5:37 PM
23 points
9 comments · 4 min read · LW link
(youtu.be)

Progress links digest, 2023-12-15: Vitalik on d/acc, $100M+ in prizes, and more

jasoncrawford · Dec 15, 2023, 3:52 PM
20 points
0 comments · 12 min read · LW link
(rootsofprogress.org)

“AI Alignment” is a Dangerously Overloaded Term

Roko · Dec 15, 2023, 2:34 PM
108 points
100 comments · 3 min read · LW link