The Parable Of The Fallen Pendulum—Part 2

johnswentworth · Mar 12, 2024, 9:41 PM
78 points
8 comments · 4 min read · LW link

Open consultancy: Letting untrusted AIs choose what answer to argue for

Fabien Roger · Mar 12, 2024, 8:38 PM
35 points
5 comments · 5 min read · LW link

[Question] Is anyone working on formally verified AI toolchains?

metachirality · Mar 12, 2024, 7:36 PM
17 points
4 comments · 1 min read · LW link

Transformer Debugger

Henk Tillman · Mar 12, 2024, 7:08 PM
26 points
0 comments · 1 min read · LW link
(github.com)

Superforecasting the Origins of the Covid-19 Pandemic

DanielFilan · Mar 12, 2024, 7:01 PM
64 points
0 comments · 1 min read · LW link
(goodjudgment.substack.com)

minimum viable action

Sindhu Prasad · Mar 12, 2024, 4:06 PM
1 point
0 comments · 3 min read · LW link

Hardball questions for the Gemini Congressional Hearing

Michael Thiessen · Mar 12, 2024, 3:27 PM
−11 points
2 comments · 1 min read · LW link

OpenAI: The Board Expands

Zvi · Mar 12, 2024, 2:00 PM
92 points
1 comment · 30 min read · LW link
(thezvi.wordpress.com)

Update on Developing an Ethics Calculator to Align an AGI to

sweenesm · Mar 12, 2024, 12:33 PM
4 points
2 comments · 8 min read · LW link

[Question] How do you identify and counteract your biases in decision-making?

warrenjordan · Mar 12, 2024, 5:01 AM
2 points
1 comment · 1 min read · LW link

How Much Have I Been Playing?

jefftk · Mar 12, 2024, 2:10 AM
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought

Miles Turpin · Mar 11, 2024, 11:46 PM
16 points
0 comments · 1 min read · LW link
(arxiv.org)
(arxiv.org)

AI Safety Action Plan—A report commissioned by the US State Department

agucova · Mar 11, 2024, 10:14 PM
22 points
1 comment · LW link
(www.gladstone.ai)

A discussion of AI risk and the cost/benefit calculation of stopping or pausing AI development

DuncanFowler · Mar 11, 2024, 9:41 PM
1 point
0 comments · 1 min read · LW link

Among the A.I. Doomsayers—The New Yorker

agucova · Mar 11, 2024, 9:35 PM
12 points
1 comment · LW link
(www.newyorker.com)

Be More Katja

Nathan Young · Mar 11, 2024, 9:12 PM
53 points
0 comments · 3 min read · LW link

AI Incident Reporting: A Regulatory Review

Mar 11, 2024, 9:03 PM
16 points
0 comments · 6 min read · LW link

Results from an Adversarial Collaboration on AI Risk (FRI)

Mar 11, 2024, 8:00 PM
60 points
3 comments · 9 min read · LW link
(forecastingresearch.org)

The Astronomical Sacrifice Dilemma

Matthew McRedmond · Mar 11, 2024, 7:58 PM
15 points
3 comments · 4 min read · LW link

Epiphenomenalism leads to eliminativism about qualia

Clément L · Mar 11, 2024, 7:53 PM
4 points
0 comments · 7 min read · LW link

The Best Essay (Paul Graham)

Chris_Leong · Mar 11, 2024, 7:25 PM
25 points
2 comments · 1 min read · LW link
(paulgraham.com)
(paulgraham.com)

Open Thread Spring 2024

habryka · Mar 11, 2024, 7:17 PM
22 points
160 comments · 1 min read · LW link

New social credit formalizations

KatjaGrace · Mar 11, 2024, 7:00 PM
23 points
3 comments · 2 min read · LW link
(worldspiritsockpuppet.com)

How disagreements about Evidential Correlations could be settled

Martín Soto · Mar 11, 2024, 6:28 PM
11 points
3 comments · 4 min read · LW link

“Artificial General Intelligence”: an extremely brief FAQ

Steven Byrnes · Mar 11, 2024, 5:49 PM
74 points
6 comments · 2 min read · LW link

Some (problematic) aesthetics of what constitutes good work in academia

Steven Byrnes · Mar 11, 2024, 5:47 PM
148 points
12 comments · 12 min read · LW link

Storable Votes with a Pay as you win mechanism: a contribution for institutional design

Arturo Macias · Mar 11, 2024, 3:58 PM
17 points
19 comments · 2 min read · LW link

Tend to your clarity, not your confusion

Severin T. Seehrich · Mar 11, 2024, 3:09 PM
23 points
1 comment · 6 min read · LW link

[Question] What do we know about the AI knowledge and views, especially about existential risk, of the new OpenAI board members?

Zvi · Mar 11, 2024, 2:55 PM
60 points
2 comments · 2 min read · LW link

“How could I have thought that faster?”

mesaoptimizer · Mar 11, 2024, 10:56 AM
235 points
32 comments · 2 min read · LW link
(twitter.com)

Simple versus Short: Higher-order degeneracy and error-correction

Daniel Murfet · Mar 11, 2024, 7:52 AM
110 points
8 comments · 13 min read · LW link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · Mar 11, 2024, 5:58 AM
16 points
14 comments · 1 min read · LW link
(www.youtube.com)
(www.youtube.com)

Advice Needed: Does Using an LLM Compromise My Personal Epistemic Security?

Naomi · Mar 11, 2024, 5:57 AM
17 points
7 comments · 2 min read · LW link

Some Thoughts on Concept Formation and Use in Agents

CatGoddess · Mar 11, 2024, 5:03 AM
12 points
0 comments · 8 min read · LW link

Steelmanning as an especially insidious form of strawmanning

Cornelius Dybdahl · Mar 11, 2024, 2:25 AM
10 points
13 comments · 5 min read · LW link

One-shot strategy games?

Raemon · Mar 11, 2024, 12:19 AM
41 points
42 comments · 1 min read · LW link

Understanding SAE Features with the Logit Lens

Mar 11, 2024, 12:16 AM
68 points
0 comments · 14 min read · LW link

Replacing the Water Heater’s Anode

jefftk · Mar 11, 2024, 12:00 AM
22 points
0 comments · 2 min read · LW link
(www.jefftk.com)

Briefly Extending Differential Optimization to Distributions

J Bostock · Mar 10, 2024, 8:41 PM
4 points
0 comments · 2 min read · LW link

Evolution did a surprisingly good job at aligning humans...to social status

Eli Tyre · Mar 10, 2024, 7:34 PM
24 points
37 comments · 1 min read · LW link

Pausing AI is Positive Expected Value

Liron · Mar 10, 2024, 5:10 PM
9 points
2 comments · 3 min read · LW link
(twitter.com)

W2SG: Introduction

Maria Kapros · Mar 10, 2024, 4:25 PM
2 points
2 comments · 10 min read · LW link

An Optimistic Solution to the Fermi Paradox

Glenn Clayton · Mar 10, 2024, 2:39 PM UTC
4 points
6 comments · 13 min read · LW link

Counterfactual Civilization Simulation Version −1.0 aka my application to Johannes Mayer’s SPAR project

Morphism · Mar 10, 2024, 10:10 AM UTC
1 point
0 comments · 14 min read · LW link

Notes from a Prompt Factory

Richard_Ngo · Mar 10, 2024, 5:13 AM UTC
104 points
19 comments · 9 min read · LW link
(www.narrativeark.xyz)

Investigating Basin Volume with XOR Networks

CatGoddess · Mar 10, 2024, 1:35 AM UTC
10 points
0 comments · 5 min read · LW link

[Linkpost] MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data

Bogdan Ionut Cirstea · Mar 10, 2024, 1:30 AM UTC
10 points
0 comments · 1 min read · LW link
(openreview.net)

0th Person and 1st Person Logic

Adele Lopez · Mar 10, 2024, 12:56 AM UTC
60 points
28 comments · 6 min read · LW link

Completion Estimates

scarcegreengrass · Mar 9, 2024, 10:56 PM UTC
7 points
2 comments · 3 min read · LW link

Semi-Simplicial Types, Part I: Motivation and History

astradiol · Mar 9, 2024, 10:07 PM UTC
20 points
3 comments · 10 min read · LW link