Announcing Athena—Women in AI Alignment Research

Claire Short · 7 Nov 2023 21:46 UTC
80 points
2 comments · 3 min read · LW link

Vote on Interesting Disagreements

Ben Pace · 7 Nov 2023 21:35 UTC
159 points
129 comments · 1 min read · LW link

What is democracy for?

Johnstone · 7 Nov 2023 18:17 UTC
−5 points
10 comments · 7 min read · LW link

Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation

7 Nov 2023 17:59 UTC
36 points
2 comments · 2 min read · LW link
(arxiv.org)

Implementing Decision Theory

justinpombrio · 7 Nov 2023 17:55 UTC
21 points
12 comments · 3 min read · LW link

Mirror, Mirror on the Wall: How Do Forecasters Fare by Their Own Call?

nikos · 7 Nov 2023 17:39 UTC
14 points
5 comments · 14 min read · LW link

Symbiotic self-alignment of AIs.

Spiritus Dei · 7 Nov 2023 17:18 UTC
1 point
0 comments · 3 min read · LW link

AMA: Earning to Give

jefftk · 7 Nov 2023 16:20 UTC
53 points
8 comments · 1 min read · LW link
(www.jefftk.com)

The Stochastic Parrot Hypothesis is debatable for the last generation of LLMs

7 Nov 2023 16:12 UTC
50 points
20 comments · 6 min read · LW link

Preface to the Sequence on LLM Psychology

Quentin FEUILLADE--MONTIXI · 7 Nov 2023 16:12 UTC
31 points
0 comments · 2 min read · LW link

What I’ve been reading, November 2023

jasoncrawford · 7 Nov 2023 13:37 UTC
23 points
1 comment · 5 min read · LW link
(rootsofprogress.org)

AI Alignment [Progress] this Week (11/05/2023)

Logan Zoellner · 7 Nov 2023 13:26 UTC
24 points
0 comments · 4 min read · LW link
(midwitalignment.substack.com)

On the UK Summit

Zvi · 7 Nov 2023 13:10 UTC
68 points
6 comments · 30 min read · LW link
(thezvi.wordpress.com)

Box inversion revisited

Jan_Kulveit · 7 Nov 2023 11:09 UTC
40 points
3 comments · 8 min read · LW link

AI Alignment Research Engineer Accelerator (ARENA): call for applicants

CallumMcDougall · 7 Nov 2023 9:43 UTC
56 points
0 comments · 1 min read · LW link

The Perils of Professionalism

Screwtape · 7 Nov 2023 0:07 UTC
40 points
1 comment · 10 min read · LW link

How to (hopefully ethically) make money off of AGI

6 Nov 2023 23:35 UTC
127 points
75 comments · 32 min read · LW link

cost estimation for 2 grid energy storage systems

bhauth · 6 Nov 2023 23:32 UTC
16 points
12 comments · 7 min read · LW link
(www.bhauth.com)

A bet on critical periods in neural networks

6 Nov 2023 23:21 UTC
24 points
1 comment · 6 min read · LW link

Job listing: Communications Generalist / Project Manager

Gretta Duleba · 6 Nov 2023 20:21 UTC
49 points
7 comments · 1 min read · LW link

Askesis: a model of the cerebellum

MadHatter · 6 Nov 2023 20:19 UTC
7 points
2 comments · 1 min read · LW link
(github.com)

LQPR: An Algorithm for Reinforcement Learning with Provable Safety Guarantees

MadHatter · 6 Nov 2023 20:17 UTC
6 points
0 comments · 1 min read · LW link
(github.com)

ACX Meetup Leipzig

Roman Leipe · 6 Nov 2023 18:33 UTC
1 point
0 comments · 1 min read · LW link

[Question] Does bulemia work?

lc · 6 Nov 2023 17:58 UTC
6 points
18 comments · 1 min read · LW link

Why building ventures in AI Safety is particularly challenging

Heramb · 6 Nov 2023 16:27 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

What is true is already so. Owning up to it doesn’t make it worse.

RamblinDash · 6 Nov 2023 15:49 UTC
20 points
2 comments · 1 min read · LW link

An illustrative model of backfire risks from pausing AI research

Maxime Riché · 6 Nov 2023 14:30 UTC
33 points
3 comments · 11 min read · LW link

Proposal for improving state of alignment research

Iknownothing · 6 Nov 2023 13:55 UTC
2 points
0 comments · 1 min read · LW link

Are language models good at making predictions?

dynomight · 6 Nov 2023 13:10 UTC
76 points
14 comments · 4 min read · LW link
(dynomight.net)

Tips, tricks, lessons and thoughts on hosting hackathons

gergogaspar · 6 Nov 2023 11:03 UTC
3 points
0 comments · 11 min read · LW link

Announcing TAIS 2024

Blaine · 6 Nov 2023 8:38 UTC
23 points
0 comments · 1 min read · LW link
(tais2024.cc)

Taboo Wall

Screwtape · 6 Nov 2023 3:51 UTC
18 points
0 comments · 3 min read · LW link

Ramble on progressively constrained agent design

Iris of Rosebloom · 5 Nov 2023 23:34 UTC
3 points
0 comments · 8 min read · LW link

When and why should you use the Kelly criterion?

5 Nov 2023 23:26 UTC
26 points
25 comments · 16 min read · LW link

On Overhangs and Technological Change

Roko · 5 Nov 2023 22:58 UTC
50 points
19 comments · 2 min read · LW link

xAI announces Grok, beats GPT-3.5

nikola · 5 Nov 2023 22:11 UTC
10 points
6 comments · 1 min read · LW link
(x.ai)

Disentangling four motivations for acting in accordance with UDT

Julian Stastny · 5 Nov 2023 21:26 UTC
33 points
3 comments · 7 min read · LW link

AI as Super-Demagogue

RationalDino · 5 Nov 2023 21:21 UTC
−2 points
9 comments · 9 min read · LW link

EA orgs’ legal structure inhibits risk taking and information sharing on the margin

Elizabeth · 5 Nov 2023 19:13 UTC
135 points
17 comments · 4 min read · LW link

Eric Schmidt on recursive self-improvement

nikola · 5 Nov 2023 19:05 UTC
24 points
3 comments · 1 min read · LW link
(www.youtube.com)

Pivotal Acts might Not be what You Think they are

Johannes C. Mayer · 5 Nov 2023 17:23 UTC
41 points
13 comments · 3 min read · LW link

The Assumed Intent Bias

silentbob · 5 Nov 2023 16:28 UTC
51 points
13 comments · 6 min read · LW link

Go flash blinking lights at printed text right now

lukehmiles · 5 Nov 2023 7:29 UTC
15 points
9 comments · 1 min read · LW link

Life of GPT

Odd anon · 5 Nov 2023 4:55 UTC
6 points
2 comments · 5 min read · LW link

Lightning Talks

Screwtape · 5 Nov 2023 3:27 UTC
6 points
3 comments · 4 min read · LW link

Utility is not the selection target

tailcalled · 4 Nov 2023 22:48 UTC
24 points
1 comment · 1 min read · LW link

Stuxnet, not Skynet: Humanity’s disempowerment by AI

Roko · 4 Nov 2023 22:23 UTC
106 points
23 comments · 6 min read · LW link

The 6D effect: When companies take risks, one email can be very powerful.

scasper · 4 Nov 2023 20:08 UTC
261 points
40 comments · 3 min read · LW link

Genetic fitness is a measure of selection strength, not the selection target

Kaj_Sotala · 4 Nov 2023 19:02 UTC
55 points
43 comments · 18 min read · LW link

The Soul Key

Richard_Ngo · 4 Nov 2023 17:51 UTC
91 points
9 comments · 8 min read · LW link
(www.narrativeark.xyz)