Accelerating science through evolvable institutions

jasoncrawford · 4 Dec 2023 23:21 UTC
19 points
9 comments · 6 min read · LW link
(rootsofprogress.org)

Speaking to Congressional staffers about AI risk

4 Dec 2023 23:08 UTC
289 points
23 comments · 16 min read · LW link

Open Thread – Winter 2023/2024

habryka · 4 Dec 2023 22:59 UTC
35 points
160 comments · 1 min read · LW link

Interview with Vanessa Kosoy on the Value of Theoretical Research for AI

WillPetillo · 4 Dec 2023 22:58 UTC
35 points
0 comments · 35 min read · LW link

2023 Alignment Research Updates from FAR AI

4 Dec 2023 22:32 UTC
18 points
0 comments · 8 min read · LW link
(far.ai)

What’s new at FAR AI

4 Dec 2023 21:18 UTC
40 points
0 comments · 5 min read · LW link
(far.ai)

n of m ring signatures

DanielFilan · 4 Dec 2023 20:00 UTC
49 points
7 comments · 1 min read · LW link
(danielfilan.com)

Mechanistic interpretability through clustering

Alistair Fraser · 4 Dec 2023 18:49 UTC
1 point
0 comments · 1 min read · LW link

Agents which are EU-maximizing as a group are not EU-maximizing individually

Mlxa · 4 Dec 2023 18:49 UTC
3 points
2 comments · 2 min read · LW link

Planning in LLMs: Insights from AlphaGo

jco · 4 Dec 2023 18:48 UTC
8 points
10 comments · 11 min read · LW link

Non-classic stories about scheming (Section 2.3.2 of “Scheming AIs”)

Joe Carlsmith · 4 Dec 2023 18:44 UTC
9 points
0 comments · 20 min read · LW link

6. The Mutable Values Problem in Value Learning and CEV

RogerDearnaley · 4 Dec 2023 18:31 UTC
12 points
0 comments · 49 min read · LW link

Updates to Open Phil’s career development and transition funding program

4 Dec 2023 18:10 UTC
28 points
0 comments · 2 min read · LW link

[Valence series] 1. Introduction

Steven Byrnes · 4 Dec 2023 15:40 UTC
87 points
14 comments · 15 min read · LW link

South Bay Meetup 12/9

David Friedman · 4 Dec 2023 7:32 UTC
2 points
0 comments · 1 min read · LW link

Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation

Paul Bricman · 4 Dec 2023 7:31 UTC
12 points
6 comments · 16 min read · LW link
(arxiv.org)

A call for a quantitative report card for AI bioterrorism threat models

Juno · 4 Dec 2023 6:35 UTC
12 points
0 comments · 10 min read · LW link

FTL travel summary

Isaac King · 4 Dec 2023 5:17 UTC
1 point
3 comments · 3 min read · LW link

Disappointing Table Refinishing

jefftk · 4 Dec 2023 2:50 UTC
14 points
3 comments · 1 min read · LW link
(www.jefftk.com)

the micro-fulfillment cambrian explosion

bhauth · 4 Dec 2023 1:15 UTC
54 points
5 comments · 4 min read · LW link
(www.bhauth.com)

Nietzsche’s Morality in Plain English

Arjun Panickssery · 4 Dec 2023 0:57 UTC
73 points
13 comments · 4 min read · LW link
(arjunpanickssery.substack.com)

Meditations on Mot

Richard_Ngo · 4 Dec 2023 0:19 UTC
52 points
11 comments · 8 min read · LW link
(www.mindthefuture.info)
(www.mindthefuture.info)

The Witness

Richard_Ngo · 3 Dec 2023 22:27 UTC
103 points
4 comments · 14 min read · LW link
(www.narrativeark.xyz)

Does scheming lead to adequate future empowerment? (Section 2.3.1.2 of “Scheming AIs”)

Joe Carlsmith · 3 Dec 2023 18:32 UTC
9 points
0 comments · 17 min read · LW link

[Question] How do you do post mortems?

matto · 3 Dec 2023 14:46 UTC
9 points
2 comments · 1 min read · LW link

The benefits and risks of optimism (about AI safety)

Karl von Wendt · 3 Dec 2023 12:45 UTC
−11 points
6 comments · 5 min read · LW link

Book Review: 1948 by Benny Morris

Yair Halberstadt · 3 Dec 2023 10:29 UTC
41 points
9 comments · 12 min read · LW link

Quick takes on “AI is easy to control”

So8res · 2 Dec 2023 22:31 UTC
26 points
49 comments · 4 min read · LW link

Sherlockian Abduction Master List

Cole Wyeth · 2 Dec 2023 22:10 UTC
10 points
31 comments · 7 min read · LW link

The goal-guarding hypothesis (Section 2.3.1.1 of “Scheming AIs”)

Joe Carlsmith · 2 Dec 2023 15:20 UTC
8 points
1 comment · 15 min read · LW link

The Method of Loci: With some brief remarks, including transformers and evaluating AIs

Bill Benzon · 2 Dec 2023 14:36 UTC
6 points
0 comments · 3 min read · LW link

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · 2 Dec 2023 14:07 UTC
26 points
31 comments · 42 min read · LW link

Out-of-distribution Bioattacks

jefftk · 2 Dec 2023 12:20 UTC
66 points
15 comments · 2 min read · LW link
(www.jefftk.com)

After Alignment — Dialogue between RogerDearnaley and Seth Herd

2 Dec 2023 6:03 UTC
15 points
2 comments · 25 min read · LW link

List of strategies for mitigating deceptive alignment

joshc · 2 Dec 2023 5:56 UTC
34 points
2 comments · 6 min read · LW link

[Question] What is known about invariants in self-modifying systems?

mishka · 2 Dec 2023 5:04 UTC
9 points
2 comments · 1 min read · LW link

2023 Unofficial LessWrong Census/Survey

Screwtape · 2 Dec 2023 4:41 UTC
169 points
81 comments · 1 min read · LW link

Protecting against sudden capability jumps during training

nikola · 2 Dec 2023 4:22 UTC
8 points
0 comments · 2 min read · LW link

South Bay Pre-Holiday Gathering

IS · 2 Dec 2023 3:21 UTC
10 points
2 comments · 1 min read · LW link

MATS Summer 2023 Retrospective

1 Dec 2023 23:29 UTC
77 points
34 comments · 26 min read · LW link

Complex systems research as a field (and its relevance to AI Alignment)

1 Dec 2023 22:10 UTC
64 points
9 comments · 19 min read · LW link

[Question] Could there be “natural impact regularization” or “impact regularization by default”?

tailcalled · 1 Dec 2023 22:01 UTC
24 points
6 comments · 1 min read · LW link

Benchmarking Bowtie2 Threading

jefftk · 1 Dec 2023 20:20 UTC
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Please Bet On My Quantified Self Decision Markets

niplav · 1 Dec 2023 20:07 UTC
36 points
6 comments · 6 min read · LW link

Specification Gaming: How AI Can Turn Your Wishes Against You [RA Video]

Writer · 1 Dec 2023 19:30 UTC
19 points
0 comments · 5 min read · LW link
(youtu.be)

Carving up problems at their joints

Jakub Smékal · 1 Dec 2023 18:48 UTC
1 point
0 comments · 2 min read · LW link
(jakubsmekal.com)

Queuing theory: Benefits of operating at 60% capacity

ampdot · 1 Dec 2023 18:48 UTC
40 points
4 comments · 1 min read · LW link
(less.works)

Researchers and writers can apply for proxy access to the GPT-3.5 base model (code-davinci-002)

ampdot · 1 Dec 2023 18:48 UTC
14 points
0 comments · 1 min read · LW link
(airtable.com)

Kolmogorov Complexity Lays Bare the Soul

jakej · 1 Dec 2023 18:29 UTC
5 points
8 comments · 2 min read · LW link

Thoughts on “AI is easy to control” by Pope & Belrose

Steven Byrnes · 1 Dec 2023 17:30 UTC
189 points
55 comments · 13 min read · LW link