Metaculus’s New Sidebar Helps You Find Forecasts Faster

ChristianWilliams · Nov 8, 2023, 8:56 PM
15 points
0 comments · LW link
(www.metaculus.com)

Open-ended ethics of phenomena (a desideratum with universal morality)

Ryo · Nov 8, 2023, 8:10 PM
1 point
0 comments · 8 min read · LW link

Open Agency model can solve the AI regulation dilemma

Roman Leventov · Nov 8, 2023, 8:00 PM
22 points
1 comment · 2 min read · LW link

Gothenburg LW/ACX meetup

Stefan · Nov 8, 2023, 7:52 PM
1 point
0 comments · 1 min read · LW link

[Question] Why is lesswrong blocking wget and curl (scrape)?

nick lacombe · Nov 8, 2023, 7:42 PM
21 points
15 comments · 1 min read · LW link

[Question] Is there a lesswrong archive of all public posts?

nick lacombe · Nov 8, 2023, 7:26 PM
12 points
7 comments · 1 min read · LW link

Five projects from AI Safety Hub Labs 2023

charlie_griffin · Nov 8, 2023, 7:19 PM
47 points
1 comment · 6 min read · LW link
(www.aisafetyhub.org)

[Question] Can a stupid person become intelligent?

A. T. · Nov 8, 2023, 7:01 PM
12 points
24 comments · 2 min read · LW link

Prosthetic Intelligence

Krantz · Nov 8, 2023, 7:01 PM
7 points
9 comments · 2 min read · LW link

[Question] Do you have a satisfactory workflow for learning about a line of research using GPT4, Claude, etc?

ryan_b · Nov 8, 2023, 6:05 PM
9 points
3 comments · 1 min read · LW link

What’s going on? LLMs and IS-A sentences

Bill Benzon · Nov 8, 2023, 4:58 PM
6 points
15 comments · 4 min read · LW link

[Question] What will happen with real estate prices during a slow takeoff?

Ricardo Meneghin · Nov 8, 2023, 11:58 AM
8 points
1 comment · 1 min read · LW link

Tall Tales at Different Scales: Evaluating Scaling Trends For Deception In Language Models

Nov 8, 2023, 11:37 AM
49 points
0 comments · 18 min read · LW link

How well does your research address the theory-practice gap?

Jonas Hallgren · Nov 8, 2023, 11:27 AM
18 points
0 comments · 10 min read · LW link

Growth and Form in a Toy Model of Superposition

Nov 8, 2023, 11:08 AM
89 points
7 comments · 14 min read · LW link

Running your own workshop on handling hostile disagreements

Camille Berger · Nov 8, 2023, 10:28 AM
12 points
1 comment · 7 min read · LW link

Thinking By The Clock

Screwtape · Nov 8, 2023, 7:40 AM
197 points
29 comments · 8 min read · LW link · 1 review

[Question] Impressions from base-GPT-4?

mishka · Nov 8, 2023, 5:43 AM
25 points
25 comments · 1 min read · LW link

Quantopian contest, but for food intake and weight

Lucent · Nov 8, 2023, 5:41 AM
40 points
9 comments · 3 min read · LW link

How I Think, Part Two: Distrusting Individuals

Richard Henage · Nov 8, 2023, 4:06 AM
4 points
6 comments · 3 min read · LW link

How I Think, Part One: Investing in Fun

Richard Henage · Nov 8, 2023, 4:00 AM
5 points
2 comments · 5 min read · LW link

Concrete positive visions for a future without AGI

Max H · Nov 8, 2023, 3:12 AM
41 points
28 comments · 8 min read · LW link

South Bay ACX/LW/EA Meetup & Vegansgiving Potluck

IS · Nov 8, 2023, 2:30 AM
10 points
0 comments · 1 min read · LW link

Progress links digest, 2023-11-07: Techno-optimism and more

jasoncrawford · Nov 8, 2023, 2:05 AM
17 points
7 comments · 11 min read · LW link
(rootsofprogress.org)

Announcing Athena: Women in AI Alignment Research

Claire Short · Nov 7, 2023, 9:46 PM
80 points
2 comments · 3 min read · LW link

Vote on Interesting Disagreements

Ben Pace · Nov 7, 2023, 9:35 PM
159 points
131 comments · 1 min read · LW link

What is democracy for?

Johnstone · Nov 7, 2023, 6:17 PM
−5 points
10 comments · 7 min read · LW link

Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation

Nov 7, 2023, 5:59 PM
38 points
2 comments · 2 min read · LW link
(arxiv.org)

Implementing Decision Theory

justinpombrio · Nov 7, 2023, 5:55 PM
22 points
12 comments · 3 min read · LW link

Mirror, Mirror on the Wall: How Do Forecasters Fare by Their Own Call?

nikos · Nov 7, 2023, 5:39 PM
14 points
5 comments · 14 min read · LW link

Symbiotic self-alignment of AIs

Spiritus Dei · Nov 7, 2023, 5:18 PM
1 point
0 comments · 3 min read · LW link

AMA: Earning to Give

jefftk · Nov 7, 2023, 4:20 PM
53 points
8 comments · 1 min read · LW link
(www.jefftk.com)

The Stochastic Parrot Hypothesis is debatable for the last generation of LLMs

Nov 7, 2023, 4:12 PM
52 points
21 comments · 6 min read · LW link

Preface to the Sequence on LLM Psychology

Quentin FEUILLADE--MONTIXI · Nov 7, 2023, 4:12 PM
33 points
0 comments · 2 min read · LW link

What I’ve been reading, November 2023

jasoncrawford · Nov 7, 2023, 1:37 PM
23 points
1 comment · 5 min read · LW link
(rootsofprogress.org)

AI Alignment [Progress] this Week (11/05/2023)

Logan Zoellner · Nov 7, 2023, 1:26 PM
24 points
0 comments · 4 min read · LW link
(midwitalignment.substack.com)

On the UK Summit

Zvi · Nov 7, 2023, 1:10 PM
74 points
6 comments · 30 min read · LW link
(thezvi.wordpress.com)

Box inversion revisited

Jan_Kulveit · Nov 7, 2023, 11:09 AM
40 points
3 comments · 8 min read · LW link

AI Alignment Research Engineer Accelerator (ARENA): call for applicants

CallumMcDougall · Nov 7, 2023, 9:43 AM
56 points
0 comments · LW link

The Perils of Professionalism

Screwtape · Nov 7, 2023, 12:07 AM
45 points
1 comment · 10 min read · LW link

How to (hopefully ethically) make money off of AGI

Nov 6, 2023, 11:35 PM
171 points
95 comments · 32 min read · LW link · 1 review

cost estimation for 2 grid energy storage systems

bhauth · Nov 6, 2023, 11:32 PM
16 points
12 comments · 7 min read · LW link
(www.bhauth.com)

A bet on critical periods in neural networks

Nov 6, 2023, 11:21 PM
24 points
1 comment · 6 min read · LW link

Job listing: Communications Generalist / Project Manager

Gretta Duleba · Nov 6, 2023, 8:21 PM
49 points
7 comments · 1 min read · LW link

Askesis: a model of the cerebellum

MadHatter · Nov 6, 2023, 8:19 PM
7 points
2 comments · 1 min read · LW link
(github.com)

LQPR: An Algorithm for Reinforcement Learning with Provable Safety Guarantees

MadHatter · Nov 6, 2023, 8:17 PM
6 points
0 comments · 1 min read · LW link
(github.com)

ACX Meetup Leipzig

Roman Leipe · Nov 6, 2023, 6:33 PM
1 point
0 comments · 1 min read · LW link

[Question] Does bulimia work?

lc · Nov 6, 2023, 5:58 PM
5 points
18 comments · 1 min read · LW link

Why building ventures in AI Safety is particularly challenging

Heramb · Nov 6, 2023, 4:27 PM
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

What is true is already so. Owning up to it doesn’t make it worse.

RamblinDash · Nov 6, 2023, 3:49 PM
20 points
2 comments · 1 min read · LW link