[Question] “I Can’t Believe It Both Is and Is Not Encephalitis!” Or: What do you do when the evidence is crazy?

Erhannis · 19 Mar 2024 22:08 UTC
20 points
3 comments · 11 min read · LW link

Delta’s of Change

Jonas Kgomo · 19 Mar 2024 21:03 UTC
1 point
0 comments · 4 min read · LW link

Increasing IQ by 10 Points is Possible

George3d6 · 19 Mar 2024 20:48 UTC
24 points
50 comments · 5 min read · LW link
(morelucid.substack.com)

Are extreme probabilities for P(doom) epistemically justified?

19 Mar 2024 20:32 UTC
19 points
11 comments · 7 min read · LW link

Have I Solved the Two Envelopes Problem Once and For All?

JackOfAllTrades · 19 Mar 2024 19:57 UTC
−5 points
5 comments · 3 min read · LW link

[Question] How can one be less wrong if their conversation partner loses interest in discussing the topic with them?

Ooker · 19 Mar 2024 18:11 UTC
−10 points
3 comments · 1 min read · LW link

Carlo: uncertainty analysis in Google Sheets

ProbabilityEnjoyer · 19 Mar 2024 17:59 UTC
6 points
0 comments · 1 min read · LW link
(carlo.app)

NAIRA—An exercise in regulatory, competitive safety governance [AI Governance Institutional Design idea]

Heramb · 19 Mar 2024 17:43 UTC
2 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

AI Safety Evaluations: A Regulatory Review

19 Mar 2024 15:05 UTC
21 points
1 comment · 11 min read · LW link

Mechanism for feature learning in neural networks and backpropagation-free machine learning models

Matt Goldenberg · 19 Mar 2024 14:55 UTC
8 points
1 comment · 1 min read · LW link
(www.science.org)

Monthly Roundup #16: March 2024

Zvi · 19 Mar 2024 13:10 UTC
33 points
4 comments · 55 min read · LW link
(thezvi.wordpress.com)

Claude estimates 30-50% likelihood x-risk

amelia · 19 Mar 2024 2:22 UTC
3 points
2 comments · 2 min read · LW link

Experimentation (Part 7 of “The Sense Of Physical Necessity”)

LoganStrohl · 18 Mar 2024 21:25 UTC
33 points
0 comments · 10 min read · LW link

INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park

jacobhaimes · 18 Mar 2024 21:21 UTC
5 points
0 comments · 1 min read · LW link
(into-ai-safety.github.io)

Neuroscience and Alignment

Garrett Baker · 18 Mar 2024 21:09 UTC
40 points
25 comments · 2 min read · LW link

GPT, the magical collaboration zone, Lex Fridman and Sam Altman

Bill Benzon · 18 Mar 2024 20:04 UTC
3 points
1 comment · 3 min read · LW link

Measuring Coherence of Policies in Toy Environments

18 Mar 2024 17:59 UTC
59 points
9 comments · 14 min read · LW link

AtP*: An efficient and scalable method for localizing LLM behaviour to components

18 Mar 2024 17:28 UTC
19 points
0 comments · 1 min read · LW link
(arxiv.org)

Community Notes by X

NicholasKees · 18 Mar 2024 17:13 UTC
123 points
15 comments · 7 min read · LW link

[Question] Is the Basilisk pretending to be hidden in this simulation so that it can check what I would do if conditioned by a world without the Basilisk?

maybefbi · 18 Mar 2024 16:05 UTC
−18 points
1 comment · 1 min read · LW link

On Devin

Zvi · 18 Mar 2024 13:20 UTC
147 points
30 comments · 11 min read · LW link
(thezvi.wordpress.com)

RLLMv10 experiment

MiguelDev · 18 Mar 2024 8:32 UTC
5 points
0 comments · 2 min read · LW link

Join the AI Evaluation Tasks Bounty Hackathon

Esben Kran · 18 Mar 2024 8:15 UTC
12 points
1 comment · 1 min read · LW link

5 Physics Problems

18 Mar 2024 8:05 UTC
60 points
0 comments · 15 min read · LW link

Inferring the model dimension of API-protected LLMs

Ege Erdil · 18 Mar 2024 6:19 UTC
32 points
3 comments · 4 min read · LW link
(arxiv.org)

AI strategy given the need for good reflection

owencb · 18 Mar 2024 0:48 UTC
7 points
0 comments · 1 min read · LW link

XAI releases Grok base model

Jacob G-W · 18 Mar 2024 0:47 UTC
11 points
3 comments · 1 min read · LW link
(x.ai)

Chapter 9: The Three Powers

SashaWu · 17 Mar 2024 22:28 UTC
0 points
0 comments · 4 min read · LW link

Toki pona FAQ

dkl9 · 17 Mar 2024 21:44 UTC
36 points
8 comments · 1 min read · LW link
(dkl9.net)

EA ErFiN Project work

Max_He-Ho · 17 Mar 2024 20:42 UTC
2 points
0 comments · 1 min read · LW link

[Question] Alice and Bob are debating a technique. Alice says Bob should try it before denying it. Is this a fallacy or something similar?

Ooker · 17 Mar 2024 20:01 UTC
0 points
19 comments · 2 min read · LW link

Is there a way to calculate the P(we are in a 2nd cold war)?

cloak · 17 Mar 2024 20:01 UTC
−9 points
2 comments · 1 min read · LW link

The Worst Form Of Government (Except For Everything Else We’ve Tried)

johnswentworth · 17 Mar 2024 18:11 UTC
136 points
46 comments · 4 min read · LW link

Applying simulacrum levels to hobbies, interests and goals

DMMF · 17 Mar 2024 16:18 UTC
14 points
2 comments · 4 min read · LW link
(danfrank.ca)

What is the best argument that LLMs are shoggoths?

JoshuaFox · 17 Mar 2024 11:36 UTC
26 points
22 comments · 1 min read · LW link

Invitation to the Princeton AI Alignment and Safety Seminar

Sadhika Malladi · 17 Mar 2024 1:10 UTC
6 points
1 comment · 1 min read · LW link

Anxiety vs. Depression

Sable · 17 Mar 2024 0:15 UTC
84 points
35 comments · 3 min read · LW link
(affablyevil.substack.com)

Celiefs

TheLemmaLlama · 16 Mar 2024 23:56 UTC
3 points
6 comments · 1 min read · LW link

My PhD thesis: Algorithmic Bayesian Epistemology

Eric Neyman · 16 Mar 2024 22:56 UTC
251 points
14 comments · 7 min read · LW link
(arxiv.org)

How people stopped dying from diarrhea so much (& other life-saving decisions)

Writer · 16 Mar 2024 16:00 UTC
45 points
0 comments · 1 min read · LW link
(youtu.be)

Transformative trustbuilding via advancements in decentralized lie detection

trevor · 16 Mar 2024 5:56 UTC
17 points
7 comments · 38 min read · LW link
(www.ncbi.nlm.nih.gov)

Enter the WorldsEnd

Akram Choudhary · 16 Mar 2024 1:34 UTC
−25 points
8 comments · 1 min read · LW link

Strong-Misalignment: Does Yudkowsky (or Christiano, or TurnTrout, or Wolfram, or…etc.) Have an Elevator Speech I’m Missing?

Benjamin Bourlier · 15 Mar 2024 23:17 UTC
−4 points
3 comments · 16 min read · LW link

Introducing METR’s Autonomy Evaluation Resources

15 Mar 2024 23:16 UTC
90 points
0 comments · 1 min read · LW link
(metr.github.io)

Are AIs conscious? It might depend

Logan Zoellner · 15 Mar 2024 23:09 UTC
7 points
6 comments · 3 min read · LW link

Beyond Maxipok — good reflective governance as a target for action

owencb · 15 Mar 2024 22:22 UTC
20 points
0 comments · 1 min read · LW link

Middle Child Phenomenon

PhilosophicalSoul · 15 Mar 2024 20:47 UTC
3 points
3 comments · 2 min read · LW link

Capability or Alignment? Respect the LLM Base Model’s Capability During Alignment

Jingfeng Yang · 15 Mar 2024 17:56 UTC
7 points
0 comments · 24 min read · LW link

Rational Animations offers animation production and writing services!

Writer · 15 Mar 2024 17:26 UTC
30 points
0 comments · 1 min read · LW link