A Benchmark for Decision Theories

StrivingForLegibility · 11 Jan 2024 18:54 UTC
10 points
0 comments · 2 min read · LW link

An even deeper atheism

Joe Carlsmith · 11 Jan 2024 17:28 UTC
125 points
47 comments · 15 min read · LW link

Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI?

RogerDearnaley · 11 Jan 2024 12:56 UTC
22 points
4 comments · 39 min read · LW link

Reprogramming the Mind: Meditation as a Tool for Cognitive Optimization

Jonas Hallgren · 11 Jan 2024 12:03 UTC
28 points
3 comments · 11 min read · LW link

AI-Generated Music for Learning

ethanmorse · 11 Jan 2024 4:11 UTC
9 points
1 comment · 1 min read · LW link
(210ethan.github.io)

Introduce a Speed Maximum

jefftk · 11 Jan 2024 2:50 UTC
35 points
28 comments · 2 min read · LW link
(www.jefftk.com)

[Question] Prediction markets are consistently underconfident. Why?

Sinclair Chen · 11 Jan 2024 2:44 UTC
11 points
4 comments · 1 min read · LW link

Trying to align humans with inclusive genetic fitness

peterbarnett · 11 Jan 2024 0:13 UTC
23 points
5 comments · 10 min read · LW link

Universal Love Integration Test: Hitler

Raemon · 10 Jan 2024 23:55 UTC
76 points
65 comments · 9 min read · LW link

The Perceptron Controversy

Yuxi_Liu · 10 Jan 2024 23:07 UTC
65 points
18 comments · 1 min read · LW link
(yuxi-liu-wired.github.io)

The Aspiring Rationalist Congregation

maia · 10 Jan 2024 22:52 UTC
86 points
23 comments · 10 min read · LW link

An Actually Intuitive Explanation of the Oberth Effect

Isaac King · 10 Jan 2024 20:23 UTC
60 points
33 comments · 6 min read · LW link

Beware the suboptimal routine

jwfiredragon · 10 Jan 2024 19:02 UTC
12 points
3 comments · 3 min read · LW link

The true cost of fences

pleiotroth · 10 Jan 2024 19:01 UTC
3 points
2 comments · 4 min read · LW link

“Dark Constitution” for constraining some superintelligences

Valentine · 10 Jan 2024 16:02 UTC
2 points
9 comments · 1 min read · LW link
(www.anarchonomicon.com)

[Question] rabbit (a new AI company) and Large Action Model (LAM)

MiguelDev · 10 Jan 2024 13:57 UTC
17 points
3 comments · 1 min read · LW link

Saving the world sucks

Defective Altruism · 10 Jan 2024 5:55 UTC
47 points
29 comments · 3 min read · LW link

[Question] Questions about Solomonoff induction

mukashi · 10 Jan 2024 1:16 UTC
7 points
11 comments · 1 min read · LW link

AI as a natural disaster

Neil · 10 Jan 2024 0:42 UTC
11 points
1 comment · 7 min read · LW link

Stop being surprised by the passage of time

10 Jan 2024 0:36 UTC
−2 points
1 comment · 3 min read · LW link

A discussion of normative ethics

9 Jan 2024 23:29 UTC
10 points
6 comments · 25 min read · LW link

On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche

Zack_M_Davis · 9 Jan 2024 23:12 UTC
39 points
31 comments · 4 min read · LW link

[Question] What’s the protocol for if a novice has ML ideas that are unlikely to work, but might improve capabilities if they do work?

drocta · 9 Jan 2024 22:51 UTC
6 points
2 comments · 2 min read · LW link

Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor

RogerDearnaley · 9 Jan 2024 20:42 UTC
46 points
8 comments · 36 min read · LW link

Bent or Blunt Hoods?

jefftk · 9 Jan 2024 20:10 UTC
23 points
0 comments · 1 min read · LW link
(www.jefftk.com)

2024 ACX Predictions: Blind/Buy/Sell/Hold

Zvi · 9 Jan 2024 19:30 UTC
33 points
2 comments · 31 min read · LW link
(thezvi.wordpress.com)

Announcing the Double Crux Bot

9 Jan 2024 18:54 UTC
44 points
6 comments · 3 min read · LW link

Does AI risk “other” the AIs?

Joe Carlsmith · 9 Jan 2024 17:51 UTC
59 points
3 comments · 8 min read · LW link

AI demands unprecedented reliability

Jono · 9 Jan 2024 16:30 UTC
22 points
5 comments · 2 min read · LW link

Uncertainty in all its flavours

Cleo Nardo · 9 Jan 2024 16:21 UTC
25 points
6 comments · 35 min read · LW link

Compensating for Life Biases

Jonathan Moregård · 9 Jan 2024 14:39 UTC
24 points
6 comments · 3 min read · LW link
(honestliving.substack.com)

Can Morality Be Quantified?

Julius · 9 Jan 2024 6:35 UTC
3 points
0 comments · 5 min read · LW link

Learning Math in Time for Alignment

NicholasKross · 9 Jan 2024 1:02 UTC
32 points
3 comments · 3 min read · LW link

Brief Thoughts on Justifications for Paternalism

Srdjan Miletic · 9 Jan 2024 0:36 UTC
4 points
0 comments · 4 min read · LW link
(dissent.blog)

Hiring decisions are not suitable for prediction markets

SimonM · 8 Jan 2024 21:11 UTC
12 points
6 comments · 1 min read · LW link

Better Anomia

jefftk · 8 Jan 2024 18:40 UTC
8 points
0 comments · 1 min read · LW link
(www.jefftk.com)

A starter guide for evals

8 Jan 2024 18:24 UTC
44 points
2 comments · 12 min read · LW link
(www.apolloresearch.ai)

Is it justifiable for non-experts to have strong opinions about Gaza?

8 Jan 2024 17:31 UTC
23 points
12 comments · 30 min read · LW link

Project ideas: Backup plans & Cooperative AI

Lukas Finnveden · 8 Jan 2024 17:19 UTC
18 points
0 comments · 1 min read · LW link
(lukasfinnveden.substack.com)

Hackathon and Staying Up-to-Date in AI

jacobhaimes · 8 Jan 2024 17:10 UTC
11 points
0 comments · 1 min read · LW link
(into-ai-safety.github.io)

When “yang” goes wrong

Joe Carlsmith · 8 Jan 2024 16:35 UTC
72 points
6 comments · 13 min read · LW link

Task vectors & analogy making in LLMs

Sergii · 8 Jan 2024 15:17 UTC
8 points
1 comment · 4 min read · LW link
(grgv.xyz)

[Question] How to find translations of a book?

Viliam · 8 Jan 2024 14:57 UTC
9 points
8 comments · 1 min read · LW link

[Question] Why aren’t Yudkowsky & Bostrom getting more attention now?

JoshuaFox · 8 Jan 2024 14:42 UTC
14 points
8 comments · 1 min read · LW link

2023 Prediction Evaluations

Zvi · 8 Jan 2024 14:40 UTC
46 points
0 comments · 28 min read · LW link
(thezvi.wordpress.com)

There is no sharp boundary between deontology and consequentialism

quetzal_rainbow · 8 Jan 2024 11:01 UTC
8 points
2 comments · 1 min read · LW link

Reflections on my first year of AI safety research

Jay Bailey · 8 Jan 2024 7:49 UTC
52 points
3 comments · 1 min read · LW link

Why There Is Hope For An Alignment Solution

Darklight · 8 Jan 2024 6:58 UTC
9 points
0 comments · 12 min read · LW link

Sledding Among Hazards

jefftk · 8 Jan 2024 3:30 UTC
19 points
5 comments · 1 min read · LW link
(www.jefftk.com)

Utility is relative

CrimsonChin · 8 Jan 2024 2:31 UTC
2 points
4 comments · 2 min read · LW link