Are AIs like Animals? Perspectives and Strategies from Biology

Jackson Emanuel · 16 May 2023 23:39 UTC
1 point
0 comments · 21 min read · LW link

A Mechanistic Interpretability Analysis of a GridWorld Agent-Simulator (Part 1 of N)

Joseph Bloom · 16 May 2023 22:59 UTC
36 points
2 comments · 16 min read · LW link

A TAI which kills all humans might also doom itself

Jeffrey Heninger · 16 May 2023 22:36 UTC
1 point
3 comments · 3 min read · LW link

Brief notes on the Senate hearing on AI oversight

Diziet · 16 May 2023 22:29 UTC
77 points
2 comments · 2 min read · LW link

$500 Bounty/Prize Problem: Channel Capacity Using “Insensitive” Functions

johnswentworth · 16 May 2023 21:31 UTC
40 points
11 comments · 2 min read · LW link

Progress links and tweets, 2023-05-16

jasoncrawford · 16 May 2023 20:54 UTC
14 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

AI Will Not Want to Self-Improve

petersalib · 16 May 2023 20:53 UTC
20 points
23 comments · 20 min read · LW link

Nice intro video to RSI

Nathan Helm-Burger · 16 May 2023 18:48 UTC
12 points
0 comments · 1 min read · LW link
(youtu.be)

[Interview w/ Zvi Mowshowitz] Should we halt progress in AI?

fowlertm · 16 May 2023 18:12 UTC
18 points
2 comments · 3 min read · LW link

AI Risk & Policy Forecasts from Metaculus & FLI’s AI Pathways Workshop

_will_ · 16 May 2023 18:06 UTC
11 points
4 comments · 8 min read · LW link

[Question] Why doesn’t the presence of log-loss for probabilistic models (e.g. sequence prediction) imply that any utility function capable of producing a “fairly capable” agent will have at least some non-negligible fraction of overlap with human values?

Thoth Hermes · 16 May 2023 18:02 UTC
2 points
0 comments · 1 min read · LW link

Decision Theory with the Magic Parts Highlighted

moridinamael · 16 May 2023 17:39 UTC
174 points
24 comments · 5 min read · LW link

We learn long-lasting strategies to protect ourselves from danger and rejection

Richard_Ngo · 16 May 2023 16:36 UTC
78 points
5 comments · 5 min read · LW link

Proposal: Align Systems Earlier In Training

OneManyNone · 16 May 2023 16:24 UTC
18 points
0 comments · 11 min read · LW link

Procedural Executive Function, Part 2

DaystarEld · 16 May 2023 16:22 UTC
18 points
0 comments · 18 min read · LW link
(daystareld.com)

My current workflow to study the internal mechanisms of LLM

Yulu Pi · 16 May 2023 15:27 UTC
3 points
0 comments · 1 min read · LW link

Proposal: we should start referring to the risk from unaligned AI as a type of *accident risk*

Christopher King · 16 May 2023 15:18 UTC
22 points
6 comments · 2 min read · LW link

AI Safety Newsletter #6: Examples of AI safety progress, Yoshua Bengio proposes a ban on AI agents, and lessons from nuclear arms control

16 May 2023 15:14 UTC
31 points
0 comments · 6 min read · LW link
(newsletter.safe.ai)

Lazy Baked Mac and Cheese

jefftk · 16 May 2023 14:40 UTC
18 points
2 comments · 1 min read · LW link
(www.jefftk.com)

Tyler Cowen’s challenge to develop an ‘actual mathematical model’ for AI X-Risk

Joe Brenton · 16 May 2023 11:57 UTC
6 points
4 comments · 1 min read · LW link

Evaluating Language Model Behaviours for Shutdown Avoidance in Textual Scenarios

16 May 2023 10:53 UTC
22 points
0 comments · 13 min read · LW link

[Review] Two People Smoking Behind the Supermarket

lsusr · 16 May 2023 7:25 UTC
32 points
1 comment · 1 min read · LW link

Superposition and Dropout

Edoardo Pona · 16 May 2023 7:24 UTC
21 points
5 comments · 6 min read · LW link

[Question] What is the literature on long term water fasts?

lc · 16 May 2023 3:23 UTC
16 points
4 comments · 1 min read · LW link

Lessons learned from offering in-office nutritional testing

Elizabeth · 15 May 2023 23:20 UTC
84 points
11 comments · 14 min read · LW link
(acesounderglass.com)

Judgments often smuggle in implicit standards

Richard_Ngo · 15 May 2023 18:50 UTC
83 points
4 comments · 3 min read · LW link

Rational retirement plans

Ik · 15 May 2023 17:49 UTC
5 points
17 comments · 1 min read · LW link

[Question] (Crosspost) Asking for online calls on AI s-risks discussions

jackchang110 · 15 May 2023 17:42 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Simple experiments with deceptive alignment

Andreas_Moe · 15 May 2023 17:41 UTC
7 points
0 comments · 4 min read · LW link

Some Summaries of Agent Foundations Work

mattmacdermott · 15 May 2023 16:09 UTC
56 points
1 comment · 13 min read · LW link

Facebook Increased Visibility

jefftk · 15 May 2023 15:40 UTC
15 points
1 comment · 1 min read · LW link
(www.jefftk.com)

Un-unpluggability—can’t we just unplug it?

Oliver Sourbut · 15 May 2023 13:23 UTC
26 points
10 comments · 12 min read · LW link
(www.oliversourbut.net)

[Question] Can we learn much by studying the behaviour of RL policies?

AidanGoth · 15 May 2023 12:56 UTC
1 point
0 comments · 1 min read · LW link

How I apply (so-called) Non-Violent Communication

Kaj_Sotala · 15 May 2023 9:56 UTC
82 points
25 comments · 3 min read · LW link

Let’s build a fire alarm for AGI

chaosmage · 15 May 2023 9:16 UTC
−1 points
0 comments · 2 min read · LW link

From fear to excitement

Richard_Ngo · 15 May 2023 6:23 UTC
104 points
8 comments · 3 min read · LW link

Reward is the optimization target (of capabilities researchers)

Max H · 15 May 2023 3:22 UTC
32 points
4 comments · 5 min read · LW link

The Lightcone Theorem: A Better Foundation For Natural Abstraction?

johnswentworth · 15 May 2023 2:24 UTC
69 points
25 comments · 6 min read · LW link

GovAI: Towards best practices in AGI safety and governance: A survey of expert opinion

Zach Stein-Perlman · 15 May 2023 1:42 UTC
28 points
11 comments · 1 min read · LW link
(arxiv.org)

[Question] Why don’t quantilizers also cut off the upper end of the distribution?

Alex_Altair · 15 May 2023 1:40 UTC
25 points
2 comments · 1 min read · LW link

Support Structures for Naturalist Study

LoganStrohl · 15 May 2023 0:25 UTC
47 points
6 comments · 10 min read · LW link

Catastrophic Regressional Goodhart: Appendix

15 May 2023 0:10 UTC
22 points
1 comment · 9 min read · LW link

Helping your Senator Prepare for the Upcoming Sam Altman Hearing

Tiago de Vassal · 14 May 2023 22:45 UTC
69 points
2 comments · 1 min read · LW link
(aisafetytour.com)

Difficulties in making powerful aligned AI

DanielFilan · 14 May 2023 20:50 UTC
41 points
1 comment · 10 min read · LW link
(danielfilan.com)

How much do markets value Open AI?

Xodarap · 14 May 2023 19:28 UTC
21 points
5 comments · 1 min read · LW link

Misaligned AGI Death Match

Nate Reinar Windwood · 14 May 2023 18:00 UTC
1 point
0 comments · 1 min read · LW link

[Question] What new technology, for what institutions?

bhauth · 14 May 2023 17:33 UTC
29 points
6 comments · 3 min read · LW link

A strong mind continues its trajectory of creativity

TsviBT · 14 May 2023 17:24 UTC
22 points
8 comments · 6 min read · LW link

Ontologies Should Be Backwards-Compatible

Thoth Hermes · 14 May 2023 17:21 UTC
3 points
3 comments · 4 min read · LW link
(thothhermes.substack.com)

Jaan Tallinn’s 2022 Philanthropy Overview

jaan · 14 May 2023 15:35 UTC
64 points
2 comments · 1 min read · LW link
(jaan.online)