Update on Book Review Dominant Assurance Contract

Arjun Panickssery · Feb 3, 2023, 11:16 PM
9 points
0 comments · LW link

[Question] 2+2=π√2+n

Logan Zoellner · Feb 3, 2023, 10:27 PM
16 points
15 comments · 1 min read · LW link

[Question] If I encounter a capabilities paper that kinda spooks me, what should I do with it?

the gears to ascension · Feb 3, 2023, 9:37 PM
28 points
8 comments · 1 min read · LW link

[Question] What Are The Preconditions/Prerequisites for Asymptotic Analysis?

DragonGod · Feb 3, 2023, 9:26 PM
8 points
2 comments · 1 min read · LW link

[Linkpost] Google invested $300M in Anthropic in late 2022

Orpheus16 · Feb 3, 2023, 7:13 PM
73 points
14 comments · 1 min read · LW link
(www.ft.com)

Many AI governance proposals have a tradeoff between usefulness and feasibility

Feb 3, 2023, 6:49 PM
22 points
2 comments · 2 min read · LW link

Reply to Duncan Sabien on Strawmanning

Zack_M_Davis · Feb 3, 2023, 5:57 PM
42 points
11 comments · 4 min read · LW link

Semi-rare plain language words that are great to remember

LVSN · Feb 3, 2023, 4:33 PM
4 points
7 comments · 1 min read · LW link

[Question] What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it?

RationalSieve · Feb 3, 2023, 4:00 PM
1 point
4 comments · 1 min read · LW link

Housing and Transit Roundup #3

Zvi · Feb 3, 2023, 3:10 PM
21 points
6 comments · 16 min read · LW link
(thezvi.wordpress.com)

Taboo P(doom)

NathanBarnard · Feb 3, 2023, 10:37 AM
14 points
10 comments · 1 min read · LW link

ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads]

Bill Benzon · Feb 3, 2023, 10:35 AM
4 points
0 comments · 20 min read · LW link

Jordan Peterson: Guru/Villain

Bryan Frances · Feb 3, 2023, 9:02 AM
−14 points
6 comments · 9 min read · LW link

[Question] What is the risk of asking a counterfactual oracle a question that already had its answer erased?

Chris_Leong · Feb 3, 2023, 3:13 AM
7 points
0 comments · 1 min read · LW link

I don’t think MIRI “gave up”

Raemon · Feb 3, 2023, 12:26 AM
106 points
64 comments · 4 min read · LW link

What fact that you know is true but most people aren’t ready to accept it?

lorepieri · Feb 3, 2023, 12:06 AM
47 points
211 comments · 1 min read · LW link

[Question] Monotonous Work

Gideon Bauer · Feb 2, 2023, 9:35 PM
1 point
0 comments · 1 min read · LW link

Is AI risk assessment too anthropocentric?

Craig Mattson · Feb 2, 2023, 9:34 PM
3 points
6 comments · 1 min read · LW link

Halifax Monthly Meetup: Introduction to Effective Altruism

Ideopunk · Feb 2, 2023, 9:10 PM
10 points
0 comments · 1 min read · LW link

Conditioning Predictive Models: Outer alignment via careful conditioning

Feb 2, 2023, 8:28 PM
72 points
15 comments · 57 min read · LW link

Conditioning Predictive Models: Large language models as predictors

Feb 2, 2023, 8:28 PM
88 points
4 comments · 13 min read · LW link

Normative vs Descriptive Models of Agency

mattmacdermott · Feb 2, 2023, 8:28 PM
26 points
5 comments · 4 min read · LW link

Andrew Huberman on How to Optimize Sleep

Leon Lang · Feb 2, 2023, 8:17 PM
37 points
6 comments · 6 min read · LW link

[Question] How can I help inflammation-based nerve damage be temporary?

Optimization Process · Feb 2, 2023, 7:20 PM
17 points
4 comments · 1 min read · LW link

More findings on maximal data dimension

Marius Hobbhahn · Feb 2, 2023, 6:33 PM
27 points
1 comment · 11 min read · LW link

Heritability, Behaviorism, and Within-Lifetime RL

Steven Byrnes · Feb 2, 2023, 4:34 PM
39 points
3 comments · 4 min read · LW link

Covid 2/2/23: The Emergency Ends on 5/11

Zvi · Feb 2, 2023, 2:00 PM
22 points
6 comments · 7 min read · LW link
(thezvi.wordpress.com)

You are probably not a good alignment researcher, and other blatant lies

junk heap homotopy · Feb 2, 2023, 1:55 PM
83 points
16 comments · 2 min read · LW link

Don’t Judge a Tool by its Average Output

silentbob · Feb 2, 2023, 1:42 PM
12 points
2 comments · 4 min read · LW link

Epoch Impact Report 2022

Jsevillamol · Feb 2, 2023, 1:09 PM
16 points
0 comments · LW link

You Don’t Exist, Duncan

Duncan Sabien (Inactive) · Feb 2, 2023, 8:37 AM
252 points
107 comments · 9 min read · LW link

Temporally Layered Architecture for Adaptive, Distributed and Continuous Control

Roman Leventov · Feb 2, 2023, 6:29 AM
6 points
4 comments · 1 min read · LW link
(arxiv.org)

Research agenda: Formalizing abstractions of computations

Erik Jenner · Feb 2, 2023, 4:29 AM
93 points
10 comments · 31 min read · LW link

Progress links and tweets, 2023-02-01

jasoncrawford · Feb 2, 2023, 2:25 AM
10 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

Retrospective on the AI Safety Field Building Hub

Vael Gates · Feb 2, 2023, 2:06 AM
30 points
0 comments · LW link

How to export Android Chrome tabs to an HTML file in Linux (as of February 2023)

Adam Scherlis · Feb 2, 2023, 2:03 AM
7 points
3 comments · 2 min read · LW link
(adam.scherlis.com)

Hacked Account Spam

jefftk · Feb 2, 2023, 1:50 AM
13 points
5 comments · 1 min read · LW link
(www.jefftk.com)

A simple technique to reduce negative rumination

cranberry_bear · Feb 2, 2023, 1:33 AM
9 points
0 comments · 1 min read · LW link

A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers

Austin Witte · Feb 2, 2023, 1:02 AM
18 points
1 comment · 2 min read · LW link

Interviews with 97 AI Researchers: Quantitative Analysis

Feb 2, 2023, 1:01 AM
23 points
0 comments · 7 min read · LW link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

Feb 2, 2023, 1:00 AM
43 points
1 comment · LW link

Predicting researcher interest in AI alignment

Vael Gates · Feb 2, 2023, 12:58 AM
25 points
0 comments · LW link

Focus on the places where you feel shocked everyone’s dropping the ball

So8res · Feb 2, 2023, 12:27 AM
464 points
64 comments · 4 min read · LW link · 3 reviews

Exercise is Good, Actually

Gordon Seidoh Worley · Feb 2, 2023, 12:09 AM
91 points
27 comments · 3 min read · LW link

Product safety is a poor model for AI governance

Richard Korzekwa · Feb 1, 2023, 10:40 PM
36 points
0 comments · 5 min read · LW link
(aiimpacts.org)

Hinton: “mortal” efficient analog hardware may be learned-in-place, uncopyable

the gears to ascension · Feb 1, 2023, 10:19 PM
12 points
3 comments · 1 min read · LW link

Can we “cure” cancer?

jasoncrawford · Feb 1, 2023, 10:03 PM
41 points
31 comments · 2 min read · LW link
(rootsofprogress.org)

Eli Lifland on Navigating the AI Alignment Landscape

ozziegooen · Feb 1, 2023, 9:17 PM
9 points
1 comment · 31 min read · LW link
(quri.substack.com)

Schizophrenia as a deficiency in long-range cortex-to-cortex communication

Steven Byrnes · Feb 1, 2023, 7:32 PM
35 points
38 comments · 11 min read · LW link

AI Safety Arguments: An Interactive Guide

Lukas Trötzmüller · Feb 1, 2023, 7:26 PM
20 points
0 comments · 3 min read · LW link