[Question] Killing Recurrent Memory Over Self Attention?

Del Nobolo · Jun 6, 2023, 11:02 PM
3 points
0 comments · 1 min read · LW link

[Job Ad] SERI MATS is (still) hiring for our summer program

Jun 6, 2023, 9:07 PM
12 points
0 comments · 7 min read · LW link

Why I am not a longtermist (May 2022)

boazbarak · Jun 6, 2023, 8:36 PM
38 points
19 comments · 9 min read · LW link
(windowsontheory.org)

Society Library seeking contributions for canonical AI Safety debate map

Jarred Filmer · Jun 6, 2023, 6:15 PM
36 points
0 comments · 1 min read · LW link
(www.societylibrary.org)

A Playbook for AI Risk Reduction (focused on misaligned AI)

HoldenKarnofsky · Jun 6, 2023, 6:05 PM
90 points
42 comments · 14 min read · LW link · 1 review

A “bottom-up” approach to AI as a more transparent alternative to “top-down” LLMs

Paul Jorion · Jun 6, 2023, 6:00 PM
1 point
0 comments · 1 min read · LW link

Why Yudkowsky Is Wrong And What He Does Can Be More Dangerous

idontagreewiththat · Jun 6, 2023, 5:59 PM
−38 points
4 comments · 3 min read · LW link

The Base Rate Times, news through prediction markets

vandemonian · Jun 6, 2023, 5:42 PM
268 points
41 comments · 4 min read · LW link · 1 review

Monthly Roundup #7: June 2023

Zvi · Jun 6, 2023, 5:40 PM
23 points
13 comments · 43 min read · LW link
(thezvi.wordpress.com)

Transformative AGI by 2043 is <1% likely

Ted Sanders · Jun 6, 2023, 5:36 PM
33 points
117 comments · 5 min read · LW link
(arxiv.org)

AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level?

Dan H · Jun 6, 2023, 4:10 PM
12 points
0 comments · 7 min read · LW link
(newsletter.safe.ai)

An Eternal Company

moyamo · Jun 6, 2023, 3:56 PM
7 points
8 comments · 4 min read · LW link

AISC end of program presentations

Jun 6, 2023, 3:45 PM
18 points
0 comments · 1 min read · LW link

Why the Solutions to AI Alignment are Likely Outside the Overton Window

williamsae · Jun 6, 2023, 2:21 PM
−6 points
0 comments · 3 min read · LW link

Stampy’s AI Safety Info—New Distillations #3 [May 2023]

markov · Jun 6, 2023, 2:18 PM
16 points
0 comments · 2 min read · LW link
(aisafety.info)

Agentic Mess (A Failure Story)

Jun 6, 2023, 1:09 PM
46 points
5 comments · 13 min read · LW link

Berlin AI Alignment Open Meetup June 2023

GuyP · Jun 6, 2023, 10:04 AM
5 points
0 comments · 1 min read · LW link

The Sharp Right Turn: sudden deceptive alignment as a convergent goal

avturchin · Jun 6, 2023, 9:59 AM
38 points
5 comments · 1 min read · LW link

Open Thread: June 2023 (Inline Reacts!)

Raemon · Jun 6, 2023, 7:40 AM
19 points
57 comments · 1 min read · LW link

[Linkpost] Given Extinction Worries, Why Don’t AI Researchers Quit? Well, Several Reasons

Daniel_Eth · Jun 6, 2023, 7:31 AM
10 points
0 comments · LW link

Is the 10% Giving What We Can Pledge Core to EA’s Reputation?

DirectedEvolution · Jun 6, 2023, 6:21 AM
10 points
1 comment · LW link

Rishi to outline his vision for Britain to take the world lead in policing AI threats when he meets Joe Biden

Mati_Roy · Jun 6, 2023, 4:47 AM
25 points
1 comment · 1 min read · LW link
(www.dailymail.co.uk)

Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin

lc · Jun 6, 2023, 3:54 AM
33 points
151 comments · 1 min read · LW link
(thedebrief.org)

Algorithmic Improvement Is Probably Faster Than Scaling Now

johnswentworth · Jun 6, 2023, 2:57 AM
146 points
25 comments · 2 min read · LW link

Contra Mask Status

jefftk · Jun 6, 2023, 2:10 AM
10 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Andrew Ng wants to have a conversation about extinction risk from AI

Leon Lang · Jun 5, 2023, 10:29 PM
32 points
2 comments · 1 min read · LW link
(twitter.com)

True Rejection Challenges

Screwtape · Jun 5, 2023, 10:17 PM
20 points
11 comments · 5 min read · LW link

AISafety.info “How can I help?” FAQ

Jun 5, 2023, 10:09 PM
59 points
0 comments · 2 min read · LW link

Answer to a question: what do I think about God’s communication patterns?

Jim Pivarski · Jun 5, 2023, 9:40 PM
2 points
16 comments · 8 min read · LW link

The Intrinsic Interplay of Human Values and Artificial Intelligence: Navigating the Optimization Challenge

Joe Kwon · Jun 5, 2023, 8:41 PM
2 points
1 comment · 18 min read · LW link

The (local) unit of intelligence is FLOPs

boazbarak · Jun 5, 2023, 6:23 PM
42 points
7 comments · 5 min read · LW link

Tutor-GPT & Pedagogical Reasoning

courtlandleer · Jun 5, 2023, 5:53 PM
26 points
3 comments · 4 min read · LW link

Not another bias!

Lionel · Jun 5, 2023, 5:50 PM
3 points
0 comments · 1 min read · LW link
(lionelpage.substack.com)

What I’ve been reading, June 2023

jasoncrawford · Jun 5, 2023, 5:08 PM
16 points
0 comments · 7 min read · LW link
(rootsofprogress.org)

Humans don’t understand how we do most things

Nathan1123 · Jun 5, 2023, 2:35 PM
2 points
2 comments · 2 min read · LW link

Wildfire of strategicness

TsviBT · Jun 5, 2023, 1:59 PM
38 points
19 comments · 1 min read · LW link

Speaking off-meta

Epirito · Jun 5, 2023, 1:56 PM
4 points
0 comments · 1 min read · LW link

Some Thoughts on Conditional Forecasts – Lessons from the 2020 Election

Javier · Jun 5, 2023, 11:58 AM
14 points
2 comments · 4 min read · LW link

5/23

Celer · Jun 5, 2023, 5:50 AM
10 points
0 comments · 1 min read · LW link
(keller.substack.com)

We Are Less Wrong than E. T. Jaynes on Loss Functions in Human Society

Zack_M_Davis · Jun 5, 2023, 5:34 AM
46 points
14 comments · 2 min read · LW link

Monthly Shorts 8/21

Celer · Jun 5, 2023, 5:30 AM
13 points
2 comments · 3 min read · LW link
(keller.substack.com)

Ages Survey: Results

jefftk · Jun 5, 2023, 2:10 AM
57 points
10 comments · 5 min read · LW link
(www.jefftk.com)

Meta-conversation shouldn’t be taboo

Adam Zerner · Jun 5, 2023, 12:19 AM
34 points
36 comments · 4 min read · LW link

The ants and the grasshopper

Richard_Ngo · Jun 4, 2023, 10:00 PM
465 points
44 comments · 5 min read · LW link · 4 reviews
(www.narrativeark.xyz)

[Question] implications of NN design for education

bhauth · Jun 4, 2023, 8:50 PM
9 points
3 comments · 1 min read · LW link

Nature < Nurture for AIs

scottviteri · Jun 4, 2023, 8:38 PM
14 points
22 comments · 7 min read · LW link

One implementation of regulatory GPU restrictions

porby · Jun 4, 2023, 8:34 PM
42 points
6 comments · 5 min read · LW link

How to embark on a journey of self-discovery (and potentially succeed)

Ester Dobiášová · Jun 4, 2023, 6:46 PM
6 points
0 comments · 14 min read · LW link
(ladyesik.wordpress.com)

AI Safety Fundamentals: An Informal Cohort Starting Soon!

Tiago de Vassal · Jun 4, 2023, 5:15 PM
4 points
0 comments · 1 min read · LW link

How to Think About Activation Patching

Neel Nanda · Jun 4, 2023, 2:17 PM
50 points
5 comments · 20 min read · LW link
(www.neelnanda.io)