Inflection AI announces $1.3 billion of funding led by current investors, Microsoft, and NVIDIA

SandXbox · 30 Jun 2023 21:32 UTC
7 points · 0 comments · 1 min read · LW link
(inflection.ai)

Introduction

30 Jun 2023 20:45 UTC
7 points · 0 comments · 2 min read · LW link

Inherently Interpretable Architectures

30 Jun 2023 20:43 UTC
4 points · 0 comments · 7 min read · LW link

Positive Attractors

30 Jun 2023 20:43 UTC
6 points · 0 comments · 13 min read · LW link

Agency from a causal perspective

30 Jun 2023 17:37 UTC
38 points · 5 comments · 6 min read · LW link

On household dust

Nina Rimsky · 30 Jun 2023 17:03 UTC
74 points · 12 comments · 5 min read · LW link

Little attention seems to be on discouraging hardware progress

RussellThor · 30 Jun 2023 10:14 UTC
5 points · 3 comments · 1 min read · LW link

Introducing EffiSciences’ AI Safety Unit

30 Jun 2023 7:44 UTC
64 points · 0 comments · 12 min read · LW link

Contra Anton 🏴‍☠️ on Kolmogorov complexity and recursive self improvement

DaemonicSigil · 30 Jun 2023 5:15 UTC
25 points · 12 comments · 2 min read · LW link

Foom Liability

PeterMcCluskey · 30 Jun 2023 3:55 UTC
20 points · 10 comments · 6 min read · LW link
(bayesianinvestor.com)

I Think Eliezer Should Go on Glenn Beck

Lao Mein · 30 Jun 2023 3:12 UTC
25 points · 21 comments · 1 min read · LW link

[Question] Should MS open-source the extension for GitHub Copilot?

Sheikh Abdur Raheem Ali · 29 Jun 2023 23:14 UTC
17 points · 4 comments · 1 min read · LW link

Bengio’s FAQ on Catastrophic AI Risks

Vaniver · 29 Jun 2023 23:04 UTC
39 points · 0 comments · 1 min read · LW link
(yoshuabengio.org)

AGI & War

Calecute · 29 Jun 2023 22:20 UTC
9 points · 1 comment · 1 min read · LW link

Biosafety Regulations (BMBL) and their relevance for AI

Štěpán Los · 29 Jun 2023 19:22 UTC
4 points · 0 comments · 4 min read · LW link

Nature Releases A Stupid Editorial On AI Risk

omnizoid · 29 Jun 2023 19:00 UTC
2 points · 1 comment · 3 min read · LW link

AI Safety without Alignment: How humans can WIN against AI

vicchain · 29 Jun 2023 17:53 UTC
1 point · 1 comment · 2 min read · LW link

Challenge proposal: smallest possible self-hardening backdoor for RLHF

Christopher King · 29 Jun 2023 16:56 UTC
7 points · 0 comments · 2 min read · LW link

AI #18: The Great Debate Debate

Zvi · 29 Jun 2023 16:20 UTC
47 points · 9 comments · 52 min read · LW link
(thezvi.wordpress.com)

Bruce Sterling on the AI mania of 2023

Mitchell_Porter · 29 Jun 2023 5:00 UTC
25 points · 1 comment · 1 min read · LW link
(www.newsweek.com)

Cheat sheet of AI X-risk

momom2 · 29 Jun 2023 4:28 UTC
19 points · 1 comment · 7 min read · LW link

Anthropically Blind: the anthropic shadow is reflectively inconsistent

Christopher King · 29 Jun 2023 2:36 UTC
40 points · 38 comments · 10 min read · LW link

One path to coherence: conditionalization

porby · 29 Jun 2023 1:08 UTC
28 points · 4 comments · 4 min read · LW link

AXRP announcement: Survey, Store Closing, Patreon

DanielFilan · 28 Jun 2023 23:40 UTC
14 points · 0 comments · 1 min read · LW link

Metaphors for AI, and why I don’t like them

boazbarak · 28 Jun 2023 22:47 UTC
33 points · 18 comments · 12 min read · LW link

Transforming Democracy: A Unique Funding Opportunity for US Federal Approval Voting

Aaron Hamlin · 28 Jun 2023 22:07 UTC
25 points · 6 comments · 2 min read · LW link

AGI x Animal Welfare: A High-EV Outreach Opportunity?

simeon_c · 28 Jun 2023 20:44 UTC
29 points · 0 comments · 1 min read · LW link

A “weak” AGI may attempt an unlikely-to-succeed takeover

RobertM · 28 Jun 2023 20:31 UTC
54 points · 17 comments · 3 min read · LW link

Progress links and tweets, 2023-06-28: “We can do big things again in Pennsylvania”

jasoncrawford · 28 Jun 2023 20:23 UTC
14 points · 1 comment · 1 min read · LW link
(rootsofprogress.org)

[Question] What money-pumps exist, if any, for deontologists?

Daniel Kokotajlo · 28 Jun 2023 19:08 UTC
39 points · 35 comments · 1 min read · LW link

[Question] What is your financial portfolio?

Algon · 28 Jun 2023 18:39 UTC
11 points · 11 comments · 1 min read · LW link

Levels of safety for AI and other technologies

jasoncrawford · 28 Jun 2023 18:35 UTC
16 points · 0 comments · 2 min read · LW link
(rootsofprogress.org)

LeCun says making a utility function is intractable

Iknownothing · 28 Jun 2023 18:02 UTC
2 points · 3 comments · 1 min read · LW link

My research agenda in agent foundations

Alex_Altair · 28 Jun 2023 18:00 UTC
70 points · 9 comments · 11 min read · LW link

AI Incident Sharing—Best practices from other fields and a comprehensive list of existing platforms

Štěpán Los · 28 Jun 2023 17:21 UTC
20 points · 0 comments · 4 min read · LW link

The Case for Overconfidence is Overstated

Kevin Dorst · 28 Jun 2023 17:21 UTC
50 points · 13 comments · 8 min read · LW link
(kevindorst.substack.com)

When do “brains beat brawn” in Chess? An experiment

titotal · 28 Jun 2023 13:33 UTC
293 points · 79 comments · 7 min read · LW link
(titotal.substack.com)

Giving an evolutionary explanation for Kahneman and Tversky’s insights on subjective satisfaction

Lionel · 28 Jun 2023 12:17 UTC
−7 points · 1 comment · 1 min read · LW link
(lionelpage.substack.com)

Nature: “Stop talking about tomorrow’s AI doomsday when AI poses risks today”

Ben Smith · 28 Jun 2023 5:59 UTC
40 points · 8 comments · 2 min read · LW link
(www.nature.com)

Request: Put Carl Shulman’s recent podcast into an organized written format

Aryeh Englander · 28 Jun 2023 2:58 UTC
19 points · 4 comments · 1 min read · LW link

Prediction Market: Will I Pull “The One Ring To Rule Them All?”

Connor Tabarrok · 28 Jun 2023 2:41 UTC
1 point · 0 comments · 1 min read · LW link
(manifold.markets)

Carl Shulman on The Lunar Society (7 hour, two-part podcast)

ESRogs · 28 Jun 2023 1:23 UTC
79 points · 17 comments · 1 min read · LW link
(www.dwarkeshpatel.com)

Brief summary of ai-plans.com

Iknownothing · 28 Jun 2023 0:33 UTC
9 points · 4 comments · 2 min read · LW link
(ai-plans.com)

Catastrophic Risks from AI #6: Discussion and FAQ

27 Jun 2023 23:23 UTC
24 points · 1 comment · 13 min read · LW link
(arxiv.org)

Catastrophic Risks from AI #5: Rogue AIs

27 Jun 2023 22:06 UTC
15 points · 0 comments · 22 min read · LW link
(arxiv.org)

AISN #12: Policy Proposals from NTIA’s Request for Comment and Reconsidering Instrumental Convergence

Dan H · 27 Jun 2023 17:20 UTC
6 points · 0 comments · 1 min read · LW link

The Weight of the Future (Why The Apocalypse Can Be A Relief)

Sable · 27 Jun 2023 17:18 UTC
18 points · 14 comments · 3 min read · LW link
(affablyevil.substack.com)

Aligning AI by optimizing for “wisdom”

27 Jun 2023 15:20 UTC
22 points · 7 comments · 12 min read · LW link

Freedom under Naturalistic Dualism

Arturo Macias · 27 Jun 2023 14:34 UTC
1 point · 32 comments · 13 min read · LW link

Munk AI debate: confusions and possible cruxes

Steven Byrnes · 27 Jun 2023 14:18 UTC
244 points · 21 comments · 8 min read · LW link