[Question] AI interpretability could be harmful?

Roman Leventov · 10 May 2023 20:43 UTC
13 points
2 comments · 1 min read · LW link

Athens, Greece – ACX Meetups Everywhere Spring 2023

Spyros Dovas · 10 May 2023 19:45 UTC
1 point
0 comments · 1 min read · LW link

Better debates

TsviBT · 10 May 2023 19:34 UTC
57 points
7 comments · 3 min read · LW link

Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)

10 May 2023 19:04 UTC
251 points
53 comments · 21 min read · LW link

A Corrigibility Metaphor—Big Gambles

WCargo · 10 May 2023 18:13 UTC
16 points
0 comments · 4 min read · LW link

Roadmap for a collaborative prototype of an Open Agency Architecture

Deger Turan · 10 May 2023 17:41 UTC
30 points
0 comments · 12 min read · LW link

AGI-Automated Interpretability is Suicide

__RicG__ · 10 May 2023 14:20 UTC
23 points
33 comments · 7 min read · LW link

Class-Based Addressing

jefftk · 10 May 2023 13:40 UTC
22 points
6 comments · 1 min read · LW link
(www.jefftk.com)

In defence of epistemic modesty [distillation]

Luise · 10 May 2023 9:44 UTC
17 points
2 comments · 9 min read · LW link

[Question] How much of a concern are open-source LLMs in the short, medium and long terms?

JavierCC · 10 May 2023 9:14 UTC
5 points
0 comments · 1 min read · LW link

10 great reasons why Lex Fridman should invite Eliezer and Robin to re-do the FOOM debate on his podcast

chaosmage · 10 May 2023 8:27 UTC
−7 points
1 comment · 1 min read · LW link
(www.reddit.com)

New OpenAI Paper—Language models can explain neurons in language models

MrThink · 10 May 2023 7:46 UTC
47 points
14 comments · 1 min read · LW link

Naturalist Experimentation

LoganStrohl · 10 May 2023 4:28 UTC
57 points
14 comments · 10 min read · LW link

[Question] Could A Superintelligence Out-Argue A Doomer?

tjaffee · 10 May 2023 2:40 UTC
−16 points
6 comments · 1 min read · LW link

Gradient hacking via actual hacking

Max H · 10 May 2023 1:57 UTC
12 points
7 comments · 3 min read · LW link

Red teaming: challenges and research directions

joshc · 10 May 2023 1:40 UTC
30 points
1 comment · 10 min read · LW link

[Question] Looking for a post I read if anyone recognizes it

SilverFlame · 10 May 2023 1:24 UTC
2 points
2 comments · 1 min read · LW link

Research Report: Incorrectness Cascades (Corrected)

Robert_AIZI · 9 May 2023 21:54 UTC
9 points
0 comments · 9 min read · LW link
(aizi.substack.com)

Stopping dangerous AI: Ideal US behavior

Zach Stein-Perlman · 9 May 2023 21:00 UTC
17 points
0 comments · 3 min read · LW link

Stopping dangerous AI: Ideal lab behavior

Zach Stein-Perlman · 9 May 2023 21:00 UTC
8 points
0 comments · 2 min read · LW link

Progress links and tweets, 2023-05-09

jasoncrawford · 9 May 2023 20:22 UTC
14 points
0 comments · 2 min read · LW link
(rootsofprogress.org)

[Question] Have you heard about MIT’s “liquid neural networks”? What do you think about them?

Ppau · 9 May 2023 20:16 UTC
36 points
14 comments · 1 min read · LW link

Respect for Boundaries as non-arbitrary coordination norms

Jonas Hallgren · 9 May 2023 19:42 UTC
9 points
3 comments · 7 min read · LW link

Solving the Mechanistic Interpretability challenges: EIS VII Challenge 1

9 May 2023 19:41 UTC
119 points
1 comment · 10 min read · LW link

Forecasting as a tool for teaching the general public to make better judgements?

Dominik Hajduk | České priority · 9 May 2023 17:35 UTC
3 points
0 comments · 3 min read · LW link

Language models can explain neurons in language models

nz · 9 May 2023 17:29 UTC
23 points
0 comments · 1 min read · LW link
(openai.com)

Asimov on building robots without the First Law

rossry · 9 May 2023 16:44 UTC
4 points
1 comment · 2 min read · LW link

Making Up Baby Signs

jefftk · 9 May 2023 16:40 UTC
44 points
6 comments · 2 min read · LW link
(www.jefftk.com)

Exciting New Interpretability Paper!

research_prime_space · 9 May 2023 16:39 UTC
12 points
1 comment · 1 min read · LW link

Result Of The Bounty/Contest To Explain Infra-Bayes In The Language Of Game Theory

johnswentworth · 9 May 2023 16:35 UTC
79 points
0 comments · 1 min read · LW link

The Bleak Harmony of Diets and Survival: A Glimpse into Nature’s Unforgiving Balance

bardstale · 9 May 2023 16:08 UTC
−16 points
0 comments · 1 min read · LW link

Entropic Abyss

bardstale · 9 May 2023 15:59 UTC
−12 points
0 comments · 2 min read · LW link

AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models

9 May 2023 15:26 UTC
28 points
1 comment · 4 min read · LW link
(newsletter.safe.ai)

A Search for More ChatGPT / GPT-3.5 / GPT-4 “Unspeakable” Glitch Tokens

Martin Fell · 9 May 2023 14:36 UTC
23 points
9 comments · 6 min read · LW link

How to Interpret Prediction Market Prices as Probabilities

SimonM · 9 May 2023 14:12 UTC
14 points
1 comment · 4 min read · LW link

Stampy’s AI Safety Info—New Distillations #2 [April 2023]

markov · 9 May 2023 13:31 UTC
25 points
1 comment · 1 min read · LW link
(aisafety.info)

Quote quiz answer

jasoncrawford · 9 May 2023 13:27 UTC
19 points
0 comments · 4 min read · LW link
(rootsofprogress.org)

[Question] Does reversible computation let you compute the complexity class PSPACE as efficiently as normal computers compute the complexity class P?

Noosphere89 · 9 May 2023 13:18 UTC
6 points
14 comments · 1 min read · LW link

EconTalk podcast: “Eliezer Yudkowsky on the Dangers of AI”

TekhneMakre · 9 May 2023 11:14 UTC
15 points
1 comment · 1 min read · LW link
(www.econtalk.org)

Most people should probably feel safe most of the time

Kaj_Sotala · 9 May 2023 9:35 UTC
95 points
28 comments · 10 min read · LW link

Summaries of top forum posts (1st to 7th May 2023)

Zoe Williams · 9 May 2023 9:30 UTC
21 points
0 comments · 1 min read · LW link

Focusing on longevity research as a way to avoid the AI apocalypse

Random Trader · 9 May 2023 4:47 UTC
14 points
2 comments · 2 min read · LW link

When is Goodhart catastrophic?

9 May 2023 3:59 UTC
167 points
23 comments · 8 min read · LW link

Chilean AIS Hackathon Retrospective

agucova · 9 May 2023 1:34 UTC
9 points
0 comments · 1 min read · LW link

Announcing “Key Phenomena in AI Risk” (facilitated reading group)

9 May 2023 0:31 UTC
65 points
4 comments · 2 min read · LW link

Yoshua Bengio argues for tool-AI and to ban “executive-AI”

habryka · 9 May 2023 0:13 UTC
53 points
15 comments · 7 min read · LW link
(yoshuabengio.org)

South Bay ACX/LW Meetup

IS · 8 May 2023 23:55 UTC
2 points
0 comments · 1 min read · LW link

H-JEPA might be technically alignable in a modified form

Roman Leventov · 8 May 2023 23:04 UTC
12 points
2 comments · 7 min read · LW link

All AGI Safety questions welcome (especially basic ones) [May 2023]

steven0461 · 8 May 2023 22:30 UTC
33 points
44 comments · 2 min read · LW link

Predictable updating about AI risk

Joe Carlsmith · 8 May 2023 21:53 UTC
288 points
23 comments · 36 min read · LW link