[Question] Term/Category for AI with Neutral Impact?

isomic, May 11, 2023, 10:00 PM
6 points
1 comment, 1 min read, LW link

Thoughts on LessWrong norms, the Art of Discourse, and moderator mandate

Ruby, May 11, 2023, 9:20 PM
37 points
20 comments, 5 min read, LW link

Alignment, Goals, and The Gut-Head Gap: A Review of Ngo et al.

Violet Hour, May 11, 2023, 6:06 PM
20 points
2 comments, 13 min read, LW link

Sequence opener: Jordan Harbinger’s 6 minute networking

Severin T. Seehrich, May 11, 2023, 5:06 PM
4 points
0 comments, 1 min read, LW link

Advice for newly busy people

Severin T. Seehrich, May 11, 2023, 4:46 PM
150 points
3 comments, 5 min read, LW link

AI #11: In Search of a Moat

Zvi, May 11, 2023, 3:40 PM
67 points
28 comments, 81 min read, LW link
(thezvi.wordpress.com)

[Question] Bayesian update from sensationalistic sources

houkime, May 11, 2023, 3:26 PM
1 point
0 comments, 1 min read, LW link

I bet $500 on AI winning the IMO gold medal by 2026

azsantosk, May 11, 2023, 2:46 PM
37 points
29 comments, 1 min read, LW link

Fatebook for Slack: Track your forecasts, right where your team works

May 11, 2023, 2:11 PM
24 points
3 comments, 1 min read, LW link

Contra Caller Signs

jefftk, May 11, 2023, 1:10 PM
10 points
0 comments, 1 min read, LW link
(www.jefftk.com)

Notes on the importance and implementation of safety-first cognitive architectures for AI

Brendon_Wong, May 11, 2023, 10:03 AM
3 points
0 comments, 3 min read, LW link

A more grounded idea of AI risk

Iknownothing, May 11, 2023, 9:48 AM
3 points
4 comments, 1 min read, LW link

Separating the “control problem” from the “alignment problem”

Yi-Yang, May 11, 2023, 9:41 AM
12 points
1 comment, 4 min read, LW link

[Question] Is Infra-Bayesianism Applicable to Value Learning?

RogerDearnaley, May 11, 2023, 8:17 AM
5 points
4 comments, 1 min read, LW link

[Question] How should we think about the decision relevance of models estimating p(doom)?

Mo Putera, May 11, 2023, 4:16 AM
11 points
1 comment, 3 min read, LW link

The Academic Field Pyramid—any point to encouraging broad but shallow AI risk engagement?

Matthew_Opitz, May 11, 2023, 1:32 AM
20 points
1 comment, 6 min read, LW link

[Question] How should one feel morally about using chatbots?

Adam Zerner, May 11, 2023, 1:01 AM
18 points
4 comments, 1 min read, LW link

[Question] AI interpretability could be harmful?

Roman Leventov, May 10, 2023, 8:43 PM
13 points
2 comments, 1 min read, LW link

Athens, Greece – ACX Meetups Everywhere Spring 2023

Spyros Dovas, May 10, 2023, 7:45 PM
1 point
0 comments, 1 min read, LW link

Better debates

TsviBT, May 10, 2023, 7:34 PM
78 points
7 comments, 3 min read, LW link

Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)

May 10, 2023, 7:04 PM
256 points
54 comments, 21 min read, LW link

A Corrigibility Metaphore—Big Gambles

WCargo, May 10, 2023, 6:13 PM
16 points
0 comments, 4 min read, LW link

Roadmap for a collaborative prototype of an Open Agency Architecture

Deger Turan, May 10, 2023, 5:41 PM
31 points
0 comments, 12 min read, LW link

AGI-Automated Interpretability is Suicide

__RicG__, May 10, 2023, 2:20 PM
25 points
33 comments, 7 min read, LW link

Class-Based Addressing

jefftk, May 10, 2023, 1:40 PM
22 points
6 comments, 1 min read, LW link
(www.jefftk.com)

In defence of epistemic modesty [distillation]

Luise, May 10, 2023, 9:44 AM
17 points
2 comments, 9 min read, LW link

[Question] How much of a concern are open-source LLMs in the short, medium and long terms?

JavierCC, May 10, 2023, 9:14 AM
5 points
0 comments, 1 min read, LW link

10 great reasons why Lex Fridman should invite Eliezer and Robin to re-do the FOOM debate on his podcast

chaosmage, May 10, 2023, 8:27 AM
−7 points
1 comment, 1 min read, LW link
(www.reddit.com)

New OpenAI Paper—Language models can explain neurons in language models

MrThink, May 10, 2023, 7:46 AM
47 points
14 comments, 1 min read, LW link

Naturalist Experimentation

LoganStrohl, May 10, 2023, 4:28 AM
62 points
14 comments, 10 min read, LW link

[Question] Could A Superintelligence Out-Argue A Doomer?

tjaffee, May 10, 2023, 2:40 AM
−16 points
6 comments, 1 min read, LW link

Gradient hacking via actual hacking

Max H, May 10, 2023, 1:57 AM
12 points
7 comments, 3 min read, LW link

Red teaming: challenges and research directions

joshc, May 10, 2023, 1:40 AM
31 points
1 comment, 10 min read, LW link

[Question] Looking for a post I read if anyone recognizes it

SilverFlame, May 10, 2023, 1:24 AM
2 points
2 comments, 1 min read, LW link

Research Report: Incorrectness Cascades (Corrected)

Robert_AIZI, May 9, 2023, 9:54 PM
9 points
0 comments, 9 min read, LW link
(aizi.substack.com)

Stopping dangerous AI: Ideal US behavior

Zach Stein-Perlman, May 9, 2023, 9:00 PM
17 points
0 comments, 3 min read, LW link

Stopping dangerous AI: Ideal lab behavior

Zach Stein-Perlman, May 9, 2023, 9:00 PM
8 points
0 comments, 2 min read, LW link

Progress links and tweets, 2023-05-09

jasoncrawford, May 9, 2023, 8:22 PM
14 points
0 comments, 2 min read, LW link
(rootsofprogress.org)

[Question] Have you heard about MIT’s “liquid neural networks”? What do you think about them?

Ppau, May 9, 2023, 8:16 PM
35 points
14 comments, 1 min read, LW link

Respect for Boundaries as non-arbitrary coordination norms

Jonas Hallgren, May 9, 2023, 7:42 PM
9 points
3 comments, 7 min read, LW link

Solving the Mechanistic Interpretability challenges: EIS VII Challenge 1

May 9, 2023, 7:41 PM
119 points
1 comment, 10 min read, LW link

Forecasting as a tool for teaching the general public to make better judgements?

Dominik Hajduk | České priority, May 9, 2023, 5:35 PM
3 points
0 comments, 3 min read, LW link

Language models can explain neurons in language models

nz, May 9, 2023, 5:29 PM
23 points
0 comments, 1 min read, LW link
(openai.com)

Asimov on building robots without the First Law

rossry, May 9, 2023, 4:44 PM
4 points
1 comment, 2 min read, LW link

Making Up Baby Signs

jefftk, May 9, 2023, 4:40 PM
44 points
6 comments, 2 min read, LW link
(www.jefftk.com)

Exciting New Interpretability Paper!

research_prime_space, May 9, 2023, 4:39 PM
12 points
1 comment, 1 min read, LW link

Result Of The Bounty/Contest To Explain Infra-Bayes In The Language Of Game Theory

johnswentworth, May 9, 2023, 4:35 PM
79 points
0 comments, 1 min read, LW link

The Bleak Harmony of Diets and Survival: A Glimpse into Nature’s Unforgiving Balance

bardstale, May 9, 2023, 4:08 PM
−16 points
0 comments, 1 min read, LW link

Entropic Abyss

bardstale, May 9, 2023, 3:59 PM
−12 points
0 comments, 2 min read, LW link

AI Safety Newsletter #5: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models

May 9, 2023, 3:26 PM
28 points
1 comment, 4 min read, LW link
(newsletter.safe.ai)