[Question] What are MIRI’s big achievements in AI alignment?

tailcalled · Mar 7, 2023, 9:30 PM
29 points
7 comments · 1 min read · LW link

A Brief Defense of Athleticism

Wofsen · Mar 7, 2023, 8:48 PM
46 points
5 comments · 1 min read · LW link

[Question] How “grifty” is the Foresight Institute? Are they making button soup?

Cedar · Mar 7, 2023, 7:43 PM
7 points
3 comments · 1 min read · LW link

[Question] What’s in your list of unsolved problems in AI alignment?

jacquesthibs · Mar 7, 2023, 6:58 PM
60 points
9 comments · 1 min read · LW link

Introducing AI Alignment Inc., a California public benefit corporation...

TherapistAI · Mar 7, 2023, 6:47 PM
1 point
4 comments · 1 min read · LW link

Abuse in LessWrong and rationalist communities in Bloomberg News

whistleblower67 · Mar 7, 2023, 6:45 PM
10 points
72 comments · 7 min read · LW link
(www.bloomberg.com)

Test post for formatting

Solenoid_Entity · Mar 7, 2023, 5:48 PM
0 points
2 comments · 1 min read · LW link

The Pinnacle

nem · Mar 7, 2023, 5:07 PM
11 points
0 comments · 8 min read · LW link

Podcast Transcript: Daniela and Dario Amodei on Anthropic

remember · Mar 7, 2023, 4:47 PM
46 points
2 comments · 79 min read · LW link
(futureoflife.org)

The View from 30,000 Feet: Preface to the Second EleutherAI Retrospective

Mar 7, 2023, 4:22 PM
14 points
0 comments · 4 min read · LW link
(blog.eleuther.ai)

Breaking Rank (Calibration Game)

jenn · Mar 7, 2023, 3:40 PM
11 points
0 comments · 2 min read · LW link

Outrangeous (Calibration Game)

jenn · Mar 7, 2023, 3:29 PM
37 points
3 comments · 9 min read · LW link

[Linkpost] Some high-level thoughts on the DeepMind alignment team’s strategy

Mar 7, 2023, 11:55 AM
128 points
13 comments · 5 min read · LW link
(drive.google.com)

Alignment works both ways

Karl von Wendt · Mar 7, 2023, 10:41 AM
23 points
21 comments · 2 min read · LW link

Google’s PaLM-E: An Embodied Multimodal Language Model

SandXbox · Mar 7, 2023, 4:11 AM
87 points
7 comments · 1 min read · LW link
(palm-e.github.io)

GÖDEL GOING DOWN

Jimdrix_Hendri · Mar 6, 2023, 11:06 PM
−9 points
3 comments · 1 min read · LW link

Against ubiquitous alignment taxes

beren · Mar 6, 2023, 7:50 PM
57 points
10 comments · 2 min read · LW link

Addendum: basic facts about language models during training

beren · Mar 6, 2023, 7:24 PM
22 points
2 comments · 5 min read · LW link

Understanding The Roots Of Mathematics Before Finding The Roots Of A Function.

LiesLaris · Mar 6, 2023, 6:47 PM
2 points
0 comments · 1 min read · LW link

Discussion: LLaMA Leak & Whistleblowing in pre-AGI era

jirahim · Mar 6, 2023, 6:47 PM
1 point
4 comments · 1 min read · LW link

[Question] Are we too confident about unaligned AGI killing off humanity?

RomanS · Mar 6, 2023, 4:19 PM
21 points
63 comments · 1 min read · LW link

Introducing Leap Labs, an AI interpretability startup

Jessica Rumbelow · Mar 6, 2023, 4:16 PM
103 points
12 comments · 1 min read · LW link

Monthly Roundup #4: March 2023

Zvi · Mar 6, 2023, 2:10 PM
31 points
0 comments · 24 min read · LW link
(thezvi.wordpress.com)

Fundamental Uncertainty: Chapter 6 - How can we be certain about the truth?

Gordon Seidoh Worley · Mar 6, 2023, 1:52 PM
11 points
19 comments · 16 min read · LW link

The idea

JNS · Mar 6, 2023, 1:42 PM
3 points
0 comments · 9 min read · LW link

Honesty, Openness, Trustworthiness, and Secrets

NormanPerlmutter · Mar 6, 2023, 9:03 AM
13 points
0 comments · 9 min read · LW link

EA & LW Forum Weekly Summary (27th Feb − 5th Mar 2023)

Zoe Williams · Mar 6, 2023, 3:18 AM
12 points
0 comments · LW link

The Type II Inner-Compass Theorem

Tristan Miano · Mar 6, 2023, 2:35 AM
−16 points
0 comments · 22 min read · LW link

AGI’s Impact on Employment

TheUnkown · Mar 6, 2023, 1:56 AM
1 point
1 comment · 1 min read · LW link
(www.apricitas.io)

Why did you trash the old HPMOR.com?

AnnoyedReader · Mar 6, 2023, 1:55 AM
54 points
68 comments · 2 min read · LW link

Cap Model Size for AI Safety

research_prime_space · Mar 6, 2023, 1:11 AM
0 points
4 comments · 1 min read · LW link

What should we do about network-effect monopolies?

benkuhn · Mar 6, 2023, 12:50 AM
31 points
7 comments · 1 min read · LW link
(www.benkuhn.net)

Who Aligns the Alignment Researchers?

Ben Smith · Mar 5, 2023, 11:22 PM
48 points
0 comments · 11 min read · LW link

Startups are like firewood

Adam Zerner · Mar 5, 2023, 11:09 PM
26 points
2 comments · 3 min read · LW link

A concerning observation from media coverage of AI industry dynamics

Justin Olive · Mar 5, 2023, 9:38 PM
8 points
3 comments · 3 min read · LW link

Steven Pinker on ChatGPT and AGI (Feb 2023)

Evan R. Murphy · Mar 5, 2023, 9:34 PM
11 points
8 comments · 1 min read · LW link
(news.harvard.edu)

Is it time to talk about AI doomsday prepping yet?

bokov · Mar 5, 2023, 9:17 PM
0 points
8 comments · 1 min read · LW link

Coordination explosion before intelligence explosion...?

tailcalled · Mar 5, 2023, 8:48 PM
47 points
9 comments · 2 min read · LW link

The Ogdoad

Tristan Miano · Mar 5, 2023, 8:01 PM
−15 points
1 comment · 37 min read · LW link

[Question] What are some good ways to heighten my emotions?

oh54321 · Mar 5, 2023, 6:06 PM
5 points
5 comments · 1 min read · LW link

Research proposal: Leveraging Jungian archetypes to create values-based models

MiguelDev · Mar 5, 2023, 5:39 PM
5 points
2 comments · 2 min read · LW link

Abusing Snap Circuits IC

jefftk · Mar 5, 2023, 5:00 PM
19 points
3 comments · 3 min read · LW link
(www.jefftk.com)

Do humans derive values from fictitious imputed coherence?

TsviBT · Mar 5, 2023, 3:23 PM
45 points
8 comments · 14 min read · LW link

The Inner-Compass Theorem

Tristan Miano · Mar 5, 2023, 3:21 PM
−18 points
12 comments · 16 min read · LW link

Halifax Monthly Meetup: AI Safety Discussion

Ideopunk · Mar 5, 2023, 12:42 PM
10 points
0 comments · 1 min read · LW link

Why kill everyone?

arisAlexis · Mar 5, 2023, 11:53 AM
7 points
5 comments · 2 min read · LW link

Selective, Corrective, Structural: Three Ways of Making Social Systems Work

Said Achmiz · Mar 5, 2023, 8:45 AM
100 points
13 comments · 2 min read · LW link

Substitute goods for leisure are abundant

Adam Zerner · Mar 5, 2023, 3:45 AM
20 points
7 comments · 5 min read · LW link

[Question] Does polyamory at a workplace turn nepotism up to eleven?

Viliam · Mar 5, 2023, 12:57 AM
45 points
11 comments · 2 min read · LW link

Why We MUST Build an (aligned) Artificial Superintelligence That Takes Over Human Society—A Thought Experiment

twkaiser · Mar 5, 2023, 12:47 AM
−13 points
12 comments · 2 min read · LW link