Revisiting the Manifold Hypothesis

Aidan Rocke · Oct 1, 2023, 11:55 PM
13 points
19 comments · 4 min read · LW link

AI Alignment Breakthroughs this Week [new substack]

Logan Zoellner · Oct 1, 2023, 10:13 PM
0 points
8 comments · 2 min read · LW link

[Question] Looking for study

Robert Feinstein · Oct 1, 2023, 7:52 PM
4 points
0 comments · 1 min read · LW link

Join AISafety.info’s Distillation Hackathon (Oct 6-9th)

smallsilo · Oct 1, 2023, 6:43 PM
21 points
0 comments · 2 min read · LW link
(forum.effectivealtruism.org)

Fifty Flips

abstractapplic · Oct 1, 2023, 3:30 PM
33 points
15 comments · 1 min read · LW link · 1 review
(h-b-p.github.io)

AI Safety Impact Markets: Your Charity Evaluator for AI Safety

Dawn Drescher · Oct 1, 2023, 10:47 AM
16 points
5 comments · LW link
(impactmarkets.substack.com)

“Absence of Evidence is Not Evidence of Absence” As a Limit

transhumanist_atom_understander · Oct 1, 2023, 8:15 AM
16 points
1 comment · 2 min read · LW link

New Tool: the Residual Stream Viewer

AdamYedidia · Oct 1, 2023, 12:49 AM
32 points
7 comments · 4 min read · LW link
(tinyurl.com)

My Effortless Weightloss Story: A Quick Runthrough

CuoreDiVetro · Sep 30, 2023, 11:02 PM
123 points
78 comments · 9 min read · LW link

Arguments for moral indefinability

Richard_Ngo · Sep 30, 2023, 10:40 PM
47 points
16 comments · 7 min read · LW link
(www.thinkingcomplete.com)

Conditionals All The Way Down

lunatic_at_large · Sep 30, 2023, 9:06 PM
33 points
2 comments · 3 min read · LW link

Focusing your impact on short vs long TAI timelines

kuhanj · Sep 30, 2023, 7:34 PM
4 points
0 comments · 10 min read · LW link

How model editing could help with the alignment problem

Michael Ripa · Sep 30, 2023, 5:47 PM
12 points
1 comment · 15 min read · LW link

My submission to the ALTER Prize

Lorxus · Sep 30, 2023, 4:07 PM
6 points
0 comments · 1 min read · LW link
(www.docdroid.net)

Anki deck for learning the main AI safety orgs, projects, and programs

Bryce Robertson · Sep 30, 2023, 4:06 PM
2 points
0 comments · 1 min read · LW link

The Lighthaven Campus is open for bookings

habryka · Sep 30, 2023, 1:08 AM
209 points
18 comments · 4 min read · LW link
(www.lighthaven.space)

Headphones hook

philh · Sep 29, 2023, 10:50 PM
21 points
1 comment · 3 min read · LW link
(reasonableapproximation.net)

Paul Christiano’s views on “doom” (video explainer)

Michaël Trazzi · Sep 29, 2023, 9:56 PM
15 points
0 comments · 1 min read · LW link
(youtu.be)

The Retroactive Funding Landscape: Innovations for Donors and Grantmakers

Dawn Drescher · Sep 29, 2023, 5:39 PM
13 points
0 comments · LW link
(impactmarkets.substack.com)

Bids To Defer On Value Judgements

johnswentworth · Sep 29, 2023, 5:07 PM
58 points
6 comments · 3 min read · LW link

Announcing FAR Labs, an AI safety coworking space

Ben Goldhaber · Sep 29, 2023, 4:52 PM
95 points
0 comments · 1 min read · LW link

A tool for searching rationalist & EA webs

Daniel_Friedrich · Sep 29, 2023, 3:23 PM
4 points
0 comments · 1 min read · LW link
(ratsearch.blogspot.com)

Basic Mathematics of Predictive Coding

Adam Shai · Sep 29, 2023, 2:38 PM
49 points
6 comments · 9 min read · LW link

“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation

titotal · Sep 29, 2023, 2:01 PM
160 points
79 comments · LW link
(titotal.substack.com)

Steering subsystems: capabilities, agency, and alignment

Seth Herd · Sep 29, 2023, 1:45 PM
31 points
0 comments · 8 min read · LW link

Apply to Usable Security Prize by September 30

Allison Duettmann · Sep 29, 2023, 1:39 PM
4 points
0 comments · 1 min read · LW link

List of how people have become more hard-working

Chi Nguyen · Sep 29, 2023, 11:30 AM
69 points
7 comments · LW link

Resolving moral uncertainty with randomization

Sep 29, 2023, 11:23 AM
7 points
1 comment · 11 min read · LW link

EA Vegan Advocacy is not truthseeking, and it’s everyone’s problem

Elizabeth · Sep 28, 2023, 11:30 PM
323 points
250 comments · 22 min read · LW link · 2 reviews
(acesounderglass.com)

Competitive, Cooperative, and Cohabitive

Screwtape · Sep 28, 2023, 11:25 PM
49 points
13 comments · 5 min read · LW link · 1 review

The Coming Wave

PeterMcCluskey · Sep 28, 2023, 10:59 PM
27 points
1 comment · 6 min read · LW link
(bayesianinvestor.com)

High-level interpretability: detecting an AI’s objectives

Sep 28, 2023, 7:30 PM
72 points
4 comments · 21 min read · LW link

How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions

Sep 28, 2023, 6:53 PM
187 points
39 comments · 3 min read · LW link · 1 review

Responsible scaling policy TLDR

lemonhope · Sep 28, 2023, 6:51 PM
9 points
0 comments · 1 min read · LW link

Alignment Workshop talks

Richard_Ngo · Sep 28, 2023, 6:26 PM
37 points
1 comment · 1 min read · LW link
(www.alignment-workshop.com)

My Current Thoughts on the AI Strategic Landscape

Jeffrey Heninger · Sep 28, 2023, 5:59 PM
11 points
28 comments · 14 min read · LW link

My Arrogant Plan for Alignment

MrArrogant · Sep 28, 2023, 5:51 PM
2 points
6 comments · 6 min read · LW link

Discursive Competence in ChatGPT, Part 2: Memory for Texts

Bill Benzon · Sep 28, 2023, 4:34 PM
1 point
0 comments · 3 min read · LW link

Different views of alignment have different consequences for imperfect methods

Stuart_Armstrong · Sep 28, 2023, 4:31 PM
31 points
0 comments · 1 min read · LW link

AI #31: It Can Do What Now?

Zvi · Sep 28, 2023, 4:00 PM
90 points
6 comments · 40 min read · LW link
(thezvi.wordpress.com)

The point of a game is not to win, and you shouldn’t even pretend that it is

mako yass · Sep 28, 2023, 3:54 PM
51 points
27 comments · 4 min read · LW link
(makopool.com)

Cohabitive Games so Far

mako yass · Sep 28, 2023, 3:41 PM
131 points
146 comments · 19 min read · LW link · 2 reviews
(makopool.com)

Wobbly Table Theorem in Practice

Morpheus · 28 Sep 2023 14:33 UTC
24 points
0 comments · 2 min read · LW link

Weighing Animal Worth

jefftk · 28 Sep 2023 13:50 UTC
25 points
11 comments · 2 min read · LW link
(www.jefftk.com)

ARC Evals: Responsible Scaling Policies

Zach Stein-Perlman · 28 Sep 2023 4:30 UTC
40 points
10 comments · 2 min read · LW link · 1 review
(evals.alignment.org)

Petrov Day Retrospective, 2023 (re: the most important virtue of Petrov Day & unilaterally promoting it)

Ruby · 28 Sep 2023 2:48 UTC
66 points
73 comments · 6 min read · LW link

Jimmy Apples, source of the rumor that OpenAI has achieved AGI internally, is a credible insider.

Jorterder · 28 Sep 2023 1:20 UTC
−6 points
2 comments · 1 min read · LW link
(twitter.com)

Investigating the rumors of OpenAI achieving AGI

Jorterder · 28 Sep 2023 1:17 UTC
−4 points
1 comment · 1 min read · LW link

Alibaba Group releases Qwen, 14B parameter LLM

Nikola Jurkovic · 28 Sep 2023 0:12 UTC
5 points
1 comment · 1 min read · LW link
(qianwen-res.oss-cn-beijing.aliyuncs.com)

Metaculus Launches 2023/2024 FluSight Challenge Supporting CDC, $5K in Prizes

ChristianWilliams · 27 Sep 2023 21:35 UTC
5 points
0 comments · LW link
(www.metaculus.com)