Scaling laws for dominant assurance contracts

jessicata · 28 Nov 2023 23:11 UTC
36 points
5 comments · 7 min read · LW link
(unstableontology.com)

I’m confused about innate smell neuroanatomy

Steven Byrnes · 28 Nov 2023 20:49 UTC
34 points
0 comments · 8 min read · LW link

How to Control an LLM’s Behavior (why my P(DOOM) went down)

RogerDearnaley · 28 Nov 2023 19:56 UTC
64 points
30 comments · 11 min read · LW link

[Question] Is there a word for discrimination against A.I.?

Aaron Bohannon · 28 Nov 2023 19:03 UTC
1 point
4 comments · 1 min read · LW link

Update #2 to “Dominant Assurance Contract Platform”: EnsureDone

moyamo · 28 Nov 2023 18:02 UTC
33 points
2 comments · 1 min read · LW link

Ethicophysics II: Politics is the Mind-Savior

MadHatter · 28 Nov 2023 16:27 UTC
−9 points
9 comments · 4 min read · LW link
(bittertruths.substack.com)

Neither EA nor e/acc is what we need to build the future

jasoncrawford · 28 Nov 2023 16:04 UTC
0 points
22 comments · 3 min read · LW link
(rootsofprogress.org)

Agentic Growth

Logan Kieller · 28 Nov 2023 15:45 UTC
8 points
0 comments · 3 min read · LW link
(logankieller.substack.com)

AISC project: How promising is automating alignment research? (literature review)

Bogdan Ionut Cirstea · 28 Nov 2023 14:47 UTC
4 points
1 comment · 1 min read · LW link
(docs.google.com)

A day in the life of a mechanistic interpretability researcher

Bill Benzon · 28 Nov 2023 14:45 UTC
3 points
3 comments · 1 min read · LW link

Two sources of beyond-episode goals (Section 2.2.2 of “Scheming AIs”)

Joe Carlsmith · 28 Nov 2023 13:49 UTC
11 points
1 comment · 15 min read · LW link

Self-Referential Probabilistic Logic Admits the Payor’s Lemma

Yudhister Kumar · 28 Nov 2023 10:27 UTC
80 points
13 comments · 4 min read · LW link

[Question] How can I use AI without increasing AI-risk?

Yoav Ravid · 28 Nov 2023 10:05 UTC
18 points
6 comments · 1 min read · LW link

A Reading From The Book Of Sequences

Screwtape · 28 Nov 2023 6:45 UTC
8 points
0 comments · 4 min read · LW link

Anthropic Fall 2023 Debate Progress Update

Ansh Radhakrishnan · 28 Nov 2023 5:37 UTC
74 points
9 comments · 12 min read · LW link

Apocalypse insurance, and the hardline libertarian take on AI risk

So8res · 28 Nov 2023 2:09 UTC
122 points
36 comments · 7 min read · LW link

My techno-optimism [By Vitalik Buterin]

habryka · 27 Nov 2023 23:53 UTC
102 points
16 comments · 2 min read · LW link
(www.lesswrong.com)

[Question] Could Germany have won World War I with high probability given the benefit of hindsight?

Roko · 27 Nov 2023 22:52 UTC
10 points
18 comments · 1 min read · LW link

[Question] Could World War I have been prevented given the benefit of hindsight?

Roko · 27 Nov 2023 22:39 UTC
16 points
8 comments · 1 min read · LW link

AISC 2024 - Project Summaries

NickyP · 27 Nov 2023 22:32 UTC
48 points
3 comments · 18 min read · LW link

“Epistemic range of motion” and LessWrong moderation

27 Nov 2023 21:58 UTC
60 points
3 comments · 12 min read · LW link

Apply to the Conceptual Boundaries Workshop for AI Safety

Chipmonk · 27 Nov 2023 21:04 UTC
48 points
0 comments · 3 min read · LW link

There is no IQ for AI

Gabriel Alfour · 27 Nov 2023 18:21 UTC
30 points
10 comments · 9 min read · LW link
(cognition.cafe)

Two concepts of an “episode” (Section 2.2.1 of “Scheming AIs”)

Joe Carlsmith · 27 Nov 2023 18:01 UTC
19 points
1 comment · 13 min read · LW link

[Linkpost] George Mack’s Razors

trevor · 27 Nov 2023 17:53 UTC
38 points
8 comments · 3 min read · LW link
(twitter.com)

On possible cross-fertilization between AI and neuroscience [Creativity]

Bill Benzon · 27 Nov 2023 16:50 UTC
15 points
22 comments · 7 min read · LW link

Ethicophysics I

MadHatter · 27 Nov 2023 15:44 UTC
−1 points
16 comments · 1 min read · LW link
(open.substack.com)

Sentience Institute 2023 End of Year Summary

michael_dello · 27 Nov 2023 12:11 UTC
11 points
0 comments · 5 min read · LW link
(www.sentienceinstitute.org)

[Question] A Question about Corrigibility (2015)

A.H. · 27 Nov 2023 12:05 UTC
4 points
2 comments · 1 min read · LW link

Appendices to the live agendas

27 Nov 2023 11:10 UTC
16 points
4 comments · 1 min read · LW link

Shallow review of live agendas in alignment & safety

27 Nov 2023 11:10 UTC
310 points
69 comments · 29 min read · LW link

Napoleon stole the Roman Inquisition archives and investigated the Galileo case

Meow P · 27 Nov 2023 9:41 UTC
−3 points
0 comments · 1 min read · LW link
(www.cricetuscricetus.co.uk)

Paper: “FDT in an evolutionary environment”

the gears to ascension · 27 Nov 2023 5:27 UTC
27 points
46 comments · 1 min read · LW link
(arxiv.org)

[Question] why did OpenAI employees sign

bhauth · 27 Nov 2023 5:21 UTC
49 points
23 comments · 1 min read · LW link

Unknown Probabilities

transhumanist_atom_understander · 27 Nov 2023 2:30 UTC
13 points
0 comments · 4 min read · LW link

Justification for Induction

Krantz · 27 Nov 2023 2:05 UTC
2 points
25 comments · 5 min read · LW link

Situational awareness (Section 2.1 of “Scheming AIs”)

Joe Carlsmith · 26 Nov 2023 23:00 UTC
10 points
5 comments · 8 min read · LW link

AXRP Episode 26 - AI Governance with Elizabeth Seger

DanielFilan · 26 Nov 2023 23:00 UTC
13 points
0 comments · 66 min read · LW link

Solving Two-Sided Adverse Selection with Prediction Market Matchmaking

Saul Munn · 26 Nov 2023 20:10 UTC
16 points
7 comments · 4 min read · LW link
(www.brasstacks.blog)

Wikipedia is not so great, and what can be done about it.

euserx · 26 Nov 2023 19:13 UTC
0 points
27 comments · 16 min read · LW link
(forum.effectivealtruism.org)

[Question] Help me solve this problem: The basilisk isn’t real, but people are

canary_itm · 26 Nov 2023 17:44 UTC
−19 points
4 comments · 1 min read · LW link

Twin Cities ACX Meetup—December 2023

Timothy M. · 26 Nov 2023 17:32 UTC
1 point
1 comment · 1 min read · LW link

Spaced repetition for teaching two-year olds how to read (Interview)

Chipmonk · 26 Nov 2023 16:52 UTC
46 points
9 comments · 5 min read · LW link
(chipmonk.substack.com)

Paper out now on creatine and cognitive performance

Fabienne · 26 Nov 2023 10:58 UTC
57 points
2 comments · 1 min read · LW link

Why Q*, if real, might be a game changer

shminux · 26 Nov 2023 6:12 UTC
5 points
6 comments · 1 min read · LW link

Moral Reality Check (a short story)

jessicata · 26 Nov 2023 5:03 UTC
138 points
44 comments · 21 min read · LW link
(unstableontology.com)

Accounting for Foregone Pay

jefftk · 26 Nov 2023 3:30 UTC
11 points
0 comments · 2 min read · LW link
(www.jefftk.com)

A thought experiment to help persuade skeptics that power-seeking AI is plausible

jacobcd52 · 25 Nov 2023 23:26 UTC
1 point
4 comments · 5 min read · LW link

Corrigibility or DWIM is an attractive primary goal for AGI

Seth Herd · 25 Nov 2023 19:37 UTC
16 points
4 comments · 1 min read · LW link

On “slack” in training (Section 1.5 of “Scheming AIs”)

Joe Carlsmith · 25 Nov 2023 17:51 UTC
1 point
0 comments · 5 min read · LW link