Three configurable prettyprinters

philh · Aug 10, 2023, 11:10 PM
9 points
0 comments · 22 min read · LW link
(reasonableapproximation.net)

Ilya Sutskever’s thoughts on AI safety (July 2023): a transcript with my comments

mishka · Aug 10, 2023, 7:07 PM
21 points
3 comments · 5 min read · LW link

Seeking Input to AI Safety Book for non-technical audience

Darren McKee · Aug 10, 2023, 5:58 PM
10 points
4 comments · 1 min read · LW link

Evaluating GPT-4 Theory of Mind Capabilities

Aug 10, 2023, 5:57 PM
15 points
2 comments · 14 min read · LW link

Some alignment ideas

SelonNerias · Aug 10, 2023, 5:51 PM
1 point
0 comments · 11 min read · LW link

Self Supervised Learning (SSL)

Varshul Gupta · Aug 10, 2023, 5:43 PM
5 points
1 comment · 2 min read · LW link
(dubverseblack.substack.com)

Predicting Virus Relative Abundance in Wastewater

jefftk · Aug 10, 2023, 3:46 PM
33 points
2 comments · LW link
(naobservatory.org)

AI #24: Week of the Podcast

Zvi · Aug 10, 2023, 3:00 PM
49 points
5 comments · 44 min read · LW link
(thezvi.wordpress.com)

Could We Automate AI Alignment Research?

Stephen McAleese · Aug 10, 2023, 12:17 PM
34 points
10 comments · 21 min read · LW link

The positional embedding matrix and previous-token heads: how do they actually work?

AdamYedidia · Aug 10, 2023, 1:58 AM
27 points
4 comments · 13 min read · LW link

LLMs are (mostly) not helped by filler tokens

Kshitij Sachan · Aug 10, 2023, 12:48 AM
66 points
35 comments · 6 min read · LW link

2023 ACX Meetups Everywhere—Newton, MA

duck_master · Aug 9, 2023, 10:47 PM
6 points
2 comments · 1 min read · LW link

Progress links digest, 2023-08-09: US adds new nuclear, Katalin Karikó interview, and more

jasoncrawford · Aug 9, 2023, 7:22 PM
18 points
0 comments · 3 min read · LW link
(rootsofprogress.org)

Mech Interp Challenge: August—Deciphering the First Unique Character Model

CallumMcDougall · Aug 9, 2023, 7:14 PM
36 points
1 comment · 3 min read · LW link

Real Meaning of life has been found. Eliezer discovered it in 2000′s.

Jorterder · Aug 9, 2023, 6:13 PM
−15 points
1 comment · 1 min read · LW link
(docs.google.com)

Marginal Revolution unofficial birthday party

Derek M. Jones · Aug 9, 2023, 2:35 PM
4 points
0 comments · 1 min read · LW link

A content analysis of the SQ-R questionnaire and a proposal for testing EQ-SQ theory

tailcalled · Aug 9, 2023, 1:51 PM
10 points
2 comments · 13 min read · LW link

[Question] Does LessWrong allow exempting posts from being scraped by GPTBot?

mic · Aug 9, 2023, 1:02 PM
29 points
3 comments · 1 min read · LW link

If I Was An Eccentric Trillionaire

niplav · Aug 9, 2023, 7:56 AM
9 points
8 comments · 26 min read · LW link

Modulating sycophancy in an RLHF model via activation steering

Nina Panickssery · Aug 9, 2023, 7:06 AM
69 points
20 comments · 12 min read · LW link

Open Thread—August 2023

habryka · Aug 9, 2023, 3:52 AM
18 points
49 comments · 1 min read · LW link

marine cloud brightening

bhauth · Aug 9, 2023, 2:50 AM
40 points
14 comments · 3 min read · LW link
(www.bhauth.com)

Inflection.ai is a major AGI lab

Nikola Jurkovic · Aug 9, 2023, 1:05 AM
137 points
13 comments · 2 min read · LW link

Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired

Christopher King · Aug 9, 2023, 12:50 AM
1 point
5 comments · 4 min read · LW link

Necromancy’s unintended consequences.

Christopher King · Aug 9, 2023, 12:08 AM
−6 points
2 comments · 2 min read · LW link

What’s A “Market”?

johnswentworth · Aug 8, 2023, 11:29 PM
94 points
16 comments · 10 min read · LW link

Podcast (+transcript): Nathan Barnard on how US financial regulation can inform AI governance

Aaron Bergman · Aug 8, 2023, 9:46 PM
8 points
0 comments · LW link
(www.aaronbergman.net)

What are the flaws in this argument about p(Doom)?

William the Kiwi · Aug 8, 2023, 8:34 PM
−2 points
26 comments · 1 min read · LW link

A Simple Theory Of Consciousness

SherlockHolmes · Aug 8, 2023, 6:05 PM
2 points
5 comments · 1 min read · LW link
(peterholmes.medium.com)

[Linkpost] Rationally awake

jpc · Aug 8, 2023, 5:59 PM
−1 points
0 comments · 4 min read · LW link
(jpc.dev)

Yet more UFO Betting: Put Up or Shut Up

MoreRatsWrongReUAP · Aug 8, 2023, 5:50 PM
10 points
18 comments · 1 min read · LW link

AISN #18: Challenges of Reinforcement Learning from Human Feedback, Microsoft’s Security Breach, and Conceptual Research on AI Safety

Dan H · Aug 8, 2023, 3:52 PM
13 points
0 comments · LW link
(newsletter.safe.ai)

[Question] Beginner’s question about RLHF

FTPickle · Aug 8, 2023, 3:48 PM
1 point
3 comments · 1 min read · LW link

My Trial Period as an Independent Alignment Researcher

Bart Bussmann · Aug 8, 2023, 2:16 PM
34 points
1 comment · 3 min read · LW link

4 types of AGI selection, and how to constrain them

Remmelt · Aug 8, 2023, 10:02 AM
−4 points
3 comments · 3 min read · LW link

Notice your everything

metachirality · Aug 8, 2023, 2:38 AM
15 points
1 comment · 2 min read · LW link

Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research

Aug 8, 2023, 1:30 AM
319 points
30 comments · 18 min read · LW link · 1 review

Perpetually Declining Population?

jefftk · Aug 8, 2023, 1:30 AM
48 points
29 comments · 3 min read · LW link
(www.jefftk.com)

[Question] How do I find all the items on LW that I’ve *favorited* or upvoted?

Alex K. Chen (parrot) · Aug 7, 2023, 11:51 PM
14 points
3 comments · 1 min read · LW link

A plea for more funding shortfall transparency

porby · Aug 7, 2023, 9:33 PM
73 points
4 comments · 2 min read · LW link

[Question] Tips for reducing thinking branching factor

Simon Berens · Aug 7, 2023, 8:21 PM
4 points
6 comments · 1 min read · LW link

An interactive introduction to grokking and mechanistic interpretability

Aug 7, 2023, 7:09 PM
23 points
3 comments · 1 min read · LW link
(pair.withgoogle.com)

Feedbackloop-first Rationality

Raemon · Aug 7, 2023, 5:58 PM
205 points
69 comments · 8 min read · LW link · 2 reviews

Growing Bonsai Networks with RNNs

ameo · Aug 7, 2023, 5:34 PM
21 points
5 comments · 1 min read · LW link
(cprimozic.net)

[Question] Should I test myself for microplastics?

Augs · Aug 7, 2023, 5:31 PM
9 points
2 comments · 1 min read · LW link

Optimisation Measures: Desiderata, Impossibility, Proposals

Aug 7, 2023, 3:52 PM
36 points
9 comments · 1 min read · LW link

Announcing the Clearer Thinking micro-grants program for 2023

spencerg · Aug 7, 2023, 3:21 PM
14 points
1 comment · 1 min read · LW link
(www.clearerthinking.org)

What I’ve been reading, July–August 2023

jasoncrawford · Aug 7, 2023, 2:22 PM
23 points
0 comments · 13 min read · LW link
(rootsofprogress.org)

Monthly Roundup #9: August 2023

Zvi · Aug 7, 2023, 1:20 PM
42 points
25 comments · 57 min read · LW link
(thezvi.wordpress.com)

Strengthening the Argument for Intrinsic AI Safety: The S-Curves Perspective

avturchin · Aug 7, 2023, 1:13 PM
8 points
0 comments · 12 min read · LW link