Alignment Megaprojects: You’re Not Even Trying to Have Ideas

Nicholas / Heather Kross · Jul 12, 2023, 11:39 PM
55 points
32 comments · 2 min read · LW link

Eric Michaud on the Quantization Model of Neural Scaling, Interpretability and Grokking

Michaël Trazzi · Jul 12, 2023, 10:45 PM
10 points
0 comments · 2 min read · LW link
(theinsideview.ai)

[Question] Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice?

tailcalled · Jul 12, 2023, 10:08 PM
42 points
6 comments · 1 min read · LW link

The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling)

Tachikoma · Jul 12, 2023, 9:08 PM
2 points
0 comments · 2 min read · LW link

[Question] What does the launch of x.ai mean for AI Safety?

Chris_Leong · Jul 12, 2023, 7:42 PM
35 points
3 comments · 1 min read · LW link

Towards Developmental Interpretability

Jul 12, 2023, 7:33 PM
192 points
10 comments · 9 min read · LW link · 1 review

Flowchart: How might rogue AIs defeat all humans?

Aryeh Englander · Jul 12, 2023, 7:23 PM
12 points
0 comments · 1 min read · LW link

A review of Principia Qualia

jessicata · Jul 12, 2023, 6:38 PM
56 points
8 comments · 10 min read · LW link
(unstablerontology.substack.com)

How I Learned To Stop Worrying And Love The Shoggoth

Peter Merel · Jul 12, 2023, 5:47 PM
9 points
15 comments · 5 min read · LW link

Goal-Direction for Simulated Agents

Raymond Douglas · Jul 12, 2023, 5:06 PM
33 points
2 comments · 6 min read · LW link

AISN#14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use

Dan H · Jul 12, 2023, 4:58 PM
16 points
0 comments · LW link

Report on modeling evidential cooperation in large worlds

Johannes Treutlein · Jul 12, 2023, 4:37 PM
45 points
3 comments · 1 min read · LW link
(arxiv.org)

Compression of morbidity

DirectedEvolution · Jul 12, 2023, 3:26 PM
12 points
0 comments · 3 min read · LW link

An Overview of the AI Safety Funding Situation

Stephen McAleese · Jul 12, 2023, 2:54 PM
69 points
10 comments · LW link

[Question] What is some unnecessarily obscure jargon that people here tend to use?

jchan · Jul 12, 2023, 1:52 PM
17 points
5 comments · 1 min read · LW link

Housing and Transit Roundup #5

Zvi · Jul 12, 2023, 1:10 PM
25 points
1 comment · 20 min read · LW link
(thezvi.wordpress.com)

A transcript of the TED talk by Eliezer Yudkowsky

Mikhail Samin · Jul 12, 2023, 12:12 PM
105 points
13 comments · 4 min read · LW link

Lightweight minimal speech recognition?

jefftk · Jul 12, 2023, 12:00 PM
9 points
6 comments · 1 min read · LW link
(www.jefftk.com)

Aging and the geroscience hypothesis

DirectedEvolution · Jul 12, 2023, 7:16 AM
54 points
14 comments · 5 min read · LW link

Popularizing vibes vs. models

DirectedEvolution · Jul 12, 2023, 5:44 AM
19 points
0 comments · 2 min read · LW link

Announcing the AI Fables Writing Contest!

DaystarEld · Jul 12, 2023, 3:04 AM
36 points
3 comments · LW link

Why it’s necessary to shoot yourself in the foot

Jacob G-W · Jul 11, 2023, 9:17 PM
39 points
7 comments · 2 min read · LW link
(g-w1.github.io)

How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.

Christopher King · Jul 11, 2023, 7:27 PM
17 points
11 comments · 2 min read · LW link

[Question] Do we automatically accept propositions?

Aaron Graifman · Jul 11, 2023, 5:45 PM
10 points
5 comments · 1 min read · LW link

fMRI LIKE APPROACH TO AI ALIGNMENT / DECEPTIVE BEHAVIOUR

Escaque 66 · Jul 11, 2023, 5:17 PM
−1 points
3 comments · 2 min read · LW link

Introducing Fatebook: the fastest way to make and track predictions

Jul 11, 2023, 3:28 PM
132 points
41 comments · 1 min read · LW link · 2 reviews
(fatebook.io)

My Weirdest Experience

Bridgett Kay · Jul 11, 2023, 2:44 PM
38 points
19 comments · 1 min read · LW link
(dxmrevealed.wordpress.com)

Announcing The Roots of Progress Blog-Building Intensive

jasoncrawford · Jul 11, 2023, 2:04 PM
10 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

OpenAI Launches Superalignment Taskforce

Zvi · Jul 11, 2023, 1:00 PM
150 points
40 comments · 49 min read · LW link
(thezvi.wordpress.com)

Critiquing Risks From Learned Optimization, and Avoiding Cached Theories

ProofBySonnet · Jul 11, 2023, 11:38 AM
1 point
0 comments · 6 min read · LW link

[UPDATE: deadline extended to July 24!] New wind in rationality’s sails: Applications for Epistea Residency 2023 are now open

Jul 11, 2023, 11:02 AM
80 points
7 comments · 3 min read · LW link

Two Hot Takes about Quine

Charlie Steiner · Jul 11, 2023, 6:42 AM
17 points
0 comments · 2 min read · LW link

Disincentivizing deception in mesa optimizers with Model Tampering

martinkunev · Jul 11, 2023, 12:44 AM
3 points
0 comments · 2 min read · LW link

Drawn Out: a story

Richard_Ngo · Jul 11, 2023, 12:08 AM
80 points
2 comments · 8 min read · LW link

Definitions are about efficiency and consistency with common language.

Nacruno96 · Jul 10, 2023, 11:46 PM
1 point
0 comments · 4 min read · LW link

Reframing Evolution—An information wavefront traveling through time

Joshua Clancy · Jul 10, 2023, 10:36 PM
1 point
0 comments · 5 min read · LW link
(midflip.org)

GPT-7: The Tale of the Big Computer (An Experimental Story)

Justin Bullock · Jul 10, 2023, 8:22 PM
4 points
4 comments · 5 min read · LW link

Cost-effectiveness of professional field-building programs for AI safety research

Dan H · Jul 10, 2023, 6:28 PM
8 points
5 comments · LW link

Cost-effectiveness of student programs for AI safety research

Dan H · Jul 10, 2023, 6:28 PM
15 points
2 comments · LW link

Modeling the impact of AI safety field-building programs

Dan H · Jul 10, 2023, 6:27 PM
21 points
0 comments · LW link

I think Michael Bailey’s dismissal of my autogynephilia questions for Scott Alexander and Aella makes very little sense

tailcalled · Jul 10, 2023, 5:39 PM
46 points
45 comments · 2 min read · LW link

Incentives from a causal perspective

Jul 10, 2023, 5:16 PM
27 points
0 comments · 6 min read · LW link

Is the Endowment Effect Due to Incomparability?

Kevin Dorst · Jul 10, 2023, 4:26 PM
21 points
10 comments · 7 min read · LW link
(kevindorst.substack.com)

Frontier AI Regulation

Zach Stein-Perlman · Jul 10, 2023, 2:30 PM
21 points
4 comments · 8 min read · LW link
(arxiv.org)

Why is it so hard to change people’s minds? Well, imagine if it wasn’t...

Celarix · Jul 10, 2023, 1:55 PM
6 points
9 comments · 6 min read · LW link

Consider Joining the UK Foundation Model Taskforce

Zvi · Jul 10, 2023, 1:50 PM
105 points
12 comments · 1 min read · LW link
(thezvi.wordpress.com)

“Reframing Superintelligence” + LLMs + 4 years

Eric Drexler · Jul 10, 2023, 1:42 PM
118 points
9 comments · 12 min read · LW link

Open-minded updatelessness

Jul 10, 2023, 11:08 AM
66 points
21 comments · 12 min read · LW link

Consciousness as a conflationary alliance term for intrinsically valued internal experiences

Andrew_Critch · Jul 10, 2023, 8:09 AM
214 points
54 comments · 11 min read · LW link · 2 reviews

The world where LLMs are possible

Ape in the coat · Jul 10, 2023, 8:00 AM
20 points
10 comments · 3 min read · LW link