Physics is Ultimately Subjective

Gordon Seidoh Worley · Jul 14, 2023, 10:19 PM
5 points
34 comments · 3 min read · LW link

[Question] How should a rational agent construct their utility function when faced with existence?

Aman Rusia · Jul 14, 2023, 7:48 PM
−2 points
1 comment · 1 min read · LW link

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

Štěpán Los · Jul 14, 2023, 5:43 PM
13 points
2 comments · 6 min read · LW link

Unsafe AI as Dynamical Systems

Robert_AIZI · Jul 14, 2023, 3:31 PM
11 points
0 comments · 3 min read · LW link
(aizi.substack.com)

A Short Summary of “Focus Your Uncertainty”

Stephen James · Jul 14, 2023, 11:18 AM
2 points
0 comments · 1 min read · LW link

Do the change you want to see in the world

TeaTieAndHat · Jul 14, 2023, 10:19 AM
7 points
0 comments · 1 min read · LW link

Gearing Up for Long Timelines in a Hard World

Dalcy · Jul 14, 2023, 6:11 AM
18 points
0 comments · 4 min read · LW link

When Someone Tells You They’re Lying, Believe Them

ymeskhout · Jul 14, 2023, 12:31 AM
95 points
3 comments · 3 min read · LW link

Activation adding experiments with FLAN-T5

Nina Panickssery · Jul 13, 2023, 11:32 PM
21 points
5 comments · 7 min read · LW link

[Question] What criterion would you use to select companies likely to cause AI doom?

momom2 · Jul 13, 2023, 8:31 PM
8 points
4 comments · 1 min read · LW link

Newcomb II: Newer and Comb-ier

Nathaniel Monson · Jul 13, 2023, 6:49 PM
0 points
11 comments · 3 min read · LW link

Jailbreaking GPT-4’s code interpreter

Nikola Jurkovic · Jul 13, 2023, 6:43 PM
160 points
22 comments · 7 min read · LW link

An attempt to steelman OpenAI’s alignment plan

Nathan Helm-Burger · Jul 13, 2023, 6:25 PM
22 points
0 comments · 4 min read · LW link

Instrumental Convergence to Complexity Preservation

Macro Flaneur · Jul 13, 2023, 5:40 PM
2 points
2 comments · 3 min read · LW link

Unabridged History of Global Parenting

CrimsonChin · Jul 13, 2023, 4:49 PM
0 points
2 comments · 7 min read · LW link

The Goddess of Everything Else—The Animation

Writer · Jul 13, 2023, 4:26 PM
142 points
4 comments · 1 min read · LW link
(youtu.be)

Winners of AI Alignment Awards Research Contest

Jul 13, 2023, 4:14 PM
115 points
4 comments · 12 min read · LW link
(alignmentawards.com)

Accidentally Load Bearing

jefftk · Jul 13, 2023, 4:10 PM
289 points
18 comments · 1 min read · LW link · 1 review
(www.jefftk.com)

AI #20: Code Interpreter and Claude 2.0 for Everyone

Zvi · Jul 13, 2023, 2:00 PM
60 points
9 comments · 56 min read · LW link
(thezvi.wordpress.com)

[Question] How can I get help becoming a better rationalist?

TeaTieAndHat · Jul 13, 2023, 1:41 PM
31 points
19 comments · 1 min read · LW link

i love eating trash

Ace Delgado · Jul 13, 2023, 11:23 AM
−6 points
0 comments · 1 min read · LW link

Elon Musk announces xAI

Jan_Kulveit · Jul 13, 2023, 9:01 AM
75 points
35 comments · 1 min read · LW link
(www.ft.com)

The intelligence-sentience orthogonality thesis

Ben Smith · Jul 13, 2023, 6:55 AM
19 points
9 comments · 9 min read · LW link

Alignment Megaprojects: You’re Not Even Trying to Have Ideas

Nicholas / Heather Kross · Jul 12, 2023, 11:39 PM
55 points
32 comments · 2 min read · LW link

Eric Michaud on the Quantization Model of Neural Scaling, Interpretability and Grokking

Michaël Trazzi · Jul 12, 2023, 10:45 PM
10 points
0 comments · 2 min read · LW link
(theinsideview.ai)

[Question] Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice?

tailcalled · Jul 12, 2023, 10:08 PM
42 points
6 comments · 1 min read · LW link

The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling)

Tachikoma · Jul 12, 2023, 9:08 PM
2 points
0 comments · 2 min read · LW link

[Question] What does the launch of x.ai mean for AI Safety?

Chris_Leong · Jul 12, 2023, 7:42 PM
35 points
3 comments · 1 min read · LW link

Towards Developmental Interpretability

Jul 12, 2023, 7:33 PM
192 points
10 comments · 9 min read · LW link · 1 review

Flowchart: How might rogue AIs defeat all humans?

Aryeh Englander · Jul 12, 2023, 7:23 PM
12 points
0 comments · 1 min read · LW link

A review of Principia Qualia

jessicata · Jul 12, 2023, 6:38 PM
56 points
8 comments · 10 min read · LW link
(unstablerontology.substack.com)

How I Learned To Stop Worrying And Love The Shoggoth

Peter Merel · Jul 12, 2023, 5:47 PM
9 points
15 comments · 5 min read · LW link

Goal-Direction for Simulated Agents

Raymond Douglas · Jul 12, 2023, 5:06 PM
33 points
2 comments · 6 min read · LW link

AISN #14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use

Dan H · Jul 12, 2023, 4:58 PM
16 points
0 comments · LW link

Report on modeling evidential cooperation in large worlds

Johannes Treutlein · Jul 12, 2023, 4:37 PM
45 points
3 comments · 1 min read · LW link
(arxiv.org)

Compression of morbidity

DirectedEvolution · Jul 12, 2023, 3:26 PM
12 points
0 comments · 3 min read · LW link

An Overview of the AI Safety Funding Situation

Stephen McAleese · Jul 12, 2023, 2:54 PM
69 points
10 comments · LW link

[Question] What is some unnecessarily obscure jargon that people here tend to use?

jchan · Jul 12, 2023, 1:52 PM
17 points
5 comments · 1 min read · LW link

Housing and Transit Roundup #5

Zvi · Jul 12, 2023, 1:10 PM
25 points
1 comment · 20 min read · LW link
(thezvi.wordpress.com)

A transcript of the TED talk by Eliezer Yudkowsky

Mikhail Samin · Jul 12, 2023, 12:12 PM
105 points
13 comments · 4 min read · LW link

Lightweight minimal speech recognition?

jefftk · Jul 12, 2023, 12:00 PM
9 points
6 comments · 1 min read · LW link
(www.jefftk.com)

Aging and the geroscience hypothesis

DirectedEvolution · Jul 12, 2023, 7:16 AM
54 points
14 comments · 5 min read · LW link

Popularizing vibes vs. models

DirectedEvolution · Jul 12, 2023, 5:44 AM
19 points
0 comments · 2 min read · LW link

Announcing the AI Fables Writing Contest!

DaystarEld · Jul 12, 2023, 3:04 AM
36 points
3 comments · LW link

Why it’s necessary to shoot yourself in the foot

Jacob G-W · Jul 11, 2023, 9:17 PM
39 points
7 comments · 2 min read · LW link
(g-w1.github.io)

How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.

Christopher King · Jul 11, 2023, 7:27 PM
17 points
11 comments · 2 min read · LW link

[Question] Do we automatically accept propositions?

Aaron Graifman · Jul 11, 2023, 5:45 PM
10 points
5 comments · 1 min read · LW link

fMRI LIKE APPROACH TO AI ALIGNMENT / DECEPTIVE BEHAVIOUR

Escaque 66 · Jul 11, 2023, 5:17 PM
−1 points
3 comments · 2 min read · LW link

Introducing Fatebook: the fastest way to make and track predictions

Jul 11, 2023, 3:28 PM
132 points
41 comments · 1 min read · LW link · 2 reviews
(fatebook.io)

My Weirdest Experience

Bridgett Kay · Jul 11, 2023, 2:44 PM
38 points
19 comments · 1 min read · LW link
(dxmrevealed.wordpress.com)