[Question] Good HPMoR scenes / passages?

PhilGoetz · 3 Mar 2024 22:42 UTC
14 points
17 comments · 1 min read · LW link

Attending Sold-Out Beantown Stomp

jefftk · 3 Mar 2024 21:30 UTC
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

AI things that are perhaps as important as human-controlled AI (Chi version)

Chi Nguyen · 3 Mar 2024 18:07 UTC
48 points
4 comments · 1 min read · LW link

A tedious and effective way to learn 汉字 (Chinese characters)

dkl9 · 3 Mar 2024 16:41 UTC
6 points
1 comment · 2 min read · LW link
(dkl9.net)

Some costs of superposition

Linda Linsefors · 3 Mar 2024 16:08 UTC
46 points
11 comments · 3 min read · LW link

[Question] If you controlled the first agentic AGI, what would you set as its first task(s)?

sweenesm · 3 Mar 2024 14:16 UTC
−13 points
5 comments · 2 min read · LW link

Self-Resolving Prediction Markets

PeterMcCluskey · 3 Mar 2024 2:39 UTC
31 points
0 comments · 3 min read · LW link
(bayesianinvestor.com)

[Question] Increase the tax value of donations with high-variance investments?

Brendan Long · 3 Mar 2024 1:39 UTC
20 points
4 comments · 2 min read · LW link

Common Philosophical Mistakes, according to Joe Schmid [videos]

DanielFilan · 3 Mar 2024 0:15 UTC
8 points
3 comments · 1 min read · LW link
(www.youtube.com)

Agreeing With Stalin in Ways That Exhibit Generally Rationalist Principles

Zack_M_Davis · 2 Mar 2024 22:05 UTC
37 points
19 comments · 58 min read · LW link
(unremediatedgender.space)

The World in 2029

Nathan Young · 2 Mar 2024 18:03 UTC
70 points
37 comments · 3 min read · LW link

The Most Dangerous Idea

rogersbacon · 2 Mar 2024 17:53 UTC
−8 points
2 comments · 26 min read · LW link
(www.secretorum.life)

Future life

DavidMadsen · 2 Mar 2024 15:41 UTC
−12 points
2 comments · 1 min read · LW link

Ugo Conti’s Whistle-Controlled Synthesizer

jefftk · 2 Mar 2024 2:50 UTC
15 points
1 comment · 2 min read · LW link
(www.jefftk.com)

A one-sentence formulation of the AI X-Risk argument I try to make

tcelferact · 2 Mar 2024 0:44 UTC
3 points
0 comments · 1 min read · LW link

If you weren’t such an idiot...

2 Mar 2024 0:01 UTC
131 points
61 comments · 2 min read · LW link
(markxu.com)

Increasing IQ is trivial

George3d6 · 1 Mar 2024 22:43 UTC
38 points
54 comments · 6 min read · LW link
(epistemink.substack.com)

self-fulfilling prophecies when applying for funding

Chipmonk · 1 Mar 2024 19:01 UTC
30 points
0 comments · 1 min read · LW link
(chipmonk.substack.com)

Antagonistic AI

Xybermancer · 1 Mar 2024 18:50 UTC
−8 points
1 comment · 1 min read · LW link

Against Augmentation of Intelligence, Human or Otherwise (An Anti-Natalist Argument)

Benjamin Bourlier · 1 Mar 2024 18:45 UTC
−26 points
18 comments · 3 min read · LW link

Elon files grave charges against OpenAI

mako yass · 1 Mar 2024 17:42 UTC
38 points
10 comments · 1 min read · LW link
(www.courthousenews.com)

Notes on Dwarkesh Patel’s Podcast with Demis Hassabis

Zvi · 1 Mar 2024 16:30 UTC
93 points
0 comments · 8 min read · LW link
(thezvi.wordpress.com)

What does your philosophy maximize?

Antb · 1 Mar 2024 16:10 UTC
0 points
1 comment · 1 min read · LW link

The Defence production act and AI policy

NathanBarnard · 1 Mar 2024 14:26 UTC
37 points
0 comments · 2 min read · LW link

Chapter 1: A Pin Art Hand

SashaWu · 1 Mar 2024 14:08 UTC
3 points
0 comments · 3 min read · LW link

Don’t Endorse the Idea of Market Failure

Maxwell Tabarrok · 1 Mar 2024 14:04 UTC
14 points
22 comments · 4 min read · LW link
(www.maximum-progress.com)

[Question] Is it possible to make more specific bookmarks?

numpyNaN · 1 Mar 2024 12:47 UTC
1 point
0 comments · 1 min read · LW link

Wholesome Culture

owencb · 1 Mar 2024 12:08 UTC
29 points
3 comments · 1 min read · LW link

Adding Sensors to Mandolin?

jefftk · 1 Mar 2024 2:10 UTC
6 points
1 comment · 1 min read · LW link
(www.jefftk.com)

The Parable Of The Fallen Pendulum—Part 1

johnswentworth · 1 Mar 2024 0:25 UTC
111 points
32 comments · 2 min read · LW link

Gradations of moral weight

MichaelStJules · 29 Feb 2024 23:08 UTC
0 points
0 comments · 1 min read · LW link

Approaching Human-Level Forecasting with Language Models

29 Feb 2024 22:36 UTC
59 points
6 comments · 3 min read · LW link

Paper review: “The Unreasonable Effectiveness of Easy Training Data for Hard Tasks”

Vassil Tashev · 29 Feb 2024 18:44 UTC
11 points
0 comments · 4 min read · LW link

What’s in the box?! – Towards interpretability by distinguishing niches of value within neural networks.

Joshua Clancy · 29 Feb 2024 18:33 UTC
3 points
4 comments · 128 min read · LW link

Short Post: Discerning Truth from Trash

FinalFormal2 · 29 Feb 2024 18:09 UTC
−2 points
0 comments · 1 min read · LW link

Sëbus: Intro

SashaWu · 29 Feb 2024 16:42 UTC
5 points
0 comments · 1 min read · LW link

AI #53: One More Leap

Zvi · 29 Feb 2024 16:10 UTC
45 points
0 comments · 38 min read · LW link
(thezvi.wordpress.com)

Cryonics p(success) estimates are only weakly associated with interest in pursuing cryonics in the LW 2023 Survey

Andy_McKenzie · 29 Feb 2024 14:47 UTC
28 points
6 comments · 1 min read · LW link

Bengio’s Alignment Proposal: “Towards a Cautious Scientist AI with Convergent Safety Bounds”

mattmacdermott · 29 Feb 2024 13:59 UTC
75 points
19 comments · 14 min read · LW link
(yoshuabengio.org)

Tips for Empirical Alignment Research

Ethan Perez · 29 Feb 2024 6:04 UTC
143 points
4 comments · 22 min read · LW link

[Question] Supposing the 1bit LLM paper pans out

O O · 29 Feb 2024 5:31 UTC
27 points
11 comments · 1 min read · LW link

Can RLLMv3’s ability to defend against jailbreaks be attributed to datasets containing stories about Jung’s shadow integration theory?

MiguelDev · 29 Feb 2024 5:13 UTC
7 points
2 comments · 11 min read · LW link

Post series on “Liability Law for reducing Existential Risk from AI”

Nora_Ammann · 29 Feb 2024 4:39 UTC
42 points
1 comment · 1 min read · LW link
(forum.effectivealtruism.org)

Tour Retrospective February 2024

jefftk · 29 Feb 2024 3:50 UTC
10 points
0 comments · 4 min read · LW link
(www.jefftk.com)

Locating My Eyes (Part 3 of “The Sense of Physical Necessity”)

LoganStrohl · 29 Feb 2024 3:09 UTC
43 points
4 comments · 22 min read · LW link

Conspiracy Theorists Aren’t Ignorant. They’re Bad At Epistemology.

omnizoid · 28 Feb 2024 23:39 UTC
18 points
10 comments · 5 min read · LW link

Discovering alignment windfalls reduces AI risk

28 Feb 2024 21:23 UTC
15 points
1 comment · 8 min read · LW link
(blog.elicit.com)

my theory of the industrial revolution

bhauth · 28 Feb 2024 21:07 UTC
13 points
7 comments · 3 min read · LW link
(www.bhauth.com)

Wholesomeness and Effective Altruism

owencb · 28 Feb 2024 20:28 UTC
42 points
3 comments · 1 min read · LW link

timestamping through the Singularity

throwaway918119127 · 28 Feb 2024 19:09 UTC
−1 points
4 comments · 8 min read · LW link