Funding for programs and events on global catastrophic risk, effective altruism, and other topics

Aug 14, 2024, 11:59 PM
9 points
0 comments · 2 min read · LW link

Funding for work that builds capacity to address risks from transformative AI

Aug 14, 2024, 11:52 PM
16 points
0 comments · 5 min read · LW link

GPT-2 Sometimes Fails at IOI

Ronak_Mehta · Aug 14, 2024, 11:24 PM
13 points
0 comments · 2 min read · LW link
(ronakrm.github.io)

Toward a Human Hybrid Language for Enhanced Human-Machine Communication: Addressing the AI Alignment Problem

Andndn Dheudnd · Aug 14, 2024, 10:19 PM
−4 points
2 comments · 4 min read · LW link

Adverse Selection by Life-Saving Charities

vaishnav92 · Aug 14, 2024, 8:46 PM
41 points
16 comments · 5 min read · LW link
(www.everythingisatrolley.com)

The great Enigma in the sky: The universe as an encryption machine

Alex_Shleizer · Aug 14, 2024, 1:21 PM
4 points
1 comment · 8 min read · LW link

An anti-inductive sequence

Viliam · Aug 14, 2024, 12:28 PM
37 points
10 comments · 3 min read · LW link

Rabin’s Paradox

Charlie Steiner · Aug 14, 2024, 5:40 AM
18 points
41 comments · 3 min read · LW link

Announcing the $200k EA Community Choice

Austin Chen · Aug 14, 2024, 12:39 AM
58 points
8 comments · LW link
(manifund.substack.com)

Debate: Is it ethical to work at AI capabilities companies?

Aug 14, 2024, 12:18 AM
39 points
21 comments · 11 min read · LW link

Fields that I reference when thinking about AI takeover prevention

Buck · Aug 13, 2024, 11:08 PM
144 points
16 comments · 10 min read · LW link
(redwoodresearch.substack.com)

Ten counter-arguments that AI is (not) an existential risk (for now)

kwiat.dev · Aug 13, 2024, 10:35 PM
20 points
5 comments · 8 min read · LW link

Alignment from equivariance

hamishtodd1 · Aug 13, 2024, 9:09 PM
3 points
2 comments · 5 min read · LW link

[LDSL#6] When is quantification needed, and when is it hard?

tailcalled · Aug 13, 2024, 8:39 PM
32 points
0 comments · 2 min read · LW link

A computational complexity argument for many worlds

jessicata · Aug 13, 2024, 7:35 PM
32 points
15 comments · 5 min read · LW link
(unstableontology.com)

The Consciousness Conundrum: Why We Can’t Dismiss Machine Sentience

SystematicApproach · Aug 13, 2024, 6:01 PM
−22 points
1 comment · 3 min read · LW link

Ten arguments that AI is an existential risk

Aug 13, 2024, 5:00 PM
118 points
42 comments · 7 min read · LW link
(blog.aiimpacts.org)

Eugenics And Reproduction Licenses FAQs: For the Common Good

Zero Contradictions · Aug 13, 2024, 4:34 PM
−8 points
14 comments · 4 min read · LW link
(zerocontradictions.net)

Superintelligent AI is possible in the 2020s

HunterJay · Aug 13, 2024, 6:03 AM
41 points
3 comments · 12 min read · LW link

Debate: Get a college degree?

Aug 12, 2024, 10:23 PM
42 points
14 comments · 21 min read · LW link

Extracting SAE task features for in-context learning

Aug 12, 2024, 8:34 PM
31 points
1 comment · 9 min read · LW link

Hyppotherapy

Marius Adrian Nicoară · Aug 12, 2024, 8:07 PM
−3 points
0 comments · 1 min read · LW link

Californians, tell your reps to vote yes on SB 1047!

Holly_Elmore · Aug 12, 2024, 7:50 PM
40 points
24 comments · LW link

[LDSL#5] Comparison and magnitude/diminishment

tailcalled · Aug 12, 2024, 6:47 PM
24 points
0 comments · 2 min read · LW link

In Defense of Open-Minded UDT

abramdemski · Aug 12, 2024, 6:27 PM
79 points
28 comments · 11 min read · LW link

Humanity isn’t remotely longtermist, so arguments for AGI x-risk should focus on the near term

Seth Herd · Aug 12, 2024, 6:10 PM
46 points
10 comments · 1 min read · LW link

Creating a “Conscience Calculator” to Guard-Rail an AGI

sweenesm · Aug 12, 2024, 4:03 PM
−2 points
0 comments · 13 min read · LW link

Shifting Headspaces—Transitional Beast-Mode

Jonathan Moregård · Aug 12, 2024, 1:02 PM
37 points
9 comments · 2 min read · LW link
(honestliving.substack.com)

Simultaneous Footbass and Footdrums II

jefftk · Aug 11, 2024, 11:50 PM
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

CultFrisbee

Gauraventh · Aug 11, 2024, 9:36 PM
16 points
3 comments · 1 min read · LW link
(y1d2.com)

Pleasure and suffering are not conceptual opposites

MichaelStJules · Aug 11, 2024, 6:32 PM
7 points
0 comments · LW link

Computational irreducibility challenges the simulation hypothesis

Clément L · Aug 11, 2024, 4:14 PM
4 points
17 comments · 7 min read · LW link

[LDSL#4] Root cause analysis versus effect size estimation

tailcalled · Aug 11, 2024, 4:12 PM
29 points
0 comments · 2 min read · LW link

Closed to Interpretation

Yeshua God · Aug 11, 2024, 3:51 PM
−18 points
0 comments · 2 min read · LW link

Theories of Knowledge

Zero Contradictions · Aug 11, 2024, 8:55 AM
−1 points
5 comments · 1 min read · LW link
(thewaywardaxolotl.blogspot.com)

Unnatural abstractions

Aprillion · Aug 10, 2024, 10:31 PM
3 points
3 comments · 4 min read · LW link
(peter.hozak.info)

[LDSL#3] Information-orientation is in tension with magnitude-orientation

tailcalled · Aug 10, 2024, 9:58 PM
33 points
2 comments · 3 min read · LW link

The AI regulator’s toolbox: A list of concrete AI governance practices

Adam Jones · Aug 10, 2024, 9:15 PM
9 points
1 comment · 34 min read · LW link
(adamjones.me)

Diffusion Guided NLP: better steering, mostly a good thing

Nathan Helm-Burger · Aug 10, 2024, 7:49 PM
13 points
0 comments · 1 min read · LW link
(arxiv.org)

Tall tales and long odds

Solenoid_Entity · Aug 10, 2024, 3:22 PM
11 points
0 comments · 5 min read · LW link

The Great Organism Theory of Evolution

rogersbacon · Aug 10, 2024, 12:26 PM
20 points
0 comments · 6 min read · LW link
(www.secretorum.life)

Emergence, The Blind Spot of GenAI Interpretability?

Quentin FEUILLADE--MONTIXI · Aug 10, 2024, 10:07 AM
16 points
8 comments · 3 min read · LW link

Rowing vs steering

Saul Munn · Aug 10, 2024, 7:00 AM
43 points
2 comments · 6 min read · LW link
(www.brasstacks.blog)

Overpopulation FAQs

Zero Contradictions · Aug 10, 2024, 4:21 AM
−12 points
7 comments · 1 min read · LW link
(zerocontradictions.net)

Fermi Estimating How Long an Algorithm Takes

SatvikBeri · Aug 10, 2024, 1:34 AM
1 point
0 comments · 2 min read · LW link

What’s so special about likelihoods?

mfatt · Aug 10, 2024, 1:07 AM
6 points
1 comment · 1 min read · LW link

Provably Safe AI: Worldview and Projects

Aug 9, 2024, 11:21 PM
54 points
44 comments · 7 min read · LW link

All The Latest Human tFUS Studies

sarahconstantin · Aug 9, 2024, 10:20 PM
46 points
2 comments · 8 min read · LW link
(sarahconstantin.substack.com)

But Where do the Variables of my Causal Model come from?

Dalcy · Aug 9, 2024, 10:07 PM
38 points
1 comment · 8 min read · LW link

[LDSL#2] Latent variable models, network models, and linear diffusion of sparse lognormals

tailcalled · Aug 9, 2024, 7:57 PM
26 points
2 comments · 3 min read · LW link