Piling bounded arguments

momom2 · Sep 19, 2024, 10:27 PM
7 points
0 comments · 4 min read · LW link

We Don’t Know Our Own Values, but Reward Bridges The Is-Ought Gap

Sep 19, 2024, 10:22 PM
49 points
48 comments · 5 min read · LW link

Interested in Cognitive Bootcamp?

Raemon · Sep 19, 2024, 10:12 PM
48 points
0 comments · 2 min read · LW link

Just How Good Are Modern Chess Computers?

nem · Sep 19, 2024, 6:57 PM
10 points
1 comment · 6 min read · LW link

RLHF is the worst possible thing done when facing the alignment problem

tailcalled · Sep 19, 2024, 6:56 PM
32 points
10 comments · 6 min read · LW link

AISafety.info: What are Inductive Biases?

Algon · Sep 19, 2024, 5:26 PM
11 points
4 comments · 2 min read · LW link
(aisafety.info)

Physics of Language models (part 2.1)

Nathan Helm-Burger · Sep 19, 2024, 4:48 PM
9 points
2 comments · 1 min read · LW link
(youtu.be)

Why good things often don’t lead to better outcomes

DMMF · Sep 19, 2024, 4:37 PM
16 points
1 comment · 4 min read · LW link
(danfrank.ca)

To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning

Bogdan Ionut Cirstea · Sep 19, 2024, 4:13 PM
21 points
1 comment · 1 min read · LW link
(arxiv.org)

Laziness death spirals

PatrickDFarley · Sep 19, 2024, 3:58 PM
276 points
40 comments · 8 min read · LW link

[Intuitive self-models] 1. Preliminaries

Steven Byrnes · Sep 19, 2024, 1:45 PM
91 points
23 comments · 15 min read · LW link

AI #82: The Governor Ponders

Zvi · Sep 19, 2024, 1:30 PM
50 points
8 comments · 27 min read · LW link
(thezvi.wordpress.com)

Slave Morality: A place for every man and every man in his place

Martin Sustrik · Sep 19, 2024, 4:20 AM
16 points
7 comments · 2 min read · LW link
(250bpm.substack.com)

Which LessWrong/Alignment topics would you like to be tutored in? [Poll]

Ruby · Sep 19, 2024, 1:35 AM
43 points
12 comments · 1 min read · LW link

The Obliqueness Thesis

jessicata · Sep 19, 2024, 12:26 AM
95 points
19 comments · 17 min read · LW link

How to choose what to work on

jasoncrawford · Sep 18, 2024, 8:39 PM
22 points
6 comments · 4 min read · LW link
(blog.rootsofprogress.org)

Intention-to-Treat (Re: How harmful is music, really?)

kqr · Sep 18, 2024, 6:44 PM
11 points
0 comments · 5 min read · LW link
(entropicthoughts.com)

The case for a negative alignment tax

Sep 18, 2024, 6:33 PM
75 points
20 comments · 7 min read · LW link

Endogenous Growth and Human Intelligence

Nicholas D. · Sep 18, 2024, 2:05 PM
3 points
0 comments · 2 min read · LW link

Inquisitive vs. adversarial rationality

gb · Sep 18, 2024, 1:50 PM
6 points
9 comments · 2 min read · LW link

Pronouns are Annoying

ymeskhout · Sep 18, 2024, 1:30 PM
15 points
23 comments · 4 min read · LW link
(www.ymeskhout.com)

Is “superhuman” AI forecasting BS? Some experiments on the “539” bot from the Centre for AI Safety

titotal · Sep 18, 2024, 1:07 PM
79 points
3 comments · LW link
(open.substack.com)

Knowledge’s practicability

Ted Nguyễn · Sep 18, 2024, 2:31 AM
−5 points
0 comments · 7 min read · LW link
(tednguyen.substack.com)

Skills from a year of Purposeful Rationality Practice

Raemon · Sep 18, 2024, 2:05 AM
190 points
18 comments · 7 min read · LW link

[Question] Where to find reliable reviews of AI products?

Elizabeth · Sep 17, 2024, 11:48 PM
29 points
6 comments · 1 min read · LW link

Superposition through Active Learning Lens

akankshanc · Sep 17, 2024, 5:32 PM
1 point
0 comments · 10 min read · LW link

Survey—Psychological Impact of Long-Term AI Engagement

Manuela García · Sep 17, 2024, 5:31 PM
2 points
0 comments · 1 min read · LW link

[Question] What does it mean for an event or observation to have probability 0 or 1 in Bayesian terms?

Noosphere89 · Sep 17, 2024, 5:28 PM
1 point
22 comments · 1 min read · LW link

How harmful is music, really?

dkl9 · Sep 17, 2024, 2:53 PM
10 points
6 comments · 3 min read · LW link
(dkl9.net)

Monthly Roundup #22: September 2024

Zvi · Sep 17, 2024, 12:20 PM
35 points
10 comments · 45 min read · LW link
(thezvi.wordpress.com)

I finally got ChatGPT to sound like me

lsusr · Sep 17, 2024, 9:39 AM
47 points
18 comments · 6 min read · LW link

Food, Prison & Exotic Animals: Sparse Autoencoders Detect 6.5x Performing Youtube Thumbnails

Louka Ewington-Pitsos · Sep 17, 2024, 3:52 AM
6 points
2 comments · 7 min read · LW link

Head in the Cloud: Why an Upload of Your Mind is Not You

xhq · Sep 17, 2024, 12:25 AM
−11 points
3 comments · 14 min read · LW link

[Question] How does someone prove that their general intelligence is above average?

M. Y. Zuo · Sep 16, 2024, 9:01 PM
−3 points
12 comments · 1 min read · LW link

[Question] Does life actually locally *increase* entropy?

tailcalled · Sep 16, 2024, 8:30 PM
10 points
27 comments · 1 min read · LW link

Book review: Xenosystems

jessicata · Sep 16, 2024, 8:17 PM
50 points
18 comments · 37 min read · LW link
(unstableontology.com)

MIRI’s September 2024 newsletter

Harlan · Sep 16, 2024, 6:15 PM
46 points
0 comments · 1 min read · LW link
(intelligence.org)

Generative ML in chemistry is bottlenecked by synthesis

Abhishaike Mahajan · Sep 16, 2024, 4:31 PM
38 points
2 comments · 14 min read · LW link
(www.owlposting.com)

Secret Collusion: Will We Know When to Unplug AI?

Sep 16, 2024, 4:07 PM
61 points
8 comments · 31 min read · LW link

GPT-o1

Zvi · Sep 16, 2024, 1:40 PM
86 points
34 comments · 46 min read · LW link
(thezvi.wordpress.com)

[Question] Can subjunctive dependence emerge from a simplicity prior?

Daniel C · Sep 16, 2024, 12:39 PM
11 points
0 comments · 1 min read · LW link

Longevity and the Mind

George3d6 · Sep 16, 2024, 9:43 AM
5 points
2 comments · 10 min read · LW link

[Question] What’s the Deal with Logical Uncertainty?

Ape in the coat · Sep 16, 2024, 8:11 AM
32 points
29 comments · 2 min read · LW link

Reinforcement Learning from Information Bazaar Feedback, and other uses of information markets

Abhimanyu Pallavi Sudhir · Sep 16, 2024, 1:04 AM
5 points
1 comment · 5 min read · LW link

Hyperpolation

Gunnar_Zarncke · Sep 15, 2024, 9:37 PM
22 points
6 comments · 1 min read · LW link
(arxiv.org)

[Question] If I wanted to spend WAY more on AI, what would I spend it on?

Logan Zoellner · Sep 15, 2024, 9:24 PM
53 points
16 comments · 1 min read · LW link

Superintelligence Can’t Solve the Problem of Deciding What You’ll Do

Vladimir_Nesov · Sep 15, 2024, 9:03 PM
27 points
11 comments · 1 min read · LW link

For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation

Lorec · Sep 15, 2024, 8:49 PM
3 points
9 comments · 3 min read · LW link

Why I funded PIBBSS

Ryan Kidd · Sep 15, 2024, 7:56 PM
115 points
21 comments · 3 min read · LW link