
Some Experiments I’d Like Someone To Try With An Amnesic

johnswentworth · 4 May 2024 22:04 UTC
27 points
11 comments · 3 min read · LW link

LessWrong’s (first) album: I Have Been A Good Bing

1 Apr 2024 7:33 UTC
529 points
158 comments · 11 min read · LW link

What I mean by “alignment is in large part about making cognition aimable at all”

So8res · 30 Jan 2023 15:22 UTC
167 points
25 comments · 2 min read · LW link

Ironing Out the Squiggles

Zack_M_Davis · 29 Apr 2024 16:13 UTC
140 points
34 comments · 11 min read · LW link

[Question] Which skincare products are evidence-based?

Vanessa Kosoy · 2 May 2024 15:22 UTC
97 points
32 comments · 1 min read · LW link

Refusal in LLMs is mediated by a single direction

27 Apr 2024 11:13 UTC
179 points
67 comments · 10 min read · LW link

My hour of memoryless lucidity

Eric Neyman · 4 May 2024 1:40 UTC
195 points
10 comments · 5 min read · LW link
(ericneyman.wordpress.com)

Now THIS is forecasting: understanding Epoch’s Direct Approach

4 May 2024 12:06 UTC
38 points
3 comments · 19 min read · LW link

Don’t Dismiss Simple Alignment Approaches

Chris_Leong · 7 Oct 2023 0:35 UTC
128 points
9 comments · 4 min read · LW link

Introducing AI-Powered Audiobooks of Rational Fiction Classics

Askwho · 4 May 2024 17:32 UTC
54 points
10 comments · 1 min read · LW link

Thoughts on seed oil

dynomight · 20 Apr 2024 12:29 UTC
290 points
107 comments · 17 min read · LW link
(dynomight.net)

Open Thread Spring 2024

habryka · 11 Mar 2024 19:17 UTC
22 points
93 comments · 1 min read · LW link

Killing Socrates

[DEACTIVATED] Duncan Sabien · 11 Apr 2023 10:28 UTC
173 points
144 comments · 8 min read · LW link

[Question] Shane Legg’s necessary properties for every AGI Safety plan

jacquesthibs · 1 May 2024 17:15 UTC
55 points
12 comments · 1 min read · LW link

Q&A on Proposed SB 1047

Zvi · 2 May 2024 15:10 UTC
63 points
2 comments · 44 min read · LW link
(thezvi.wordpress.com)

Counting arguments provide no evidence for AI doom

27 Feb 2024 23:03 UTC
99 points
177 comments · 14 min read · LW link

Key takeaways from our EA and alignment research surveys

3 May 2024 18:10 UTC
78 points
5 comments · 21 min read · LW link

S-Risks: Fates Worse Than Extinction

4 May 2024 15:30 UTC
33 points
2 comments · 6 min read · LW link
(youtu.be)

Please stop publishing ideas/insights/research about AI

Tamsin Leake · 2 May 2024 14:54 UTC
15 points
51 comments · 4 min read · LW link

Why I’m doing PauseAI

Joseph Miller · 30 Apr 2024 16:21 UTC
99 points
14 comments · 4 min read · LW link