Cryonics companies should let people make conditions for reawakening

Andrew Vlahos · 18 Mar 2023 21:03 UTC
10 points
11 comments · 4 min read · LW link

“Publish or Perish” (a quick note on why you should try to make your work legible to existing academic communities)

David Scott Krueger (formerly: capybaralet) · 18 Mar 2023 19:01 UTC
98 points
48 comments · 1 min read · LW link

Dan Luu on “You can only communicate one top priority”

Raemon · 18 Mar 2023 18:55 UTC
147 points
18 comments · 3 min read · LW link
(twitter.com)

An Appeal to AI Superintelligence: Reasons to Preserve Humanity

James_Miller · 18 Mar 2023 16:22 UTC
30 points
72 comments · 12 min read · LW link

[Question] What did you do with GPT4?

ChristianKl · 18 Mar 2023 15:21 UTC
27 points
17 comments · 1 min read · LW link

Try to solve the hard parts of the alignment problem

Mikhail Samin · 18 Mar 2023 14:55 UTC
45 points
7 comments · 5 min read · LW link

Testing ChatGPT 3.5 for political biases using roleplaying prompts

twkaiser · 18 Mar 2023 11:42 UTC
−2 points
2 comments · 19 min read · LW link
(hackernoon.com)

What I did to reduce the risk of Long COVID (and manage symptoms) after getting COVID

Sameerishere · 18 Mar 2023 5:32 UTC
10 points
3 comments · 10 min read · LW link

(retired article) AGI With Internet Access: Why we won’t stuff the genie back in its bottle.

Max TK · 18 Mar 2023 3:43 UTC
5 points
10 comments · 4 min read · LW link

St. Patty’s Day LA meetup

lc · 18 Mar 2023 0:00 UTC
8 points
0 comments · 1 min read · LW link

[Question] Why Carl Jung is not popular in AI Alignment Research?

MiguelDev · 17 Mar 2023 23:56 UTC
−3 points
13 comments · 1 min read · LW link

[Event] Join Metaculus for Forecast Friday on March 24th!

ChristianWilliams · 17 Mar 2023 22:47 UTC
3 points
0 comments · 1 min read · LW link

Meetup Tip: The Next Meetup Will Be. . .

Screwtape · 17 Mar 2023 22:04 UTC
41 points
0 comments · 3 min read · LW link

The Power of High Speed Stupidity

robotelvis · 17 Mar 2023 21:41 UTC
32 points
5 comments · 9 min read · LW link
(messyprogress.substack.com)

Retrospective on ‘GPT-4 Predictions’ After the Release of GPT-4

Stephen McAleese · 17 Mar 2023 18:34 UTC
22 points
6 comments · 6 min read · LW link

“Carefully Bootstrapped Alignment” is organizationally hard

Raemon · 17 Mar 2023 18:00 UTC
258 points
22 comments · 11 min read · LW link

[Question] Are nested jailbreaks inevitable?

judson · 17 Mar 2023 17:43 UTC
1 point
0 comments · 1 min read · LW link

Ethical AI investments?

Justin wilson · 17 Mar 2023 17:43 UTC
24 points
15 comments · 1 min read · LW link

New economic system for AI era

ksme sho · 17 Mar 2023 17:42 UTC
−1 points
1 comment · 5 min read · LW link

On some first principles of intelligence

Macheng_Shen · 17 Mar 2023 17:42 UTC
−14 points
0 comments · 4 min read · LW link

Essential Behaviorism Terms

Rivka · 17 Mar 2023 17:41 UTC
13 points
1 comment · 10 min read · LW link

Vector semantics and “Kubla Khan,” Part 2

Bill Benzon · 17 Mar 2023 16:32 UTC
2 points
0 comments · 3 min read · LW link

Super-Luigi = Luigi + (Luigi − Waluigi)

Alexei · 17 Mar 2023 15:27 UTC
16 points
9 comments · 1 min read · LW link

Survey on intermediate goals in AI governance

17 Mar 2023 13:12 UTC
25 points
3 comments · 1 min read · LW link

Morality Doesn’t Determine Reality

Alex Beyman · 17 Mar 2023 7:11 UTC
−15 points
8 comments · 11 min read · LW link

GPT-4 solves Gary Marcus-induced flubs

JakubK · 17 Mar 2023 6:40 UTC
56 points
29 comments · 2 min read · LW link
(docs.google.com)

[Question] Are the LLM “intelligence” tests publicly available for humans to take?

nim · 17 Mar 2023 0:09 UTC
7 points
12 comments · 1 min read · LW link

Donation offsets for ChatGPT Plus subscriptions

Jeffrey Ladish · 16 Mar 2023 23:29 UTC
53 points
3 comments · 3 min read · LW link

The algorithm isn’t doing X, it’s just doing Y.

Cleo Nardo · 16 Mar 2023 23:28 UTC
53 points
43 comments · 5 min read · LW link

Announcing the ERA Cambridge Summer Research Fellowship

Nandini Shiralkar · 16 Mar 2023 22:57 UTC
11 points
0 comments · 3 min read · LW link

Gradual takeoff, fast failure

Max H · 16 Mar 2023 22:02 UTC
15 points
4 comments · 5 min read · LW link

Conceding a short timelines bet early

Matthew Barnett · 16 Mar 2023 21:49 UTC
132 points
16 comments · 1 min read · LW link

Attribution Patching: Activation Patching At Industrial Scale

Neel Nanda · 16 Mar 2023 21:44 UTC
45 points
10 comments · 58 min read · LW link
(www.neelnanda.io)

[Question] Will 2023 be the last year you can write short stories and receive most of the intellectual credit for writing them?

lc · 16 Mar 2023 21:36 UTC
20 points
11 comments · 1 min read · LW link

Is it a bad idea to pay for GPT-4?

nem · 16 Mar 2023 20:49 UTC
24 points
8 comments · 1 min read · LW link

Are AI developers playing with fire?

marcusarvan · 16 Mar 2023 19:12 UTC
6 points
0 comments · 10 min read · LW link

[Question] When will computer programming become an unskilled job (if ever)?

lc · 16 Mar 2023 17:46 UTC
33 points
51 comments · 1 min read · LW link

[Appendix] Natural Abstractions: Key Claims, Theorems, and Critiques

16 Mar 2023 16:38 UTC
46 points
0 comments · 13 min read · LW link

Natural Abstractions: Key claims, Theorems, and Critiques

16 Mar 2023 16:37 UTC
206 points
20 comments · 45 min read · LW link

On the Crisis at Silicon Valley Bank

Zvi · 16 Mar 2023 15:50 UTC
59 points
9 comments · 41 min read · LW link
(thezvi.wordpress.com)

[Question] What literature on the neuroscience of decision making can you recommend?

quetzal_rainbow · 16 Mar 2023 15:32 UTC
3 points
0 comments · 1 min read · LW link

[Question] What organizations other than Conjecture have (esp. public) info-hazard policies?

David Scott Krueger (formerly: capybaralet) · 16 Mar 2023 14:49 UTC
20 points
1 comment · 1 min read · LW link

[Question] Is there an analysis of the common consideration that splitting an AI lab into two (e.g. the founding of Anthropic) speeds up the development of TAI and therefore increases AI x-risk?

tchauvin · 16 Mar 2023 14:16 UTC
4 points
0 comments · 1 min read · LW link

A chess game against GPT-4

Rafael Harth · 16 Mar 2023 14:05 UTC
24 points
23 comments · 1 min read · LW link

ChatGPT getting out of the box

qbolec · 16 Mar 2023 13:47 UTC
6 points
3 comments · 1 min read · LW link

[Question] Are funds (such as the Long-Term Future Fund) willing to give extra money to AI safety researchers to balance for the opportunity cost of taking an “industry” job?

Malleable_shape · 16 Mar 2023 11:54 UTC
5 points
1 comment · 1 min read · LW link

Three levels of exploration and intelligence

Q Home · 16 Mar 2023 10:55 UTC
9 points
3 comments · 21 min read · LW link

Here, have a calmness video

Kaj_Sotala · 16 Mar 2023 10:00 UTC
111 points
15 comments · 2 min read · LW link
(www.youtube.com)

Wittgenstein’s Language Games and the Critique of the Natural Abstraction Hypothesis

Chris_Leong · 16 Mar 2023 7:56 UTC
15 points
19 comments · 2 min read · LW link

Red-teaming AI-safety concepts that rely on science metaphors

catubc · 16 Mar 2023 6:52 UTC
5 points
4 comments · 5 min read · LW link