[Question] How can one literally buy time (from x-risk) with money?

Alex_Altair · 13 Dec 2022 19:24 UTC
24 points
3 comments · 1 min read · LW link

[Question] Best introductory overviews of AGI safety?

JakubK · 13 Dec 2022 19:01 UTC
21 points
9 comments · 2 min read · LW link
(forum.effectivealtruism.org)

Applications open for AGI Safety Fundamentals: Alignment Course

13 Dec 2022 18:31 UTC
48 points
0 comments · 2 min read · LW link

What Does It Mean to Align AI With Human Values?

Algon · 13 Dec 2022 16:56 UTC
8 points
3 comments · 1 min read · LW link
(www.quantamagazine.org)

It Takes Two Paracetamol?

Eli_ · 13 Dec 2022 16:29 UTC
33 points
10 comments · 2 min read · LW link

[Interim research report] Taking features out of superposition with sparse autoencoders

13 Dec 2022 15:41 UTC
137 points
22 comments · 22 min read · LW link · 2 reviews

[Question] Is the ChatGPT-simulated Linux virtual machine real?

Kenoubi · 13 Dec 2022 15:41 UTC
18 points
7 comments · 1 min read · LW link

Existential AI Safety is NOT separate from near-term applications

scasper · 13 Dec 2022 14:47 UTC
37 points
17 comments · 3 min read · LW link

What is the correlation between upvoting and benefit to readers of LW?

banev · 13 Dec 2022 14:26 UTC
8 points
15 comments · 1 min read · LW link

Limits of Superintelligence

Aleksei Petrenko · 13 Dec 2022 12:19 UTC
1 point
5 comments · 1 min read · LW link

Bay 2022 Solstice

Raemon · 13 Dec 2022 8:58 UTC
17 points
0 comments · 1 min read · LW link

Last day to nominate things for the Review. Also, 2019 books still exist.

Raemon · 13 Dec 2022 8:53 UTC
15 points
0 comments · 1 min read · LW link

AI alignment is distinct from its near-term applications

paulfchristiano · 13 Dec 2022 7:10 UTC
254 points
21 comments · 2 min read · LW link
(ai-alignment.com)

Take 10: Fine-tuning with RLHF is aesthetically unsatisfying.

Charlie Steiner · 13 Dec 2022 7:04 UTC
37 points
3 comments · 2 min read · LW link

[Question] Are lawsuits against AGI companies extending AGI timelines?

SlowingAGI · 13 Dec 2022 6:00 UTC
1 point
1 comment · 1 min read · LW link

EA & LW Forums Weekly Summary (5th Dec – 11th Dec 22')

Zoe Williams · 13 Dec 2022 2:53 UTC
7 points
0 comments · 1 min read · LW link

Alignment with argument-networks and assessment-predictions

Tor Økland Barstad · 13 Dec 2022 2:17 UTC
10 points
5 comments · 45 min read · LW link

Revisiting algorithmic progress

13 Dec 2022 1:39 UTC
94 points
15 comments · 2 min read · LW link · 1 review
(arxiv.org)

An exploration of GPT-2’s embedding weights

Adam Scherlis · 13 Dec 2022 0:46 UTC
42 points
4 comments · 10 min read · LW link

12 career-related questions that may (or may not) be helpful for people interested in alignment research

Akash · 12 Dec 2022 22:36 UTC
20 points
0 comments · 2 min read · LW link

Concept extrapolation for hypothesis generation

12 Dec 2022 22:09 UTC
20 points
2 comments · 3 min read · LW link

Let’s go meta: Grammatical knowledge and self-referential sentences [ChatGPT]

Bill Benzon · 12 Dec 2022 21:50 UTC
5 points
0 comments · 9 min read · LW link

D&D.Sci December 2022 Evaluation and Ruleset

abstractapplic · 12 Dec 2022 21:21 UTC
14 points
7 comments · 2 min read · LW link

Log-odds are better than Probabilities

Robert_AIZI · 12 Dec 2022 20:10 UTC
22 points
4 comments · 4 min read · LW link
(aizi.substack.com)

Bengaluru LW/ACX Social Meetup – December 2022

faiz · 12 Dec 2022 19:30 UTC
4 points
0 comments · 1 min read · LW link

Psychological Disorders and Problems

12 Dec 2022 18:15 UTC
39 points
6 comments · 1 min read · LW link

Confusing the goal and the path

adamShimi · 12 Dec 2022 16:42 UTC
44 points
7 comments · 1 min read · LW link
(epistemologicalvigilance.substack.com)

Meaningful things are those the universe possesses a semantics for

Abhimanyu Pallavi Sudhir · 12 Dec 2022 16:03 UTC
16 points
14 comments · 14 min read · LW link

Tradeoffs in complexity, abstraction, and generality

12 Dec 2022 15:55 UTC
32 points
0 comments · 2 min read · LW link

Green Line Extension Opening Dates

jefftk · 12 Dec 2022 14:40 UTC
12 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Join the AI Testing Hackathon this Friday

Esben Kran · 12 Dec 2022 14:24 UTC
10 points
0 comments · 1 min read · LW link

Side-channels: input versus output

davidad · 12 Dec 2022 12:32 UTC
44 points
16 comments · 2 min read · LW link

Take 9: No, RLHF/IDA/debate doesn’t solve outer alignment.

Charlie Steiner · 12 Dec 2022 11:51 UTC
33 points
14 comments · 2 min read · LW link

Creating a database for base rates

nikos · 12 Dec 2022 10:09 UTC
2 points
1 comment · 3 min read · LW link
(forum.effectivealtruism.org)

Trivial GPT-3.5 limitation workaround

Dave Lindbergh · 12 Dec 2022 8:42 UTC
5 points
4 comments · 1 min read · LW link

Ponzi schemes can be highly profitable if your timing is good

GeneSmith · 12 Dec 2022 6:42 UTC
10 points
18 comments · 5 min read · LW link

Prodding ChatGPT to solve a basic algebra problem

shminux · 12 Dec 2022 4:09 UTC
14 points
6 comments · 1 min read · LW link
(twitter.com)

Wider Default Audio Player in Chrome?

jefftk · 12 Dec 2022 3:30 UTC
11 points
2 comments · 1 min read · LW link
(www.jefftk.com)

A brainteaser for language models

Adam Scherlis · 12 Dec 2022 2:43 UTC
47 points
3 comments · 2 min read · LW link

a rough sketch of formal aligned AI using QACI

Tamsin Leake · 11 Dec 2022 23:40 UTC
14 points
0 comments · 4 min read · LW link
(carado.moe)

Benchmarks for Comparing Human and AI Intelligence

MrThink · 11 Dec 2022 22:06 UTC
8 points
4 comments · 2 min read · LW link

Reflections on the PIBBSS Fellowship 2022

11 Dec 2022 21:53 UTC
32 points
0 comments · 18 min read · LW link

A crisis for online communication: bots and bot users will overrun the Internet?

Mitchell_Porter · 11 Dec 2022 21:11 UTC
15 points
11 comments · 1 min read · LW link

Finite Factored Sets in Pictures

Magdalena Wache · 11 Dec 2022 18:49 UTC
174 points
35 comments · 12 min read · LW link

Formalization as suspension of intuition

adamShimi · 11 Dec 2022 15:16 UTC
54 points
18 comments · 1 min read · LW link
(epistemologicalvigilance.substack.com)

An argument on animal consciousness (soliciting criticism)

SciHamster · 11 Dec 2022 15:12 UTC
−3 points
2 comments · 1 min read · LW link

ChatGPT’s new novel rationality technique of fact checking

ChristianKl · 11 Dec 2022 13:54 UTC
−14 points
7 comments · 1 min read · LW link

Reframing inner alignment

davidad · 11 Dec 2022 13:53 UTC
53 points
13 comments · 4 min read · LW link

A poem about applied rationality by ChatGPT

ChristianKl · 11 Dec 2022 13:43 UTC
4 points
0 comments · 1 min read · LW link

ChatGPT goes through a wormhole hole in our Shandyesque universe [virtual wacky weed]

Bill Benzon · 11 Dec 2022 11:59 UTC
−1 points
2 comments · 3 min read · LW link