[Question] Looking for ideas of public assets (stocks, funds, ETFs) that I can invest in to have a chance at profiting from the mass adoption and commercialization of AI technology

Annapurna · 7 Dec 2022 22:35 UTC
15 points
9 comments · 1 min read · LW link

A Fallibilist Wordview

Toni MUENDEL · 7 Dec 2022 20:59 UTC
−13 points
2 comments · 13 min read · LW link

Thoughts on AGI organizations and capabilities work

7 Dec 2022 19:46 UTC
102 points
17 comments · 5 min read · LW link

How to Think About Climate Models and How to Improve Them

clans · 7 Dec 2022 19:37 UTC
7 points
0 comments · 2 min read · LW link
(locationtbd.home.blog)

The novelty quotient

River Lewis · 7 Dec 2022 17:16 UTC
4 points
7 comments · 2 min read · LW link
(heytraveler.substack.com)

ChatGPT: “An error occurred. If this issue persists...”

Bill Benzon · 7 Dec 2022 15:41 UTC
5 points
11 comments · 3 min read · LW link

Take 6: CAIS is actually Orwellian.

Charlie Steiner · 7 Dec 2022 13:50 UTC
14 points
8 comments · 2 min read · LW link

Peter Thiel on Technological Stagnation and Out of Touch Rationalists

Matt Goldenberg · 7 Dec 2022 13:15 UTC
9 points
26 comments · 1 min read · LW link
(youtu.be)

[Link] Wavefunctions: from Linear Algebra to Spinors

sen · 7 Dec 2022 12:44 UTC
11 points
12 comments · 1 min read · LW link
(paperclip.substack.com)

Why I like Zulip instead of Slack or Discord

Alok Singh · 7 Dec 2022 9:28 UTC
31 points
10 comments · 1 min read · LW link

Bioweapons, and ChatGPT (another vulnerability story)

joshuatanderson · 7 Dec 2022 7:27 UTC
−5 points
0 comments · 2 min read · LW link

Where to be an AI Safety Professor

scasper · 7 Dec 2022 7:09 UTC
30 points
12 comments · 2 min read · LW link

[Question] Are there any tools to convert LW sequences to PDF or any other file format?

quetzal_rainbow · 7 Dec 2022 5:28 UTC
2 points
2 comments · 1 min read · LW link

Manifold Markets community meetup

Sinclair Chen · 7 Dec 2022 3:25 UTC
4 points
0 comments · 1 min read · LW link

“Attention Passengers”: not for Signs

jefftk · 7 Dec 2022 2:00 UTC
27 points
10 comments · 1 min read · LW link
(www.jefftk.com)

[ASoT] Probability Infects Concepts it Touches

Ulisse Mini · 7 Dec 2022 1:48 UTC
10 points
4 comments · 1 min read · LW link

Simple Way to Prevent Power-Seeking AI

research_prime_space · 7 Dec 2022 0:26 UTC
12 points
1 comment · 1 min read · LW link

In defense of probably wrong mechanistic models

evhub · 6 Dec 2022 23:24 UTC
53 points
10 comments · 2 min read · LW link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · 6 Dec 2022 22:35 UTC
4 points
2 comments · 3 min read · LW link

ChatGPT and the Human Race

Ben Reilly · 6 Dec 2022 21:38 UTC
6 points
1 comment · 3 min read · LW link

[Question] How do finite factored sets compare with phase space?

Alex_Altair · 6 Dec 2022 20:05 UTC
15 points
1 comment · 1 min read · LW link

Mesa-Optimizers via Grokking

orthonormal · 6 Dec 2022 20:05 UTC
36 points
4 comments · 6 min read · LW link

Using GPT-Eliezer against ChatGPT Jailbreaking

6 Dec 2022 19:54 UTC
170 points
85 comments · 9 min read · LW link

The Parable of the Crimp

Phosphorous · 6 Dec 2022 18:41 UTC
11 points
3 comments · 3 min read · LW link

The Categorical Imperative Obscures

Gordon Seidoh Worley · 6 Dec 2022 17:48 UTC
17 points
17 comments · 2 min read · LW link

MIRI’s “Death with Dignity” in 60 seconds.

Cleo Nardo · 6 Dec 2022 17:18 UTC
55 points
4 comments · 1 min read · LW link

Things roll downhill

awenonian · 6 Dec 2022 15:27 UTC
19 points
0 comments · 1 min read · LW link

EA & LW Forums Weekly Summary (28th Nov − 4th Dec 22′)

Zoe Williams · 6 Dec 2022 9:38 UTC
10 points
1 comment · 1 min read · LW link

Free Will is [REDACTED]

lsusr · 6 Dec 2022 8:14 UTC
−5 points
21 comments · 1 min read · LW link

Take 5: Another problem for natural abstractions is laziness.

Charlie Steiner · 6 Dec 2022 7:00 UTC
30 points
4 comments · 3 min read · LW link

Verification Is Not Easier Than Generation In General

johnswentworth · 6 Dec 2022 5:20 UTC
60 points
27 comments · 1 min read · LW link

Shh, don’t tell the AI it’s likely to be evil

naterush · 6 Dec 2022 3:35 UTC
19 points
9 comments · 1 min read · LW link

[Question] What are the major underlying divisions in AI safety?

Chris_Leong · 6 Dec 2022 3:28 UTC
5 points
2 comments · 1 min read · LW link

[Link] Why I’m optimistic about OpenAI’s alignment approach

janleike · 5 Dec 2022 22:51 UTC
98 points
15 comments · 1 min read · LW link
(aligned.substack.com)

The No Free Lunch theorem for dummies

Steven Byrnes · 5 Dec 2022 21:46 UTC
37 points
16 comments · 3 min read · LW link

ChatGPT and Ideological Turing Test

Viliam · 5 Dec 2022 21:45 UTC
42 points
1 comment · 1 min read · LW link

ChatGPT on Spielberg’s A.I. and AI Alignment

Bill Benzon · 5 Dec 2022 21:10 UTC
5 points
0 comments · 4 min read · LW link

Updating my AI timelines

Matthew Barnett · 5 Dec 2022 20:46 UTC
143 points
50 comments · 2 min read · LW link

Steering Behaviour: Testing for (Non-)Myopia in Language Models

5 Dec 2022 20:28 UTC
40 points
19 comments · 10 min read · LW link

College Admissions as a Brutal One-Shot Game

devansh · 5 Dec 2022 20:05 UTC
8 points
26 comments · 2 min read · LW link

Analysis of AI Safety surveys for field-building insights

Ash Jafari · 5 Dec 2022 19:21 UTC
11 points
2 comments · 5 min read · LW link

Testing Ways to Bypass ChatGPT’s Safety Features

Robert_AIZI · 5 Dec 2022 18:50 UTC
7 points
4 comments · 5 min read · LW link
(aizi.substack.com)

Foresight for AGI Safety Strategy: Mitigating Risks and Identifying Golden Opportunities

jacquesthibs · 5 Dec 2022 16:09 UTC
28 points
6 comments · 8 min read · LW link

Aligned Behavior is not Evidence of Alignment Past a Certain Level of Intelligence

Ronny Fernandez · 5 Dec 2022 15:19 UTC
19 points
5 comments · 7 min read · LW link

[Question] How should I judge the impact of giving $5k to a family of three kids and two mentally ill parents?

Blake · 5 Dec 2022 13:42 UTC
10 points
10 comments · 1 min read · LW link

Is the “Valley of Confused Abstractions” real?

jacquesthibs · 5 Dec 2022 13:36 UTC
19 points
11 comments · 2 min read · LW link

Take 4: One problem with natural abstractions is there’s too many of them.

Charlie Steiner · 5 Dec 2022 10:39 UTC
36 points
4 comments · 1 min read · LW link

[Question] What are some good Lesswrong-related accounts or hashtags on Mastodon that I should follow?

SpectrumDT · 5 Dec 2022 9:42 UTC
2 points
0 comments · 1 min read · LW link

[Question] Who are some prominent reasonable people who are confident that AI won’t kill everyone?

Optimization Process · 5 Dec 2022 9:12 UTC
72 points
54 comments · 1 min read · LW link

Monthly Shorts 11/22

Celer · 5 Dec 2022 7:30 UTC
8 points
0 comments · 3 min read · LW link
(keller.substack.com)