[Question] How do I get all recent lesswrong posts that doesn’t have AI tag?

Duck Duck · 19 Apr 2023 23:39 UTC
5 points
2 comments · 1 min read · LW link

Stop trying to have “interesting” friends

eq · 19 Apr 2023 23:39 UTC
40 points
15 comments · 6 min read · LW link

[Question] Is there any literature on using socialization for AI alignment?

Nathan1123 · 19 Apr 2023 22:16 UTC
10 points
9 comments · 2 min read · LW link

I Believe I Know Why AI Models Hallucinate

Richard Aragon · 19 Apr 2023 21:07 UTC
−10 points
6 comments · 7 min read · LW link
(turingssolutions.com)

What if we Align the AI and nobody cares?

Logan Zoellner · 19 Apr 2023 20:40 UTC
−5 points
23 comments · 2 min read · LW link

Orthogonal: A new agent foundations alignment organization

Tamsin Leake · 19 Apr 2023 20:17 UTC
207 points
4 comments · 1 min read · LW link
(orxl.org)

How to express this system for ethically aligned AGI as a Mathematical formula?

Oliver Siegel · 19 Apr 2023 20:13 UTC
−1 points
0 comments · 1 min read · LW link

How could you possibly choose what an AI wants?

So8res · 19 Apr 2023 17:08 UTC
105 points
19 comments · 1 min read · LW link

[Question] Does object permanence of simulacrum affect LLMs’ reasoning?

ProgramCrafter · 19 Apr 2023 16:28 UTC
1 point
1 comment · 1 min read · LW link

Davidad’s Bold Plan for Alignment: An In-Depth Explanation

19 Apr 2023 16:09 UTC
154 points
29 comments · 21 min read · LW link

GWWC Reporting Attrition Visualization

jefftk · 19 Apr 2023 15:40 UTC
16 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Keep humans in the loop

19 Apr 2023 15:34 UTC
22 points
1 comment · 10 min read · LW link

Approximation is expensive, but the lunch is cheap

19 Apr 2023 14:19 UTC
68 points
3 comments · 16 min read · LW link

Legitimising AI Red-Teaming by Public

VojtaKovarik · 19 Apr 2023 14:05 UTC
10 points
7 comments · 3 min read · LW link

More on Twitter and Algorithms

Zvi · 19 Apr 2023 12:40 UTC
37 points
7 comments · 13 min read · LW link
(thezvi.wordpress.com)

[Crosspost] Organizing a debate with experts and MPs to raise AI xrisk awareness: a possible blueprint

otto.barten · 19 Apr 2023 11:45 UTC
8 points
0 comments · 4 min read · LW link
(forum.effectivealtruism.org)

The key to understanding the ultimate nature of reality is: Time. The key to understanding Time is: Evolution.

Dr_What · 19 Apr 2023 10:05 UTC
−10 points
0 comments · 3 min read · LW link

Open Brains

George3d6 · 19 Apr 2023 7:35 UTC
7 points
0 comments · 6 min read · LW link
(cerebralab.com)

The Learning-Theoretic Agenda: Status 2023

Vanessa Kosoy · 19 Apr 2023 5:21 UTC
135 points
13 comments · 55 min read · LW link

Paying the corrigibility tax

Max H · 19 Apr 2023 1:57 UTC
14 points
1 comment · 13 min read · LW link

Notes on Teaching in Prison

jsd · 19 Apr 2023 1:53 UTC
270 points
12 comments · 12 min read · LW link

Consciousness as recurrence, potential for enforcing alignment?

Foyle · 18 Apr 2023 23:05 UTC
−3 points
6 comments · 1 min read · LW link

Encouraging New Users To Bet On Their Beliefs

YafahEdelman · 18 Apr 2023 22:10 UTC
49 points
8 comments · 2 min read · LW link

AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media

18 Apr 2023 18:44 UTC
30 points
0 comments · 4 min read · LW link
(newsletter.safe.ai)

Scientism vs. people

Roman Leventov · 18 Apr 2023 17:28 UTC
4 points
4 comments · 11 min read · LW link

Capabilities and alignment of LLM cognitive architectures

Seth Herd · 18 Apr 2023 16:29 UTC
81 points
18 comments · 20 min read · LW link

World and Mind in Artificial Intelligence: arguments against the AI pause

Arturo Macias · 18 Apr 2023 14:40 UTC
1 point
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Slowing AI: Interventions

Zach Stein-Perlman · 18 Apr 2023 14:30 UTC
19 points
0 comments · 5 min read · LW link

Cryptographic and auxiliary approaches relevant for AI safety

Allison Duettmann · 18 Apr 2023 14:18 UTC
7 points
0 comments · 6 min read · LW link

The Overemployed Via ChatGPT

Zvi · 18 Apr 2023 13:40 UTC
57 points
7 comments · 6 min read · LW link
(thezvi.wordpress.com)

[Linkpost] AI Alignment, Explained in 5 Points (updated)

Daniel_Eth · 18 Apr 2023 8:09 UTC
10 points
0 comments · 1 min read · LW link

Argentines LW/SSC/EA/MIRIx - Call to All

daviddelauba · 18 Apr 2023 6:37 UTC
1 point
0 comments · 1 min read · LW link

No, really, it predicts next tokens.

simon · 18 Apr 2023 3:47 UTC
58 points
37 comments · 3 min read · LW link

The basic reasons I expect AGI ruin

Rob Bensinger · 18 Apr 2023 3:37 UTC
187 points
72 comments · 14 min read · LW link

High schoolers can apply to the Atlas Fellowship: $10k scholarship + 11-day program

18 Apr 2023 2:53 UTC
26 points
0 comments · 3 min read · LW link

Green goo is plausible

anithite · 18 Apr 2023 0:04 UTC
57 points
29 comments · 4 min read · LW link

AI Impacts Quarterly Newsletter, Jan-Mar 2023

Harlan · 17 Apr 2023 22:10 UTC
5 points
0 comments · 3 min read · LW link
(blog.aiimpacts.org)

[Question] How do you align your emotions through updates and existential uncertainty?

VojtaKovarik · 17 Apr 2023 20:46 UTC
4 points
10 comments · 1 min read · LW link

AI Alignment Research Engineer Accelerator (ARENA): call for applicants

CallumMcDougall · 17 Apr 2023 20:30 UTC
100 points
9 comments · 7 min read · LW link

AI policy ideas: Reading list

Zach Stein-Perlman · 17 Apr 2023 19:00 UTC
22 points
7 comments · 4 min read · LW link

NYT: The Surprising Thing A.I. Engineers Will Tell You if You Let Them

Sodium · 17 Apr 2023 18:59 UTC
11 points
2 comments · 1 min read · LW link
(www.nytimes.com)

But why would the AI kill us?

So8res · 17 Apr 2023 18:42 UTC
117 points
86 comments · 2 min read · LW link

Sama Says the Age of Giant AI Models is Already Over

Algon · 17 Apr 2023 18:36 UTC
49 points
12 comments · 1 min read · LW link
(www.wired.com)

Meetup Tip: Conversation Starters

Screwtape · 17 Apr 2023 18:25 UTC
20 points
1 comment · 2 min read · LW link

Critiques of prominent AI safety labs: Redwood Research

Omega. · 17 Apr 2023 18:20 UTC
1 point
0 comments · 22 min read · LW link
(forum.effectivealtruism.org)

How Large Language Models Nuke our Naive Notions of Truth and Reality

Sean Lee · 17 Apr 2023 18:08 UTC
0 points
23 comments · 11 min read · LW link

An alternative of PPO towards alignment

ml hkust · 17 Apr 2023 17:58 UTC
2 points
2 comments · 4 min read · LW link

What I learned at the AI Safety Europe Retreat

skaisg · 17 Apr 2023 17:40 UTC
28 points
0 comments · 10 min read · LW link
(skaisg.eu)

What is your timelines for ADI (artificial disempowering intelligence)?

Christopher King · 17 Apr 2023 17:01 UTC
3 points
3 comments · 2 min read · LW link

[Question] Can we get around Godel’s Incompleteness theorems and Turing undecidable problems via infinite computers?

Noosphere89 · 17 Apr 2023 15:14 UTC
−11 points
12 comments · 1 min read · LW link