human intelligence may be alignment-limited

bhauth · Jun 15, 2023, 10:32 PM
16 points
3 comments · 2 min read · LW link

Developing a technology with safety in mind: Lessons from the Wright Brothers

jasoncrawford · Jun 15, 2023, 9:08 PM
30 points
4 comments · 3 min read · LW link
(rootsofprogress.org)

AXRP Episode 22 - Shard Theory with Quintin Pope

DanielFilan · Jun 15, 2023, 7:00 PM
52 points
11 comments · 93 min read · LW link

Can we accelerate human progress? Moderated Conversation in NYC

Jannik Schg · Jun 15, 2023, 5:33 PM
1 point
0 comments · 1 min read · LW link

Group Prioritarianism: Why AI Should Not Replace Humanity [draft]

fsh · Jun 15, 2023, 5:33 PM
8 points
0 comments · 25 min read · LW link

Press the happiness button!

Spiarrow · Jun 15, 2023, 5:30 PM
5 points
3 comments · 2 min read · LW link

AI #16: AI in the UK

Zvi · Jun 15, 2023, 1:20 PM
46 points
20 comments · 54 min read · LW link
(thezvi.wordpress.com)

I still think it’s very unlikely we’re observing alien aircraft

dynomight · Jun 15, 2023, 1:01 PM
180 points
70 comments · 5 min read · LW link
(dynomight.net)

Aligned Objectives Prize Competition

Prometheus · Jun 15, 2023, 12:42 PM
8 points
0 comments · 2 min read · LW link
(app.impactmarkets.io)

A more effective Elevator Pitch for AI risk

Iknownothing · Jun 15, 2023, 12:39 PM
2 points
0 comments · 1 min read · LW link

Why “AI alignment” would better be renamed into “Artificial Intention research”

chaosmage · Jun 15, 2023, 10:32 AM
29 points
12 comments · 2 min read · LW link

Matt Taibbi’s COVID reporting

ChristianKl · Jun 15, 2023, 9:49 AM
21 points
34 comments · 1 min read · LW link
(www.racket.news)

Looking Back On Ads

jefftk · Jun 15, 2023, 2:10 AM
30 points
11 comments · 3 min read · LW link
(www.jefftk.com)

Why libertarians are advocating for regulation on AI

RobertM · Jun 14, 2023, 8:59 PM
36 points
13 comments · 4 min read · LW link

Instrumental Convergence? [Draft]

J. Dmitri Gallow · Jun 14, 2023, 8:21 PM
48 points
20 comments · 33 min read · LW link

On the Apple Vision Pro

Zvi · Jun 14, 2023, 5:50 PM
44 points
17 comments · 11 min read · LW link
(thezvi.wordpress.com)

Progress links and tweets, 2023-06-14

jasoncrawford · Jun 14, 2023, 4:30 PM
19 points
1 comment · 2 min read · LW link
(rootsofprogress.org)

Philosophical Cyborg (Part 1)

Jun 14, 2023, 4:20 PM
31 points
4 comments · 13 min read · LW link

Is the confirmation bias really a bias?

Lionel · Jun 14, 2023, 2:06 PM
−2 points
6 comments · 1 min read · LW link
(lionelpage.substack.com)

NA East ACX & Rationality Meetup Organizers Retreat

Willa · Jun 14, 2023, 1:39 PM
8 points
0 comments · 1 min read · LW link

Lightcone Infrastructure/LessWrong is looking for funding

habryka · Jun 14, 2023, 4:45 AM
205 points
39 comments · 1 min read · LW link

Anthropic | Charting a Path to AI Accountability

Gabe M · Jun 14, 2023, 4:43 AM
34 points
2 comments · 3 min read · LW link
(www.anthropic.com)

Demystifying Born’s rule

Christopher King · Jun 14, 2023, 3:16 AM
5 points
26 comments · 3 min read · LW link

My guess for why I was wrong about US housing

romeostevensit · Jun 14, 2023, 12:37 AM
110 points
13 comments · 1 min read · LW link

Notes from the Bank of England Talk by Giovanni Dosi on Agent-based Modeling for Macroeconomics

PixelatedPenguin · Jun 13, 2023, 10:25 PM
3 points
0 comments · 1 min read · LW link

Introducing The Long Game Project: Improving Decision-Making Through Tabletop Exercises and Simulated Experience

Dan Stuart · Jun 13, 2023, 9:45 PM
4 points
0 comments · 4 min read · LW link

Intelligence allocation from a Mean Field Game Theory perspective

Marv K · Jun 13, 2023, 7:52 PM
13 points
2 comments · 2 min read · LW link

Multiple stages of fallacy—justifications and non-justifications for the multiple stage fallacy

AronT · Jun 13, 2023, 5:37 PM
33 points
2 comments · 5 min read · LW link
(coordinationishard.substack.com)

TryContra Events

jefftk · Jun 13, 2023, 5:30 PM
2 points
0 comments · 1 min read · LW link
(www.jefftk.com)

MetaAI: less is less for alignment.

Cleo Nardo · Jun 13, 2023, 2:08 PM
71 points
17 comments · 5 min read · LW link

The Dial of Progress

Zvi · Jun 13, 2023, 1:40 PM
161 points
119 comments · 11 min read · LW link
(thezvi.wordpress.com)

Virtual AI Safety Unconference (VAISU)

Jun 13, 2023, 9:56 AM
15 points
0 comments · 1 min read · LW link

Seattle ACX Meetup—Summer 2023

Optimization Process · Jun 13, 2023, 5:14 AM
5 points
0 comments · 1 min read · LW link

TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI

Andrew_Critch · Jun 13, 2023, 5:04 AM
64 points
1 comment · 1 min read · LW link

<$750k grants for General Purpose AI Assurance/Safety Research

Phosphorous · Jun 13, 2023, 4:45 AM
37 points
1 comment · 1 min read · LW link
(cset.georgetown.edu)

UFO Betting: Put Up or Shut Up

RatsWrongAboutUAP · Jun 13, 2023, 4:05 AM
261 points
216 comments · 2 min read · LW link · 1 review

A bunch of videos in comments

the gears to ascension · Jun 12, 2023, 10:31 PM
10 points
62 comments · 1 min read · LW link

[Linkpost] The neuroconnectionist research programme

Bogdan Ionut Cirstea · Jun 12, 2023, 9:58 PM
6 points
1 comment · 1 min read · LW link

Contingency: A Conceptual Tool from Evolutionary Biology for Alignment

clem_acs · Jun 12, 2023, 8:54 PM
57 points
2 comments · 14 min read · LW link
(acsresearch.org)

Book Review: Autoheterosexuality

tailcalled · Jun 12, 2023, 8:11 PM
27 points
9 comments · 24 min read · LW link

Aura as a proprioceptive glitch

pchvykov · Jun 12, 2023, 7:30 PM
37 points
4 comments · 4 min read · LW link

Aligning Mathematical Notions of Infinity with Human Intuition

London L. · Jun 12, 2023, 7:19 PM
1 point
10 comments · 9 min read · LW link
(medium.com)

ARC is hiring theoretical researchers

Jun 12, 2023, 6:50 PM
126 points
12 comments · 4 min read · LW link
(www.alignment.org)

Introduction to Towards Causal Foundations of Safe AGI

Jun 12, 2023, 5:55 PM
67 points
6 comments · 4 min read · LW link

Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted

David Chee · Jun 12, 2023, 3:54 PM
71 points
15 comments · 12 min read · LW link

Explicitness

TsviBT · Jun 12, 2023, 3:05 PM
29 points
0 comments · 15 min read · LW link

If you are too stressed, walk away from the front lines

Neil · Jun 12, 2023, 2:26 PM
44 points
14 comments · 5 min read · LW link

UK PM: $125M for AI safety

Hauke Hillebrandt · Jun 12, 2023, 12:33 PM
31 points
11 comments · 1 min read · LW link
(twitter.com)

[Question] Could induced and stabilized hypomania be a desirable mental state?

MvB · Jun 12, 2023, 12:13 PM
8 points
22 comments · 2 min read · LW link

Non-loss of control AGI-related catastrophes are out of control too

Jun 12, 2023, 12:01 PM
2 points
3 comments · 24 min read · LW link