A bunch of videos in comments

the gears to ascension · Jun 12, 2023, 10:31 PM
10 points
62 comments · 1 min read · LW link

[Linkpost] The neuroconnectionist research programme

Bogdan Ionut Cirstea · Jun 12, 2023, 9:58 PM
6 points
1 comment · 1 min read · LW link

Contingency: A Conceptual Tool from Evolutionary Biology for Alignment

clem_acs · Jun 12, 2023, 8:54 PM
57 points
2 comments · 14 min read · LW link
(acsresearch.org)

Book Review: Autoheterosexuality

tailcalled · Jun 12, 2023, 8:11 PM
27 points
9 comments · 24 min read · LW link

Aura as a proprioceptive glitch

pchvykov · Jun 12, 2023, 7:30 PM
37 points
4 comments · 4 min read · LW link

Aligning Mathematical Notions of Infinity with Human Intuition

London L. · Jun 12, 2023, 7:19 PM
1 point
10 comments · 9 min read · LW link
(medium.com)

ARC is hiring theoretical researchers

Jun 12, 2023, 6:50 PM
126 points
12 comments · 4 min read · LW link
(www.alignment.org)

Introduction to Towards Causal Foundations of Safe AGI

Jun 12, 2023, 5:55 PM
67 points
6 comments · 4 min read · LW link

Manifold Predicted the AI Extinction Statement and CAIS Wanted it Deleted

David Chee · Jun 12, 2023, 3:54 PM
71 points
15 comments · 12 min read · LW link

Explicitness

TsviBT · Jun 12, 2023, 3:05 PM
29 points
0 comments · 15 min read · LW link

If you are too stressed, walk away from the front lines

Neil · Jun 12, 2023, 2:26 PM
44 points
14 comments · 5 min read · LW link

UK PM: $125M for AI safety

Hauke Hillebrandt · Jun 12, 2023, 12:33 PM
31 points
11 comments · 1 min read · LW link
(twitter.com)

[Question] Could induced and stabilized hypomania be a desirable mental state?

MvB · Jun 12, 2023, 12:13 PM
8 points
22 comments · 2 min read · LW link

Non-loss of control AGI-related catastrophes are out of control too

Jun 12, 2023, 12:01 PM
2 points
3 comments · 24 min read · LW link

Critiques of prominent AI safety labs: Conjecture

Omega. · Jun 12, 2023, 1:32 AM
12 points
32 comments · 33 min read · LW link

why I’m anti-YIMBY

bhauth · Jun 12, 2023, 12:19 AM
20 points
45 comments · 2 min read · LW link

ACX Brno meetup #2

adekcz · Jun 11, 2023, 1:53 PM
2 points
0 comments · 1 min read · LW link

[Linkpost] Large Language Models Converge on Brain-Like Word Representations

Bogdan Ionut Cirstea · Jun 11, 2023, 11:20 AM
36 points
12 comments · 1 min read · LW link

Inference-Time Intervention: Eliciting Truthful Answers from a Language Model

likenneth · Jun 11, 2023, 5:38 AM
195 points
4 comments · 1 min read · LW link
(arxiv.org)

You Are a Computer, and No, That’s Not a Metaphor

jakej · Jun 11, 2023, 5:38 AM
12 points
1 comment · 22 min read · LW link
(sigil.substack.com)

Snake Eyes Paradox

Martin Randall · Jun 11, 2023, 4:10 AM
22 points
25 comments · 6 min read · LW link

[Question] [Mostly solved] I get distracted while reading, but can easily comprehend audio text for 8+ hours per day. What are the best AI text-to-speech readers? Alternatively, do you have other ideas for what I could do?

kuira · Jun 11, 2023, 3:49 AM
18 points
7 comments · 1 min read · LW link

The Dictatorship Problem

alyssavance · Jun 11, 2023, 2:45 AM
35 points
145 comments · 11 min read · LW link

Higher Dimension Cartesian Objects and Aligning ‘Tiling Simulators’

lukemarks · Jun 11, 2023, 12:13 AM
22 points
0 comments · 5 min read · LW link

Using Consensus Mechanisms as an approach to Alignment

Prometheus · Jun 10, 2023, 11:38 PM
11 points
2 comments · 6 min read · LW link

Humanity’s first math problem: the shallow gene pool

archeon · Jun 10, 2023, 11:09 PM
−2 points
0 comments · 1 min read · LW link

I can see how I am Dumb

Johannes C. Mayer · Jun 10, 2023, 7:18 PM
46 points
11 comments · 5 min read · LW link

Ethodynamics of Omelas

dr_s · Jun 10, 2023, 4:24 PM
83 points
18 comments · 9 min read · LW link · 1 review

Dealing with UFO claims

ChristianKl · Jun 10, 2023, 3:45 PM
3 points
32 comments · 1 min read · LW link

A Theory of Unsupervised Translation Motivated by Understanding Animal Communication

jsd · Jun 10, 2023, 3:44 PM
19 points
0 comments · 1 min read · LW link
(arxiv.org)

[Question] What are brains?

Valentine · Jun 10, 2023, 2:46 PM
10 points
22 comments · 2 min read · LW link

EY in the New York Times

Blueberry · Jun 10, 2023, 12:21 PM
6 points
14 comments · 1 min read · LW link
(www.nytimes.com)

Goal-misgeneralization is ELK-hard

rokosbasilisk · Jun 10, 2023, 9:32 AM
2 points
0 comments · 1 min read · LW link

[Question] What do beneficial TDT trades for humanity concretely look like?

Stephen Fowler · Jun 10, 2023, 6:50 AM
4 points
0 comments · 1 min read · LW link

cloud seeding doesn’t work

bhauth · Jun 10, 2023, 5:14 AM
7 points
2 comments · 1 min read · LW link

[FICTION] Unboxing Elysium: An AI’s Escape

Super AGI · Jun 10, 2023, 4:41 AM
−16 points
4 comments · 14 min read · LW link

[FICTION] Prometheus Rising: The Emergence of an AI Consciousness

Super AGI · Jun 10, 2023, 4:41 AM
−14 points
0 comments · 9 min read · LW link

formalizing the QACI alignment formal-goal

Jun 10, 2023, 3:28 AM
54 points
6 comments · 13 min read · LW link
(carado.moe)

Expert trap: Why is it happening? (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Paweł Sysiak · Jun 9, 2023, 11:00 PM
3 points
0 comments · 7 min read · LW link

Expert trap: What is it? (Part 1 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Paweł Sysiak · Jun 9, 2023, 11:00 PM
6 points
2 comments · 8 min read · LW link

[Question] How accurate is data about past earth temperatures?

tailcalled · Jun 9, 2023, 9:29 PM
10 points
2 comments · 1 min read · LW link

Proxi-Antipodes: A Geometrical Intuition For The Difficulty Of Aligning AI With Multitudinous Human Values

Matthew_Opitz · Jun 9, 2023, 9:21 PM
7 points
0 comments · 5 min read · LW link

Why AI may not save the World

Alberto Zannoni · Jun 9, 2023, 5:42 PM
0 points
0 comments · 4 min read · LW link
(a16z.com)

You can now listen to the “AI Safety Fundamentals” courses

PeterH · Jun 9, 2023, 4:45 PM
6 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Exploring Concept-Specific Slices in Weight Matrices for Network Interpretability

DuncanFowler · Jun 9, 2023, 4:39 PM
1 point
0 comments · 6 min read · LW link

A plea for solutionism on AI safety

jasoncrawford · Jun 9, 2023, 4:29 PM
72 points
6 comments · 6 min read · LW link
(rootsofprogress.org)

Michael Shellenberger: US Has 12 Or More Alien Spacecraft, Say Military And Intelligence Contractors

lc · Jun 9, 2023, 4:11 PM
11 points
31 comments · 3 min read · LW link
(public.substack.com)

Improvement on MIRI’s Corrigibility

Jun 9, 2023, 4:10 PM
54 points
8 comments · 13 min read · LW link

D&D.Sci 5E: Return of the League of Defenders Evaluation & Ruleset

aphyer · Jun 9, 2023, 3:25 PM
30 points
8 comments · 6 min read · LW link

InternLM—China’s Best (Unverified)

Lao Mein · Jun 9, 2023, 7:39 AM
51 points
4 comments · 1 min read · LW link