Introducción al Riesgo Existencial de Inteligencia Artificial

david.friva, Jul 15, 2023, 8:37 PM
4 points
2 comments, 4 min read, LW link
(youtu.be)

The housing crisis, explained using game theory

Johnstone, Jul 15, 2023, 8:27 PM
4 points
2 comments, 8 min read, LW link

Only a hack can solve the shutdown problem

dp, Jul 15, 2023, 8:26 PM
5 points
0 comments, 8 min read, LW link

Robustness of Model-Graded Evaluations and Automated Interpretability

Jul 15, 2023, 7:12 PM
47 points
5 comments, 9 min read, LW link

[Question] How to deal with fear of failure?

TeaTieAndHat, Jul 15, 2023, 6:57 PM
1 point
2 comments, 1 min read, LW link

Simplified bio-anchors for upper bounds on AI timelines

Fabien Roger, Jul 15, 2023, 6:15 PM
21 points
4 comments, 5 min read, LW link

A Hill of Validity in Defense of Meaning

Zack_M_Davis, Jul 15, 2023, 5:57 PM
25 points
120 comments, 73 min read, LW link, 1 review
(unremediatedgender.space)

What is a cognitive bias?

Lionel, Jul 15, 2023, 1:01 PM
1 point
0 comments, 2 min read, LW link
(lionelpage.substack.com)

[Question] When people say robots will steal jobs, what kinds of jobs are never implied?

Mary Chernyshenko, Jul 15, 2023, 10:50 AM
5 points
12 comments, 1 min read, LW link

Narrative Theory. Part 2. A new way of doing the same thing

Eris, Jul 15, 2023, 10:37 AM
2 points
0 comments, 1 min read, LW link

How to use ChatGPT to get better book & movie recommendations

KatWoods, Jul 15, 2023, 8:55 AM
29 points
3 comments, 1 min read, LW link

[Question] Would you take a job making humanoid robots for an AGI?

Super AGI, Jul 15, 2023, 5:26 AM
−1 points
2 comments, 1 min read, LW link

Rationality, Pedagogy, and “Vibes”: Quick Thoughts

Nicholas / Heather Kross, Jul 15, 2023, 2:09 AM
14 points
1 comment, 4 min read, LW link

(redacted) Anomalous tokens might disproportionately affect complex language tasks

Nikola Jurkovic, Jul 15, 2023, 12:48 AM
4 points
0 comments, 7 min read, LW link

Why was the AI Alignment community so unprepared for this moment?

Ras1513, Jul 15, 2023, 12:26 AM
121 points
65 comments, 2 min read, LW link

Physics is Ultimately Subjective

Gordon Seidoh Worley, Jul 14, 2023, 10:19 PM
5 points
34 comments, 3 min read, LW link

[Question] How should a rational agent construct their utility function when faced with existence?

Aman Rusia, Jul 14, 2023, 7:48 PM
−2 points
1 comment, 1 min read, LW link

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

Štěpán Los, Jul 14, 2023, 5:43 PM
13 points
2 comments, 6 min read, LW link

Unsafe AI as Dynamical Systems

Robert_AIZI, Jul 14, 2023, 3:31 PM
11 points
0 comments, 3 min read, LW link
(aizi.substack.com)

A Short Summary of “Focus Your Uncertainty”

Stephen James, Jul 14, 2023, 11:18 AM
2 points
0 comments, 1 min read, LW link

Do the change you want to see in the world

TeaTieAndHat, Jul 14, 2023, 10:19 AM
7 points
0 comments, 1 min read, LW link

Gearing Up for Long Timelines in a Hard World

Dalcy, Jul 14, 2023, 6:11 AM
18 points
0 comments, 4 min read, LW link

When Someone Tells You They’re Lying, Believe Them

ymeskhout, Jul 14, 2023, 12:31 AM
95 points
3 comments, 3 min read, LW link

Activation adding experiments with FLAN-T5

Nina Panickssery, Jul 13, 2023, 11:32 PM
21 points
5 comments, 7 min read, LW link

[Question] What criterion would you use to select companies likely to cause AI doom?

momom2, Jul 13, 2023, 8:31 PM
8 points
4 comments, 1 min read, LW link

Newcomb II: Newer and Comb-ier

Nathaniel Monson, Jul 13, 2023, 6:49 PM
0 points
11 comments, 3 min read, LW link

Jailbreaking GPT-4’s code interpreter

Nikola Jurkovic, Jul 13, 2023, 6:43 PM
160 points
22 comments, 7 min read, LW link

An attempt to steelman OpenAI’s alignment plan

Nathan Helm-Burger, Jul 13, 2023, 6:25 PM
22 points
0 comments, 4 min read, LW link

Instrumental Convergence to Complexity Preservation

Macro Flaneur, Jul 13, 2023, 5:40 PM
2 points
2 comments, 3 min read, LW link

Unabridged History of Global Parenting

CrimsonChin, Jul 13, 2023, 4:49 PM
0 points
2 comments, 7 min read, LW link

The Goddess of Everything Else—The Animation

Writer, Jul 13, 2023, 4:26 PM
142 points
4 comments, 1 min read, LW link
(youtu.be)

Winners of AI Alignment Awards Research Contest

Jul 13, 2023, 4:14 PM
115 points
4 comments, 12 min read, LW link
(alignmentawards.com)

Accidentally Load Bearing

jefftk, Jul 13, 2023, 4:10 PM
289 points
18 comments, 1 min read, LW link, 1 review
(www.jefftk.com)

AI #20: Code Interpreter and Claude 2.0 for Everyone

Zvi, Jul 13, 2023, 2:00 PM
60 points
9 comments, 56 min read, LW link
(thezvi.wordpress.com)

[Question] How can I get help becoming a better rationalist?

TeaTieAndHat, Jul 13, 2023, 1:41 PM
31 points
19 comments, 1 min read, LW link

i love eating trash

Ace Delgado, Jul 13, 2023, 11:23 AM
−6 points
0 comments, 1 min read, LW link

Elon Musk announces xAI

Jan_Kulveit, Jul 13, 2023, 9:01 AM
75 points
35 comments, 1 min read, LW link
(www.ft.com)

The intelligence-sentience orthogonality thesis

Ben Smith, Jul 13, 2023, 6:55 AM
19 points
9 comments, 9 min read, LW link

Alignment Megaprojects: You’re Not Even Trying to Have Ideas

Nicholas / Heather Kross, Jul 12, 2023, 11:39 PM
55 points
32 comments, 2 min read, LW link

Eric Michaud on the Quantization Model of Neural Scaling, Interpretability and Grokking

Michaël Trazzi, Jul 12, 2023, 10:45 PM
10 points
0 comments, 2 min read, LW link
(theinsideview.ai)

[Question] Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice?

tailcalled, Jul 12, 2023, 10:08 PM
42 points
6 comments, 1 min read, LW link

The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling)

Tachikoma, Jul 12, 2023, 9:08 PM
2 points
0 comments, 2 min read, LW link

[Question] What does the launch of x.ai mean for AI Safety?

Chris_Leong, Jul 12, 2023, 7:42 PM
35 points
3 comments, 1 min read, LW link

Towards Developmental Interpretability

Jul 12, 2023, 7:33 PM
192 points
10 comments, 9 min read, LW link, 1 review

Flowchart: How might rogue AIs defeat all humans?

Aryeh Englander, Jul 12, 2023, 7:23 PM
12 points
0 comments, 1 min read, LW link

A review of Principia Qualia

jessicata, Jul 12, 2023, 6:38 PM
56 points
8 comments, 10 min read, LW link
(unstablerontology.substack.com)

How I Learned To Stop Wor­ry­ing And Love The Shoggoth

Peter Merel, Jul 12, 2023, 5:47 PM
9 points
15 comments, 5 min read, LW link

Goal-Direction for Simulated Agents

Raymond Douglas, Jul 12, 2023, 5:06 PM
33 points
2 comments, 6 min read, LW link

AISN#14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use

Dan H, Jul 12, 2023, 4:58 PM
16 points
0 comments, LW link

Report on modeling evidential cooperation in large worlds

Johannes Treutlein, Jul 12, 2023, 4:37 PM
45 points
3 comments, 1 min read, LW link
(arxiv.org)