Introducción al Riesgo Existencial de Inteligencia Artificial

david.friva · 15 Jul 2023 20:37 UTC
4 points
2 comments · 4 min read · LW link
(youtu.be)

The housing crisis, explained using game theory

Johnstone · 15 Jul 2023 20:27 UTC
4 points
2 comments · 8 min read · LW link

Only a hack can solve the shutdown problem

dp · 15 Jul 2023 20:26 UTC
5 points
0 comments · 8 min read · LW link

Robustness of Model-Graded Evaluations and Automated Interpretability

15 Jul 2023 19:12 UTC
44 points
5 comments · 9 min read · LW link

[Question] How to deal with fear of failure?

TeaTieAndHat · 15 Jul 2023 18:57 UTC
1 point
2 comments · 1 min read · LW link

Simplified bio-anchors for upper bounds on AI timelines

Fabien Roger · 15 Jul 2023 18:15 UTC
20 points
4 comments · 5 min read · LW link

A Hill of Validity in Defense of Meaning

Zack_M_Davis · 15 Jul 2023 17:57 UTC
8 points
118 comments · 75 min read · LW link
(unremediatedgender.space)

What is a cognitive bias?

Lionel · 15 Jul 2023 13:01 UTC
1 point
0 comments · 2 min read · LW link
(lionelpage.substack.com)

[Question] When people say robots will steal jobs, what kinds of jobs are never implied?

Mary Chernyshenko · 15 Jul 2023 10:50 UTC
5 points
12 comments · 1 min read · LW link

Narrative Theory. Part 2. A new way of doing the same thing

Eris · 15 Jul 2023 10:37 UTC
2 points
0 comments · 1 min read · LW link

How to use ChatGPT to get better book & movie recommendations

KatWoods · 15 Jul 2023 8:55 UTC
28 points
3 comments · 1 min read · LW link

[Question] Would you take a job making humanoid robots for an AGI?

Super AGI · 15 Jul 2023 5:26 UTC
−1 points
2 comments · 1 min read · LW link

Rationality, Pedagogy, and “Vibes”: Quick Thoughts

NicholasKross · 15 Jul 2023 2:09 UTC
14 points
1 comment · 4 min read · LW link

(redacted) Anomalous tokens might disproportionately affect complex language tasks

nikola · 15 Jul 2023 0:48 UTC
4 points
0 comments · 7 min read · LW link

Why was the AI Alignment community so unprepared for this moment?

Ras1513 · 15 Jul 2023 0:26 UTC
119 points
65 comments · 2 min read · LW link

Physics is Ultimately Subjective

Gordon Seidoh Worley · 14 Jul 2023 22:19 UTC
5 points
34 comments · 3 min read · LW link

[Question] How should a rational agent construct their utility function when faced with existence?

Aman Rusia · 14 Jul 2023 19:48 UTC
−2 points
1 comment · 1 min read · LW link

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

Štěpán Los · 14 Jul 2023 17:43 UTC
13 points
2 comments · 6 min read · LW link

Unsafe AI as Dynamical Systems

Robert_AIZI · 14 Jul 2023 15:31 UTC
11 points
0 comments · 3 min read · LW link
(aizi.substack.com)

A Short Summary of “Focus Your Uncertainty”

Stephen James · 14 Jul 2023 11:18 UTC
2 points
0 comments · 1 min read · LW link

Do the change you want to see in the world

TeaTieAndHat · 14 Jul 2023 10:19 UTC
7 points
0 comments · 1 min read · LW link

Gearing Up for Long Timelines in a Hard World

Dalcy · 14 Jul 2023 6:11 UTC
13 points
0 comments · 4 min read · LW link

When Someone Tells You They’re Lying, Believe Them

ymeskhout · 14 Jul 2023 0:31 UTC
92 points
3 comments · 3 min read · LW link

Activation adding experiments with FLAN-T5

Nina Rimsky · 13 Jul 2023 23:32 UTC
21 points
5 comments · 7 min read · LW link

[Question] What criterion would you use to select companies likely to cause AI doom?

momom2 · 13 Jul 2023 20:31 UTC
8 points
4 comments · 1 min read · LW link

Newcomb II: Newer and Comb-ier

Nathaniel Monson · 13 Jul 2023 18:49 UTC
0 points
11 comments · 3 min read · LW link

Jailbreaking GPT-4’s code interpreter

nikola · 13 Jul 2023 18:43 UTC
160 points
22 comments · 7 min read · LW link

An attempt to steelman OpenAI’s alignment plan

Nathan Helm-Burger · 13 Jul 2023 18:25 UTC
22 points
0 comments · 4 min read · LW link

Instrumental Convergence to Complexity Preservation

Macro Flaneur · 13 Jul 2023 17:40 UTC
2 points
2 comments · 3 min read · LW link

Unabridged History of Global Parenting

CrimsonChin · 13 Jul 2023 16:49 UTC
0 points
2 comments · 7 min read · LW link

The Goddess of Everything Else—The Animation

Writer · 13 Jul 2023 16:26 UTC
142 points
4 comments · 1 min read · LW link
(youtu.be)

Winners of AI Alignment Awards Research Contest

13 Jul 2023 16:14 UTC
114 points
3 comments · 12 min read · LW link
(alignmentawards.com)

Accidentally Load Bearing

jefftk · 13 Jul 2023 16:10 UTC
264 points
14 comments · 1 min read · LW link
(www.jefftk.com)

AI #20: Code Interpreter and Claude 2.0 for Everyone

Zvi · 13 Jul 2023 14:00 UTC
60 points
9 comments · 56 min read · LW link
(thezvi.wordpress.com)

[Question] How can I get help becoming a better rationalist?

TeaTieAndHat · 13 Jul 2023 13:41 UTC
31 points
19 comments · 1 min read · LW link

i love eating trash

Ace Delgado · 13 Jul 2023 11:23 UTC
−15 points
0 comments · 1 min read · LW link

Elon Musk announces xAI

Jan_Kulveit · 13 Jul 2023 9:01 UTC
75 points
35 comments · 1 min read · LW link
(www.ft.com)

The intelligence-sentience orthogonality thesis

Ben Smith · 13 Jul 2023 6:55 UTC
18 points
9 comments · 9 min read · LW link

Alignment Megaprojects: You’re Not Even Trying to Have Ideas

NicholasKross · 12 Jul 2023 23:39 UTC
55 points
30 comments · 2 min read · LW link

Eric Michaud on the Quantization Model of Neural Scaling, Interpretability and Grokking

Michaël Trazzi · 12 Jul 2023 22:45 UTC
10 points
0 comments · 2 min read · LW link
(theinsideview.ai)

[Question] Are there any good, easy-to-understand examples of cases where statistical causal network discovery worked well in practice?

tailcalled · 12 Jul 2023 22:08 UTC
42 points
6 comments · 1 min read · LW link

The Opt-In Revolution — My vision of a positive future with ASI (An experiment with LLM storytelling)

Tachikoma · 12 Jul 2023 21:08 UTC
2 points
0 comments · 2 min read · LW link

[Question] What does the launch of x.ai mean for AI Safety?

Chris_Leong · 12 Jul 2023 19:42 UTC
35 points
3 comments · 1 min read · LW link

Towards Developmental Interpretability

12 Jul 2023 19:33 UTC
172 points
8 comments · 9 min read · LW link

Flowchart: How might rogue AIs defeat all humans?

Aryeh Englander · 12 Jul 2023 19:23 UTC
12 points
0 comments · 1 min read · LW link

A review of Principia Qualia

jessicata · 12 Jul 2023 18:38 UTC
56 points
6 comments · 10 min read · LW link
(unstablerontology.substack.com)

How I Learned To Stop Worrying And Love The Shoggoth

Peter Merel · 12 Jul 2023 17:47 UTC
10 points
9 comments · 5 min read · LW link

Goal-Direction for Simulated Agents

Raymond D · 12 Jul 2023 17:06 UTC
33 points
2 comments · 6 min read · LW link

AISN#14: OpenAI’s ‘Superalignment’ team, Musk’s xAI launches, and developments in military AI use

Dan H · 12 Jul 2023 16:58 UTC
16 points
0 comments · 1 min read · LW link

Report on modeling evidential cooperation in large worlds

Johannes Treutlein · 12 Jul 2023 16:37 UTC
44 points
3 comments · 1 min read · LW link
(arxiv.org)