[Question] Is there a fundamental distinction between simulating a mind and simulating *being* a mind? Is this a useful and important distinction?

Thoth Hermes, Apr 8, 2023, 11:44 PM
−17 points
8 comments, 2 min read, LW link

“warning about ai doom” is also “announcing capabilities progress to noobs”

the gears to ascension, Apr 8, 2023, 11:42 PM
23 points
5 comments, 3 min read, LW link

Feature Request: Right Click to Copy LaTeX

DragonGod, Apr 8, 2023, 11:27 PM
18 points
4 comments, 1 min read, LW link

ELCK might require nontrivial scalable alignment progress, and seems tractable enough to try

Alex Lawsen, Apr 8, 2023, 9:49 PM
17 points
0 comments, 2 min read, LW link

GPTs are Predictors, not Imitators

Eliezer Yudkowsky, Apr 8, 2023, 7:59 PM
416 points
100 comments, 3 min read, LW link, 3 reviews

4 generations of alignment

qbolec, Apr 8, 2023, 7:59 PM
1 point
0 comments, 3 min read, LW link

The surprising parameter efficiency of vision models

beren, Apr 8, 2023, 7:44 PM
81 points
28 comments, 4 min read, LW link

Random Observation on AI goals

FTPickle, Apr 8, 2023, 7:28 PM
−11 points
2 comments, 1 min read, LW link

Can we evaluate the “tool versus agent” AGI prediction?

Xodarap, Apr 8, 2023, 6:40 PM
16 points
7 comments, LW link

Relative Abstracted Agency

Audere, Apr 8, 2023, 4:57 PM
14 points
6 comments, 5 min read, LW link

The benevolence of the butcher

dr_s, Apr 8, 2023, 4:29 PM
84 points
33 comments, 6 min read, LW link, 1 review

SERI MATS—Summer 2023 Cohort

Apr 8, 2023, 3:32 PM
71 points
25 comments, 4 min read, LW link

AI Proposals at ‘Two Sessions’: AGI as ‘Two Bombs, One Satellite’?

Derek M. Jones, Apr 8, 2023, 11:31 AM
5 points
0 comments, 1 min read, LW link
(www.chinatalk.media)

All images from the WaitButWhy sequence on AI

trevor, Apr 8, 2023, 7:36 AM
73 points
5 comments, 2 min read, LW link

Guidelines for productive discussions

ambigram, Apr 8, 2023, 6:00 AM
38 points
0 comments, 5 min read, LW link

All AGI Safety questions welcome (especially basic ones) [April 2023]

steven0461, Apr 8, 2023, 4:21 AM
57 points
89 comments, 2 min read, LW link

Bringing Agency Into AGI Extinction Is Superfluous

George3d6, Apr 8, 2023, 4:02 AM
28 points
18 comments, 5 min read, LW link

Lagos, Nigeria—ACX Meetups Everywhere 2023

damola, Apr 8, 2023, 3:55 AM
1 point
0 comments, 1 min read, LW link

Upcoming Changes in Large Language Models

Andrew Keenan Richardson, Apr 8, 2023, 3:41 AM
43 points
8 comments, 4 min read, LW link
(mechanisticmind.com)

Consider The Hand Axe

ymeskhout, Apr 8, 2023, 1:31 AM
142 points
16 comments, 6 min read, LW link

AGI as a new data point

Will Rodgers, Apr 8, 2023, 1:01 AM
−1 points
0 comments, 1 min read, LW link

Parametrize Priority Evaluations

SilverFlame, Apr 8, 2023, 12:39 AM
2 points
2 comments, 6 min read, LW link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

Eliezer Yudkowsky, Apr 8, 2023, 12:36 AM
271 points
44 comments, 12 min read, LW link, 1 review

Humanitarian Phase Transition needed before Technological Singularity

Dr_What, Apr 7, 2023, 11:17 PM
−9 points
5 comments, 2 min read, LW link

[Question] Thoughts about Hugging Face?

kwiat.dev, Apr 7, 2023, 11:17 PM
7 points
0 comments, 1 min read, LW link

[Question] Is it correct to frame alignment as “programming a good philosophy of meaning”?

Util, Apr 7, 2023, 11:16 PM
2 points
3 comments, 1 min read, LW link

Select Agent Specifications as Natural Abstractions

lukemarks, Apr 7, 2023, 11:16 PM
19 points
3 comments, 5 min read, LW link

n=3 AI Risk Quick Math and Reasoning

lionhearted (Sebastian Marshall), Apr 7, 2023, 8:27 PM
6 points
3 comments, 4 min read, LW link

[Question] What are good alternatives to Predictionbook for personal prediction tracking? Edited: I originally thought it was down, but it was just a 500 error until I thought of clearing cookies.

sortega, Apr 7, 2023, 7:18 PM
4 points
4 comments, 1 min read, LW link

Environments for Measuring Deception, Resource Acquisition, and Ethical Violations

Dan H, Apr 7, 2023, 6:40 PM
51 points
2 comments, 2 min read, LW link
(arxiv.org)

Superintelligence Is Not Omniscience

Jeffrey Heninger, Apr 7, 2023, 4:30 PM
16 points
21 comments, 7 min read, LW link
(aiimpacts.org)

An ‘AGI Emergency Eject Criteria’ consensus could be really useful.

tcelferact, Apr 7, 2023, 4:21 PM
5 points
0 comments, LW link

Reliability, Security, and AI risk: Notes from infosec textbook chapter 1

Orpheus16, Apr 7, 2023, 3:47 PM
34 points
1 comment, 4 min read, LW link

Pre-registering a study

Robert_AIZI, Apr 7, 2023, 3:46 PM
10 points
0 comments, 6 min read, LW link
(aizi.substack.com)

Live discussion at Eastercon

Douglas_Reay, Apr 7, 2023, 3:25 PM
5 points
0 comments, 1 min read, LW link

[Question] ChatGPT “Writing” News Stories for The Guardian?

jmh, Apr 7, 2023, 12:16 PM
1 point
4 comments, 1 min read, LW link

Storyteller’s convention, 2223 A.D.

plex, Apr 7, 2023, 11:54 AM
8 points
0 comments, 2 min read, LW link

Stampy’s AI Safety Info—New Distillations #1 [March 2023]

markov, Apr 7, 2023, 11:06 AM
42 points
0 comments, 2 min read, LW link
(aisafety.info)

Beren’s “Deconfusing Direct vs Amortised Optimisation”

DragonGod, Apr 7, 2023, 8:57 AM
52 points
10 comments, 3 min read, LW link

Goal alignment without alignment on epistemology, ethics, and science is futile

Roman Leventov, Apr 7, 2023, 8:22 AM
20 points
2 comments, 2 min read, LW link

Polio Lab Leak Caught with Wastewater Sampling

Cullen, Apr 7, 2023, 1:06 AM
82 points
3 comments, LW link

Catching the Eye of Sauron

Casey_, Apr 7, 2023, 12:40 AM
221 points
68 comments, 4 min read, LW link

[Question] How to parallelize “inherently” serial theory work?

Nicholas / Heather Kross, Apr 7, 2023, 12:08 AM
16 points
6 comments, 1 min read, LW link

If Alignment is Hard, then so is Self-Improvement

PavleMiha, Apr 7, 2023, 12:08 AM
21 points
20 comments, 1 min read, LW link

Anthropic is further accelerating the Arms Race?

sapphire, Apr 6, 2023, 11:29 PM
82 points
22 comments, 1 min read, LW link
(techcrunch.com)

Suggestion for safe AI structure (Curated Transparent Decisions)

Kane Gregory, Apr 6, 2023, 10:00 PM
5 points
6 comments, 3 min read, LW link

10 reasons why lists of 10 reasons might be a winning strategy

trevor, Apr 6, 2023, 9:24 PM
110 points
7 comments, 1 min read, LW link

A Defense of Utilitarianism

Pareto Optimal, Apr 6, 2023, 9:09 PM
−3 points
2 comments, 5 min read, LW link
(paretooptimal.substack.com)

One Does Not Simply Replace the Humans

JerkyTreats, Apr 6, 2023, 8:56 PM
9 points
3 comments, 4 min read, LW link
(www.lesswrong.com)

[Question] Where to begin in ML/AI?

Jake the Student, Apr 6, 2023, 8:45 PM
9 points
4 comments, 1 min read, LW link