Questions I'd Want to Ask an AGI+ to Test Its Understanding of Ethics

sweenesm · Jan 26, 2024, 11:40 PM
14 points
6 comments · 4 min read · LW link

An Invitation to Refrain from Downvoting Posts into Net-Negative Karma

MikkW · Jan 26, 2024, 8:13 PM
2 points
12 comments · 1 min read · LW link

The Good Balsamic Vinegar

jenn · Jan 26, 2024, 7:30 PM
52 points
4 comments · 2 min read · LW link
(jenn.site)

The Perspective-based Explanation to the Reflective Inconsistency Paradox

dadadarren · Jan 26, 2024, 7:00 PM
10 points
16 comments · 8 min read · LW link

To Boldly Code

StrivingForLegibility · Jan 26, 2024, 6:25 PM
25 points
4 comments · 3 min read · LW link

Incorporating Mechanism Design Into Decision Theory

StrivingForLegibility · Jan 26, 2024, 6:25 PM
17 points
4 comments · 4 min read · LW link

Making every researcher seek grants is a broken model

jasoncrawford · Jan 26, 2024, 4:06 PM
159 points
41 comments · 4 min read · LW link
(rootsofprogress.org)

Notes on Innocence

David Gross · Jan 26, 2024, 2:45 PM
13 points
21 comments · 18 min read · LW link

Stacked Laptop Monitor

jefftk · Jan 26, 2024, 2:10 PM
22 points
5 comments · 1 min read · LW link
(www.jefftk.com)

Surgery Works Well Without The FDA

Maxwell Tabarrok · Jan 26, 2024, 1:31 PM
43 points
28 comments · 4 min read · LW link
(maximumprogress.substack.com)

[Question] Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects?

Roman Leventov · Jan 26, 2024, 9:49 AM
21 points
5 comments · 1 min read · LW link

Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI

Jan 26, 2024, 7:22 AM
161 points
60 comments · 57 min read · LW link

Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect

RogerDearnaley · Jan 26, 2024, 3:58 AM
16 points
2 comments · 11 min read · LW link

Musings on Cargo Cult Consciousness

Gareth Davidson · Jan 25, 2024, 11:00 PM
−13 points
11 comments · 17 min read · LW link

RAND report finds no effect of current LLMs on viability of bioterrorism attacks

StellaAthena · Jan 25, 2024, 7:17 PM
94 points
14 comments · 1 min read · LW link
(www.rand.org)

[Question] Bayesian Reflection Principles and Ignorance of the Future

crickets · Jan 25, 2024, 7:00 PM
5 points
3 comments · 1 min read · LW link

“Does your paradigm beget new, good, paradigms?”

Raemon · Jan 25, 2024, 6:23 PM
40 points
6 comments · 2 min read · LW link

AI #48: The Talk of Davos

Zvi · Jan 25, 2024, 4:20 PM
38 points
9 comments · 36 min read · LW link
(thezvi.wordpress.com)

Importing a Python File by Name

jefftk · Jan 25, 2024, 4:00 PM
12 points
7 comments · 1 min read · LW link
(www.jefftk.com)

[Repost] The Copenhagen Interpretation of Ethics

mesaoptimizer · Jan 25, 2024, 3:20 PM
77 points
4 comments · 5 min read · LW link
(web.archive.org)

Nash Bargaining between Subagents doesn’t solve the Shutdown Problem

A.H. · Jan 25, 2024, 10:47 AM
22 points
1 comment · 9 min read · LW link

Status-oriented spending

Adam Zerner · Jan 25, 2024, 6:46 AM
14 points
19 comments · 4 min read · LW link

Protecting agent boundaries

Chipmonk · Jan 25, 2024, 4:13 AM
11 points
6 comments · 2 min read · LW link

[Question] Is a random box of gas predictable after 20 seconds?

Jan 24, 2024, 11:00 PM
37 points
35 comments · 1 min read · LW link

[Question] Will quantum randomness affect the 2028 election?

Jan 24, 2024, 10:54 PM
66 points
52 comments · 1 min read · LW link

AISN #30: Investments in Compute and Military AI; Plus, Japan and Singapore’s National AI Safety Institutes

Jan 24, 2024, 7:38 PM
27 points
1 comment · 6 min read · LW link
(newsletter.safe.ai)

Krueger Lab AI Safety Internship 2024

Joey Bream · Jan 24, 2024, 7:17 PM
3 points
0 comments · 1 min read · LW link

Agents that act for reasons: a thought experiment

Michele Campolo · Jan 24, 2024, 4:47 PM
3 points
0 comments · 3 min read · LW link

Impact Assessment of AI Safety Camp (Arb Research)

Samuel Holton · Jan 24, 2024, 4:19 PM
10 points
0 comments · 11 min read · LW link
(forum.effectivealtruism.org)

The case for ensuring that powerful AIs are controlled

Jan 24, 2024, 4:11 PM
276 points
73 comments · 28 min read · LW link

LLMs can strategically deceive while doing gain-of-function research

Igor Ivanov · Jan 24, 2024, 3:45 PM
36 points
4 comments · 11 min read · LW link

Monthly Roundup #14: January 2024

Zvi · Jan 24, 2024, 12:50 PM
38 points
22 comments · 44 min read · LW link
(thezvi.wordpress.com)

This might be the last AI Safety Camp

Jan 24, 2024, 9:33 AM
196 points
34 comments · 1 min read · LW link

Global LessWrong/AC10 Meetup on VRChat

Jan 24, 2024, 5:44 AM
15 points
2 comments · 1 min read · LW link

Humans aren’t fleeb.

Charlie Steiner · Jan 24, 2024, 5:31 AM
37 points
5 comments · 2 min read · LW link

A Paradigm Shift in Sustainability

Jose Miguel Cruz y Celis · Jan 23, 2024, 11:34 PM
5 points
0 comments · 18 min read · LW link

From Finite Factors to Bayes Nets

J Bostock · Jan 23, 2024, 8:03 PM
38 points
7 comments · 8 min read · LW link

Institutional economics through the lens of scale-free regulative development, morphogenesis, and cognitive science

Roman Leventov · Jan 23, 2024, 7:42 PM
8 points
0 comments · 14 min read · LW link

Making a Secular Solstice Songbook

jefftk · Jan 23, 2024, 7:40 PM
38 points
6 comments · 1 min read · LW link
(www.jefftk.com)

Simple Appreciations

Jonathan Moregård · Jan 23, 2024, 4:23 PM
17 points
11 comments · 4 min read · LW link
(open.substack.com)

[Question] What environmental cues, had you not seen them, would have ended in disaster?

koratkar · Jan 23, 2024, 2:59 PM
11 points
1 comment · 1 min read · LW link

Loneliness and suicide mitigation for students using GPT3-enabled chatbots (survey of Replika users in Nature)

Kaj_Sotala · Jan 23, 2024, 2:05 PM
45 points
2 comments · 2 min read · LW link
(www.nature.com)

“Safety as a Scientific Pursuit” (2024)

technicalities · Jan 23, 2024, 12:40 PM
17 points
3 comments · 2 min read · LW link
(banburismus.substack.com)

Brainstorming: Slow Takeoff

David Piepgrass · Jan 23, 2024, 6:58 AM
3 points
0 comments · 51 min read · LW link

Reframing Acausal Trolling as Acausal Patronage

StrivingForLegibility · Jan 23, 2024, 3:04 AM
14 points
0 comments · 2 min read · LW link

Orthogonality or the “Human Worth Hypothesis”?

Jeffs · Jan 23, 2024, 12:57 AM
21 points
31 comments · 3 min read · LW link

the subreddit size threshold

bhauth · Jan 23, 2024, 12:38 AM
32 points
3 comments · 4 min read · LW link
(www.bhauth.com)

Starting in mechanistic interpretability

Jakub Smékal · Jan 22, 2024, 11:40 PM
1 point
0 comments · 3 min read · LW link
(jakubsmekal.com)

We need a Science of Evals

Jan 22, 2024, 8:30 PM
71 points
13 comments · 9 min read · LW link

Announcing the SoS Research Collective for independent researchers (and academics thinking independently)

rogersbacon · Jan 22, 2024, 8:13 PM
15 points
0 comments · 8 min read · LW link
(www.theseedsofscience.pub)