[Question] What is the probability that a superintelligent, sentient AGI is actually infeasible?

Nathan1123 · 14 Aug 2022 22:41 UTC
−3 points
6 comments · 1 min read · LW link

Dealing With Delusions

adrusi · 14 Aug 2022 21:11 UTC
9 points
2 comments · 1 min read · LW link

All the posts I will never write

Alexander Gietelink Oldenziel · 14 Aug 2022 18:29 UTC
53 points
8 comments · 8 min read · LW link

Brain-like AGI project “aintelope”

Gunnar_Zarncke · 14 Aug 2022 16:33 UTC
54 points
2 comments · 1 min read · LW link

AI Transparency: Why it’s critical and how to obtain it.

Zohar Jackson · 14 Aug 2022 10:31 UTC
6 points
1 comment · 5 min read · LW link

A brief note on Simplicity Bias

Spencer Becker-Kahn · 14 Aug 2022 2:05 UTC
19 points
0 comments · 4 min read · LW link

Evolution is a bad analogy for AGI: inner alignment

Quintin Pope · 13 Aug 2022 22:15 UTC
78 points
15 comments · 8 min read · LW link

An Uncanny Prison

Nathan1123 · 13 Aug 2022 21:40 UTC
3 points
3 comments · 2 min read · LW link

Florida Elections

Double · 13 Aug 2022 20:10 UTC
−3 points
8 comments · 1 min read · LW link

Cultivating Valiance

Shoshannah Tekofsky · 13 Aug 2022 18:47 UTC
35 points
4 comments · 4 min read · LW link

An extended rocket alignment analogy

remember · 13 Aug 2022 18:22 UTC
28 points
3 comments · 4 min read · LW link

[Question] The OpenAI playground for GPT-3 is a terrible interface. Is there any great local (or web) app for exploring/learning with language models?

aviv · 13 Aug 2022 16:34 UTC
3 points
1 comment · 1 min read · LW link

[Question] What is an agent in reductionist materialism?

Valentine · 13 Aug 2022 15:39 UTC
7 points
15 comments · 1 min read · LW link

Refine’s First Blog Post Day

adamShimi · 13 Aug 2022 10:23 UTC
55 points
3 comments · 1 min read · LW link

The Dumbest Possible Gets There First

Artaxerxes · 13 Aug 2022 10:20 UTC
44 points
7 comments · 2 min read · LW link

I missed the crux of the alignment problem the whole time

zeshen · 13 Aug 2022 10:11 UTC
53 points
7 comments · 3 min read · LW link

goal-program bricks

Tamsin Leake · 13 Aug 2022 10:08 UTC
31 points
2 comments · 2 min read · LW link
(carado.moe)

Shapes of Mind and Pluralism in Alignment

adamShimi · 13 Aug 2022 10:01 UTC
33 points
2 comments · 2 min read · LW link

How I think about alignment

Linda Linsefors · 13 Aug 2022 10:01 UTC
31 points
11 comments · 5 min read · LW link

Steelmining via Analogy

Paul Bricman · 13 Aug 2022 9:59 UTC
24 points
0 comments · 2 min read · LW link
(paulbricman.com)

the Insulated Goal-Program idea

Tamsin Leake · 13 Aug 2022 9:57 UTC
43 points
4 comments · 2 min read · LW link
(carado.moe)

Appendix: Jargon Dictionary

CFAR!Duncan · 13 Aug 2022 8:09 UTC
32 points
5 comments · 21 min read · LW link

Appendix: Hamming Questions

CFAR!Duncan · 13 Aug 2022 8:07 UTC
36 points
0 comments · 2 min read · LW link

Building a Bugs List prompts

CFAR!Duncan · 13 Aug 2022 8:00 UTC
62 points
9 comments · 2 min read · LW link

Cambridge LW Meetup: Constructive Complaining

Tony Wang · 13 Aug 2022 4:52 UTC
2 points
0 comments · 1 min read · LW link

Gradient descent doesn’t select for inner search

Ivan Vendrov · 13 Aug 2022 4:15 UTC
47 points
23 comments · 4 min read · LW link

[Question] How to bet against civilizational adequacy?

Wei Dai · 12 Aug 2022 23:33 UTC
54 points
17 comments · 1 min read · LW link

Infant AI Scenario

Nathan1123 · 12 Aug 2022 21:20 UTC
1 point
0 comments · 3 min read · LW link

DeepMind alignment team opinions on AGI ruin arguments

Vika · 12 Aug 2022 21:06 UTC
376 points
37 comments · 14 min read · LW link · 1 review

Dissolve: The Petty Crimes of Blaise Pascal

JohnBuridan · 12 Aug 2022 20:04 UTC
17 points
4 comments · 6 min read · LW link

The Host Minds of HBO’s Westworld.

Nerret · 12 Aug 2022 18:53 UTC
1 point
0 comments · 3 min read · LW link

What is estimational programming? Squiggle in context

Quinn · 12 Aug 2022 18:39 UTC
14 points
7 comments · 7 min read · LW link

Oversight Misses 100% of Thoughts The AI Does Not Think

johnswentworth · 12 Aug 2022 16:30 UTC
97 points
50 comments · 1 min read · LW link

Timelines explanation post part 1 of ?

Nathan Helm-Burger · 12 Aug 2022 16:13 UTC
10 points
1 comment · 2 min read · LW link

A little playing around with Blenderbot3

Nathan Helm-Burger · 12 Aug 2022 16:06 UTC
9 points
0 comments · 1 min read · LW link

Refining the Sharp Left Turn threat model, part 1: claims and mechanisms

12 Aug 2022 15:17 UTC
85 points
4 comments · 3 min read · LW link · 1 review
(vkrakovna.wordpress.com)

Argument by Intellectual Ordeal

lc · 12 Aug 2022 13:03 UTC
26 points
5 comments · 5 min read · LW link

Anti-squatted AI x-risk domains index

plex · 12 Aug 2022 12:01 UTC
56 points
6 comments · 1 min read · LW link

[Question] Perfect Predictors

aditya malik · 12 Aug 2022 11:51 UTC
2 points
5 comments · 1 min read · LW link

[Question] What are some good arguments against building new nuclear power plants?

RomanS · 12 Aug 2022 7:32 UTC
16 points
15 comments · 2 min read · LW link

Seeking PCK (Pedagogical Content Knowledge)

CFAR!Duncan · 12 Aug 2022 4:15 UTC
52 points
11 comments · 5 min read · LW link

Artificial intelligence wireheading

Big Tony · 12 Aug 2022 3:06 UTC
5 points
2 comments · 1 min read · LW link

Dissected boxed AI

Nathan1123 · 12 Aug 2022 2:37 UTC
−8 points
2 comments · 1 min read · LW link

Troll Timers

Screwtape · 12 Aug 2022 0:55 UTC
29 points
13 comments · 4 min read · LW link

[Question] Seriously, what goes wrong with “reward the agent when it makes you smile”?

TurnTrout · 11 Aug 2022 22:22 UTC
86 points
42 comments · 2 min read · LW link

Encultured AI Pre-planning, Part 2: Providing a Service

11 Aug 2022 20:11 UTC
33 points
4 comments · 3 min read · LW link

My summary of the alignment problem

Peter Hroššo · 11 Aug 2022 19:42 UTC
16 points
3 comments · 2 min read · LW link
(threadreaderapp.com)

Language models seem to be much better than humans at next-token prediction

11 Aug 2022 17:45 UTC
182 points
59 comments · 13 min read · LW link · 1 review

Introducing Pastcasting: A tool for forecasting practice

Sage Future · 11 Aug 2022 17:38 UTC
95 points
10 comments · 2 min read · LW link · 2 reviews

Pendulums, Policy-Level Decisionmaking, Saving State

CFAR!Duncan · 11 Aug 2022 16:47 UTC
26 points
3 comments · 8 min read · LW link