Hyperstitions

A hyperstition is a self-fulfilling belief.

From Give Up Seventy Percent of the Way Through the Hyperstitious Slur Cascade by Scott Alexander:

A hyperstition is a belief which becomes true if people believe it’s true. For example, “Dogecoin is a great short-term investment and you need to buy it right now!” is true if everyone believes it is true; lots of people will buy Dogecoin and it will go way up. “The bank is collapsing and you need to get your money out right away” is likewise true; if everyone believes it, there will be a run on the bank.

The word “hyperstition” can apply either to a collective hyperstition, like the above, or to a personal hyperstition, like “This pill will relieve my pain”.

A hyperstition that is a prediction is a self-fulfilling prophecy.

In doxastic modal logic, the statement “P is a hyperstition” is written as □P→P: if P is believed, then P is true. A modal reasoner that satisfies Löb’s Theorem and believes □P→P must also believe P, so such reasoners believe every personal hyperstition they recognize. This can cause problems for modal embedded agents. Löbian cooperation works by making mutual cooperation a collective hyperstition.
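To make the Löb step explicit, here is a minimal sketch in the same □-notation, reading □X as “the agent believes X” and using the standard modal (GL) form of Löb’s Theorem; it is an illustrative derivation, not a claim about any particular agent design:

1. □(□P → P) — premise: the agent believes “P is a personal hyperstition”
2. □(□P → P) → □P — Löb’s Theorem in modal form
3. □P — modus ponens on 1 and 2: the agent believes P
4. P — if □P → P in fact holds for this agent, the belief makes P true

Löbian cooperation runs the same derivation with P standing for “both agents cooperate”, which is why making mutual cooperation a collective hyperstition is enough to bring it about.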

Silicon Morality Plays: The Hyperstition Progress Report

jayterwahl · 29 Nov 2025 18:32 UTC
38 points
7 comments · 1 min read · LW link

the void

nostalgebraist · 11 Jun 2025 3:19 UTC
396 points
107 comments · 1 min read · LW link
(nostalgebraist.tumblr.com)

Pretraining on Aligned AI Data Dramatically Reduces Misalignment—Even After Post-Training

RogerDearnaley · 19 Jan 2026 21:24 UTC
102 points
12 comments · 11 min read · LW link
(arxiv.org)

Hyperstition

Mark Russell · 30 Nov 2025 19:53 UTC
2 points
4 comments · 7 min read · LW link

Misalignment and Roleplaying: Are Misaligned LLMs Acting Out Sci-Fi Stories?

Mark Keavney · 24 Sep 2025 2:09 UTC
41 points
6 comments · 13 min read · LW link

Should AI Developers Remove Discussion of AI Misalignment from AI Training Data?

Alek Westover · 23 Oct 2025 15:12 UTC
51 points
3 comments · 9 min read · LW link

[Question] Examples of self-fulfilling prophecies in AI alignment?

Chris Lakin · 3 Mar 2025 2:45 UTC
30 points
13 comments · 1 min read · LW link

Unbendable Arm as Test Case for Religious Belief

Ivan Vendrov · 14 Apr 2025 1:57 UTC
28 points
39 comments · 2 min read · LW link
(nothinghuman.substack.com)

Misgeneralization of Fictional Training Data as a Contributor to Misalignment

Mark Keavney · 27 Aug 2025 1:01 UTC
15 points
1 comment · 2 min read · LW link

Self-fulfilling correlations

PhilGoetz · 26 Aug 2010 21:07 UTC
151 points
50 comments · 3 min read · LW link

When to join a respectability cascade

B Jacobs · 24 Sep 2024 7:54 UTC
10 points
1 comment · 2 min read · LW link
(bobjacobs.substack.com)

Omega and self-fulfilling prophecies

Richard_Kennaway · 19 Mar 2011 17:23 UTC
14 points
19 comments · 1 min read · LW link

The Parable of Predict-O-Matic

abramdemski · 15 Oct 2019 0:49 UTC
364 points
43 comments · 14 min read · LW link · 2 reviews

Fifty Shades of Self-Fulfilling Prophecy

PhilGoetz · 24 Jul 2014 0:17 UTC
38 points
87 comments · 2 min read · LW link

Self-Fulfilling Prophecies Aren’t Always About Self-Awareness

John_Maxwell · 18 Nov 2019 23:11 UTC
14 points
7 comments · 4 min read · LW link

An example of self-fulfilling spurious proofs in UDT

cousin_it · 25 Mar 2012 11:47 UTC
33 points
43 comments · 2 min read · LW link

Conditional Prediction with Zero-Sum Training Solves Self-Fulfilling Prophecies

26 May 2023 17:44 UTC
88 points
13 comments · 24 min read · LW link

The Way You Go Depends A Good Deal On Where You Want To Get: FEP minimizes surprise about actions using preferences about the future as *evidence*

Christopher King · 27 Apr 2025 21:55 UTC
10 points
5 comments · 5 min read · LW link