Simulation Hypothesis

The Simulation Hypothesis proposes that conscious beings could be immersed within an artificial universe embedded within a higher order of reality. The roots of this argument can be found throughout the history of philosophy in such works as Plato’s “Allegory of the Cave” and Descartes’ “evil demon”.

The important distinction between these and modern Simulation Arguments is the addition of proposed methods for engineering a simulated reality with computers. The modern Simulation Argument makes the case that, since a sufficiently advanced civilization could simulate many more ancestral civilizations than ever actually existed, it is more likely that we are simulated than not. It follows that the belief that there is a significant chance we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.
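
The counting step can be made explicit. As a rough sketch of Bostrom’s bookkeeping (the notation here is simplified, not his exact formulation): let $f_P$ be the fraction of human-level civilizations that reach a posthuman stage and choose to run ancestor simulations, and $\bar{N}$ the average number of such simulations each of them runs. The fraction of observers with human-type experiences who are simulated is then roughly

$$f_{\mathrm{sim}} \approx \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}.$$

Unless $f_P\,\bar{N}$ is close to zero, $f_{\mathrm{sim}}$ is close to one, which is why the argument forces a choice: either civilizations almost never reach the stage of running ancestor simulations (or almost never choose to), or most observers like us are simulated.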

John Barrow has suggested that if we are living in a computer simulation we may observe “glitches” in our programmed environment, because the level of detail is compromised to save computing power. Alternatively, the simulators may not have a full understanding of the laws of nature, in which case the simulated environment would drift away from its stable state over time. Such “glitches” could be identified by scientists scrutinizing nature with unusual methods of observation. However, Nick Bostrom argues that it is extremely likely that a simulating civilization would have computational power far surpassing what is needed to simulate an ancestral civilization in great detail. Moreover, one can argue that, because of exponential growth, it is extremely unlikely that the simulators occupy the narrow region of progress where they can already simulate an artificial reality but cannot simulate it in finer detail: either they cannot run the simulation at all, or their computational power far exceeds the required amount.
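
As a minimal numeric sketch of this last point (every number below is an assumption chosen purely for illustration, not a measurement): under exponential growth of computing power, the window between “can barely run a coarse simulation” and “can run one in vastly finer detail” is crossed in a few dozen doublings, a tiny fraction of a long technological history.

```python
import math

# Illustrative assumptions only.
doubling_time_years = 2.0   # assumed doubling time of available compute
detail_overhead = 1e6       # assumed cost factor of a "fine detail" simulation
                            # relative to a coarse one

# Years needed to grow capacity by the detail_overhead factor,
# i.e. the width of the "can simulate coarsely but not finely" window.
window_years = doubling_time_years * math.log2(detail_overhead)
print(f"Window between coarse and detailed simulation: ~{window_years:.0f} years")

# Compare with an assumed multi-million-year posthuman lifespan.
civilization_years = 5e6
print(f"Fraction of that lifespan spent inside the window: {window_years / civilization_years:.2e}")
```

With these placeholder numbers the window lasts roughly forty years out of millions, so a simulating civilization observed at a random point in its history is very unlikely to be caught exactly at that threshold.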

External links

See also

Simulated Elon Musk Lives in a Simulation
lsusr, 18 Sep 2021 7:37 UTC
60 points, 9 comments, 3 min read, LW link

The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument
1 Oct 2021 22:23 UTC
45 points, 10 comments, 7 min read, LW link

On the Impossibility of Interaction with Simulated Agents
Walkabout, 13 Jan 2022 17:50 UTC
3 points, 8 comments, 3 min read, LW link

Simulation arguments
Joe Carlsmith, 18 Feb 2022 10:45 UTC
40 points, 14 comments, 65 min read, LW link

Jeff Shainline thinks that there is too much serendipity in the physics of optical/superconducting computing, suggesting that they were part of the criteria of Cosmological Natural Selection, which could have some fairly lovecraftian implications
MakoYass, 1 Apr 2022 7:09 UTC
14 points, 3 comments, 26 min read, LW link

Beyond Astronomical Waste
Wei_Dai, 7 Jun 2018 21:04 UTC
116 points, 41 comments, 3 min read, LW link

The mathematical universe: the map that is the territory
ata, 26 Mar 2010 9:26 UTC
97 points, 123 comments, 11 min read, LW link

What would convince you you’d won the lottery?
Stuart_Armstrong, 10 Oct 2017 13:45 UTC
28 points, 11 comments, 4 min read, LW link

Principia Compat. The potential Importance of Multiverse Theory
MakoYass, 2 Feb 2016 4:22 UTC
0 points, 36 comments, 8 min read, LW link

That Alien Message
Eliezer Yudkowsky, 22 May 2008 5:55 UTC
286 points, 173 comments, 10 min read, LW link

Containing the AI… Inside a Simulated Reality
HumaneAutomation, 31 Oct 2020 16:16 UTC
1 point, 5 comments, 2 min read, LW link

Shock Level 5: Big Worlds and Modal Realism
Roko, 25 May 2010 23:19 UTC
36 points, 158 comments, 4 min read, LW link

Reality vs Virtual Reality
cleonid, 13 Mar 2009 15:17 UTC
−12 points, 8 comments, 1 min read, LW link

[Question] The Simulation Epiphany Problem
Koen.Holtman, 31 Oct 2019 22:12 UTC
15 points, 13 comments, 4 min read, LW link

The AI in a box boxes you
Stuart_Armstrong, 2 Feb 2010 10:10 UTC
158 points, 390 comments, 1 min read, LW link

Are you in a Boltzmann simulation?
Stuart_Armstrong, 13 Sep 2018 12:56 UTC
23 points, 29 comments, 3 min read, LW link

Simulation Typology and Termination Risks
avturchin, 18 May 2019 12:42 UTC
12 points, 0 comments, 1 min read, LW link (arxiv.org)

Simulations Map: what is the most probable type of the simulation in which we live?
turchin, 11 Oct 2015 5:10 UTC
10 points, 47 comments, 7 min read, LW link

[Link] Does a Simulation Really Need to Be Run?
VincentYu, 22 Jun 2011 23:28 UTC
28 points, 17 comments, 1 min read, LW link

Physicalism implies experience never dies. So what am I going to experience after it does?
Szymon Kucharski, 14 Mar 2021 14:45 UTC
−2 points, 1 comment, 30 min read, LW link

On Falsifying the Simulation Hypothesis (or Embracing its Predictions)
Lorenzo Rex, 12 Apr 2021 0:12 UTC
8 points, 17 comments, 5 min read, LW link

Simulation theology: practical aspect.
Just Learning, 5 May 2021 2:20 UTC
6 points, 6 comments, 3 min read, LW link

The Anti-Carter Basilisk
Jon Gilbert, 26 May 2021 22:56 UTC
0 points, 0 comments, 2 min read, LW link

A sufficiently paranoid non-Friendly AGI might self-modify itself to become Friendly
RomanS, 22 Sep 2021 6:29 UTC
5 points, 2 comments, 1 min read, LW link

[Question] Reverse engineering of the simulation
jan betley, 7 Feb 2022 21:36 UTC
1 point, 2 comments, 1 min read, LW link

[Question] What do you think will most probably happen to our consciousness when our simulation ends?
ArtMi, 12 Apr 2022 8:23 UTC
1 point, 6 comments, 1 min read, LW link

Negotiating Up and Down the Simulation Hierarchy: Why We Might Survive the Unaligned Singularity
David Udell, 4 May 2022 4:21 UTC
23 points, 16 comments, 2 min read, LW link

[Question] A terrifying variant of Boltzmann’s brains problem
Zeruel017, 30 May 2022 20:08 UTC
5 points, 12 comments, 4 min read, LW link

[Question] What is the most probable AI?
Zeruel017, 20 Jun 2022 23:26 UTC
−2 points, 0 comments, 3 min read, LW link