Spooky action at a distance, and the Universe as a cellular automaton
Suppose the author of a simulation wrote some code to run a cellular automaton. Suppose further that, unlike in Conway’s Game of Life, cells in this simulation could influence other cells that are not their immediate neighbours. This would be simple enough to code up, the cellular automaton could still be Turing complete, and it could perhaps even be a highly efficient computational substrate for physics.
(Suppose that this automaton, instead of consisting of squares that turn black or white each round, held a series of numbers in each cell, which change predictably, and in some logically clever way, according to the numbers in other cells. One number, for example, could determine how far away the influence of that cell extends. This, I think, would make the automaton more capable of encoding the logic of things like electromagnetic fields.)
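To make the idea concrete, here is a minimal sketch of such a non-local automaton. Everything in it, the 1-D grid, the update rule, the `(value, reach)` pair per cell, is my own illustrative assumption, not a proposal for the "right" rule; the only point is that each cell carries a number determining how far its influence extends, so a cell can affect others well beyond its immediate neighbours in a single step.

```python
def step(cells):
    """One generation of a toy non-local cellular automaton.

    cells: list of (value, reach) pairs. A cell j influences cell i
    whenever the distance between them is at most j's reach, so
    influence is not restricted to immediate neighbours.
    """
    nxt = []
    for i, (value, reach) in enumerate(cells):
        # Sum the values of every other cell whose reach covers cell i.
        total = 0
        for j, (v, r) in enumerate(cells):
            if j != i and abs(i - j) <= r:  # non-local: r may exceed 1
                total += v
        # An arbitrary deterministic update rule for the value; the
        # reach is left fixed here, but it could evolve as well.
        nxt.append(((value + total) % 10, reach))
    return nxt

# The middle cell has reach 3, so its value reaches the cells at the
# edges in a single step, which a nearest-neighbour rule cannot do.
gen0 = [(1, 1), (0, 1), (0, 1), (5, 3), (0, 1), (0, 1), (1, 1)]
gen1 = step(gen0)
```

After one step the outermost cells have already been changed by the centre cell, which is the kind of "action at a distance" the paragraph above describes.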
A physicist in the simulated Universe might be puzzled by this “spooky action at a distance”, where “cells” which are treated as particles appear to influence one another or be entangled in puzzling ways. Think Bell’s Theorem and that whole discussion.
Perhaps we are living in such a Universe, and if we could figure out the right kind of sophisticated cellular automaton, run on a computer if not with pen and paper, might physics make more progress than it does under the current paradigm of using extremely expensive machines to bash particles together?
Why do so many technophiles dislike the idea of world government?
I rarely see the concept of “world government”, or governance, or a world court or any such thing, spoken of positively by anyone. That includes technophiles and futurists who are fully cognizant of and believe in the concept of a technological singularity that needs to be controlled, “aligned”, made safe etc.
Solutions to AI safety usually focus on how the AI should be coded, and it seems to me that the idea of "cancelling" war and merely-human economics (in a sense, dropping our tools wherever humanity is not focused entirely on making a safe FAI) is a little neglected.
Of course, some of the people who focus on the mathematical/logical/code aspects of safe AI are doing a great job, and I don’t mean to disparage their work. But I am nonetheless posing this question.
I also do not (necessarily) mean to conflate world government with a communist system that ignores Hayek’s fatal conceit and thereby renders humanity less capable of building AIs, computers, and so on. Just some type of governance singleton that ensures all nukes are in safe hands, etc.
(crosspost from Hacker News)