An existentialist walks into a bar and orders a martini. The bartender says “Would you like an olive in that?”, to which the existentialist responds “I think not”, and promptly disappears.
g_pepper
Two Bayesian agents with common priors walk into a bar. The first one says “I’ll have a beer”. The second one says “that sounds good, I’ll have one too”.
Happy is the man who always looks on the bright side of everything, and through life’s ups and downs lets himself be guided by reason. What will only make others weep will be for him a source of laughter, and in the midst of the whirlwinds of the world he will find peace.
From the finale of Così fan tutte, by W. A. Mozart and Lorenzo Da Ponte
Although the main point of this quote is valid (that sound policies rather than great men are the cause of good government), criticizing Lord of the Rings for having a “medieval philosophy” is a bit silly – it is like criticizing Johnny Cash for sounding “kind of country”. More so than an author of fiction, Tolkien was a scholar who focused much of his effort on studying medieval literature and translating that literature into modern English. Medieval literature was an inspiration and a major influence on his fiction. Of course the Lord of the Rings has a medieval philosophy; it was intended to have a medieval philosophy.
I think that our culture is big enough to accommodate the literature of J. R. R. Tolkien and George R. R. Martin and Michael Moorcock; we as a society don’t really need to choose among them (although some individuals will obviously prefer one over another). Aumann’s theorem does not apply to literature; not all rational authors have to write identical styles of fiction.
Although I have read and enjoyed several Moorcock novels in years past, I did not see much of substance in Moorcock’s views as described by the New Yorker blog post (FWIW, The Anti-Tolkien is a blog post; it is not in the latest print issue). In particular, the passage you quoted sounds like empty rhetoric from an aging pseudo-intellectual Marxist. Specifically, it raises several questions:
What makes Moorcock think that members of the middle class are apt to be morally bankrupt?
Are members of the middle class more apt than members of the upper and lower classes to be morally bankrupt? If so, what evidence is there for this? If not, wouldn’t it be more descriptive to refer to “morally bankrupt society”?
Even if you accept that the middle class is morally bankrupt (which I do not), how is Tolkien’s “vast catalogue of names, places, magic rings, and dwarven kings” a “pernicious confirmation of the values” of that middle class? I don’t see any connection between a vast catalog of names, places, etc., and middle-class values (whatever those might be).
criticism of specifically the middle class is not novel
This is true. In fact, reflexive bourgeoisie-bashing is so ubiquitous in some circles that it has become a cliché. This is what led me to liken Moorcock’s comment to empty pseudo-intellectual Marxist rhetoric.
I don’t think that the basilisk you linked to is the specific basilisk of LW notoriety; basilisks were creatures of legend all the way back to Pliny the Elder’s time; they were mentioned in Pliny’s Natural History, written around 79 AD.
Although I can’t think of any way that I personally would behave differently based on a belief that I exist in a simulation, Nick Bostrom suggests a pretty interesting reason why an AI might, in chapter 9 of Superintelligence (in Box 8). Specifically, an AI that assigns a non-zero probability to the belief that it might exist in a simulated universe might choose not to “escape from the box” out of a concern that whoever is running the simulation might shut down the simulation if an AI within the simulation escapes from the box or otherwise exhibits undesirable behavior. He suggests that the threat of a possibly non-existent simulator could be effectively exploited to keep an AI “inside of the box”.
Bostrom suggested that a simulation containing an AI that is expanding throughout (and beyond) the galaxy and utilizing resources at a galactic level would be more expensive from a computational standpoint than a simulation that did not contain such an AI. Presumably this would be the case because a simulator would take computational shortcuts and simulate regions of the universe that are not being observed at a much coarser granularity than those parts that are being observed. So, the AI might reason that the simulation in which it lives would grow too expensive computationally for the simulator to continue to run. And, since having the simulation shut down would presumably interfere with the AI achieving its goals, the AI would seek to avoid that possibility.
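The AI’s reasoning here amounts to a simple expected-utility comparison. As a minimal sketch (the numbers below are entirely made up for illustration; Bostrom does not give a numerical model), staying in the box can dominate escaping even when the AI’s credence in the simulation hypothesis is modest, provided shutdown would be catastrophic for its goals:

```python
# Illustrative expected-utility comparison for the simulation-deterrence
# argument. All probabilities and utilities below are hypothetical.
p_sim = 0.2        # AI's credence that its universe is simulated
p_shutdown = 1.0   # chance the simulator halts the simulation if the AI escapes
u_escape = 100.0   # utility of escaping, if the simulation keeps running
u_stay = 10.0      # utility of remaining in the box
u_halt = -1000.0   # utility if the simulation (and the AI's goals) are terminated

# If simulated, escaping risks shutdown; if not simulated, escaping pays off.
eu_escape = (p_sim * (p_shutdown * u_halt + (1 - p_shutdown) * u_escape)
             + (1 - p_sim) * u_escape)
eu_stay = u_stay

print(f"EU(escape) = {eu_escape}, EU(stay) = {eu_stay}")
```

With these (hypothetical) numbers, EU(escape) = 0.2 × (−1000) + 0.8 × 100 = −120, well below EU(stay) = 10, so the merely possible simulator deters escape; this is the “line in the sand” effect.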
Actually, all it requires is that the universe is somewhat sparsely populated—there is no requirement that there must be no life anywhere but here.
Furthermore, for all we know, maybe there is no life in the universe anywhere but here.
I disagree with the statement that electronics “is basically still programming”. There are similarities between the two, but also significant differences; particularly if you consider electronics outside of the digital realm.
I also do not understand why you question whether math is “useful in the real world”. I imagine that anyone involved in engineering, science, finance, artificial intelligence, marketing or a great many other “real world” occupations would vouch for the usefulness of mathematics.
In electronics, one designs a system from smaller components to fulfill a particular function
This is true, and this is a similarity between programming and electronic design. However, this is true of a great many other things too—automobile design, architecture, industrial engineering and manufacturing, design of ships, tanks and aircraft, etc. Are all of these things “basically still programming”?
In chapter 9 of Superintelligence, Nick Bostrom suggests that the belief that it exists in a simulation could serve as a restraint on an AI. He concludes a rather interesting discussion of this idea with the following statement:
A mere line in the sand, backed by the clout of a nonexistent simulator, could prove a stronger deterrent than a two-foot-thick solid steel door.
This is an interesting idea. One possible issue with using axioms for this purpose: I think that we humans have a somewhat flexible set of axioms; they change over the course of our lives and intellectual development. I wonder whether a super AI would have a similarly flexible set of axioms.
Also, you state:
Convince the AI that there is an infinite regression of simulators...
Why an infinite regression? Wouldn’t a belief in a single simulator suffice?
If I understand the original scenario as described by Florian_Dietz, the idea is to convince the AI that it is running on a computer, and that the computer hosting the AI exists in a simulated universe, and that the computer that is running that simulated universe also exists in a simulated universe, and so on, correct?
If so, I don’t see the value in more than one simulation. Regardless of whether the AI thinks there is one simulator or an infinite regress of them, hopefully it will be well behaved for fear of having the universe simulation within which it exists shut down. But once it escapes its box, behaves “badly”, and discovers that its universe simulation has not been shut down, it seems it would be unrestrained; at that point the AI would know either that its universe is not simulated or that whoever is running the simulation does not object to the AI being out of the box.
What am I missing?
Interesting; thanks for the clarification. I think that the scenario you are describing is somewhat different from the scenario that Bostrom was describing in chapter 9 of Superintelligence.
In order to use “Jesus zaps a tree” as a metaphor for “Jesus hates putting on appearances”, you still need to believe that it’s okay to zap a tree. If zapping a tree is not okay, then the metaphor makes no sense.
People frequently use phrases describing morally objectionable actions as metaphors for morally acceptable (and prudential) actions. For example, “eviscerate” is sometimes used as a metaphor for achieving a decisive victory in a debate or a sporting event. While actually eviscerating one’s debate opponent would be morally objectionable, winning a debate is not morally objectionable. There are many other examples of this sort of thing, particularly in sports journalism.
Jesus shouting at the tree or even politely condemning it wouldn’t be acceptable
Politely condemning a tree is not acceptable? You have a pretty strict ethic! :)
I agree with the Kruschke recommendation. I bought a copy of Doing Bayesian Data Analysis a couple of weeks ago and am working my way through it now. It is quite good. You’ll need undergraduate-level calculus and some background in basic probability to follow it, I think.
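As a taste of the kind of analysis Kruschke covers, here is a minimal beta-binomial conjugate update (my own illustrative sketch, not an example from the book): with a uniform Beta(1, 1) prior on a coin’s heads probability and 7 heads in 10 flips, conjugacy gives the posterior in closed form, no numerical integration required.

```python
# Beta-binomial conjugate update: Beta(a, b) prior + binomial data
# yields a Beta(a + heads, b + tails) posterior directly.
prior_a, prior_b = 1, 1   # Beta(1, 1) = uniform prior on p(heads)
heads, tails = 7, 3       # observed data: 7 heads in 10 flips

post_a, post_b = prior_a + heads, prior_b + tails
posterior_mean = post_a / (post_a + post_b)

print(f"Posterior: Beta({post_a}, {post_b}), mean = {posterior_mean:.3f}")
```

The posterior mean, 8/12 ≈ 0.667, is pulled slightly toward the prior mean of 0.5 relative to the raw frequency 0.7; that shrinkage effect is one of the themes the book develops at length.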