Love example 2. Maybe there is a name for this already, but you could generalize the semiotic fallacy to arguments that appeal to any motivating idea (whether of a semiotic nature or not) that is exceptionally hard to evaluate from a consequentialist perspective. Example: in my experience, among mathematicians (at least in theoretical computer science, though I’d guess it’s the same in other areas) who attempt to justify their work, most end up appealing to the idea of unforeseen connections or uses in the future.
Hey, I’ve been an anonymous reader off and on over the years.
Seeing that there was some interest in Bostrom’s simulation argument before (http://lesswrong.com/lw/hgx/paper_on_the_simulation_argument_and_selective/), I wanted to post a link to a paper I wrote on the subject, together with the following text, but I was only able to post into my (private?) Drafts section. I’m sorry I don’t know better about where the appropriate place is for this kind of thing (if it’s welcome here at all). The paper: http://www.cs.toronto.edu/~wehr/rd/simulation_args_crit_extended_with_proofs.pdf
This is a very technical paper, which requires some (or a lot of) familiarity with Bostrom/Kulczycki’s “patched” Simulation Argument (www.simulation-argument.com/patch.pdf). I’m choosing to publish it here after experiencing Analysis’s depressing version of peer review: they rejected a shorter, more professional version of the paper based on one very positive review and one negative review, based on a superficial reading, that was almost certainly written by Kulczycki or Bostrom themselves.
The positive review (of the earlier shorter, more-professional version of the paper) does a better job of summarizing the contribution than I did, so with the permission of the reviewer I’m including an excerpt here:
“Bostrom (2003) argued that at least one of the following three claims is true: (1) the fraction of civilizations that reach a ‘post-human’ stage is approximately zero; (2) the fraction of post-human civilizations interested in running ‘significant numbers’ of simulations of their own ancestors is approximately zero; (3) the fraction of observers with human-type experiences that are simulated is approximately one.
The informal argument for this three-part disjunction is that, given what we know about the physical limits of computation, a post-human civilization would be so technologically advanced that it could run ‘hugely many’ simulations of observers very easily, should it choose to do so, so that the falsity of (1) and (2) implies the truth of (3). However, this informal argument falls short of a formal proof.
Bostrom himself saw that his attempt at a formal proof in the (2003) paper was sloppy, and he attempted to put it right in Bostrom and Kulczycki (2011). The take-home message of Sections 1 and 2 of the manuscript under review is that these (2011) reformulations of the argument are still rather sloppy. For example, the author points out (p. 6) that the main text of B&K inaccurately describes the mathematical argument in the appendix: the appendix uses an assumption much more favourable to B&K’s desired conclusion than the assumption stated in the main text. Moreover, B&K’s use of vague terms such as ‘significant number’ and ‘astronomically large factor’ creates a misleading impression. The author shows, amusingly, that the ‘significant number’ must be almost 1 million times greater than the ‘astronomically large factor’ for their argument to work (p. 9).
In Section 3, the author provides a new formulation of the simulation argument that is easily the most rigorous I have seen. This formulation deserves to be the reference point for future discussions of the argument’s epistemological consequences.”
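For readers who want the quantitative core behind the disjunction the reviewer summarizes, here is a minimal sketch of the original 2003 calculation as I recall it (my own restatement, not taken from my paper or from the review, so check it against Bostrom (2003) before relying on it). Let $f_p$ be the fraction of human-level civilizations that reach a post-human stage, $\bar{N}$ the average number of ancestor-simulations run by a post-human civilization, and $\bar{H}$ the average number of real observers who live in a civilization before it reaches that stage. Counting observers, the fraction that are simulated is
$$ f_{\mathrm{sim}} \;=\; \frac{f_p\,\bar{N}\,\bar{H}}{f_p\,\bar{N}\,\bar{H} + \bar{H}} \;=\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}. $$
If (1) and (2) are both false, then $f_p$ is not close to $0$ and $\bar{N}$ is enormous, so $f_{\mathrm{sim}}$ is close to $1$, which is (3). The problems I point out are not with this simple arithmetic but with how the informal quantifiers (“significant numbers”, “astronomically large”) get cashed out when the patched (2011) arguments are made precise.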
For example, the statement of the argument in https://wiki.lesswrong.com/wiki/Simulation_argument definitely needs to be revised.
Your note about Gödel’s theorem is confusing, or doesn’t make sense as written. There is no such thing as an inconsistent mathematical structure, assuming that by “structure” you mean the kind of object used in defining the semantics of first-order logic (which is what Tegmark means when he says “structure”, unless I’m mistaken).
The incompleteness theorems only give limitations on recursively enumerable sets of axioms.
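To spell out both points with the standard statements (from memory, so double-check before relying on them): for a structure $\mathcal{M}$ in the model-theoretic sense, $\mathrm{Th}(\mathcal{M}) = \{\varphi : \mathcal{M} \models \varphi\}$ is automatically consistent (and complete), so “inconsistent structure” is a category error; consistency is a property of theories, not structures. And the Gödel–Rosser theorem says: if $T$ is a consistent, recursively enumerable theory that interprets a modest amount of arithmetic (Robinson’s $Q$ suffices), then $T$ is incomplete. The recursive-enumerability hypothesis is essential: $\mathrm{Th}(\mathbb{N})$, true arithmetic, is complete and consistent, and it escapes the theorem only because it is not recursively enumerable. So if the “structures” in question are structures in this sense, incompleteness doesn’t restrict which ones can exist; it only restricts which theories we can effectively axiomatize.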
Other than that, this looks like a great resource for people wanting to investigate the topic for themselves.
Errors in the Bostrom/Kulczycki Simulation Arguments
I think that was B/K’s point of view as well, although in their review they fell back on the Patch 2 argument. The version of my paper they read didn’t flesh out the problems with the Patch 2 argument.
I respectfully disagree that the criticism rests entirely on the wording of that one sentence. For one thing, if I remember correctly, I counted at least six places in the prose of the Patch 1 argument that need to be corrected. Anywhere “significant number of” appears needs to be changed, for example, since depending on the settings of the parameters it can actually mean “astronomically large number of”. I think presenting the argument without parameters is misleading, and essentially propaganda.
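To make that concrete with an explicitly invented figure (the number below is mine, purely for illustration; the factor of roughly $10^{6}$ is the one the reviewer quotes from p. 9 of my paper): if the “astronomically large factor” is taken to be, say, $10^{33}$, then the “significant number” of simulations needed for the Patch 1 argument to go through is on the order of $10^{6} \times 10^{33} = 10^{39}$, which is not at all what the unquantified phrase “significant number of” suggests to a casual reader.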
Patch 2 has a similar issue (see Section 2.1), as well as (I think) another, more serious issue (Section 3.1, “Step 3”).
Ha, actually I agree with your retracted summary.
The positive reviewer agreed with you, though about an earlier version of that section. I stand by it, but admit that the informal and undetailed style clashes with the rest of the paper.
Love this. The Rationalist community hasn’t made any progress on the problem of controlling, overconfident, non-self-critical people rising to the top in any sufficiently large organization. Reading more of your posts now.
Great post. I even worry about the emphasis on FAI, as it seems to depend on friendly superintelligent AIs effectively defending us against deliberately criminal AIs. Scott Alexander speculated:
For example, it might program a virus that will infect every computer in the world, causing them to fill their empty memory with partial copies of the superintelligence, which when networked together become full copies of the superintelligence.
But way before that, we will have humans looking to get rich programming such a virus, and you better believe they won’t be using safeguards. It won’t take over every computer in the world—just the ones that aren’t defended by a more-powerful superintelligence (i.e. almost all computers) and that aren’t interacting with the internet using formally verified software. We’ll be attacked by a superintelligence running on billions of smart phones. Might be distributed initially through a compromised build of the hottest new social app for anonymous VR fucking.
You’re right.
A guy I know, who works in one of the top ML groups, is literally less worried about superintelligence than he is about getting murdered by rationalists. That’s an extreme POV. Most researchers in ML simply think that people who worry about superintelligence are uneducated cranks addled by sci-fi.
I hope everyone is aware of that perception problem.
heh, I suppose he would agree
I’m not sure either. I’m reassured that there seems to be some move away from public geekiness, like using the word “singularity”, but I suspect that should go further, e.g. replacing the paperclip maximizer with something less silly (even though, to me, it’s an adequate illustration). I suspect getting some famous “cool”/sexy non-scientist people on board would help; I keep coming back to Jon Hamm (who, judging from his cameos on great comedy shows and his role in the harrowing Black Mirror episode, has plenty of nerd inside).
If my anecdotal evidence is indicative of reality, the attitude in the ML community is that people concerned about superhuman AI should not even be engaged with seriously. Hopefully that, at least, will change soon.
For Bostrom’s simulation argument to conclude the disjunction of the two interesting propositions (our doom, or we’re sims), you need to assume there are simulation runners who are motivated to do very large numbers of ancestor simulations. The simulation runners would be ultrapowerful, probably rich, amoral history/anthropology nerds, because all the other ultrapowerful amoral beings have more interesting things to occupy themselves with. If it’s a set-it-and-forget-it simulation, that might be plausible. If the simulation requires monitoring and manual intervention, I think it’s very implausible.
If my own experience and the experiences of the people I know are indicative of the norm, then thinking about ethics, the horror that is the world at large, etc., tends to encourage depression. And depression, as you’ve realized yourself, is bad for doing good (but perhaps good for not doing bad?). I’m still working on it myself (with the help of a strong dose of antidepressants, regular exercise, consistently good sleep, etc.). Glad to hear you are on the path to finding a better balance.