I guess so. It’s an interesting idea, kind of like social cooperation problems such as recycling: if too many other people aren’t doing it, then there isn’t much point in doing it yourself. Applying it to morality is interesting. But wrong, I think.
I enjoyed taking this survey. Thanks!
I can’t wait to see the results and play with the data, if that becomes possible.
I’ve been thinking a lot about this issue (and the broader issue that this is a special case of) recently. My two cents:
Under most views, this isn’t just an ethical problem. It can be reformulated as a problem about what we ought to expect. Suppose you are John Smith. Do you anticipate different experiences depending on how far down the sequence your enemies go? This makes the problem more pressing, because while there is nothing wrong with valuing a system less and less as it gets less and less biological and more and more encrypted, there is something strange about thinking that a system is less and less… of a contributor to your expectations about the future. Perhaps this could be made to make sense, but it would take a bit of work. Alternatively, we could reject the notion of expectations and use some different model entirely. This “kicking away the ladder” approach raises worries of its own, though.
I think the problem generalizes even further, actually. Like others have said, this is basically one facet of an issue that includes terms like “dust theory” and “computationalism.”
Personally, I’m starting to seriously doubt the computationalist theory of mind I’ve held since high school. Not sure what else to believe though.
I agree. Here’s a quick brainstormed statement, just to get the ball rolling:
“This film portrays an implausible runaway unfriendly AI scenario, trivializing what is actually a serious issue. For depictions of much more plausible runaway unfriendly AI scenarios, visit [website], where the science behind these depictions is also presented.”
Yep, I think that’s an improvement. What do you think about ChristianKl’s objection to putting a link in the statement?
I’m confused. I thought SI assigns equal probabilities to every program in the infinite space, and then simpler predictions turn out to be more likely because there are many more programs that make simple predictions than there are programs that make complicated predictions.
As I understand it, SI works even better in the finite case than in the infinite case, because you don’t have to worry about infinite sums and whatnot. (Provided the domain of the finite case was something like “All the programs of length N or less” for some finite N.)
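To make the finite case concrete (this is just my own toy framing, not anything from a textbook): if the hypothesis space is all programs of length N or less, you can do perfectly ordinary Bayesian updating with a uniform prior over that finite set, and the probability of a prediction is just the fraction of still-consistent programs that make it:

\[
P(p \mid d) \;=\; \frac{\mathbf{1}[\,p \text{ is consistent with } d\,]}{\#\{\,q \in H : q \text{ is consistent with } d\,\}},
\qquad
P(\text{next is } x \mid d) \;=\; \frac{\#\{\,q \in H : q \text{ is consistent with } d \text{ and predicts } x\,\}}{\#\{\,q \in H : q \text{ is consistent with } d\,\}},
\]

where \(H\) is the set of programs of length at most \(N\) and \(d\) is the data seen so far. No infinite sums anywhere.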
Yes, but every other model of universal computation gives the same results, except for a single constant (the cost of translating between the universal models).
I don’t see how this solves the problem. The other models of universal computation give different results, though they are similar in certain respects. For particular problems, like “Will the gravitational constant change tomorrow?” it doesn’t seem to help.
Consider the hypotheses A = [Laws of physics] and B = [Laws of physics + 10% increase in gravitational constant tomorrow]
Every model of computation that we have used so far makes A simpler than B. But there are infinitely many models that make B simpler than A. Why should we prefer A to B?
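For reference, the constant-factor claim people make is the invariance theorem (my paraphrase): for any two universal machines \(U\) and \(V\) there is a constant \(c_{UV}\), depending on the machines but not on the data, with

\[
\bigl|\,K_U(x) - K_V(x)\,\bigr| \;\le\; c_{UV} \quad \text{for all } x.
\]

My worry is precisely that nothing bounds \(c_{UV}\) over the space of all universal machines, so for any particular pair of hypotheses like A and B, there are machines on which B comes out shorter than A.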
Maybe I’m just missing something. Various smart people have said that the relativity of complexity isn’t a problem because of the constant-factor agreement between models/languages. However, various other smart people who have thought about it a lot have said the opposite, so I don’t think I’m missing something obvious.
Maybe the answer has to do with a measure defined over the space of models? This would be nice, but I worry about just pushing the problem up a level.
That’s what I used to think about how SI worked. Then I read this explanation which seems to make sense to me, and seems to justify my view of things.
Specifically, this explanation shows how Solomonoff Induction doesn’t need to assume a prior that some hypotheses are more likely than others; it can weight them all equally, and then prove that simpler hypotheses have more “copies” and thus simpler predictions are more likely.
In the infinite case there is still the problem of how to get an infinite number of equally-weighted hypotheses to sum to probability 1. But this is what Measure Theory does, I believe. (I’m not a mathematician, but this is what I’ve read and been told.) So it isn’t a problem, any more than math is a problem.
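(As I understand it, the standard construction here requires the programs to form a prefix-free set; the Kraft inequality then guarantees that the weights sum to at most 1, and you normalize:

\[
\sum_{p \in \mathcal{P}} 2^{-|p|} \;\le\; 1 \qquad \text{for any prefix-free set of programs } \mathcal{P},
\]

so a prior proportional to \(2^{-|p|}\) is well-defined even over infinitely many programs. Again, I’m not a mathematician, so take this as my best reconstruction.)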
But if the space was finite, as I said, then Solomonoff Induction wouldn’t even have that problem!
So, I no longer think I’m confused. I think that your understanding of SI portrays it as more arbitrary than it actually is. SI isn’t just a weighting by simplicity! It is a proof that weighting by simplicity is justified, given certain premises! (namely, that the space of hypotheses is the space of computable programs in a certain language, and that they are all equally likely, modulo evidence.)
The problem being discussed is the relativity of complexity. So long as anything can be made out to be more complicated than anything else by an appropriate choice of language, it seems that Solomonoff Induction will be arbitrary, and we won’t be justified in thinking that it is accurate.
Yes, one universal prior will differ from another by just a finite number of terms. But there is no upper bound on how large this finite number can be. So we can’t make any claims about how likely specific predictions are, without arbitrarily ruling out infinite sets of languages/models. So the problem remains.
As you say, A is to be preferred to programs longer than Z. But there is no upper bound on how long Z might be. So any particular program—for example, B—is such that we have no reason to say that it is more or less likely than A. So it seems we have failed to find a full justification for why we should prefer A to B.
Unless, as I said, we start talking about the space of all possible languages/models. But as I said, this threatens to just push the problem up a level.
Huh. Okay, thanks for the info. This is troubling, because I have long held out hope that SI would not turn out to be arbitrary. Could you direct me to where I can learn more about this arbitrariness in measure theory?
Nice project! Good luck!
As I currently understand it, your plan is to write an essay arguing that humanity should devote lots of resources to figuring out how to steer the far future? In other words, “We ought to steer the near future into a place where we have a better idea of where to steer the far future, and then steer the far future in that direction.”
Your argument would have to center around the lemma that we currently don’t know what we want, or more poetically that our power far outstrips our wisdom, so that we ought to trade off the one for the other. I very much agree with this, though I would be interested to see how you argue for it. Keep us posted!
As for your questions, I’m afraid I don’t have much help to offer. If you haven’t already, you should take a look at surveys of the population ethics, meta-ethics, and normative ethics literature. And I don’t understand your second question; could you elaborate?
Sorry, no links—I’m an outsider to current meta-ethics too. But I’m sure people have thought about e.g. the question “How am I supposed to go about figuring out what is good / what I value?” Rawls, for example, famously introduced the notion of “Reflective Equilibrium,” which is a good first pass at the problem, I’d say.
Imagine that I know what I want very well, but do not possess a “blank” AGI agent (or “blank genie” as such things were previously described) into which my True Wishes can be inserted to make them come true. What other modes of implementation might be open to me for implementing my True Wishes?
It sounds like you are asking how we could get our goals accomplished without AI. The answer to that is “The same way people have accomplished their goals since the dawn of time—through hard work, smart economic decisions, political maneuvering, and warfare.” If you are asking what the most effective ways for us to implement our True Wishes are, after AGI, then… it depends on your Wishes and on your capabilities as a person, but I imagine it would have to do with influencing the course of society at large, perhaps simply in your home country or perhaps all around the world. (If you don’t care so much what other people do, so long as you have a nice life, then the problem is much simpler. I’m assuming you have grand ideas about the rest of the world.)
Making lots of money is a really good first step, given capitalism and stable property rights. Thanks to increasing democracy and decreasing warfare in the world, getting many people to listen to your ideas is important. In fact, if you know what your True Wishes are, then you are probably going to be pretty good at convincing other people to follow you, since most people aren’t even close to that level of self-awareness, and since people’s True Wishes probably overlap a lot.
Edit: P.S. If you haven’t read Nick Bostrom’s stuff, you definitely should. He has said quite a bit on how we should be steering the far future. Since you are on LW it is highly likely that you have already done this, but I might as well say it just in case.
True, we ought to proactively avoid such cults. My suggestion was merely that, in the event that you figure out what your True Wishes are, persuading other people to follow them is an effective way to achieve them. It is hard to get something accomplished by yourself. Since your True Wishes involve not being a fascist dictator, you will have to find a balance between not telling anyone what you think and starting a death cult.
Perhaps I shouldn’t have said “convincing other people to follow you” but rather “convincing other people to work towards your True Wishes.”
I don’t understand. Game of Life models a truly deterministic universe. Game of Life could be modified with an additional rule that deleted all cells in a structure that forms the word “Hello world.” This would be faster-than-light interaction because the state of the grid at one point would influence the state of the grid at a distant point, in one time step. But this would still model a deterministic universe.
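A minimal sketch of what I mean, in Python (toy code I am making up purely for illustration; the extra rule here clears a fixed distant region whenever a designated trigger cell is alive, rather than detecting the literal words “Hello world,” but the point is the same: the rule is deterministic and computable, yet it lets the state at one point affect a far-away point in a single time step):

```python
import numpy as np

def life_step(grid):
    """One standard Conway's Game of Life step (a purely local rule)."""
    # Count the eight neighbors of every cell by summing shifted copies.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

def modified_step(grid, trigger=(0, 0), target=(slice(40, 50), slice(40, 50))):
    """Game of Life plus one made-up nonlocal rule: if the trigger cell is
    alive, a distant target region is wiped clean in the same time step."""
    new = life_step(grid)
    if grid[trigger] == 1:   # the state HERE ...
        new[target] = 0      # ... instantly affects the cells over THERE
    return new

# Still completely deterministic: the same initial grid always yields
# the same history, nonlocal rule and all.
grid = np.zeros((60, 60), dtype=int)
grid[0, 0] = 1               # the trigger cell is alive
grid[42:45, 42:45] = 1       # some live cells in the distant target region
print(modified_step(grid)[40:50, 40:50].sum())   # -> 0: wiped nonlocally
```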
Maybe we should revise downward our probability that we are living in a universe that includes True Random numbers.
The speed of light in our universe is 1 Planck distance in 1 Planck time. Hence the connection between the speed of light and the idea that all interaction is local.
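To spell that out: the Planck length and Planck time are built from the same constants, so their ratio is exactly \(c\) by construction:

\[
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}}, \qquad
t_P = \sqrt{\frac{\hbar G}{c^{5}}}, \qquad
\frac{\ell_P}{t_P} = \sqrt{\frac{\hbar G / c^{3}}{\hbar G / c^{5}}} = c.
\]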
The closely analogous speed in Game of Life would be 1 grid square in 1 time step.
If Game of Life can be modified as described to allow FTL, nonlocal interactions, then so can the rules of our universe. At least, the burden of proof is on you to show why such modifications aren’t legitimate somehow.
Not quite. In the toy universe in which the Game of Life exists there is no light, no speed of light, and no limit on faster-than-light interaction.
So you agree that it is possible for a truly deterministic universe to have nonlocal interaction. I think it is a short step from there to my conclusion, namely that it is possible for a truly deterministic universe to have faster-than-light interaction.
That may be true, but I don’t see how it affects the discussion between ThisSpaceAvailable and me.
I could easily change my example so that it involves a single universe rather than a collection. And if we can’t talk about cause and effect in the game of life universe, we can’t talk about cause and effect in the real universe either. (Unless you can find some relevant distinction between the two, and thus show my analogy to be faulty.)
Do you think you understand what I am trying to do here? Because it seems to me that you are just being difficult. I honestly don’t understand what the problem is. It might be my fault, not yours. EDIT: So, I’ll do what I can to understand you also. Right now I’m thinking that you must have a different understanding of what the space of possible universes looks like.
Huh? The fact that you can make up whatever rules you want for a toy mathematical abstraction does NOT imply that you can do the same for our physical universe.
Watch me do the same for our physical universe:
Take the laws of physics, whatever they turn out to be, and add this new law:
—Fix a simultaneity plane if necessary.
—Pick two distant particles.
—Decree that every second, all the matter in a 1-meter radius around each particle is swapped with the matter in a 1-meter radius around the other particle. As if someone paused the universe, cut out the relevant sections, and switched them.
These are silly rules, but they are consistent, computable, etc. The modified Laws of Physics that would result would be no less possible than our current Laws of Physics, though they would be much less probable.
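If it helps to see that such a rule really is computable and consistent, here is the same idea on a crude discretized toy grid (again, code I am inventing just for illustration; real physics is obviously not a numpy array, and the centers and radius are made-up parameters):

```python
import numpy as np

def swap_step(state, evolve, center_a=(10, 10), center_b=(90, 90), r=5):
    """Apply the ordinary update rule 'evolve', then deterministically swap
    the contents of two far-apart neighborhoods of radius r, mimicking the
    made-up 'swap the two 1-meter regions every second' law."""
    new = evolve(state)
    (ya, xa), (yb, xb) = center_a, center_b
    block_a = new[ya - r:ya + r + 1, xa - r:xa + r + 1].copy()
    block_b = new[yb - r:yb + r + 1, xb - r:xb + r + 1].copy()
    new[ya - r:ya + r + 1, xa - r:xa + r + 1] = block_b
    new[yb - r:yb + r + 1, xb - r:xb + r + 1] = block_a
    return new

# Deterministic: the same state and the same 'evolve' always give the same
# successor, yet two distant regions trade contents in a single step.
state = np.zeros((100, 100))
state[10, 10] = 7.0
print(swap_step(state, lambda s: s.copy())[90, 90])   # -> 7.0
```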
Okay. So, this is what I think you are thinking:
Not just any old Laws are possible. Even consistent, computable Laws can be impossible. (I’m afraid I don’t know why you think this, though. Perhaps you think there is only one possible world and it is not a Level IV multiverse, or perhaps you think that all possible worlds conform to the laws of physics as some of us currently understand them.)
I’m thinking that universes which are consistent & computable are possible. After all, it seems to me that Solomonoff Induction says this.
Note that I am distinguishing between possibility and probability. I’m not claiming that my modified laws are probable, merely that they are metaphysically possible.
If you have problems with the term “metaphysical possibility” then I can instead speak in multiverse terms: I am claiming that all computable, consistent worlds exist somewhere in the multiverse. If you don’t like that either, then I’d like to know how you think about possibility.
Note: I intend to edit this later to add links. I won’t change the text. EDIT: I added the links.
I don’t think Solomonoff Induction solves any of those three things. I really hope it does, and I can see how it kinda goes half of the way there to solving them, but I just don’t see it going all the way yet. (Mostly I’m concerned with #1. The other two I’m less sure about, but they are also less important.)
I don’t know why the philosophical community seems to be ignoring Solomonoff Induction etc. though. It does seem relevant. Maybe the philosophers are just more cynical than we are about Solomonoff Induction’s chances of eventually being able to solve 1, 2, and 3.