Eliezer, this post seems to me to reinforce, not weaken, a “God to rule us all” image. Oh, and among the various clues that might indicate to me that someone would make a good choice with power, the ability to recreate that power from scratch does not seem a particularly strong clue.
What is the point of trying to figure out what your friendly AI will choose in each standard difficult moral choice situation, if in each case the answer will be “how dare you disagree with it since it is so much smarter and more moral than you?” If the point is that your design of this AI will depend on how well various proposed designs agree with your moral intuitions in specific cases, well then the rest of us have great cause to be concerned about how much we trust your specific intuitions.
James is right; you only need one moment of “weakness” to approve a protection against all future moments of weakness, so it is not clear there is an asymmetric problem here.
The hard question is: whom do you trust to remove your choices, and are they justified in doing so even if you don't trust them to?
Honestly, almost everything the ordinary person thinks economists think is wrong. Which is what makes teaching intro to econ such a challenge. The main message here is to realize you don’t know nearly as much as you might think about what other groups out there think, especially marginalized and colorful groups. Doubt everything you think you know about the beliefs of satanists, theologians, pedophiles, free-lovers, marxists, mobsters, futurists, UFO folk, vegans, and yes economists.
But how much has your intuitive revulsion at your dependence on others, your inability to do everything by yourself, biased your beliefs about what options you are likely to have? If wishes were horses, you know. It is not clear what problems you can really blame on each of us not knowing everything we all know; to answer that you'd have to be clearer on what counterfactuals you are considering.
Marcello, I won’t say any particular possible scenario isn’t worth thinking about; the issue is just its relative importance.
Carl, yes of course singletons are not very unlikely. I don’t think I said the other claim you attribute to me.
Why shouldn’t we focus on working out our preferences in more detail for the scenarios we think most likely? If I think it rather unlikely that I’ll have a genie who can grant three wishes, why should I work hard to figure out what those wishes would be? If we disagree about what scenarios are how likely, we will of course disagree about where preferences should be elaborated in the most detail.
Wei, yes I meant “unlikely.” Bo, you and I have very different ideas of what “logical” means. V.G., I hope you will comment more.
Eliezer, I’d advise no sudden moves; think very carefully before doing anything. I don’t know what I’d think after thinking carefully, as otherwise I wouldn’t need to do it. Are you sure there isn’t some way to delay thinking on your problem until after it appears? Having to have an answer now, when it seems an unlikely problem, is very expensive.
Eliezer, I haven’t meant to express any dissatisfaction with your plans to use a ring of power. And I agree that someone should be working on such plans even if the chances of it happening are rather small. So I approve of your working on such plans. My objection is only that if enough people overestimate the chance of such a scenario, it will divert too much attention from other important scenarios. I similarly think global warming is real, worthy of real attention, but that it diverts too much attention from other future issues.
The one ring of power sits before us on a pedestal; around it stand a dozen folks of all races. I believe that whoever grabs the ring first becomes invincible, all-powerful. If I believe we cannot make a deal, that someone is about to grab it, then I have to ask myself whether I would wield such power better than whoever I guess will grab it if I do not. If I think I’d do a better job, yes, I grab it. And I’d accept that others might consider that an act of war against them; thinking that way, they may well kill me before I get to the ring.
With the ring, the first thing I do then is think very very carefully about what to do next. Most likely the first task is deciding who to get advice from. And then I listen to that advice.
Yes, this is a very dramatic story, one whose likelihood we are therefore biased to overestimate.
I don’t recall exactly where, but I’m pretty sure I’ve already admitted on this blog within the last month that I’d “grab the ring.”
I find it suspicious that people’s preferences over population, lifespan, standard of living, and diversity seem to be “kinked” near their familiar world. A world with 1% of the population, standard of living, lifespan, or diversity of their own world seems to most a terrible travesty, almost a horror, while a world with 100 times as much of one of these factors seems to them at most a small gain, hardly worth mentioning. I suspect a serious status quo bias.
Carl is right; this is a minefield in terms of misleading intuitions. Also, there is already a substantial philosophy literature dealing with it; best to start with what they’ve learned.
Eliezer, it seems you are just expressing the usual intuition against the “repugnant conclusion”, that as long as the universe has a lot more creatures than are on Earth now, having even more creatures can’t be very important relative to each one’s quality of life.
But in technical terms if you can talk about how much of a mind exists, and can promote more of one kind of mind relative to another, then you can talk about how much they all exist, and can want to promote more minds existing to a larger degree.
I still see no adequate answer to the question of how you can change P(A|B) if you can’t change P(A) or P(B). If every possible mind exists somewhere, and if all that matters about a mind is that it exists somewhere, then no actions make any difference to what matters.
Eliezer, our data only show that the universe looks pretty flat, not that it is exactly flat. And it could be finite and exactly flat with a non-trivial topology. On whether all babies are duplicated in MWI, it seems to depend on exactly what part of the local physical state is required to be the same.
The data you point to only seem to suggest the universe is large; how do they also suggest it “is large relative to the space of physical possibilities”? The likelihood ratio seems pretty close to one as far as I can see.
With steven, I don’t see how, on your account, any of your actions can in fact affect the “proportion of my future selves to lead eudaimonic existences”. If people in your past couldn’t affect the total chance of your existing, how is it that you can affect the total chance of any particular future you existing? And how can there be a differing relative chance if the total chances all stay constant?
Eliezer, well written! :)
Grant, yes.
Burger, I think you overestimate the effect of agreeing to be an organ donor.
In a foom that took two years, if the AI was visible after one year, that might give the world a year to destroy it.
I’m having trouble distinguishing problems you think the friendly AI will have to answer from problems you think you will have to answer to build a friendly AI. Surely you don’t want to have to figure out answers for every hard moral question just to build it, or why bother to build it? So why is this problem a problem you will have to figure out, vs. a problem it would figure out?