Unless your situation is far from typical, your probability of death within a year at age 42 is far less than 1%.
This sounds like Robin Hanson’s idea of the future. Eliezer would probably agree that in theory this would happen, except that he expects one superintelligent AI to take over everything and impose its values on the entire future of everything. If Eliezer’s future is definitely going to happen, then even if there is no truly ideal set of values, we would still have to make sure that the values that are going to be imposed on everything are at least somewhat acceptable.
Ok. My link was also for the USA and you are correct that there would be differences in other countries.
A common one that I see works like this: first person holds position A. A second person points out fact B which provides evidence against position A. The first person responds, “I am going to adjust my position to position C: namely that both A and B are true. B is evidence for C, so your argument is now evidence for my position.” Continue as needed.
Example:
First person: The world was created. Second person: Living things evolved, which makes it less likely that things were created than if they had just appeared from nothing. First person: The world was created through evolution. Facts implying evolution are evidence for this fact, so your argument now supports my position.
Continuing in this way allows the first person not only to maintain his original position, even if modified, but also to say that all possible evidence supports it.
(The actual resolution is that even if the modified position is supported by the evidence at issue, it is intrinsically less likely than the original position, since the conjunction requires two things to be true; following this process therefore results in holding more and more unlikely positions.)
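As a rough numerical sketch (the priors and likelihoods below are made-up assumptions chosen only for illustration, not anything from the original exchange), the point can be made concrete: updating on B can raise the probability of the conjunction C = A-and-B relative to C's own prior, while C still starts out, and remains, less probable than A originally was.

```python
# Illustrative numbers only; these priors and likelihoods are assumptions,
# not measurements of any real debate.

# Prior beliefs before observing fact B.
p_a = 0.5               # prior probability of the original position A
p_b_given_a = 0.1       # probability of observing B if A is true
p_b_given_not_a = 0.9   # probability of observing B if A is false

# Probability of observing B overall (law of total probability).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior of A after observing B (Bayes' theorem): B is evidence against A.
p_a_given_b = p_b_given_a * p_a / p_b

# The retreat position C is the conjunction "A and B".
# Its prior can never exceed the prior of A alone.
p_c_prior = p_b_given_a * p_a    # P(A and B) = P(B | A) * P(A)
p_c_given_b = p_a_given_b        # once B is observed, P(A and B | B) = P(A | B)

print(f"P(A) prior: {p_a:.2f}")            # 0.50
print(f"P(A | B):   {p_a_given_b:.2f}")    # 0.10 -- A is less likely after B
print(f"P(C) prior: {p_c_prior:.2f}")      # 0.05 -- C starts out less likely than A
print(f"P(C | B):   {p_c_given_b:.2f}")    # 0.10 -- B does raise C above C's own prior
```

On these made-up numbers, B doubles the probability of the conjunction (0.05 to 0.10), so the first person can truthfully say the evidence "supports" the retreat position, yet that position is still far less likely than the original position was before the argument started.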
You can pretty easily think of “apocalyptic” scenarios in which Zoltan would end up getting elected in a fairly normal way. Picking a president at random from the adult population would require even more improbable events.
For many people, 32 karma would also be sufficient benefit to justify the investment made in the comment.
That isn’t really fully general because not everything is evidence in favor of your conclusion. Some things are evidence against it.
I don’t think this would be helpful, basically for the reason Lumifer said. In terms of how I vote personally, if I consider a comment unproductive, being longer increases the probability that I will downvote, since it wastes more of my time.
I think this is probably true, and I have seen cases where e.g. Eliezer is highly upvoted for a certain comment while some other person gets little or no karma for basically the same insight in a different case.
However, it also seems to me that their long comments do tend to be especially insightful in fact.
I tried to register there just now but the email which is supposed to contain the link to verify my email is empty (no link). What can I do about it?
Caricatures such as describing people who disagree with you as saying “let’s bring back slavery” and supporting “burning down the whole Middle East” are not productive in political discussions.
I actually meant it more generally, in the sense of highly unusual situations. So gjm’s suggested path would count.
But more straightforwardly apocalyptic situations could also work. So a whole bunch of people die, then those remaining become concerned about existential risk, given what just happened, and this leads people to become convinced that electing Zoltan would be a good idea. This is more likely than a virus that kills non-Zoltan supporters.
If you’re really honest about your willingness to be rational, it seems like this could be kind of depressing.
Human beings are not very willing to be rational, and that includes those of us on Less Wrong.
Ramez Naam discusses it here: http://rameznaam.com/2015/05/12/the-singularity-is-further-than-it-appears/
I find the discussion of corporations as superintelligences somewhat persuasive. I understand why Eliezer and others do not consider them superintelligences, but it seems to me a question of degree; they could become self-improving in more and more respects and at no point would I expect a singularity or a world-takeover.
I also think the argument from diminishing returns is pretty reasonable: http://www.sphere-engineering.com/blog/the-singularity-is-not-coming.html
If you are “procrastinate-y,” you wouldn’t be able to survive this state yourself. Following a set schedule every moment for the rest of your life is very, very difficult, and it is unlikely that you would be able to do it, so you would soon be dead yourself in this state.
That is not a useful rebuttal if in fact it is impossible to guarantee that your AGI will not be a sociopath no matter how you program it.
Eliezer’s position generally is that we should make sure everything is set in advance. Jacob_cannell seems to be basically saying that much of an AGI’s behavior is going to be determined by its education, environment, and history, much as is the case with human beings now. If this is the case it is unlikely there is any way to guarantee a good outcome, but there are ways to make that outcome more likely.
If the other player is choosing randomly between two numbers, you will have a 50% chance of guessing his choice correctly with any strategy whatsoever. It doesn’t matter whether your strategy is random or not; you can choose the first number every time and you will still have exactly a 50% chance of getting it right.
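A quick simulation makes the point (this snippet and its parameters are just an illustrative sketch, not something from the original exchange): against an opponent who picks uniformly at random between two numbers, a fixed guess and a random guess both converge to a 50% hit rate.

```python
import random

# Hypothetical illustration: the opponent picks uniformly at random
# between two numbers, and we compare two guessing strategies.
TRIALS = 100_000
choices = (1, 2)

fixed_hits = 0   # strategy 1: always guess the first number
random_hits = 0  # strategy 2: guess at random each round

for _ in range(TRIALS):
    opponent = random.choice(choices)
    if opponent == choices[0]:
        fixed_hits += 1
    if opponent == random.choice(choices):
        random_hits += 1

print(f"Always guess {choices[0]}: {fixed_hits / TRIALS:.3f}")
print(f"Guess at random:  {random_hits / TRIALS:.3f}")
# Both hit rates come out near 0.5; against a uniformly random opponent,
# no strategy does better or worse in expectation.
```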
You are assuming that human beings are much more altruistic than they actually are. If your wife has the chance of leaving you and having a much better life where you will never hear from her again, you will not be sad if she does not take the chance.
I have been a Less Wrong user with an anonymous account since the Overcoming Bias days. I decided to create this new account using my real name.