Agreed. I think the rational action in this scenario depends on one’s goal, and there are several different goals you could reasonably choose here.
I also think I should have set a higher value for my 90% confidence interval on the number of people who would cooperate, because it’s quite possible that many more people than I expected would choose goals for this other than ‘winning’.
Correct, just like people trying to ‘win’ a single-iteration prisoner’s dilemma would defect.
I’m not claiming it’s the morally correct option or anything, just that it’s the correct strategy if your goal is to win.
Thumbs up to Benito for having an interest in these topics at that age. Rolf, why the rant against him? We should be encouraging young people who are interested in rationality and Bayesian probability.
Yes! It made me very happy to read “We should use this lack of replication to update our beliefs”
I cannot speak for all banks’ policies, but that isn’t how the ‘overdraft protection’ on my account works. How mine (actually a credit union, maybe that’s the difference) works is:
Without it, if I were to write a check with insufficient funds, I would get charged a large fee. But with overdraft protection, money is transferred from my savings account to checking to cover it, for free, so I avoid the fee. Essentially it lets me use the savings account as a safety net to avoid the charges.
This ‘protection’ has in fact saved me in a couple of instances.
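To sketch what I mean, here is a rough illustration of the mechanism in code. This is not my credit union’s actual rules; the $35 fee and the example balances are made-up numbers, purely to show the difference the protection makes.

```python
# Rough sketch of how this kind of overdraft protection behaves.
# The $35 fee and the balances below are made-up numbers, not any
# particular institution's actual terms.

OVERDRAFT_FEE = 35.00

def write_check(checking, savings, amount, protection_enabled):
    """Return updated (checking, savings) balances after a check clears."""
    if checking >= amount:
        return checking - amount, savings
    shortfall = amount - checking
    if protection_enabled and savings >= shortfall:
        # Transfer just enough from savings to cover the check, no fee.
        return 0.0, savings - shortfall
    # Without protection (or without enough savings), the fee is charged.
    return checking - amount - OVERDRAFT_FEE, savings

# Example: a $120 check against $100 in checking and $500 in savings.
print(write_check(100.0, 500.0, 120.0, protection_enabled=True))   # (0.0, 480.0)
print(write_check(100.0, 500.0, 120.0, protection_enabled=False))  # (-55.0, 500.0)
```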
“Because if there isn’t, they’ll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that’d be a shame, wouldn’t it?”
I would much rather see someone dismiss the dangers of AI than misrepresent them by having a movie in which Johnny Depp plays “a seemingly megalomaniacal AI researcher”. This gives the impression that what we should worry about is a “mad scientist” type who creates an “evil” AI that takes over the world. Eliezer’s posts do a great job of explaining the actual dangers of unfriendly AI, which are more along the lines of “the AI neither loves you, nor hates you, but you are composed of matter it can use for other things”. That is, if we create a powerful AI (or an AI who creates an AI who creates an AI who creates a powerful AI) whose values and morals do not align with what we humans would “want”, it will probably result in something terrible. (And not even in a way that provides us the silver lining of “well, the AIs wiped out humanity, but at least the AI civilization is highly advanced and interesting!” More like: now the entire planet Earth is grey goo/paperclips/whatever.) Or even just the danger of us biological humans losing relevance in a world with superintelligent entities.
While I would love to see a great, well-done, well-thought-out movie about transhumanism, it seems pretty likely that this movie is just going to make me angry or annoyed. I really hope I am wrong and that this movie is actually great.
We exist. Therefore strong AI is possible, in the sense that if you were to exactly replicate all of the features of a human, you would have created a strong AI (unless some form of dualism holds and you need whatever a ‘soul’ is from some ‘higher reality’ to become conscious).
What things might make strong AI really, really hard, though not impossible?
Maybe a neuron is actually far more complicated than we currently think, so the problem of making an AI is much harder than it looks. And so on.
I look forward to reading your comparison! Hopefully it will also let me know if it’s worth watching the movie.
When will we get to see the results of the LW survey?
5% chance of human level AI this year seems extremely high to me. What are you basing that on?
If I replicate the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an “AI”?
If I make something very very similar, but not identical to the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an “AI?”
It’s a terminology discussion at this point, I think.
In my original reply my intent was “provided that there are no souls/inputs from outside the universe required to make a functioning human, then we are able to create an AI by building something functionally equivalent to a human, and therefore strong AI is possible”.
I’m interested in this as well. Can you send us a link to the research you found linking potassium supplements to migraines? Thanks!
I think that your position on destructive uploads doesn’t make sense, and you did a great job of showing why with your thought experiment.
The fact that you can transition yourself to the machine gradually over time, still consider it ‘you’, and can’t actually tell at what specific line you crossed in order to become a ‘machine’, means that your original state (human brain) and final state (upload) are essentially the same.
Witten is one of the greatest physicists alive, if not the greatest. He is the one who unified the various string theories into M-theory. He is also the only physicist to receive a Fields Medal.
“Personally, I don’t know the degree of likelihood of Knox’s leaving no single piece of physical evidence when someone else left all kinds of traces. I do know the odds that I personally would change my story if police were investigating me for a murder in which I had no part: zero.”
Maybe it’s true that you would never change your story to the police if you were being investigated for murder (can you be sure? has this actually happened to you?). But even so, some people are innocent and yet change their story. (Also, many ‘confessions’ coerced out of people through interrogations are false.)
Anyway, you can probably take the story-changing as weak evidence of guilt. However, compared to actual physical DNA evidence at the crime scene, which is STRONG evidence, this is nothing. Let’s say you started out with a prior probability X that Knox is guilty (based on her association with the victim), and then updated it to be somewhat higher due to the story-changing. But then, when you consider the physical DNA evidence, you have to MASSIVELY reduce the probability. The physical DNA evidence is orders of magnitude more important than speculative psychological evidence about phone call lengths, story changing, and things like that.
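To make this concrete, here is a toy Bayesian update in odds form. The prior and the likelihood ratios are made-up numbers for illustration only, not estimates about the actual case; the point is just that a likelihood ratio far below 1 from strong physical evidence swamps a small likelihood ratio from weak psychological evidence.

```python
# Toy Bayesian update in odds form. All numbers are invented for
# illustration only; they are not estimates about the actual case.

def update_prob(prior_prob, likelihood_ratios):
    """Multiply prior odds by each piece of evidence's likelihood ratio,
    then convert back to a probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical prior from mere association with the victim.
prior = 0.05

# Weak psychological evidence (story changing): small likelihood ratio (2).
# Strong physical evidence (no DNA at the scene): likelihood ratio far below 1 (0.01).
posterior = update_prob(prior, likelihood_ratios=[2.0, 0.01])

print(f"Posterior probability of guilt: {posterior:.4f}")  # roughly 0.001
```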
After correctly updating on all the evidence, you conclude that Guede has a very high probability of guilt, due to his DNA actually being at the crime scene, and Knox a very low one.
Given that we have a strong, scientific reason to believe that Guede was there, and Knox was not, you should convict Guede and acquit Knox. (Unless there is strong evidence that Knox had conspired to have Guede do the killing for her, which there is not).
Given that the murder can be fully explained by Guede’s guilt, and that there is no strong evidence that Knox was also involved, there is no good reason to suspect Knox any more.
I got around to watching Her this weekend, and I must say: That movie is fantastic. One of the best movies I’ve ever watched. It both excels as a movie about relationships, as well as a movie about AI. You could easily watch it with someone who had no experience with LessWrong, or understanding of AI, and use it as an introduction to discussing many topics.
While the movie does not really tackle AI friendliness, it does bring up many relevant topics, such as:
Intelligence explosion. AIs getting smarter over a relatively short time, as well as the massive difference in timescale between how fast a physical human can think and how fast an AI can think.
What it means to be a person. If you were successful in creating a friendly or close to friendly AI that was very similar to a human, would it be a person? This movie would influence people to answer ‘yes’ to that question.
Finally, the contrast between this movie and other AI movies like Terminator, where AIs are killer robots at war with humanity, could lead to discussions about friendly AI. Why is the AI in Her different from the Terminators? Why are they both different from a paperclip maximizer? What do we have to do to get something more like the AI in Her? How can we do even better than that? Should we make an AI that is like a person, or not?
I highly recommend this movie to every LessWrong reader. And to everyone else as well, I hope that it will open up some people’s minds.
If you want a better ratio of enjoyment to money spent, take up any good card game other than Magic: The Gathering. There are plenty of other good games out there; in fact, there are games that cost 10% as much as Magic and are also better. (I would submit Hearthstone and Netrunner as two excellent examples.)
Poverty is another one; but it’s not like we know of a specific technological innovation that would solve poverty, if only someone would develop it.
Molecular nanotechnology? If we can replicate any item at low cost, this could eventually eliminate poverty, once all the basic items people need in order to live a reasonable life are no longer scarce goods and are incredibly cheap to produce.
In summary, outside of the medical field, I don’t see any conceivable realistic technological innovation that would be as transformative as the flush toilet, vaccinations, birth control, telephones, cars and airplanes. We might have exhausted the low-hanging fruits in our desires.
I would say:
AI (non-conscious AI), replacing various service jobs and labor jobs, freeing up humans’ time.
Fusion power, providing much cheaper energy, which can then also be used to power electric cars.
Anti-agathics, providing long life or even conquering death. In addition to the individual benefits, allowing people to be productive for longer helps the economy. (For this, you need to keep people HEALTHY for longer, not just surviving longer, of course.)
Virtual environments, reducing the need for transportation, offices, etc. If most people can eventually work from home and still achieve the same results as working together in an office, you save on infrastructure costs (fewer people driving the roads), energy costs, and time, and possibly improve happiness.
Genetic engineering: cure diseases, enhance intelligence, etc.
There are more. And those are just some of the ones we have already thought of; there are probably plenty more that we haven’t thought of, that people will innovate once we develop new technologies.
Took the survey, and finally registered after lurking for 6 months.
I liked the defect/cooperate question. I defected because it was the rational way to try to ‘win’ the contest. However, if one had a different goal such as “make Less Wrong look cooperative” rather than “win this contest”, then cooperating would be the rational choice. I suppose that if I win, I’ll use the money to make my first donation to CFAR and/or MIRI.
Now that I have finished it, I wish I had taken more time on a couple of the questions. I answered the Newcomb’s problem question the opposite of my intent, because I mixed up what 2-boxing and 1-boxing mean (it’s been years since I thought about that problem). I would 1-box, but I answered 2-box in the survey because I misremembered how the problem worked.