I’m always amazed how Eliezer manages to show the world is completely broken while at the same time conveying an incredible sense of optimism.
+1
Not that I have anything against AI and machine learning literature, but can you give examples of misconceptions?
The body of this worthy man died in August 2014, but his brain is preserved by Alcor. May a day come when he lives again and death is banished forever.
Great idea, well done!
However: Is it really the case that it’s impossible to log in without Facebook? Why?
I think the crucial difference between AI and futarchy is that in AI the utility function is decided once and for all: once a superintelligence is out there, there is no stopping it. In futarchy, on the other hand, the utility function is determined by some sort of democratic mechanism which operates continuously and can introduce corrections if things start going awry.
My deep thanks to the organizers for creating this amazing event!
The atmosphere was incredibly warm and welcoming.
I met many awesome people and had fun and stimulating conversations. If there is anything I regret, it is not connecting to even more of the participants.
I enjoyed all of the talks. Val’s keynote was truly inspiring: he is a great speaker.
Kaj’s group debugging workshop was a novelty for me, definitely something to try again in the future.
Looking forward to doing this again next year!
There are LessWrong meetups in many countries; in particular, there are four in Germany.
See http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups
The problem you’re trying to solve can be solved even more easily in a future where everyone is a Whole Brain Emulation. You can have your raids in virtual worlds where the rules regarding pain / injury are whatever you define. Obviously, your cybernetic brain will never be damaged.
I also had thoughts along these lines. I think that to make your idea complete you need a combination of local governments and a world government. The local governments will give people the freedom to organize in the way they find best and produce a process of evolution as you describe. The world government will enforce cooperation between local governments in Prisoner’s Dilemma-type situations and ensure the local governments don’t lock people in. See also the Archipelago.
The concern that ML has no solid theoretical foundations reflects the old computer science worldview, which is all about finding bit-exact solutions to problems within vague asymptotic resource constraints.
It is an error to conflate the “exact / approximate” axis with the “theoretical / empirical” axis. There is plenty of theoretical work in complexity theory on approximation algorithms.
A good ML researcher absolutely needs a clear idea of what is going on under the hood, at least at a sufficient level of abstraction.
There is a difference between “having an idea” and having “solid theoretical foundations”. Chemists before quantum mechanics had lots of ideas. But they didn’t have a solid theoretical foundation.
Why not test safety long before the system is superintelligent, say when it is a population of 100 child-like AGIs? As the population grows larger and more intelligent, the safest designs are propagated and made safer.
Because this process is not guaranteed to yield good results. Evolution did the exact same thing to create humans, optimizing for genetic fitness. And humans still went and invented condoms.
So it may actually be easier to drop the traditional computer science approach completely.
When the entire future of mankind is at stake, you don’t drop approaches because it may be easier. You try every goddamn approach you have (unless “trying” is dangerous in itself of course).
Just a sidenote, but IMO the solution to Pascal’s mugging is simply using a bounded utility function. I don’t understand why people insist on unboundedness.
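A minimal numeric sketch of the point, with all numbers hypothetical: capping the utility function makes the mugger’s offer negligible no matter how large the claimed payoff.

```python
# Hypothetical numbers illustrating why a bounded utility function
# deflects Pascal's mugging: a tiny probability times an astronomical
# payoff dominates an unbounded expected-utility calculation, but not
# a bounded one.
p_mugger = 1e-20          # probability the mugger's threat/offer is real
claimed_payoff = 1e30     # utility the mugger promises
utility_bound = 1e6       # cap imposed by a bounded utility function

ev_unbounded = p_mugger * claimed_payoff                    # 1e10 -> pay up
ev_bounded = p_mugger * min(claimed_payoff, utility_bound)  # 1e-14 -> ignore

print(f"unbounded EV: {ev_unbounded:.3g}, bounded EV: {ev_bounded:.3g}")
```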
Given a random variable $X$ with probability distribution $P$, the probability distribution $Q_t$ you will have about $X$ after time $t$ is a random variable in itself. It must satisfy $E[Q_t] = P$. The (un)stability parameter you are looking for sounds like $E[D_{KL}(Q_t \| P)]$, where $D_{KL}$ stands for Kullback–Leibler divergence. The meaning of this parameter is the expected number of bits of information about $X$ you will receive over the period $t$. I think that deserves its own post.
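A minimal sketch of estimating this parameter, assuming a toy Bayesian setup (the prior, the observation model, and all names here are illustrative assumptions): sample the distribution a future observer would hold after one observation, and average its KL divergence from the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy prior P over three hypotheses and an assumed observation model
# P(observation | hypothesis); both are made up for illustration.
P = np.array([0.5, 0.3, 0.2])
likelihood = np.array([
    [0.8, 0.2],
    [0.5, 0.5],
    [0.1, 0.9],
])

def kl_bits(q, p):
    """Kullback-Leibler divergence D_KL(q || p), in bits."""
    mask = q > 0
    return float(np.sum(q[mask] * np.log2(q[mask] / p[mask])))

def sample_Q_t():
    """One draw of Q_t: sample a hypothesis from P, then an observation,
    then return the Bayesian posterior over hypotheses."""
    h = rng.choice(len(P), p=P)
    obs = rng.choice(2, p=likelihood[h])
    posterior = P * likelihood[:, obs]
    return posterior / posterior.sum()

# Monte Carlo estimate of E[D_KL(Q_t || P)]: the expected number of bits
# of information received over the period. (Bayesian updating also
# guarantees E[Q_t] = P, matching the constraint above.)
estimate = np.mean([kl_bits(sample_Q_t(), P) for _ in range(100_000)])
print(f"E[D_KL(Q_t || P)] ~= {estimate:.3f} bits")
```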
It would be interesting to try to come up with good priors for random causal networks.
This is an incorrect interpretation of Coscott’s philosophy. “Caring really hard about winning” = preferring winning to losing. The correct analogy would be “Caring about [whatever] only in case I win”. The losing scenarios are not necessarily assigned low utilities: they are assigned similar utilities. This philosophy is not saying: “I will win because I want to win”. It is saying: “If I lose, all the stuff I normally care about becomes unimportant, so when I’m optimizing this stuff I might just as well assume I’m going to win”. More precisely, it is saying “I will both lose and win but only the winning universe contains stuff that can be optimized”.
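One way to formalize this reading (my notation, not anything from Coscott’s post): the utility function is flat across losing worlds rather than low on them, so optimization effectively conditions on winning.

$$U(\omega) = \begin{cases} V(\omega) & \text{if } \omega \text{ is a winning world,} \\ c & \text{if } \omega \text{ is a losing world,} \end{cases}$$

where $V$ is whatever you normally care about and $c$ is a constant. Since every losing world gets the same utility $c$, the choice of action matters only through its effect on the winning worlds.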
Hello everyone. My name is Vadim Kosoy, and you can find some LW-relevant stuff about me in my Google+ stream: http://plus.google.com/107405523347298524518/about
I am a lifelong geek, with knowledge / interest in math, physics, chemistry, molecular biology, computer science, software engineering, algorithm engineering and history. Some areas in which I’m comparatively more knowledgeable: quantum field theory, differential geometry, algebraic geometry, and algorithm engineering (especially computer vision).
In my day job I’m a technical + product manager of a small software group at Mantis Vision (http://www.mantis-vision.com/), a company developing 3D video cameras. My previous job was at VisionMap (http://www.visionmap.com/), which develops airborne photography / mapping systems; there I led a team of software and algorithm engineers.
I knew about Eliezer Yudkowsky and his friendly AI thesis (which I don’t fully accept) for some time, but discovered this community only relatively recently. This community interests me for several reasons. One reason is that many discussions here relate to transhumanism / technological singularity / artificial intelligence, topics I find very interesting and important. Another is that consequentialism is a popular moral philosophy here, and I (relatively recently) started to identify as a strong consequentialist. Yet another is that it seems to be a community where rational people discuss things rationally (or at least try), something society everywhere lacks as direly as the idea sounds trivial. This is in stark contrast to the usual mode of discourse about social / political issues, which is extremely shallow and plagued by excessive emotionality and dogmatism. I truly believe such a community can become a driver of social change in good directions, something with incredible impact.
Recently I became very interested in the subject of understanding general intelligence mathematically, in particular by the methods of computer science. I’ve written some comments here about my own variant of the Orseau–Ring framework, something I wished to expand into a full article but didn’t have the karma for. Maybe I’ll post it on LW Discussion.
My personal philosophy: As I said, I’m a consequentialist. I define my utility function not on the basis of hedonism or anything close to it, but on the basis of long-term scientific / technological / autoevolutionary (transhumanist) progress. I don’t believe in the innate value of H. sapiens but rather in the innate value of intelligent beings (in particular, the more intelligence the more value). I can imagine scenarios in which a strong AI destroys humanity that are, from my P.O.V., strongly positive: this is my disagreement with the friendly AI thesis. However, I’m not sure that any strong AI scenario will be positive, so I agree it is a concern. I also consider myself a deist rather than an atheist. Thus I believe in God, but the meaning I ascribe to the word “God” is very different from the meaning most religious people ascribe to it (I choose to still use the word “God” since there are a few things in common). For me God is the (unknowable) reason for the miraculous beauty of the universe, perceived by us as the beauty of mathematics and science and the amazing plethora of interesting natural phenomena. God doesn’t punish/reward good/bad behavior, doesn’t perform divine intervention (in the sense of occasional violations of natural law) and doesn’t write/dictate scriptures and prophecies (except by inspiring scientists to make mathematical and scientific discoveries). I consider the human brain to be a machine, with no magic “soul” behind the scenes. However, I believe in immortality in a stranger metaphysical sense, which is probably too long to detail here.
I’m 29.9 years old and married with a child (a boy, 2.8 years old). I have lived in Israel since the age of 7, but I was born in the USSR. Ethnically I’m an Ashkenazi Jew. I enjoy science fiction, good cinema (though no time to see any since my son was born :) ) and many sorts of music (rock is probably my favorite). Glad to be here!
IMO what is interesting about this ruling is that AFAIK it doesn’t appeal to any law that hasn’t existed for decades. So, if we accept the premise that the Supreme Court is only “interpreting” the constitution, it follows that gay marriage should have been legal a long time ago (where I use “should” in the legal rather than the normative sense). While some will probably claim that this is exactly the case, to me it seems rather clear that the “interpretation” of the constitution changes with culture and social norms. Now, while I’m sure that in a democratic system cultural transformations should find their way into law, this seems like a weird way for that to happen: instead of elected legislators changing the law according to the will of the people, appointed judges are reinterpreting existing law. So, while I wholeheartedly endorse the object-level act of allowing homosexual marriage, the meta-level process leading to this act looks questionable. However, I don’t live in the US, so maybe something is lacking in my understanding of that system.
To see the difference between these two scenarios, ask the following question: “what policy should I precommit to before the whole story unravels?” In Newcomb, you should clearly precommit to one-boxing: it causes Omega to put lots of money in the first box. Here, precommitting to push the button is BAD: it doesn’t influence FOOM vs. DOOM; it only influences in which scenario Omega hands you the button, plus whether you get tortured.
Has anyone noticed that “success” vs. “failure” is not really binary in cryonics? The question is not whether information is preserved but how much information is preserved. In other words, the hypothetical future civilization restoring the cryopreserved person is going to get some person; the question is how similar she will be to the original. One limiting case is zero preserved information, which means “restoration” is going to produce a random person. Another scenario is one in which only genetic information is retained, so we get essentially an identical twin of the original. If the restoring entity has wide knowledge of the culture to which the person belonged, that adds a lot of information. If it has access to e.g. blog posts / tweets / Facebook posts / LessWrong comments made by the person, it has a whole lot more information. Who knows, maybe you can get quite close without any physically preserved brain at all.
How close does the restored person have to be to count as “the same” as the original? Of course most people would require at least some degree of memory restoration for it to count as “success”. However, I don’t think there is an unambiguous answer. What we have here is a continuous scale, not just two points, “success” and “failure”.
Hi Gunnar, it was great meeting you at the event!
Regarding the “failed” item, I would gladly volunteer to be asked, if this is something that can be accomplished over e-mail.
Cheers!
IMO this should be in Main.