I’m always amazed how Eliezer manages to show the world is completely broken while at the same time conveying an incredible sense of optimism.
+1
Not that I have anything against AI and machine learning literature, but can you give examples of the misconceptions?
The body of this worthy man died in August 2014, but his brain is preserved by Alcor. May a day come when he lives again and death is banished forever.
Great idea, well done!
However: Is it really the case that it’s impossible to login without Facebook? Why?
I think the crucial difference between AI and futarchy is that in AI the utility function is decided once and for all. Once a superintelligence is out there, there is no stopping it. In futarchy, on the other hand, the utility function is determined by some sort of democratic mechanism which operates continuously and can introduce corrections if things start going awry.
My deep thanks to the organizers for creating this amazing event!
The atmosphere was incredibly warm and welcoming.
I met many awesome people and had fun and stimulating conversations. If there is anything I regret, it is not connecting to even more of the participants.
I enjoyed all of the talks. Val’s keynote was truly inspiring: he is a great speaker.
Kaj’s group debugging workshop was a novelty for me, definitely something to try again in the future.
Looking forward to doing this again next year!
There are LessWrong meetups in many countries; in Germany alone there are four.
See http://wiki.lesswrong.com/wiki/Less_Wrong_meetup_groups
The problem you’re trying to solve can be solved even more easily in a future where everyone is a Whole Brain Emulation. You can have your raids in virtual worlds where the rules regarding pain and injury are whatever you define. Obviously, your cybernetic brain will never be damaged.
I also had thoughts along these lines. I think that to make your idea complete you need a combination of local governments and a world government. The local governments will give people the freedom to organize in whatever way they find best, producing a process of evolution as you describe. The world government will enforce cooperation between local governments in Prisoner's Dilemma-type situations and ensure the local governments don't lock people in. See also the Archipelago.
The concern that ML has no solid theoretical foundations reflects the old computer science worldview, which is all based on finding bit exact solutions to problems within vague asymptotic resource constraints.
It is an error to conflate the “exact / approximate” axis with the “theoretical / empirical” axis. There is plenty of theoretical work in complexity theory on approximation algorithms.
A good ML researcher absolutely needs a good idea of what is going on under the hood, at least at a sufficient level of abstraction.
There is a difference between “having an idea” and having “solid theoretical foundations”. Chemists before quantum mechanics had lots of ideas, but they didn’t have a solid theoretical foundation.
Why not test safety long before the system is superintelligent, say when it is a population of 100 child-like AGIs? As the population grows larger and more intelligent, the safest designs are propagated and made safer.
Because this process is not guaranteed to yield good results. Evolution did the exact same thing to create humans, optimizing for genetic fitness, and humans still went and invented condoms.
So it may actually be easier to drop the traditional computer science approach completely.
When the entire future of mankind is at stake, you don’t drop approaches because it might be easier. You try every goddamn approach you have (unless “trying” is dangerous in itself, of course).
IMO this should be in main