I can’t see how this is a rationality quote. It would imply that humans have a hard time controlling their actions. How else could someone who thinks wisely act in an absurd fashion? Isn’t rationality about overcoming the fact that humans don’t think wisely?
They don’t exclude each other.
Of course we want to favor the group we are part of. Otherwise our CEV wouldn’t differ.
I will also attend.
I’m developing a vector drawing program. It seems to have a good balance between achievability and ambition for me. So far it has 80% of the Inkscape functionality I use. Currently I’m struggling with getting the performance from barely usable to smooth.
I want a UI that suits me better. Concretely this means: more keyboard shortcuts; dragging the mouse only changes the selection (in Inkscape it also moves paths, which can get annoying); and non-destructive boolean operations, which make shading way easier.
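A minimal sketch of what I mean by “non-destructive” (all names here are hypothetical, not from my actual program): the boolean operation keeps references to its input shapes and recomputes the result on demand, instead of flattening them into one merged path, so the inputs stay editable afterwards.

```python
# Non-destructive union, sketched with point-membership tests instead of
# real path geometry. The Union node owns no geometry of its own; it only
# references its children, so editing a child automatically updates it.

class Circle:
    def __init__(self, cx, cy, r):
        self.cx, self.cy, self.r = cx, cy, r

    def contains(self, x, y):
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2


class Union:
    """A point is inside the union if any child contains it."""
    def __init__(self, *children):
        self.children = list(children)

    def contains(self, x, y):
        return any(c.contains(x, y) for c in self.children)


a = Circle(0.0, 0.0, 1.0)
b = Circle(1.5, 0.0, 1.0)
u = Union(a, b)
print(u.contains(-0.9, 0.0))  # True: the point lies inside circle a

# Because nothing was flattened, shrinking a child reshapes the union:
a.r = 0.5
print(u.contains(-0.9, 0.0))  # False: the point is now outside both circles
```

A destructive union would replace `a` and `b` with a single merged outline, so the second edit would be impossible without undoing the operation.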
On the about page, “Meet the Team” links to http://singularity.org/visiting-fellows/ instead of http://singularity.org/team/
Now it works for me too.
One benefit of running at a lower speed is that you can interact with things farther away from you while it still seems instantaneous. Although I have no idea why that would be more important for the workers than for the boss.
I will come too.
I would put it lower than 9, because a general AI is science as software, which means it is already contained in 9.
That doesn’t really prevent trolling, so I’m not sure it would be helpful.
I like that idea, but I think there can be too much granularity. The feeling of “people who agree with me on X also agree with me on completely unrelated Y” is awesome.
But what should Spaniards answer?
I think “White (non-Hispanic)”. Not that I understand the category Hispanic, but putting Swedish and Greek people in one category while excluding Spaniards seems deeply weird to me.
People with a photographic memory could still use SRS for learning sounds.
Maybe this is more to your liking → https://dl.dropbox.com/u/3943312/gwern-small.png I just cropped and rescaled it in GIMP.
Could it be that you are confusing the complexity of a utility function of an agent with its optimization power? A super intelligent paperclipper has a simple utility function, but would have no problem reasoning about humans in great enough detail to find out what it has to say to get the guard to let it out of the box.
Why would it believe us that we are able to destroy 3^^^^^3 paperclips?
“Arguing” is too narrow a word for describing the possibilities the AI has. For example, it could manipulate us emotionally. It could write us a novel that leaves us in a very irrational state and then give us an argument, bogus but effective on us, for why we should let it out.
I once read the fifth Harry Potter book nonstop for 24 hours, and for a couple of hours afterwards I had difficulty distinguishing between myself and Harry Potter. It seems likely that an author who is a million times smarter than Rowling, and who has this as an explicit goal, could write a novel that leaves me with far bigger misconceptions.
Because we have magical powers from outside the matrix [...].
The AI is vastly smarter than we are and can communicate with us. So it asks us questions that sound innocent to us, but from the answers it can derive a fairly accurate map of how things look outside the matrix.
It would have to argue that destroying humanity and replacing it with paperclips was a good thing.
The goal of the AI is to have the guard execute the code that would let the AI access the outside world. Arguing with us could be one way to achieve this goal, although I agree it sounds like an unlikely way to succeed. Another possible way would be to write a novel so interesting that the guard doesn’t put it down, and that leaves him in such a confused state that he types in the code, thinking he is saving princess Foo from the evil lord Bar.
A super smart AI that wants to reach this goal very badly will likely come up with a whole bunch of other possible ways, some of which I would never consider even if I spent the next four decades thinking about it.
That sounds like more a side effect of reading the same thing “nonstop for 24 hours” than a property of the book [...]
Yes. I am sure any other well-written book read for 24 hours would have a similar effect. I think it is likely that a potential guard is at most 2 orders of magnitude less vulnerable to such things than I was at that time. That’s not enough against an AI that has 6 orders of magnitude more optimization power.
The results are obtained by mixing 100 other movies together, so it is not surprising that there are no details.