Are you saying that people like me, who do not have the capabilities and time to read hundreds of posts, are excluded from asking about and discussing those issues?
No. He’s saying:
I’m reluctant to write a full response to this, but I think large parts of the Sequences were written to address some of these ideas.
I don’t want to be a jerk about this or belabor this point, but in order to decide exactly how I want to go in responding to this: have you read through all the sequences?
However, I will say that EY wrote at least part of the sequences because he got sick and tired of seeing people who try to reason about AI fall immediately into some obvious failure state. You have a tendency to fall into these failure states; e.g., Generalizing from Fictional Evidence during the whole Rome Sweet Rome allusion.
What do you suggest that I do, just ignore it?
Would you rather continue running around in circles, banging your head against the wall? Even if you did read the sequences, there’d still be no guarantee that you wouldn’t continue doing the same thing. But, to paraphrase Yudkowsky, at least you’d get a saving throw.
Why is it sufficient to read the sequences? Why not exclude everyone who doesn’t understand Gödel machines and AIXI?
Nobody said anything of the sort. Again, Yvain’s trying to formulate a response.
You have a tendency to fall into these failure states; e.g., Generalizing from Fictional Evidence during the whole Rome Sweet Rome allusion.
I had to say this like 10 times now. I am getting the impression that nobody actually read what I wrote.
The whole point was to get people thinking about how an AI is actually going to take over the world, in practice, rather than just claiming it will use magic.
If the IRC channel and the discussion section are any evidence, all you’ve managed to accomplish is to get people to think about how to take over ancient Rome using modern tech.