If Goertzel’s claim that “SIAI’s arguments are so unclear that he had to construct it himself” can’t be disproven by the simple expedient of posting a single link to an immediately available, well-structured, top-down argument, then SIAI should regard producing one as an obvious high-priority, high-value task. If the claim can be disproven by such a link, then that link needs to be far more widely advertised, since it seems that none of us are aware of it.
mwaser
An apology
We say: “Would you care to make a side bet on that?”
And I’d say . . . . “Sure! I recognize that I normally plan to finish 9 to 10 days early to ensure that I finish before the deadline, and that I normally ‘fail’ and only finish a day or two early (but still succeed at the real deadline) . . . . but now you’ve changed the incentive structure (i.e. the entire problem), so I will now plan to finish 9 or 10 days before my new deadline (necessary to take your money) of 9 or 10 days before the real deadline. Are you sure that you really want to make that side bet?”
I note also that “Would you care to make a side bet on that?” is interesting as a potential conversation-filter but can also, unfortunately, act as a conversation-diverter.
Heh. I’ve read virtually all those links. I still have the following three problems:
1. Those links are about as internally self-consistent as the Bible.
2. There are some fundamentally incorrect assumptions that have become gospel.
3. Most people WON’T read all those links and will therefore be declared unfit to judge anything.
What I asked for was “an immediately available well-structured top-down argument”.
It would be particularly useful and effective if SIAI recruited someone with the opposite point of view to co-develop a counter-argument thread and let the two revolve around each other and solve some of these issues (or, at least, highlight the basic differences of opinion that prevent their solution). I’m more than willing to spend a ridiculous amount of time on such a task, and I’m sure that Ben would be more than willing to devote any time that he can tear away from his busy schedule.
Did you give the same answer to Omega? The cases are exactly analogous. (Or do you argue that they are not?)
Wow! Evil. Effective. Not to mention a great demonstration of the criticality of context.
Definitely deserves a link or mention in a newbie’s guide.
What I didn’t get?
Some of it was mistaken assumptions about karma. Much more of it was the lack of recognition of the presence of a huge amount of underlying structure which is necessary to explain what looks like seemingly irrational behavior (to someone who doesn’t have that structure). I also didn’t recognize most of the offered help because I didn’t understand it. (Even just saying to a newbie, “I know that you don’t recognize this as help because you don’t get it yet but could you please trust me that it is intended as help” would probably convince many more people to just look again rather than bailing).
Some of the epiphany was figuring out the various parts that make up karma and truly recognizing its accuracy and efficiency. A lot more of it was just figuring out that there had to be structures present to explain the seemingly irrational behavior. Yeah, that’s duh! obvious in hindsight but it’s difficult to figure out by yourself (until you catch the underlying regularities and make the right assumptions).
One of the largest problems for newbies is that the culture has evolved a great many “terms of art” that are not recognizable as such to the newbie. Getting “hammered” for questioning the upvote of a comment apparently without substance was a shock for me. Fortunately, the underlying consistency of the “irrationality” was also becoming apparent at the same time.
Just reading and even fully understanding the sequences does not fully prepare one for contributing here. This fact is NOT evident to new contributors. Smacking a new contributor on the nose (with karma) while pointing at a sequence that they are rather sure they comprehended, and nothing else, is not going to make sense to them until they have the necessary understanding.
One must understand the expected process and norms of contribution, and understand the “terms of art” that are invariably used in the evaluative comments. “Clear” and “confused” have very specific meanings here that do not unpack correctly unless you have the underlying structure/understanding. I was also very shocked by the number of perceived strawmen and the community’s acceptance of them, contrary to virtually every other “rational” website.
I know that I still don’t have all of it but most of the behavior that totally baffled me before and appeared irrational now makes total sense. The rules are totally different here from what I expected/assumed and the unnoticed phase change caused my “rational” behavior to be deemed “irrational” (only because it was ;-) and “irrational” behavior to be widely accepted (not what you expect on a site devoted to rationality ;-).
Most of what I think I have in mind is just to point out where and explain why the rules are very different from what is likely to be assumed by an outsider. In particular, it’s very hard to accept that you’re confused and wrong when your Bayesian priors give that a low probability—and a near-zero probability when the people informing you aren’t making sense and are acting irrationally (except when they’re all doing it—and doing it consistently).
The real epiphany was when I said “F it. These people are managing to be consistent. There has to be some set of rules that allow them to do that. Now . . . . what the F are they?” And, for me, that was pretty rapidly followed by the “Ohhhhh. WOW! Damn. Now I feel bad.” of my apology.
If I could figure out some way to be helpful to steer people towards that epiphany without actually giving it to them, it would be ideal. Some work is necessary to fully integrate something like this. On the other hand, if it’s too hard and confusing, I think that a lot of people will (and do) bail out with a very bad taste in their mouths (which I still believe is very contrary to the stated goals of the community).
I’m also looking for any interested individuals who would like to help.
I can think of several reasons:
1. Your post appears to be a dominance game. Your bible will obliterate their bible.
2. While beauty is in the eye of the beholder, I would guess that the initial quote probably strikes many here as elegant poetry that is well worth sharing (and upvotes effectively equal sharing).
3. Your post isn’t particularly interesting, so I would guess that it wouldn’t attract any upvotes, and point 1 means that it is nearly certain to attract at least two or three downvotes.
Now that I’ve got it, this is clear, concise, and helpful. Thank you.
I also owe you (personally) an apology for previous behavior.
Some statements people already agree with, in which case no supporting arguments are necessary.
Arguably then, for the audience of people who agree with the statement, the statement itself is not necessary either.
Obviously, the comment is courting the undecided. Obviously, many humans are swayed by sheer numbers of people who believe certain things. But that behavior is not rational. And this site is “devoted to refining the art of human rationality”.
Of course it’s a bad argument when considered as directed to you
Prejudicial strawman. I never said that it was a bad argument. I never said anything close.
A major upvote for this. SIAI should create a sister organization to publicize the logical (and exceptionally dangerous) conclusion to the course that corporations are currently on. We have created powerful, superhuman entities with the sole top-level goal (required by LAW in for-profit corporations) of “Optimize money acquisition and retention”. My personal and professional opinion is that this is a far more immediate (and greater) risk than UnFriendly AI.
I clearly don’t understand karma
I know the individuals involved. They are not biased against non-academics and would welcome a well-thought-out contribution from anyone. You could easily have a suitable abstract ready by March 1st (two weeks early) if you believed that it was important enough—and I would strongly urge you to do so.
As you get closer to the core of friendliness, you get all sorts of weird AGI’s that want to do something that twistedly resembles something good, but is somehow missing something or is somehow altered so that the end result is not at all what you wanted.
Is this true or is this a useful assumption to protect us from doing something stupid?
Is it true that Friendliness is not an attractor or is it that we cannot count on such a property unless it is absolutely proven to be the case?
If you were the first person to see such a post (where Yvain made such a stupid comment that you believed that it deserved to attract 26527 downvotes), would you, personally, downvote it for stupidity or would you upvote it for interestingness?
EDIT: I’d be interested in answers from others as well.
Got it. Believe it or not, I am trying to figure out the rules (which are radically different than a number of my initial assumptions) and not trying solely to be a pain in the ass.
I’ll cool it on the top level posts.
Admittedly, a lot of my problem is that there is either a really huge double standard or I’m missing something critical. To illustrate . . . . Kingfisher’s comment: “Something is clear if it is easily understood by those with the necessary baseline knowledge.” My posts are, elsewhere, considered very clear by people with less baseline knowledge. If my post were logically incorrect to someone with higher knowledge, then they should be able to dissect it and get to the root of the problem. Instead, what I’m seeing is tremendous numbers of strawmen. The lesson seems to be “If you don’t go slow and you fail to rule out every single strawman that I can possibly raise, I will refuse to let you go further (and I will do it by insisting that you have actively embraced the strawman).” Am I starting to get it or am I way off base?
Note: I am never trying to insult (except one ill-chosen all-caps response). But the community seems to be acting against its own goals as I perceive it has stated them. Would it be fair to say that your expectations (and apparently even goals) are not clear to new posters? (Not newcomers: I have read and believe I grok all of the sequences, etc., to the extent that virtually any link that is pointed to is one I’ve already seen.)
Another, last comment. At the top of discussion posts, it says “This part of the site is for the discussion of topics not yet ready or not suitable for normal top-level posts.” That is what led me to believe that posting a couple of posts that I obviously considered ready for normal prime-time (i.e. not LessWrong) wouldn’t be a problem. I am now being told that it is a problem and I will abide. But can you make any clarification?
Thanks.
I agree with “In future, if you feel that there is some form of irrationality around, look for examples that don’t involve you before posting, so that you don’t seem to be simply lashing out.”
I strongly disagree with “you are trying to create a vilifying discussion against a comment that disagrees with you.”
I would agree with “it appears as if you are trying to create a vilifying discussion against a comment that disagrees with you.”
Wouldn’t you agree?
Could I ask you to post the quotes as a separate post? They are priceless (and I’d love to be able to see what they applied to—so please include the references as well).
Dogmas of analytic philosophy, part 1 of 2 and part 2 of 2, by Massimo Pigliucci on his Rationally Speaking blog.
Imagine that one day you come home to see your neighbors milling about your house and the Publisher’s Clearinghouse (PHC) van just pulling away. You know that PHC has been running a new schtick recently of selling $100 lottery tickets to win $10,000 instead of just giving money away. In fact, you’ve used that very contest as a teachable moment with your kids to explain how, once the first ticket of the 100 printed was sold, scratched, and determined not to be the winner, the average expected value of the remaining tickets was greater than their cost and they were therefore increasingly worth buying.

Now, weeks later, most of the tickets have been sold, scratched, and found not to be winners, and PHC has come to your house. In fact, there were only two tickets remaining. And you weren’t home. Fortunately, your neighbor and best friend Bob asked if he could buy the ticket for you. Sensing a great human interest story (and lots of publicity), PHC said yes. Unfortunately, Bob picked the wrong ticket.

After all your neighbors disperse and Bob and you are alone, Bob says that he’d really appreciate it if he could get his hundred dollars back. Is he mugging you? Or do you give it to him?
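The expected-value claim in the story can be checked with a short sketch (assuming, per the setup above, 100 tickets sold at $100 each with a single $10,000 winner; the function name is mine, purely for illustration):

```python
# Expected value of one remaining ticket after some number of
# non-winning tickets have been scratched and eliminated.
PRIZE = 10_000   # value of the single winning ticket
PRICE = 100      # cost per ticket
TOTAL = 100      # tickets printed

def ev_remaining(losers_scratched: int) -> float:
    """EV of one remaining ticket, given that `losers_scratched`
    tickets have been scratched and found to be losers."""
    remaining = TOTAL - losers_scratched
    # The winner is equally likely to be any of the remaining tickets.
    return PRIZE / remaining

print(ev_remaining(0))   # 100.0 -- EV exactly equals the $100 price
print(ev_remaining(1))   # just over $101 -- already above the price
print(ev_remaining(98))  # 5000.0 -- two tickets left, as in the story
```

So the teachable moment checks out: the moment the first loser is eliminated, each remaining ticket's expected value exceeds its $100 cost, and with only two tickets left each is worth $5,000 in expectation.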