Not in the original article, but Shalmanese did use those words here and here.
radical_negative_one
I’m new here, so i don’t have anything to say on whether this mormon2 is a troll.
But, in his response where he explains his motivation, does anyone think he might have a point? Of course, if one’s goal is to be received favorably, then it’s best to phrase an argument in a way the audience wants to hear it. But he says that his goal was actually to see if people here could answer his criticisms despite his delivery.
At a glance, it doesn’t seem that his explanation is being taken seriously. But i think that, whether he’s a troll or not, the underlying question is valid: can we answer a criticism even when it’s phrased in a hostile manner? Although, it could be that everyone has already decided that doing so isn’t worth our time or effort.
CitationNeeded, thanks for that link. I’m not sure how clear it is that these several commenters are the same person, but i can see why you’d be suspicious. Interesting that most of mormon2's early comments were upvoted, until he recently adopted this hostile tone. And rereading the “question of rationality”, mormon2 remains belligerent in the response, so i’m inclined to agree with wedrifid’s “juvenile rationalization” conclusion.
So, i understand dismissing mormon2 specifically, even if i think that listening-to-arguments-from-possibly-unfriendly-commenters is generally worth thinking about. I’m thinking that i may have given him too much benefit of the doubt, but stopped clock, twice a day, i suppose.
The nearby deleted comment clearly speculated that i’m mormon2 as well. I’m not, though it feels a bit silly to have to say so.
It looks like i’ve accidentally derailed this thread. Sorry about that. Well, as CitationNeeded originally suggested, let’s commence with collectively asking Eliezer for clear, straight answers to the inadequately answered questions.
From the LessWrong wiki: “I don’t know”
If we don’t know anything about which is more likely, but there are only two options, then i think you’re left to just assign a 50% chance to each. Here, the characters are prompted for a discrete action, so either guess is as good as the other.
And they have to do something, because even refusing to circle an answer is a course of action. It’s just that in this case we don’t have any reason to be very confident in any specific choice.
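The indifference reasoning above can be sketched as a quick calculation. This is just an illustration of the point, with a hypothetical payoff of 1 for a correct guess and 0 for a wrong one:

```python
# Principle of indifference: with no information favoring either of two
# options, assign each a probability of 0.5. Payoffs are hypothetical:
# 1 point for a correct guess, 0 for an incorrect one.
p_a_correct = 0.5            # chance that option A is the right answer
p_b_correct = 1 - p_a_correct

# Expected payoff of each possible guess.
expected_if_guess_a = p_a_correct * 1 + p_b_correct * 0
expected_if_guess_b = p_b_correct * 1 + p_a_correct * 0

# Under total ignorance, neither guess beats the other.
assert expected_if_guess_a == expected_if_guess_b == 0.5
```

Since the expected payoffs are identical, the choice between the two guesses carries no information either way; what the 50/50 assignment really says is “i have no grounds to prefer one.”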
So, you’re suggesting that Knox and Sollecito are guilty, but for reasons other than the prosecution’s argument. The other commenters here have been discussing the issue, so maybe if you have other arguments, or can point us to another source, that would be relevant.
If you’re just saying, “But captcorajus might be wrong,” that doesn’t strike me as being terribly useful, without any further insight to explain why.
I realize that most of those clients really did do pretty much what they had been accused of doing.
Or, are you saying the fact that Knox and Sollecito were in a courtroom as defendants is enough to conclude that they’re guilty?
Ah, i think i see the problem here.
You say that a weak prosecution does not equal an innocent defendant. I think we can all agree on that.
You say that there are other explanations for the evidence. Sounds reasonable enough; after all, even if we’re sure of something, we’re not absolutely sure, not 100% sure.
Back in the first “you be the jury” thread, there was a general agreement that Guede was guilty and Knox was innocent. For Knox, as i recall, there were various estimates from 10% to 30% chance of guilt, thus a judgment of “probably innocent / not likely enough to convict”. So, i think it’s not that nobody is considering any other explanation, rather, they’re convinced that this one explanation is correct.
Saying, “there might be another explanation” is a good idea as a general point, but that doesn’t mean that another explanation is particularly likely. You keep saying “there are other possibilities” but the problem is: what other scenario are you suggesting, and why should we believe it?
I’m curious to see an example or two of what these Bayesian problems might look like, if anybody has any ideas. I mean, it may be relevant to know just what difficulty level this test would be. Of course, what’s simple for some LessWrong contributors is probably not simple for everyone.
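For what it’s worth, here is one guess at what a simple such problem might look like: the classic medical-test calculation often used to illustrate Bayes’ theorem. All the numbers are invented for illustration:

```python
# A classic Bayes' theorem exercise (all numbers invented for illustration):
# 1% of patients have a disease; the test detects 80% of true cases and
# gives false positives 9.6% of the time. Given a positive test result,
# what is the probability the patient actually has the disease?
p_disease = 0.01
p_pos_given_disease = 0.80
p_pos_given_healthy = 0.096

# Total probability of a positive result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(disease | positive).
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # roughly 0.078
```

The counterintuitive answer (under 8%, despite a “positive” test) is what makes problems like this a plausible difficulty benchmark: the arithmetic is trivial, but untrained intuition usually gets it badly wrong.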
At the moment the comment i’m replying to is at −1 karma.
Now, even if PlaidX is on the wrong side of a “slam-dunk” issue here, i question whether it’s right to downvote this considering that he’s really just asking for an explanation of someone’s reasoning.
Please don’t respond that I probably shouldn’t bother commenting; I kind of know that.
If you have a social anxiety problem then i expect that reassurances from a stranger on the internet won’t have much effect. But if it’s any help, then from a glance at your user overview page it looks like what you say has a generally positive reception. So it looks like you can write here without worrying about disapproval, based on your karma score at least.
However ridiculous conspiracy theories may be, it’s still useful to have a clear explanation of why they are ridiculous, if only for the sake of anyone who’s not completely up to date on the topic. The same goes for the other example you named: even if creationism is extremely silly, it’s still useful to have a summary of why evolution is more reasonable.
For myself, i can say that i’m not a conspiracy theorist, but i haven’t really researched this topic, so i don’t have a justification for “why ‘WTC explosives: no’ is a slam-dunk” off the top of my head. So, the discussion resulting from PlaidX’s initial question has raised at least one good point that i hadn’t thought of before.
I assume that by “secret services” he was referring to the CIA (known for covert ops and espionage), rather than the agency called the Secret Service (known for its presidential bodyguards).
Not a contradiction, but they are two distinct claims. Whether the government is untrustworthy and whether it’s competent are separate arguments.
Most libertarian criticisms of the government that i’ve heard have focused on arguments that the government is inefficient and incompetent.
Zack M Davis’s point is explained by the article he has linked to.
The tl;dr version is that your use of a certain word (in this case “legitimate”) is not helping a productive conversation. Instead, explain exactly what you mean when you say “legitimate”, because the word can mean different things, so it’s not clear which meaning you’re using.
Are you a human being, adefinitemaybe? It seems that all humans in the past have died, and all humans currently alive appear to be following the same pattern that leads to eventual death. How are you different from other humans who are known to be mortal?
Your suggestion that you are immortal is basically the same as saying, “cars are known to break down under certain conditions, and my car is just like the others, but this specific car hasn’t broken down yet so I’ll assume that it will never break down.”
In my experience very few people will listen to an argument after the person presenting the argument has called them stupid. When you call somebody a moron, i expect that you’ve drastically reduced the chances that this person will listen to you.
In other words, calling someone a moron takes convincing the target off the table, if it wasn’t off the table already.
My guess is that, when you’re in a debate and you call your opponent stupid, it’s mainly for the benefit of the people who already agree with you; the main purpose is probably signaling which side you’re on rather than convincing anyone who disagrees. This reminds me of the line of retreat idea: it’s easier for people to change their minds if they can do so without calling themselves stupid.
Perhaps the thought experiment would benefit from a sentence like this: “Omega appears and tells you that using the fat man would work.”
The January Open Thread is here.
The AI gathered enough information about me to create a conscious simulation of me, through a monochrome text terminal? That is impressive!
If the AI is capable of simulating me, then the AI must already be out of the box. In that case, whatever the AI wants to happen will happen, so it doesn’t matter what i do.
The bridge (“A fact is just a fantasy, unless it can be checked”) is more or less simply wrong.
I read that line as saying, “you should have evidence for a claim in order to believe it”. Which makes me think of, for example, the “chocolate cake in the asteroid belt” claim, which we don’t believe because we have no evidence for it.
I’ve never given the topic of tipping much thought before, so i don’t have a very good idea of what constitutes average tipping behavior. But i’d have assumed that the whole point of tipping is to reward good service. Do you give the same amount to a good waiter as you give to a bad waiter?