I got:
Death is the mind-killer.
Which made me go, “Well… yes.”
I had an interesting experience with this, and I am wondering if others on the male side had the same.
I tried to imagine myself in these situations. When a situation did not seem to have much personal impact in the first person, or at best produced very mild discomfort, I tried to rearrange the scenario with social penalties that I would find distressing. (Social penalties do differ based on gender roles.)
I found this provoked a fear response. If I give it voice, it sounds like “This isn’t relevant/I won’t be in this scenario/You would just.../Why are you doing this?” Which is interesting: my brain doesn’t want to process these stories as first-person accounts. Some sort of analysis would be easier and more comfortable, but I am pretty sure it would miss the damn point.
I don’t have any further thoughts, other than that this was useful for noticing the things that may inhibit me from understanding (and for trying to get past them).
I would say that this makes sense about as often as the grad students in my philosophy department make sense, with only a few more spelling mistakes.
I dub this the Bling Fitness Theory.
I find the idea that humans have already found their ideal system of government suspicious. Democracy is, at best, low-hanging fruit. I do think it is a step up from previous systems, but it is unlikely that it is the best system that will ever exist. Changes in styles of governance tend to be slow, so whatever “better than democracy” systems are out there, I don’t expect to see any pop up anytime soon. Whatever it is, it will probably be weird.
For me, the breaking point occurred when I became a salesman. After about a month of working in that environment, I found it was just too easy to convince someone that an idea was true or false when it was beneficial for me to do so. Moreover, I realized that those same techniques were being used by my bosses on me.
I’d joked before about how easy it was to manipulate people, and I’d always cared about what was true (for as far back as I can remember, it was drilled in at a very early age) but that was the point where I stopped really caring about who “won” in an argument, because it broke “winning” down to rhetoric and manipulative technique. A few months later, I discovered LW, which helped break me of some bad beliefs I had at the time. But the shift from “winning arguments” to “finding truth” definitely happened when I got out of sales.
One I find useful for interpersonal dealings, as well as a lot of other things, is:
If you don’t like the effect, don’t produce the cause.
It seems like a simple rule to follow, but I’d say at least half of the mistakes I make happen when I neglect it.
Upvoted not for the claim, but the ridiculously high confidence in that claim.
Yeesh. Step out for a couple days to work on your bodyhacking and there’s a trench war going on when you get back...
In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.
This looks like a pretty simple situation to run a cost/benefit analysis on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community?
Benefits:
- May help public image. (Sub-benefits: makes LW more friendly to new persons, advances SIAI-related PR.)
- May reduce brain-eating discussions. (If I advocate violence against group X, even as a hypothetical, and you are a member of said group, then you have a vested political interest whether or not my initial idea was good, which leads to worse discussion.)
- May preserve what is essentially a community norm now (as many have noted) in the face of future change.
- Will remove one particularly noxious and bad-PR-generating avenue for trolling. (Which won’t remove trolling, of course. In fact, fighting trolls gives them attention, which they like: see Costs.)
Costs:
- May increase bad PR for censoring. (Rare in my experience, provided that the rules are sensibly enforced.)
- May lead to people not posting important ideas for fear of violating the rules. (Corollary: may help lead to an environment where people post less.)
- May create “silly” attempts to get around the rule by gray-areaing it (where people say things like “I won’t say which country, but it starts with United States and rhymes with Bymerica”), which is a headache.
- May increase trolling. (Trolls love it when there are rules to break, as these violations give them attention.)
- May increase the odds of LW community members acting in violence.
Those are all the ones I could come up with in a few minutes after reading many posts. I am not sure what weights or probabilities to assign: probabilities could be determined by looking at other communities and incidents of media exposure, possibly comparing community size to exposure and total harm done and comparing that to a sample of similarly-sized communities. Maybe with a focus on communities about the size LW is now to cut down on the paperwork. Weights are trickier, but should probably be assigned in terms of expected harm to the community and its goals and the types of harm that could be done.
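To make that weighting concrete, here is a minimal sketch of the tally in Python. Every outcome, probability, and harm weight below is invented purely for illustration; real numbers would have to come from the kind of cross-community comparison described above.

```python
# Minimal sketch of an expected-harm tally. All numbers are made up.
outcomes = {
    # name: (probability the outcome occurs, harm if it does; negative = benefit)
    "improved public image":      (0.5, -3.0),
    "fewer brain-eating threads": (0.4, -2.0),
    "censorship backlash":        (0.1,  4.0),
    "chilling effect on posting": (0.3,  3.0),
    "gray-area rule-skirting":    (0.4,  1.0),
}

expected_net_harm = sum(p * harm for p, harm in outcomes.values())
print(f"Expected net harm of the policy: {expected_net_harm:+.2f}")
# Negative means expected benefits outweigh expected costs under these numbers.
```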
Suggestions (for general audience outside of LW/Rationalist circles)
I like the name “Confidence Game”: it reminds people of a con game while telling you the point of the game.
See if you can focus on a positive-point scale: make it so that winning nets you a lot of points but “losing” only costs a couple. (Same effect on the scores either way.) This won’t seem as odd if you set it up as one long scale rather than two shorter ones: 99-90-80-60-50-60-80-90-99.
Putting it on a timer will make it ADDICTIVE. Set it up in quick rounds. Make it like a quiz show. No question limit, or a bonus if you hit the limit for being “quick on your feet.” Make it hard but not impossible to do.
Set up a leaderboard where you can post to FB, show friends, and possibly compare your score to virtual “opponents” (which are really just scoring metrics). Consider making those metrics con-man themed, in keeping with the game’s name.
Graphics will help a lot. Consider running with the con-game theme.
Label people: maybe something like “Underconfident,” “Unsure,” “Confident,” “AMAZING,” “Confident,” “Overconfident,” “Cocksure” (test labels to see what works well!) rather than using graphs. Graphs and percentages? Turn-off. Drop the % sign and just show two numbers with a label. Make this separate from points but related. (High points = greater chance of falling toward the center, though in theory not necessarily the same.) Yes, I know the point is to get people to think in percentages, but if you want to do that, you have to get them there without actually showing them math, which many find off-putting. (See the sketch after these suggestions.)
Set up a coin system that earns you benefits for putting into the game: extended round, “confidence streak” bonuses, hints, or skips might be good rewards here. Test and see what works. Allow people to pay for coins, but also reward coins for play or another mini-game related to play or both. (Investment=more play)
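Here is a rough Python sketch of the scoring and labeling ideas above. Every point value and label threshold is a placeholder I made up; all of it would need playtesting.

```python
# Sketch only: point values and thresholds are invented placeholders.

def score(confidence: int, correct: bool) -> int:
    """Asymmetric scoring: winning nets a lot of points, 'losing' only a few.

    confidence is the player's claimed confidence (50-99), picked off the
    one long scale (99-90-80-60-50-60-80-90-99).
    """
    if correct:
        return confidence               # claim 90 and get it right: +90
    return -((confidence - 50) // 5)    # claim 90 and get it wrong: only -8

def label(gap: float) -> str:
    """Label instead of a graph: gap = average claimed confidence minus
    actual hit rate, in percentage points (0 = perfectly calibrated).
    """
    if gap < -20:
        return "Underconfident"
    if gap < -5:
        return "Unsure"
    if gap <= 5:
        return "AMAZING"
    if gap <= 15:
        return "Confident"
    if gap <= 30:
        return "Overconfident"
    return "Cocksure"

print(score(90, True), score(90, False))  # -> 90 -8
print(label(2.0))                         # -> AMAZING
```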
…he who works to understand the true causes of miracles and to understand Nature as a scholar, and not just to gape at them like a fool, is universally considered an impious heretic and denounced by those to whom the common people bow down as interpreters of Nature and the gods. For these people know that the dispelling of ignorance would entail the disappearance of that sense of awe which is the one and only support of their argument and the safeguard of their authority.
Baruch Spinoza, Ethics
game theorists call this a gnash equilibrium
Well played, sir.
Shouldn’t that answer then result in an “Invalid Question” response to the original “Would you be a proper scientific skeptic if you were born in 500 CE?” question?
I mean, what you are saying here is that it isn’t possible for you to have been born in 500 CE, that you are a product of your genetics and environment and cannot be separated from the conditions that resulted in you. So the answer isn’t “Yes,” it is “That isn’t a valid question.”
I’m not saying I agree, especially since I think the initial question can be rephrased as “Given the population of humans born in 500 CE and the historical realities of the era, do you believe that any person born in this era could have been a proper scientific skeptic? And given that, do you believe that you would have developed into one had your initial conditions been otherwise identical, or at least highly similar?” Making it personal (“Would you be...”) is just a way of conveying the weight of the statement, since it is assumed that the readers of LW all have brains capable of modelling hypothetical scenarios, even if those scenarios don’t (or can’t, even in principle) match reality.
The question isn’t asking if it is ACTUALLY possible for you to have been born in 500 CE, it is asking you to model the reality of someone in the first person as born in 500 CE and, taking into account what you know of the era, ask if you really think that someone with otherwise equivalent initial starting conditions would have grown into a proper scientific skeptic.
It’s also shorter to just bring in the personal hypothetical, which helps.
Groupthink is as powerful as ever. Why is that? I’ll tell you. It’s because the world is run by extraverts.
The problem with extraverts… is a lack of imagination.
pretty much everything that is organized is organized by extraverts, which in turn is their justification for ruling the world.
This seems to be largely an article about how we Greens are so much better than those Blues rather than offering much that is useful.
The fact that I won’t be able to care about it once I am dead doesn’t mean that I don’t value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don’t want future sapient life to be wiped out, and that is a statement about my current preferences, not my ‘after death’ preferences. (Which, as noted, do not exist.)
From my reading I would suspect so. Particularly, the Lars/Hoppy Times story arc seems well-suited, and the story doesn’t really take place in the MLP universe from the show.
I would suspect that a Brony with no knowledge of the Singularity would find the story less comprehensible/more jarring than a Singularitarian who is not a fan of MLP.
Taboo the word ‘real’ for a moment, along with any related words, like ‘actual.’ What do you mean when you say that it can be real?
You say your cousin can tell you the name of a card before he looks at it, after a random draw of the deck. But he does it. It’s an act on his part. This isn’t a mere word-issue either: you don’t think the same thing will happen if I try to predict a card you draw from a deck. So you are talking about a direct link between his statement and the card. Unless you think he is doing it by chance (so that there is just a correlation, and an extremely improbable one at that) you think that there is a causal link between the card drawn and the prediction. Saying you don’t know what it is, is not the same thing as saying that it is not there.
Likewise, when you say that you commune with the universe, you are stating some act, “communing” and some result “finding out that your partner loves you.” You don’t expect this to fail the next time you do it, or you wouldn’t perform the act to that effect. (You might perform it to “see if it works this time” but that would be another matter.) So I don’t think you really believe what you think you believe. Why would you “commune with the universe” if it did not cause effects such as “realizing your partner truly loves you”?
Putnam perhaps chose poor examples, but his thought-experiment works in any situation where we have limited knowledge.
Instead of Twin Earth, say that I have a jar of clear liquid on my desk. Working off of just that information (and the information that much of the clear liquid that humans keep around is water), people start calling the thing on my desk a “Jar of Water.” That is, until someone knocks it over and it starts to eat through the material of my desk: obviously, that wasn’t water.
Putnam doesn’t think that XYZ will look like water in every circumstance: his thought-experiment includes the idea that we can distinguish between XYZ and water with, say, an electron microscope. So obviously there are some properties of XYZ that are not the same as water, or else they really would look the same under every possible circumstance.
The mistake (which some philosophers do make) is to assume that the “thought-experiment” stuff looks like the “real” stuff in every possible circumstance. If Putnam had said that the difference between H2O and XYZ was purely epiphenomenal or something like that, he’d be obviously wrong. For instance, if we looked at XYZ and it “fooled” us into thinking it was H2O (say, if we broke apart XYZ and got a 2:1 ratio of hydrogen to oxygen and no other parts), then Putnam’s argument wouldn’t hold. (This is where p-zombies fail: it is stipulated that there is no experiment that can tell the difference.)
Putnam’s main point was that we can be mistaken about what a thing is. Moreover, when we have two things (call them A and B) that we think are of the same type, we can not only be mistaken that A and B are of the same type, but A could fit the type while B does not.
If this seems incredibly basic… it is. People make a big deal about it because prior to Putnam (and sometimes afterward) philosophers were saying crazy things like “the meanings in our heads don’t have to refer to anything in the world,” which essentially translates to “I can make a word mean anything I want!”
I agree with this to the extent that we shouldn’t make the mistake of assuming that just because we have a model of something in our head, our model corresponds to the real world. It’s even stickier, because when a model doesn’t conform we often keep the words around, since they can be useful descriptions of the new thing we’ve found. That can create confusion, especially during a period of transition. (Imagine someone saying that “Water cannot be H2O, because it is necessarily an Aristotelian element.”) But thought experiments are very, very useful, since all a “thought experiment” really is, is using the information already in your head to ask, “Given what I already know, what do I think would happen in this circumstance?”
Working in philosophy, I see some move toward this, but it is slow and scattered. The problem is probably partially historical: philosophy PhDs trained in older methods train their students, who become philosophy PhDs trained in their professor’s methods+anything that they could weasel into the system which they thought important. (which may not always be good modifications, of course)
It probably doesn’t help that your average philosophy grad student starts off by TAing a bunch of courses for a professor who sets up the lectures, the material, and the grading standards. Or that a young professor needs to get their classes cleared through the academic structure. It definitely doesn’t help that philosophy has a huge bias toward historical works, as you point out.
None of these are excuses, of course. Just factors that slow down innovation in teaching philosophy. (which, of course, slows down the production of better philosophical works)
(2) 20th century philosophers who were way too enamored with cogsci-ignorant armchair philosophy.
This made me chuckle. Truth is often funny.
When I was in Sales, we called this “finding their true objection.”
Basically, if someone says, “Well, I don’t want it unless it has X!” you say, “What if I could provide you with X?”
So if someone says, “Come back when you have a PhD!” you say, “What if I could provide you with PhDs who believe the same idea?” If they then say, “There are tons of PhDs who believe crazy things!” then you say, “Then what else would I need to convince you?”
Usually, between them dismissing their own criteria and the number of ideas they can bring forward, you can get it down to about three things. I’ve seen five, but that was a hard case. Those aren’t hard and fast rules: the rule is to make sure you get them ALL, and to make each one specific, something like:
“So, if I can get you a published book by a PhD respected in a field relevant to X, AND I can provide you with a for-profit organization that is working to accomplish goals relevant to X, AND I can make a flower appear out of my ear (or whatever), THEN you will admit you were wrong and change your view?”
And if you’re REALLY invested, you should have been taking notes, and get them to ‘initial’ (not sign, people hate signing but will often initial: it feels like a smaller pain) the list. Consistency bias is also your friend here: if they say it aloud, they will probably also initial it.
And then, if you hand it all to them on a silver platter, with the right presentation you can get a “you were right, I was wrong” out of them. (If you screw it up, you can get begrudging acceptance. Occasionally hostility if you really botch it. But that’s life in the interpersonal world.)
It sounds like a lot, but oddly, it isn’t usually very hard to get people to change their minds this way. It takes some time, so you’d better be invested in making that change. If you know what to expect, handing it all early helps. But if you really want it to happen, this way works.