This is why I write about political philosophy, not politics. E.g. I disagree with John Rawls’s veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward), but I don’t see either myself or anyone else getting mind-killingly tribal over it. After all it is not about a party. It is not about an election program. It is not about power. It is about ideas.
I see the opposite of the norm you mention: I think when I write about political philosophy on LW it gets a negative reaction because it is too political and may invite mind-killing. Yet I have not seen this actually happen.
I think LW needs to be far more tolerant of political philosophy, and freely discuss the whole spectrum from Marx to Bonald, because where else? After you get a taste of LW, every other internet forum feels stupid: ego-games, no self-criticism, no real help. (By every other I mean Reddit and the 10-15 blogs I read; I am not very good at googling up interesting websites...)
This may be done in parallel with being less tolerant of partisan politics, but that may be a tad tricky.
I think we can just taboo partisan or emotional monikers out of political philosophy and do it easily. For example, never refer to John Kekes as a conservative; refer to him as a pluralist skeptic—he identifies as both, actually. Rawls may be described as a theorist of distributive justice, not a liberal. And so on.
After you get a taste of LW, every other internet forum feels stupid
And why do you think this is so? Are all participants on this forum genetically superior, and do they have to prove it by giving a DNA sample before registering a user account? Or could it be that some topics and some norms of debate attract a certain kind of people (and the few exceptions are then removed by downvoting)? Any other hypothesis?
If you propose another hypothesis, please don’t just say “well, it is because you are (for example) more intelligent or more reasonable” without also explaining how specifically this website succeeds in attracting the intelligent and/or reasonable people without attracting the other kinds, so that newcomers who don’t fit the norm are not able to simply outvote the old members and change the nature of the website dramatically. (Especially considering that this is a community blog, not one person’s personal blog such as Slate Star Codex.)
Well, as for me, reading half the sequences changed my attitude a lot by simply convincing me to dare to be rational, that it is not socially disapproved of, at least here. I would not call it norms, as I understand the term “norms” as “do this or else”. And it is not the specific techniques in the sequences, but the attitudes: not trying to be too clever, not showing off, not using arguments as soldiers, not trying to score points, not being tribal. Something I always liked, but on e.g. Reddit there was quite a pressure against it.
So it is not that these things are norms; it is simply that they are allowed.
A good parallel: throughout my life, I have seen a lot of tough-guy posturing in high school, in playgrounds, bars, locker rooms etc. And when I went to learn some boxing, paradoxically, that was the place where being weak or timid felt most approved. Because the attitude is that we are all here to develop, and therefore being as-yet underdeveloped is OK. One way to look at it is that most people out in life tend to see human characteristics as fixed: you are smart or dumb, tough or puny, and you are just that, no change, no development. Or, putting it differently, it is more of a testing, exam-taking attitude than a learning attitude: on the test, the exam, you are supposed to prove you already have whatever virtue is valued there; it is too late to say you are working on it. But in the boxing gym, where everybody is there to get tougher, there is no such testing attitude. You can be upfront about your weakness or timidity, and as long as you are working on it you get respect, because the learning attitude kills the testing attitude: in learning circumstances nobody considers such traits too innate.

Similarly on LW, the rationality-learning attitude kills the rationality-testing attitude, and thus the smarter-than-thou posturing, the point-scoring attitude, gets killed by it, because showing off inborn IQ is less important than learning the optimal use of whatever amount of IQ there is. Thus there is no shame in admitting ignorance or using wrong reasoning, as long as there is an effort to improve.
I think this is why. And this has little to do with topics and little to do with enforced norms.
I like your example and “learning environment” vs “testing environment”.
However, I am afraid that LW is attractive also to people who, instead of improving their rationality, want to do other things, such as winning yet another website for their political faction. Some people use the word “rationality” simply as a slogan to mean “my tribe is better than your tribe”.
There were a few situations when people wrote (on their blogs) something like: “first I liked LW because they are so rational, but then I was disappointed to find out they don’t fully support my political faction, which proves they are actually evil”. (I am exaggerating to make a point here.) And that’s the better case. The worse case is people participating in LW debates and abusing the voting system to downvote comments not because those comments are bad from the epistemic rationality point of view, but because they were written by people who disagree (or are merely suspected of disagreeing) with their political tribe.
This is all fine, but what is missing for me is the reasoning behind something like “… and this is bad enough to taboo it completely and forfeit all the potential benefits, instead of taking these risks”—at least if I understand you right. The potential benefit is coming up with ways to seriously improve the world. The potential risk, if I get it right, is that some people will behave irrationally and that this will make some other people angry.
Idea: let’s try to convince the webmaster to make a third “quarantine” tab, to the right of the Discussion tab, visible only to people who are logged in. That would cut down on negative reactions on blogs, and downvoting could also be turned off there.
An alternative without programming changes would be biweekly “incisive open threads”, similar to Ozy’s race-and-gender open threads, and downvoting customarily tabooed in them. Try at least one?
An alternative without programming changes would be biweekly “incisive open threads”, similar to Ozy’s race-and-gender open threads
Feel free to start a “political thread”. Worst case: the thread gets downvoted.
However, there were already such threads in the past. Maybe you should google them, look at the debate and see what happened back then—because it is likely to happen again.
and downvoting customarily tabooed in them.
Not downvoting also has its own problems: genuinely stupid arguments remain visible (or can even get upvotes from their faction), and people can try to win the debate by flooding the opponent with many replies.
Okay, I do not know how to write it diplomatically, so I will be very blunt here to make it obvious what I mean: The current largest threat to political debate on LW is a group called “neoreactionaries”. They are something like “reinventing Nazis for clever contrarians”; kind of a cult around Michael Anissimov, who formerly worked at MIRI. (You can recognize them by quoting Moldbug and writing slogans like “Cthulhu always swims left”.) They do not give a fuck about politics being the mindkiller, but they like posting on LessWrong, because they like the company of clever people here, and they were recruited here, so they probably expect to recruit more people here. Also, LessWrong is pretty much the only debate forum on the whole internet that will not delete them immediately. If you start a political debate, you will find them all there; and they will not be there to learn anything, but to write about how “Cthulhu always swims left” and to try to recruit some LW readers. -- Eugine Nier was one of them, and he systematically downvoted all comments, including completely innocent comments outside of any political debate, of people who dared to disagree with him once somewhere. Which means that if a new user happened to disagree with him once, they usually soon found themselves with negative karma, and left LessWrong. No one knows how many potential users we may have lost this way.
I am afraid that if you start a political thread, you will get many comments about how “Cthulhu always swims left”, and anyone who reacts negatively will be accused of being a “progressive” (which in their language means: not a neoreactionary). If you ask for further explanation, you will either receive none, or a link to some long and obscurely written article by Moldbug. If you downvote them, they will create sockpuppets and upvote their comments back; if you disagree with them in debate, expect your total karma to magically drop by 100 points overnight.
Therefore I would prefer simply not doing this. But if you have to do it, give it a try and see for yourself. But please read the older political threads first.
However, there were already such threads in the past. Maybe you should google them, look at the debate and see what happened back then—because it is likely to happen again.
I am afraid that if you start a political thread, you will get many comments about how “Cthulhu always swims left”
Just out of curiosity, I looked at the latest politics thread in Vaniver’s list. Despite being explicitly about NRx, it contains only two references to “Cthulhu”, both by people arguing against NRx.
and anyone who reacts negatively will be accused of being a “progressive” (which in their language means: not a neoreactionary).
Rather anyone who isn’t sufficiently progressive gets called a neoreactionary.
Viliam_Bur is the person who gets messages asking him to deal with mass downvotes, so I am sympathetic to him not wanting us to attract more mass downvoters.
Not anymore, but yeah, this is where my frustration is coming from. Also, for every obvious example of voting manipulation, there are more examples of “something seems fishy, but there is no clear definition of ‘voting manipulation’ and if I go down this slippery slope, I might end up punishing people for genuine votes that I just don’t agree with, so I am letting it go”. But most of these voting games seem to come from one faction of LW users, which according to the surveys is just a tiny minority.
(When the “progressives” try to push their political agenda on LW—and I don’t remember them doing this recently—at least they do it by writing accusatory articles, and by complaining about LW and rationality on other websites, not by playing voting games. So their disruptions do not require moderator oversight.)
I don’t understand this word “was”—I just lost another 9+ karma paperclips to Eugine Nier.
Not to put too fine a point on it, but this seems less like a problem with political threads and more like a problem with someone driving most of the world’s population (especially the educated western population) away from existential risk prevention in general and FAI theory in particular.
E.g. I disagree with John Rawls’s veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward), but I don’t see either myself or anyone else getting mind-killingly tribal over it
It’s usually very hard to recognize when one gets mindkilled.
I disagree with John Rawls’s veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward), but I don’t see either myself or anyone else getting mind-killingly tribal over it. After all it is not about a party. It is not about an election program. It is not about power. It is about ideas.
Empirical evidence from studies suggests that it takes very little to get people who can use Bayes’ rule for abstract textbook problems to stop using it when faced with a political subject where they care about one side winning.
That’s what “mind-killing” is about. People on LW aren’t immune in that regard. I have plenty of times seen someone on LW make an argument on the subject of politics that he surely wouldn’t make on a less charged subject, because the argument structure doesn’t work.
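For contrast, the textbook computation itself is mechanical. A minimal sketch (the prior and the likelihoods are made-up numbers, purely illustrative):

```python
# Bayes' rule on an abstract "textbook" claim: prior P(H) = 0.3 that some
# policy claim H is true, and a study E with P(E|H) = 0.8, P(E|not H) = 0.2.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

print(posterior(0.3, 0.8, 0.2))  # 0.6315..., the update is just arithmetic
```

The point of the studies mentioned above is that the same people who can do this arithmetic stop doing it once H is a claim their side needs to be true.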
Yes, but Bayes’ rule is about predictions, e.g. would a policy do what it is expected to do, say, does raising the minimum wage lead to unemployment or not. Political philosophy is one meta-level higher than that, e.g. is unemployment bad or not, or is it unjust or not. While it is perhaps possible, and perhaps preferable, to turn all questions of political philosophy into predictive models (transforming some of them, and simply dissolving other questions, such as “is X fair?”, if they cannot be transformed), that is not done yet, and that is precisely what could be done here. Because where else?
When talking about issues of political philosophy you often tend to talk quite vaguely, too vaguely to be wrong. That’s not being mind-killed, but it’s also not productive.
If you want to decide whether unemployment is bad or not, then factual questions about unemployment matter a great deal.
How does unemployment affect the happiness of the unemployed?
To what extent do the unemployed use their time to do something useful for society, like volunteering?
First of all, there is the meta-level issue of whether to engage the original version or the pop version, as the first is better but the second is far, far more influential. This is an unresolved dilemma (same logic: should an atheist debate Ed Feser, or what religious folks actually believe?) and I’ll just try to hover in between.
A theory of justice does not simply describe a nice-to-have world. It describes ethical norms that are strong enough to warrant coercive enforcement. (I’m not even a libertarian, I just don’t like pretending democratic coercion is somehow not coercion.)
Rawls is asking us to imagine, e.g., what if we are born with a disability that requires a really large investment from society to let us live an okay life; let’s call the hypothetical accommodation Golden Wheelchair Ramps (GWR).
Depending on how rigorously we look at it: in the more “pop” version, Rawls is saying our pre-born self would want GWR built everywhere, even when it means that if we are born able and rich we are taxed through the nose to pay for it; in the more rigorous version, a 1% chance of being born with this disability would mean we want 1% of the GWRs built.
Now, this is all well if it is simply understood as the preferences of risk-averse people. After all, we face a real, true veil of ignorance after birth: we could become poor, disabled etc. at any time. It is easy to lose birth privileges, well, many of them at least. More risk-taking people will say: I don’t really want to pay for GWR, I am taking my gamble that I will be born rich and able, in which case I won’t need them, and I would rather keep that tax money. (This is a horribly selfish move, but Rawls set up the game so that it is only about fairness emerging out of rational selfishness; altruism is not required in this game, so I am just following the rules.)
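The disagreement between the risk-averse and the risk-taking player can be made concrete with a toy expected-utility model. All the numbers, the wealth levels, and the choice of log utility below are my own illustrative assumptions, not Rawls’s:

```python
import math

# Behind the veil: probability p of being born disabled. Funding GWR taxes
# every able person `tax` and raises the disabled person's effective wealth
# by `benefit`. All numbers are hypothetical.
def expected_utility(fund_gwr, u, p=0.01, tax=1.0, benefit=50.0,
                     able_wealth=100.0, disabled_wealth=10.0):
    if fund_gwr:
        return p * u(disabled_wealth + benefit) + (1 - p) * u(able_wealth - tax)
    return p * u(disabled_wealth) + (1 - p) * u(able_wealth)

# A risk-averse agent (concave log utility) votes to fund GWR...
print(expected_utility(True, math.log) > expected_utility(False, math.log))        # True
# ...while a risk-neutral agent (linear utility) declines the same deal.
print(expected_utility(True, lambda w: w) > expected_utility(False, lambda w: w))  # False
```

With these numbers the expected monetary cost of funding (0.99) exceeds the expected monetary gain (0.5), so the disagreement reduces entirely to the curvature of u; the move criticized below is to treat the concave choice as the only reasonable one.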
However, since it is a theory of justice, the preferences of risk-averse people are made mandatory, turned into social policy and enforced with coercion. And that is the issue.
Now, how could Rawls (or pop-Rawlsians) get away with that? By assuming that all reasonable people are risk-averse anyway. In other words, by turning risk aversion into a tacit norm. Instead of seeing it negatively as a vice, or neutrally as a preference, it is basically a virtue here. Now, we have a perfect name for turning timidity into a norm: it is called cowardice.
And I think my argument managed to avoid mind-killing in politics up to the last sentence, when I used a connotationally loaded word (cowardice). But at this point I had to, as I had casually remarked earlier that I feel this way about it, and now I had to explain why. The last sentence refers only to my feelings and is not an integral part of the argument; for the argument itself, just stop reading at “risk aversion should not be made into a norm and coercively enforced by calling it justice”.
Again, this is not part of the argument, but an explanation of my feelings: when I try to improve on one of my vices or weaknesses, and I see others almost treat it as a norm, I feel disgust. For example, willful stupidity disgusts me—I think this feeling may be common around here. But as I am also trying to work on my own cowardice, being too accepting of it also disgusts me.
How about no theory of justice? :) Philosophers should learn from scientists here: if you have no good explanation, none at all is more honest than a bad but seductive one. As a working hypothesis we could consider our hunger for justice and fairness an evolved instinct: a need, an emotion, a strong preference, something similar to the desire for social life or romantic love. It is simply one of the many needs a social engineer would aim to satisfy. The goal, then, is to make things “feel just” enough to tick that box.
As for “to each his own”: reading Rawls and Rawlsians I tend to sense a certain, how to put it, overly collective feeling. That there is one heavily interconnected world, that it is the property of all humankind, and that there is a collective, democratic decision-making process on how to make it suitable for all. So in this kind of world nothing is exempt from politics; nothing is “mine and mine alone and not to be touched by others”. The question is: is that a hard reality derived from the dynamics of a high-tech era, or just a preference? My preferences are way more individualistic than that. The attitude that everything is collective, to be shaped and formed in a democratic way, is IMHO way too often a power play by “sophists” who have a glib tongue, are good at rhetoric, and can easily shape democratic opinion. I am an atheist, but “culturally Catholic” enough to find the parable of the snake offering the fruit useful: a lot of damage can be done not only through violence, but also through glib, seductive persuasion, through 100% consent.
This is something not properly understood in the modern world: we understand how violence, oppression or outright fraud can be bad, but we do not really realize how much harm a silver tongue can cause without even outright lying. Because we already live in societies where silver-tongued intellectuals are the ruling class, they underplay their own power by lionizing consent and freedom of speech as institutions that can reasonably be considered to lead to good results.
I mean, for example, a truly realistic society would censor arguments that feel good. This sounds super weird: we are used to either complete freedom of speech or to censorship based on imputed harm or untruth. But censoring even true and useful ideas if they feel too good? Yes, as long as we understand censorship as a cost, not an impenetrable barrier: putting a cost on ideas that feel good would neutralize that feeling and thus enable us to judge the idea on a rational basis, without an affective bias.
Compare that to the real world and realize we are living in a sophists’ paradise where feel-good ideas gain power through democratic consent.
I would want a much more autist-friendly world than that, and the way I would imagine it is with some clear fences, Schelling points, whatnot: some kind of “this is mine, this is yours, and these things are not subject to the political process or democratic-collective consensus; only those and those things are subject to it”. This would be my own risk aversion: to have some minimal insurance against the losses “sophists” can inflict on me by persuading public opinion.
I don’t think Rawls makes that assertion. Rawls does presume some amount of risk aversion, but it seems highly inaccurate to say that Rawls asserts that “everyone is maximally risk-averse.”
Another danger is that political debates will attract users like Eugine Nier / Azathoth123.
I upvoted for this:
And, to further drive home the point, I’ll link to the ones I could easily find: Jan 2012, Aug 2012, Dec 2012, Jan 2013, Feb 2013, more Feb 2013, Oct 2013, Jun 2014, Nov 2014.
Y’know, you do sound mindkilled about NRx…
Viliam_Bur is the person who gets messages asking him to deal with mass downvotes, so I am sympathetic to him not wanting us to attract more mass downvoters.
Not anymore, but yeah, this is where my frustration is coming from. Also, for every obvious example of voting manipulation, there are more examples of “something seems fishy, but there is no clear definition of ‘voting manipulation’ and if I go down this slippery slope, I might end up punishing people for genuine votes that I just don’t agree with, so I am letting it go”. But most of these voting games seem to come from one faction of LW users, which according to the surveys is just a tiny minority.
(When the “progressives” try to push their political agenda on LW—and I don’t remember them doing this recently—at least they do it by writing accusatory articles, and by complaining about LW and rationality on other websites, not by playing voting games. So their disruptions do not require moderator oversight.)
I don’t understand this word “was”—I just lost another 9+ karma paperclips to Eugine Nier.
Not to put too fine a point on it, but this seems less like a problem with political threads and more like a problem with someone driving most of the world’s population (especially the educated western population) away from existential risk prevention in general and FAI theory in particular.
It’s usually very hard to recognize when one get’s mindkilled.
Empirical evidence from studies suggests that it needs very little to get people who can use Bayes rules for abstract textbook problems to avoid using it when faced with a political subject where they care about one side winning. That’s what “mind-killing” is about. People on LW aren’t immune on that regard. I have plenty of times seen that someone on LW makes an argument on the subject of politics that he surely wouldn’t make on a less charged subject because they argument structure doesn’t work.
Yes, but Bayesian rules are about predictions e.g. would a policy what it is expected to do e.g. does raising the min wage lead to unemployment or not, and political philosophy is one meta-level higher than that e.g. is unemployment bad or not, or is it unjust or not. While it is perhaps possible and perhaps preferable to turn all questions of political philosophy into predictive models, changing some of them and some other questions simply dissolved (i.e. is X fair?) if they cannot be, that is not done yet, and that is precisely what could be done here. Because where else?
When talking about issues of political philosophy, you often tend to talk quite vaguely, and end up too vague to be wrong. That’s not being mind-killed, but it’s also not productive.
If you want to decide whether unemployment is bad or not, then factual questions about unemployment matter a great deal. How does unemployment affect the happiness of the unemployed? To what extent do the unemployed use their time to do something useful for society, like volunteering?
Um, what? What’s wrong with risk-aversion? And what’s wrong with the Veil of Ignorance? How does that assumption make the concept disgusting?
First of all, there is the meta-level issue of whether to engage the original version or the pop version: the first is better, but the second is far, far more influential. This is an unresolved dilemma (by the same logic: should an atheist debate Ed Feser, or what religious folks actually believe?), and I’ll just try to hover in between.
A theory of justice does not simply describe a nice-to-have world. It describes ethical norms that are strong enough to warrant coercive enforcement. (I’m not even a libertarian; I just don’t like pretending democratic coercion is somehow not coercion.)
Rawls is asking us to imagine, e.g., being born with a disability that requires a great deal of investment from society for its bearers to live an okay life; let’s call the hypothetical measure Golden Wheelchair Ramps (GWR).
Depending on how rigorously we look at it: in the more “pop” version, Rawls is saying our pre-born self would want GWR built everywhere, even when it means that if we are born able and rich we are taxed through the nose to pay for them; in the more rigorous version, a 1% chance of being born with this disability would mean we want 1% of the GWRs built.
Now, this is all well if it is simply understood as the preferences of risk-averse people. After all, we face a real, true veil of ignorance after birth: we could become poor, disabled etc. at any time. It is easy to lose birth privileges, well, many of them at least. More risk-taking people will say: I don’t really want to pay for GWR; I am taking my gamble that I will be born rich and able, in which case I won’t need them and would rather keep that tax money. (This is a horribly selfish move, but Rawls set up the game so that fairness emerges out of rational selfishness alone; altruism is not required in this game, so I am just following the rules.)
However, since it is a theory of justice, the preferences of risk-averse people are made mandatory, turned into social policy, and enforced with coercion. And that is the issue.
Now, how could Rawls (or pop-Rawlsians) get away with that? By assuming that all reasonable people are risk-averse anyway. In other words, by turning risk aversion into a tacit norm. Instead of being seen negatively as a vice, or neutrally as a preference, it is basically treated as a virtue here. And we have a perfect name for turning timidity into a norm: it is called cowardice.
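The disagreement between the risk-taking gambler and the risk-averse (maximin) chooser behind the veil can be made concrete with a toy calculation. All numbers here are hypothetical illustrations, not anything from Rawls:

```python
# Toy model of the veil-of-ignorance choice over the hypothetical
# "Golden Wheelchair Ramps" (GWR) policy. All payoffs and the 1%
# probability are made-up illustrative numbers.

P_DISABLED = 0.01  # assumed chance of being born needing the ramps

# Lifetime welfare under each policy, by birth outcome (arbitrary units):
#              (welfare if able-bodied, welfare if disabled)
BUILD_GWR = (90, 60)   # everyone is taxed, but the disabled are provided for
NO_GWR    = (100, 5)   # no tax, but the disabled are left badly off

def expected_value(policy):
    """Risk-neutral evaluation: probability-weighted average welfare."""
    able, disabled = policy
    return (1 - P_DISABLED) * able + P_DISABLED * disabled

def maximin(policy):
    """Rawls-style evaluation: judge a policy by its worst outcome."""
    return min(policy)

# The risk-neutral gambler compares expected values and rejects the ramps:
assert expected_value(NO_GWR) > expected_value(BUILD_GWR)   # 99.05 > 89.7

# The maximin chooser compares worst cases and demands the ramps:
assert maximin(BUILD_GWR) > maximin(NO_GWR)                 # 60 > 5
```

The two rules disagree whenever the bad outcome is rare but severe, which is exactly why the argument above turns on whether risk aversion is a preference or a norm.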
And I think my argument managed to avoid mind-killing right up to the last sentence, where I used a connotationally loaded word (cowardice). At that point I had to, since I casually remarked earlier that I feel this way about the theory and now had to explain why. But the last sentence refers only to my feelings and is not an integral part of the argument; for the argument itself, just stop reading at “risk aversion should not be made into a norm and coercively enforced under the name of justice”.
Again, this is not part of the argument, but an explanation of my feelings: when I try to improve one of my vices or weaknesses and I see others all but treat it as a norm, I feel disgust. For example, willful stupidity disgusts me, and I think this feeling may be common around here. But as I am also trying to work on my own cowardice, being too accepting of it also disgusts me.
Thanks for the explanation. Do you have any alternatives?
How about no theory of justice? :) Philosophers should learn from scientists here: if you have no good explanation, none at all is more honest than a bad but seductive one. As a working hypothesis we could consider our hunger for justice and fairness an evolved instinct, a need, an emotion, a strong preference, something similar to the desire for social life or romantic love: simply one of the many needs a social engineer would aim to satisfy. The goal, then, is to make things “feel just” enough to satisfy that need.
There is not much “to each his own” here: reading Rawls and Rawlsians, I tend to sense a certain, how to put it, overly collective feeling. That there is one heavily interconnected world, that it is the property of all humankind, and that collective, democratic decision-making determines how to make it suitable for all. In this kind of world nothing is exempt from politics; nothing is “mine and mine alone, not to be touched by others”. The question is: is this a hard reality, derived from the necessities of a high-tech era? Or just a preference? My preferences are far more individualistic than that. The attitude that everything is collective, to be shaped and formed democratically, is IMHO all too often a power play by “sophists” who have a glib tongue, are good at rhetoric, and can easily shape democratic opinion. I am an atheist, but “culturally catholic” enough to find the parable of the snake offering the fruit useful: it is not only through violence, but also through glib, seductive persuasion, with 100% consent, that a lot of damage can be done.
This is something not properly understood in the modern world: we understand how violence, oppression or outright fraud can be bad, but we do not really realize how much harm a silver tongue can cause without even outright lying. Because we already live in societies where silver-tongued intellectuals are the ruling class, they underplay their own power by lionizing consent and freedom of speech as institutions that can reasonably be expected to lead to good results.
I mean, for example, a truly realistic society would censor arguments that feel good. This sounds super weird: we are used to either complete freedom of speech or to censorship based on imputed harm or untruth, but censoring even true and useful ideas if they feel too good? Yes, as long as we understand censorship as a cost, not an impenetrable barrier: putting a cost on feel-good ideas would neutralize that feeling and thus enable us to judge the idea on a rational basis, without an affective bias.
Compare that to the real world and realize we are living in a sophists’ paradise, where feel-good ideas gain power through democratic consent.
I would want a far more autist-friendly world than that, and the way I would imagine it is some clear fences, Schelling points, whatnot: some kind of “this is mine, this is yours, and these things are not subject to the political process or democratic-collective consensus; only those other things are.” This would be my own risk aversion: to have some minimal insurance against the losses “sophists” can inflict on me by persuading public opinion.
The problem is that Rawls asserts that everyone is maximally risk-averse.
I don’t think Rawls makes that assertion. Rawls does presume some amount of risk aversion, but it seems highly inaccurate to say that Rawls asserts that “everyone is maximally risk-averse.”