It might be hard at first to tell the difference, so I’m going to have to use some examples. I’d ask that you try and suspend any emotional reactions you have to the examples I chose and just look at which approach seems more rational.
Bullshit. You aren’t providing an example because it is “hard to tell the difference at first”. You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.
Contrast this with the Singularity Institute. A skeptic might well ask whether the Singularity is actually going to occur. Well, the SIAI FAQ addresses this, but only to summarily dismiss a couple of objections in a cursory paragraph (that evades most of the force of the objections). And that’s the closest the FAQ gets to any sort of skepticism; the rest of it is just a straight and confident summary that tries to persuade you of SIAI beliefs.
The FAQ on the website is not the place to signal humility and argue against your own conclusions. All that would demonstrate is naivety and incompetence. You are demanding something that should not exist. This isn’t to say that there aren’t valid criticisms to be made of SIAI and their FAQ. You just haven’t made them.
Which attitude seems more like a serious scientist? Which seems more like Uri Geller?
Am I the only person who is outright nauseated by the quality of reasoning in these recent mud-slinging posts by aaronsw? What I see is a hastily selected bottom line along the lines of “SingInst sux” or perhaps “SingInst folks are too arrogant” then whatever hastily conceived rhetoric he can think of to support it. The problem isn’t in the conclusions—it is that the arguments used either don’t support or outright undermine the conclusion.
Competent criticism is encouraged. But the mere fact that a post is intended to be critical or ‘cynical’ isn’t sufficient. It needs to meet some kind of minimum intellectual standard too. If it did not represent an appeal to the second-order-contrarians and was evaluated based on actual content this post would probably end up mildly negative, even in the discussion section.
You started with an intent to associate SIAI with self delusion
I see, he must be one of those innately evil enemies of ours, eh?
My current model of aaronsw is something like this: He’s a fairly rational person who’s a fan of GiveWell. He’s read about SI and thinks the singularity is woo, but he’s self-skeptical enough to start reading SI’s website. He finds a question in their FAQ where they fail to address points made by those who disagree, reinforcing the woo impression. At this point he could just say “yeah, they’re woo like I thought”. But he’s heard they run a blog on rationality, so he makes a post pointing out the self-skepticism failure in case there’s something he’s missing.
The FAQ on the website is not the place to signal humility and argue against your own conclusions.
Why not? I think it’s an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.
Overall, I thought aaronsw’s post had a much higher information to accusations ratio than your comment, for whatever that’s worth. As criticism goes his is pretty polite and intelligent.
Also, aaronsw is not the first person I’ve seen on the internet complaining about lack of self-skepticism on LW, and I agree with him that it’s something we could stand to work on. Or at least signalling self-skepticism; it’s possible that we’re already plenty self-skeptical and all we need to do is project typical self-skeptical attitudes.
For example, Eliezer Yudkowsky seems to think that the rational virtue of “humility” is about “taking specific actions in anticipation of your own errors”, not actually acting humble. (Presumably self-skepticism counts as humility by this definition.) But I suspect that observing how humble someone seems is a typical way to gauge the degree to which they take specific actions in anticipation of their own errors. If this is the case, it’s best for signalling purposes to actually act humble as well.
(I also suspect that acting humble makes it easier to publicly change your mind, since the status loss for doing so becomes lower. So that’s another reason to actually act humble.)
(Yes, I’m aware that I don’t always act humble. Unfortunately, acting humble by always using words like “I suspect” everywhere makes my comments harder to read and write. I’m not sure what the best solution to this is.)
FWIW, I don’t think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.
I agree with your model of aaronsw, and think wedrifid’s comments are over the top. But wedrifid is surely dead right about one important thing: aaronsw presented his article as “here is a general point about rationality, and I find that I have to think up some examples so here they are …” but it’s extremely obvious (especially if you look at a few of his other recent articles and comments) that that’s simply dishonest: he started with the examples and fitted the general point about rationality around them.
(I have no idea what sort of process would make someone as smart as aaronsw think that was a good approach.)
If you find yourself responding with tu quoque, then it is probably about time you re-evaluated the hypothesis that you are in mind-kill territory.
In this particular context, I think a more appropriate label would be the “Appeal to Come on, gimme a friggen’ break!”
The comment he was responding to was quite loaded with connotation, voluntarily or not, despite the “mostly true” and “arguably within the realm of likely possibilities” denotations that would make the assertion technically valid.
Being compared, even as a metaphorical hypothesis, to sophistry-flinging rhetoric-centric politicians is just about the most mind-killer-loaded subtext assault you could throw at someone.
it’s extremely obvious (especially if you look at a few of his other recent articles and comments) that that’s simply dishonest: he started with the examples and fitted the general point about rationality around them.
Considering he has changed the example, I find this unlikely. In any event, the post would appear to stand on its own.
Forget “innately evil”. In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).
The FAQ on the website is not the place to signal humility and argue against your own conclusions.
Why not? I think it’s an excellent place to do that. Signalling humility and arguing against your own conclusions is a good way to be taken seriously.
If you sincerely believe that the optimal use of a FAQ on an organisation’s website is to argue against your own conclusions, then I suggest that you have a fundamentally broken model of how the world—and people—work. It would be an attempt at counter-signalling that would fail abysmally. I’d actually feel vicarious embarrassment just reading it.
Far from being innately evil, aaronsw appears to be acting just like any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric, and the use of arguments as soldiers without applying those same arguments to his own position.
Forget “innately evil”. In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).
Hm. Maybe you’re right that I’m giving him too much credit just because he’s presenting a view unpopular on LW. (Although, come to think of it, having a double standard that favors unpopular conclusions might actually be a good idea.) In any case, it looks like he rewrote his post.
If you sincerely believe that the optimal use of a FAQ on an organisation’s website is to argue against your own conclusions, then I suggest that you have a fundamentally broken model of how the world—and people—work. It would be an attempt at counter-signalling that would fail abysmally. I’d actually feel vicarious embarrassment just reading it.
I think the optimal use of an FAQ is to give informed and persuasive answers to the questions it poses, and that an informed and persuasive answer will acknowledge, steel-man, and carefully refute opposing positions.
I’m not sure why everyone seems to think the answers to the questions in an FAQ should be short. FAQs are indexed by question, so it’s easy for someone to click on just those questions that interest them and ignore the rest. lukeprog:
the linear format is not ideal for analyzing such a complex thing as AI risk
...
What we need is a modular presentation of the evidence and the arguments, so that those who accept physicalism, near-term AI, and the orthogonality thesis can jump right to the sections on why various AI boxing methods may not work, while those who aren’t sure what to think of AI timelines can jump to those articles, and those who accept most of the concern for AI risk but think there’s no reason to assert humane values over arbitrary machine values can jump to the article on that subject.
I even suggested creating a question-and-answer site as a supplement to lukeprog’s proposed wiki.
I don’t fault SI much for having short answers in the current FAQ, but it seems to me that FAQs are ideal tools for presenting longer answers relative to other media.
One option is for each question in the FAQ to have a page dedicated to answering it in depth. Then the main FAQ page could give a one-paragraph summary of SI’s response along with a link to the longer answer. Maybe this would achieve the benefits of both a long and a short FAQ?
Some of which are quite dangerous. Either the JSTOR or PACER incidents could have killed any associated small nonprofit with legal bills. (JSTOR’s annual revenue is something like 53x that of SIAI.)
As fun as it is to watch Swartz’s activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.
As fun as it is to watch Swartz’s activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.
Wait, are you saying this aaronsw is the same guy as the guy currently being (tragically, comically) prosecuted for fraud? That’s kinda cool!
I don’t think it’s fair—I think it’s a bit motivated—to mention these as mysterious controversies and antics, without also mentioning that his actions could reasonably be interpreted as heroic. I was applauding when I read the JSTOR incident, and only wish he’d gotten away with downloading the whole thing and distributing it.
But there’s a difference between admiring the first penguin off the ice and noting that this is a good thing to do, and wanting to be that penguin or near enough that penguin that one might fall off as well. And this is especially true for organizations.
Even if so, one should still at least mention, in a debate on character, that the controversy in question just happened to be about an attempted heroic good deed.
Good grief. You said, ‘Aaron’s achievements of type X are really awesome and we could use more achievements on LW!’ Me: ‘But type X stuff is incredibly dangerous and could kill the website or SIAI, and it’s a little amazing Swartz has escaped both past X incidents with as apparently little damage as he has*.’ You: ‘zomg did you just seriously say Swartz posting to LW endangers SIAI?!’
Er, no, I didn’t, unless Swartz posting to LW is now the ‘actual track record of achievement’ that you are vaunting, which seems unlikely. I said his accomplishments like JSTOR or PACER (to name 2 specific examples, again, to make it impossible to misunderstand me, again) endanger any organization or website they are associated with.
EDIT: * Note that I wrote this comment several months before Aaron Swartz committed suicide due to the prosecution over the JSTOR incident.
I did once suggest a similar heuristic; but I feel the need to point out that there are many people in this world with track records of achievement, including, like, Mitt Romney or something, and that the heuristic is supposed to be, “Pay attention to rationalists with track records outside rationality”, e.g. Dawkins and Feynman.
Mitt Romney strikes me as a fairly poor example, since from my knowledge of his pre-political life, he seems like a strong rationalist. He looks much better on the instrumental rationality side than the epistemic rationality side, but I think I would rather hang out with Mormon management consultants than atheist waiters. (At least, I think I have more to learn from the former than the latter.)
1 seems true only in the sense that, in general, immorality is more attractive to bad decision-makers than to good decision-makers, but I would be reluctant to extend beyond that.
What if it had no effect on morality, but just made people more effective? As long as the sign bit on people’s actions is already usually positive, rationality would still be a good idea.
Well, if you don’t mind me answering a question with a question, more effective at what? If it just makes you more effective at getting what you want, whether or not what you want is the right thing to want, then it’s only helpful to the extent that you want the right things, and harmful to the extent that you want the wrong things. That’s nothing very great, and certainly nothing to spend a lot of time improving.
But if rationality can make you want, and make you more effective at getting, good things only, then it’s an inestimable treasure, and worth a lifetime’s pursuit. The ‘morally good’ seems to me the right word for what is in every possible case good, and never bad.
He looks much better on the instrumental rationality side than the epistemic rationality side, but I think I would rather hang out with Mormon management consultants than atheist waiters.
He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.
He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.
Having only interacted with his public persona, I am unwilling to comment on his private beliefs.
His professional life gives a great example of looking into the dark, though, in insisting on a “heads I win tails you lose” deal with Bain to start Bain Capital. I don’t know if that was because he was generally cautious or because he stopped and asked “what if our theories are wrong?”, but the latter seems more likely to me.
He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.
Having only interacted with his public persona, I am unwilling to comment on his private beliefs.
The public persona, that which you can actually interact with, is the only one that matters for the purpose of choosing whether to believe what people say. If this person (I assume he is an American political figure of some sort?) happens to be a brilliant epistemic rationalist merely pretending convincingly that he is utterly (epistemically) irrational, then you still shouldn’t pay any attention to what he says.
I agree that, in general, public statements by politicians should not be taken very seriously, and Romney is no exception. I think the examples of actions he’s taken, particularly in his pre-political life, are more informative.
I assume he is an American political figure of some sort?
Yes. Previously, he was a management consultant who helped develop the practice of buying companies explicitly to reshape them, which was a great application of “wait, if we believe that we actually help companies, then we’re in a perfect position to buy low and sell high.”
I fail to see how finding more already-rationalists with a track record would benefit LW specifically*, unless those individuals are public figures of some renown that can attract public attention to LW and related organisations or can directly contribute content, insight and training methods. Perhaps I’m just missing some evidence here, but my priors place the usefulness of already-rationalists within the same error margin as non-rationalists who are public figures that would bother to read / post on LW.
Paying attention to (rationalists with track records outside rationality)** seems like it would be mostly useful for demonstrating to aware but uninterested/unconvinced people that training rationality and “raising the sanity waterline” are effective strategies that do have real-world usefulness outside “philosophical”*** word problems.
* Any more than, say, anyone else or people with any visible track record who are also public figures.
** Perhaps someone could coin a term for this? It seems like a personspace subgroup relevant enough to have a less annoying label. Perhaps something playing on Beisutsukai or a variation of the Masked Hero imagery?
*** Used here in the layman’s definition of “philosophical”: airy, cloud-head, idealist, based on pretty assumptions and “clean” models where everything just works the way it’s “supposed to” rather than how-things-are-in-real-life. AKA the “Philosophy is a stupid waste of time” view.
I think the idea here is to find people who have found the types of rationality that lead to actual life success—found a replicable method for succeeding at things. Such an individual is expected to be a rationalist and to have a track record of achievement.
See, even as no fan of his whatsoever, I suspect Mitt Romney is a very smart fellow I would be foolish to pay no heed to in the general case, and who probably has a fair bit of tried and tested knowledge he’s gained in the pursuit of thinking about thinking. Even given qualms I have about the quality of some things he’s been quoted as saying of late, but then presidential campaigns select for bullshit.
My filtering criterion (maybe flawed) is “people whose biographies are still read after a few decades”. This way a “non-rationalist” like Churchill gets read; looking for “rationalists” will end up selecting people too similar to yourself to learn interesting things.
Here’s one: Attracting sufficient attention from people with track records of achievements for said people to begin engaging in active discussion that will further improve LW and related endeavors, namely through public exposure and bringing fresh outside perspective. Example: aaronsw
The whole point is in the detail about getting more people into it, and admitting that stuff is wrong so we can make it less so.
You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.
No, I’d love another example to use so that people don’t have this kind of emotional reaction. Please suggest one if you have one.
UPDATE: I thought of a better example on the train today and changed it.
You aren’t providing an example because it is “hard to tell the difference at first”. You started with an intent to associate SIAI with self delusion and then tried to find a way to package it as some kind of rationality related general point.
This is needlessly inflammatory, far too overconfident and, as it turned out, wrong. Making deductions about intent from someone’s writing is not nearly as easy as you seem to think. Making wild accusations of nefarious attempts to insert subtext critical of you and your interests—indeed all our interests—makes you look hostile, paranoid and irrational, and for good reason.
This is needlessly inflammatory, far too overconfident and, as it turned out, wrong. Making deductions about intent from someone’s writing is not nearly as easy as you seem to think. Making wild accusations of nefarious attempts to insert subtext critical of you and your interests—indeed all our interests—makes you look hostile, paranoid and irrational, and for good reason.
To put it as politely as I can manage, this reply, being a reply to something so many months old, strikes me as odd. If my memory from that far back serves me (and I don’t expect reliability of anyone’s memory over that period) this post was one of a series of three within the space of a week by the author with a common theme.
The comment you are replying to is also in response to a post that has been fundamentally edited in response to this (and other) feedback. Apart from making judgement of the appropriateness of a reply difficult this is also a rare example of someone (aaronsw) being able to update and improve his contribution in response to feedback. Once again it is too long ago for me to remember whether I expressed appreciation and respect for aaronsw’s willingness to improve his post but I recall experiencing that and evaluating whether aaronsw would consider such a comment to be positive or merely condescending.
as it turned out, wrong
That isn’t something you demonstrated by making that link.
Making wild accusations of nefarious attempts to insert subtext critical of you and your interests
I don’t seem to be talking about subtext critical of me or my interests at all. If you use Wei_Dai’s user comments script and sort by top posts you might observe contributions that are a mix of support of SingInst and criticism of SingInst, depending on my evaluation of the object level issue in question. (The ‘top contributions’ sample is of course biased towards criticisms since such criticisms would be in response to, for example, Eliezer and so the conversations get more attention.) The point of this is that it is utterly absurd to be accusing me of making biased hysterical defenses of my personal interests when they aren’t my interests at all.
I endorse the grandparent wholeheartedly as a response to the version of the post that it was made to, in its temporal context, and hope that others will make similar contributions fighting against bullshit so that I can upvote them. However, since it is so long ago and especially since the post has since been improved I consider it rather counterproductive to draw attention to it.
I may have phrased that too strongly. However, I do think that your deduction regarding the original post—that it was written as an excuse to bash SI—is incompatible with the evidence as it stands and as it stood then, and should not have been presented in such a hostile manner. I appreciate this was some time ago, but it does seem like a good chance for calibration and so on. I know I have made similar mistakes.
To put it as politely as I can manage, this reply, being a reply to something so many months old, strikes me as odd. If my memory from that far back serves me (and I don’t expect reliability of anyone’s memory over that period) this post was one of a series of three within the space of a week by the author with a common theme.
I was, in fact, reading through old comments of his, in order to get a better idea of his positions, contributions, and, well, possible troll status. I do this pretty often, and indeed I regularly reply to comments made years ago. No-one else has objected. I know people sometimes change their minds, of course, but since you have not changed your mind (or so you claim) I see no reason not to criticize the position you held.
However, since it is so long ago and especially since the post has since been improved I consider it rather counterproductive to draw attention to it.
Why, are you worried that people will fail to realize this was in response to an earlier version and downvote you for misquoting the post?
I stated a belief with minimal decorative fluff (“I think this implies that most people who would read this comment are of the opinion that...”) in an attempt to explain the reaction.
Independently, I also believe the support could have been better phrased.
Thanks for pointing this out though. I’ll try to make it a point to voice agreement and positively reinforce agreement when it’s not borne of confirmation bias.
FWIW, I don’t think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.
I like the way you phrase it (the “lukeprog” charity). Probably true at that.
(I have no idea what sort of process would make someone as smart as aaronsw think that was a good approach.)
Well, he is heavily involved in the US politics scene, and may have picked up bad habits like focusing on rhetoric over facts, etc.
Unlike, say, wedrifid, whose highly-rated comment was just full of facts!
...
Being compared, even as a metaphorical hypothesis, to sophistry-flinging rhetoric-centric politicians is just about the most mind-killer-loaded subtext assault you could throw at someone.
How could I have phrased the point better? Or should I have dropped it altogether?
Considering he has changed the example, I find this unlikely. In any event, the post would appear to stand on its own.
The fact that he changed the example doesn’t seem to me very strong evidence that the example wasn’t originally the motivation for writing the article.
I made no comment on whether the post stands well on its own; only on wedrifid’s accusation of dishonesty.
Well, he could just be very good at it, I suppose. I had a much lower prior anyway, so I may be misjudging the strength of the evidence here.
I made no such claim. I do claim that the specific quote I was replying to is a transparent falsehood. Do you actually disagree?
Far from being innately evil aaronsw appears to be acting just like any reasonably socially competent human with some debate skills can be expected to act when they wish to persuade people of something. It just so happens that doing so violates norms against bullshit, undesired forms of rhetoric and the use of arguments as soldiers without applying those same arguments to his own position.
Forget “innately evil”. In fact, forget about the author entirely. What matters is that the post and the reasoning contained therein is below the standard I would like to see on lesswrong. Posts like it need to be weeded out to make room for better posts. This includes room for better reasoned criticisms of SIAI or lesswrong, if people are sufficiently interested (whether authored by aaronsw or someone new).
If you sincerely believe that the optimal use of a FAQ on an organisations website is to argue against your own conclusions then I suggest that you have a fundamentally broken model of how the world—and people—work. It would be an attempt at counter-signalling that would fail abysmally. I’d actually feel vicarious embarrassment just reading it.
Hm. Maybe you’re right that I’m giving him too much credit just because he’s presenting a view unpopular on LW. (Although, come to think of it, having a double standard that favors unpopular conclusions might actually be a good idea.) In any case, it looks like he rewrote his post.
I think the optimal use of an FAQ is to give informed and persuasive answers to the questions it poses, and that an informed and persuasive answer will acknowledge, steel-man, and carefully refute opposing positions.
I’m not sure why everyone seems to think the answers to the questions in an FAQ should be short. FAQs are indexed by question, so it’s easy for someone to click on just those questions that interest them and ignore the rest. lukeprog:
I even suggested creating a question-and-answer site as a supplement to lukeprog’s proposed wiki.
I don’t fault SI much for having short answers in the current FAQ, but it seems to me that FAQs are ideal tools for presenting longer answers relative to other media.
One option is for each question in the FAQ to have a page dedicated to answering it in depth. Then the main FAQ page could give a one-paragraph summary of SI’s response along with a link to the longer answer. Maybe this would achieve the benefits of both a long and a short FAQ?
He’s also someone with an actual track record of achievement. Could we do with some of those on LW?
Some of which are quite dangerous. Either the JSTOR or PACER incidents could have killed any associated small nonprofit with legal bills. (JSTOR’s annual revenue is something like 53x that of SIAI.)
As fun as it is to watch Swartz’s activities (from a safe distance), I would not want such antics conducted on a website I enjoy reading and would like to see continue.
Wait, are you saying this aaronsw is the same guy as the guy currently being (tragically, comically) prosecuted for fraud? That’s kinda cool!
What are the JSTOR and PACER incidents?
http://en.wikipedia.org/wiki/Aaron_Swartz#Controversies
I don’t think it’s fair—I think it’s a bit motivated—to mention these as mysterious controversies and antics, without also mentioning that his actions could reasonably be interpreted as heroic. I was applauding when I read the JSTOR incident, and only wish he’d gotten away with downloading the whole thing and distributing it.
I agree they were heroic and good things, and I was disgusted when I looked into JSTOR’s financial filings (not that I was happy with the WMF either).
But there’s a difference between admiring the first penguin off the ice and noting that this is a good thing to do, and wanting to be that penguin or near enough that penguin that one might fall off as well. And this is especially true for organizations.
Even if so, one should still at least mention, in a debate on character, that the controversy in question just happened to be about an attempted heroic good deed.
Did you really just assert that having Swartz post to LessWrong puts SIAI at serious legal and financial risk?
Good grief. You said, ‘Aaron’s achievements of type X are really awesome and we could use more achievements on LW!’ Me: ‘But type X stuff is incredibly dangerous and could kill the website or SIAI, and it’s a little amazing Swartz has escaped both past X incidents with as apparently little damage as he has*.’ You: ‘zomg did you just seriously say Swartz posting to LW endangers SIAI?!’
Er, no, I didn’t, unless Swartz posting to LW is now the ‘actual track record of achievement’ that you are vaunting, which seems unlikely. I said his accomplishments like JSTOR or PACER (to name 2 specific examples, again, to make it impossible to misunderstand me, again) endanger any organization or website they are associated with.
EDIT: * Note that I wrote this comment several months before Aaron Swartz committed suicide due to the prosecution over the JSTOR incident.
I did once suggest a similar heuristic; but I feel the need to point out that there are many people in this world with track records of achievement, including, like, Mitt Romney or something, and that the heuristic is supposed to be, “Pay attention to rationalists with track records outside rationality”, e.g. Dawkins and Feynman.
Mitt Romney strikes me as a fairly poor example, since from my knowledge of his pre-political life, he seems like a strong rationalist. He looks much better on the instrumental rationality side than the epistemic rationality side, but I think I would rather hang out with Mormon management consultants than atheist waiters. (At least, I think I have more to learn from the former than the latter.)
If: 1) being more rational makes you more moral
2) he’s saying things during this campaign he doesn’t really believe
3) dishonesty, especially dishonesty in the context of a political campaign, is immoral
Then: His recent behavior is evidence against his rationality, in the same sense that his pre-political success is evidence for it.
1 seems true only in the sense that, in general, immorality is more attractive to bad decision-makers than to good decision-makers, but I would be reluctant to extend beyond that.
This is probably not something we should argue about here, but I think the whole project of rationality stands or falls on the truth of premise 1.
Why?
What if it had no effect on morality, but just made people more effective? As long as the sign bit on people’s actions is already usually positive, rationality would still be a good idea.
Well, if you don’t mind me answering a question with a question: more effective at what? If it just makes you more effective at getting what you want, whether or not what you want is the right thing to want, then it’s only helpful to the extent that you want the right things, and harmful to the extent that you want the wrong things. That’s nothing very great, and certainly nothing to spend a lot of time improving.
But if rationality can make you want, and make you more effective at getting, good things only, then it’s an inestimable treasure, and worth a lifetime’s pursuit. The ‘morally good’ seems to me the right word for what is in every possible case good, and never bad.
He could expect to do enough good as president to outweigh that.
I doubt it, though.
He has no epistemic rationality to speak of. He can convince himself that anything is true, no matter what the evidence.
Having only interacted with his public persona, I am unwilling to comment on his private beliefs.
His professional life gives a great example of looking into the dark, though, in insisting on a “heads I win tails you lose” deal with Bain to start Bain Capital. I don’t know if that was because he was generally cautious or because he stopped and asked “what if our theories are wrong?”, but the latter seems more likely to me.
The public persona, that which you can actually interact with, is the only one that matters for the purpose of choosing whether to believe what people say. If this person (I assume he is an American political figure of some sort?) happens to be a brilliant epistemic rationalist merely pretending convincingly that he is utterly (epistemically) irrational, then you still shouldn’t pay any attention to what he says.
I agree that, in general, public statements by politicians should not be taken very seriously, and Romney is no exception. I think the examples of actions he’s taken, particularly in his pre-political life, are more informative.
Yes. Previously, he was a management consultant who helped develop the practice of buying companies explicitly to reshape them, which was a great application of “wait, if we believe that we actually help companies, then we’re in a perfect position to buy low and sell high.”
I fail to see how finding more already-rationalists with a track record would benefit LW specifically*, unless those individuals are public figures of some renown that can attract public attention to LW and related organisations or can directly contribute content, insight and training methods. Perhaps I’m just missing some evidence here, but my priors place the usefulness of already-rationalists within the same error margin as non-rationalists who are public figures that would bother to read / post on LW.
Paying attention to (rationalists with track records outside rationality)** seems like it would be mostly useful for demonstrating to aware but uninterested/unconvinced people that training rationality and “raising the sanity waterline” are effective strategies that do have real-world usefulness outside “philosophical”*** word problems.
* Any more than, say, anyone else or people with any visible track record who are also public figures.
** Perhaps someone could coin a term for this? It seems like a personspace subgroup relevant enough to have a less annoying label. Perhaps something playing on Beisutsukai or a variation of the Masked Hero imagery?
*** Used here in the layman’s definition of “philosophical”: airy, cloudy-headed, idealist, based on pretty assumptions and “clean” models where everything just works the way it’s “supposed to” rather than how things are in real life. AKA the “Philosophy is a stupid waste of time” view.
I think the idea here is to find people who have found the types of rationality that lead to actual life success—found a replicable method for succeeding at things. Such an individual is expected to be a rationalist and to have a track record of achievement.
See, even as no fan of his whatsoever, I suspect Mitt Romney is a very smart fellow I would be foolish to pay no heed to in the general case, and who probably has a fair bit of tried and tested knowledge he’s gained in the pursuit of thinking about thinking. Even given qualms I have about the quality of some things he’s been quoted as saying of late, but then presidential campaigns select for bullshit.
There are too many accomplished people in the world contradicting each other to not filter it somehow.
My filtering criterion (maybe flawed) is “people whose biographies are still read after a few decades”. This way a “non-rationalist” like Churchill gets read; looking for “rationalists” will end up selecting people too similar to yourself for you to learn anything interesting.
Here’s one: Attracting sufficient attention from people with track records of achievements for said people to begin engaging in active discussion that will further improve LW and related endeavors, namely through public exposure and bringing fresh outside perspective. Example: aaronsw
The whole point is in the detail about getting more people into it, and admitting that stuff is wrong so we can make it less so.
Less… Wrong.
No, I’d love another example to use so that people don’t have this kind of emotional reaction. Please suggest one if you have one.
UPDATE: I thought of a better example on the train today and changed it.
Upvoted the main article due to this.
This is needlessly inflammatory, far too overconfident and, as it turned out, wrong. Making deductions about someone’s intent from their writing is not nearly as easy as you seem to think. Making wild accusations of nefarious attempts to insert subtext critical of you and your interests (indeed, all our interests) makes you look hostile, paranoid and irrational, and for good reason.
To put it as politely as I can manage, this reply, being a reply to something so many months old, strikes me as odd. If my memory from that far back serves me (and I don’t expect reliability of anyone’s memory over that period) this post was one of a series of three within the space of a week by the author with a common theme.
The comment you are replying to is also in response to a post that has been fundamentally edited in response to this (and other) feedback. Apart from making judgement of the appropriateness of a reply difficult, this is also a rare example of someone (aaronsw) being able to update and improve his contribution in response to feedback. Once again, it is too long ago for me to remember whether I expressed appreciation and respect for aaronsw’s willingness to improve his post, but I recall experiencing that and weighing whether aaronsw would consider such a comment to be positive or merely condescending.
That isn’t something you demonstrated by making that link.
I don’t seem to be talking about subtext critical of me or my interests at all. If you use Wei_Dai’s user comments script and sort by top posts, you might observe contributions that are a mix of support of SingInst and criticism of SingInst, depending on my evaluation of the object-level issue in question. (The ‘top contributions’ sample is of course biased towards criticisms, since such criticisms would be in response to, for example, Eliezer, and so those conversations get more attention.) The point of this is that it is utterly absurd to be accusing me of making biased hysterical defenses of my personal interests when they aren’t my interests at all.
I endorse the grandparent wholeheartedly as a response to the version of the post that existed when it was made, and hope that others will make similar contributions fighting against bullshit so that I can upvote them. However, since it was so long ago, and especially since the post has since been improved, I consider it rather counterproductive to draw attention to it.
I may have phrased that too strongly. However, I do think that your deduction regarding the original post, that it was written as an excuse to bash SI, is incompatible with the evidence as it stands and as it stood then, and should not have been presented in such a hostile manner. I appreciate this was some time ago, but it does seem like a good chance for calibration and so on. I know I have made similar mistakes.
I was, in fact, reading through old comments of his, in order to get a better idea of his positions, contributions, and, well, possible troll status. I do this pretty often, and indeed I regularly reply to comments made years ago. No-one else has objected. I know people sometimes change their minds, of course, but since you have not changed your mind (or so you claim) I see no reason not to criticize the position you held.
Why, are you worried that people will fail to realize this was in response to an earlier version and downvote you for misquoting the post?
Good god man, this!
That’s what the “Thumbs up” button is there for, hence the thumbs down. In case you were wondering.
NO. Why our kind can’t cooperate.
I stated a belief with minimal decorative fluff (“I think this implies that most people who would read this comment are of the opinion that...”) in an attempt to explain the reaction.
Independently, I also believe the support could have been better phrased.
Thanks for pointing this out though. I’ll try to make it a point to voice agreement and positively reinforce agreement when it’s not borne of confirmation bias.
I just wanted to state my appreciation for this comment… someone FINALLY called out aaronsw for what he is doing.