I sympathize with the overall thrust of this comment, that we should be skeptical of LW methods and results. I see lots of specific problems with the comment itself, but I’m not sure if it’s worth pointing them out. Do the upvoters also see these problems, but just think that the overall point should be made?
To give a couple of examples, take the first and last sentences:
I think the recent surge in meetups shows that people are mainly interested to group with other people who think like them rather than rationality in and of itself.
I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups? Why?
Which wouldn’t even be necessary if we were dealing with interested researchers rather than people who ask others to take their ideas seriously.
(I guess “interested” should be “disinterested” here.) Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?
I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups?
That really is a weak point I made there. It was not meant to be an argument, just a guess. I also don’t want to accuse people of being more interested in creating a community in and of itself than in a community with the overall aim of seeking truth. I apologize for hinting at that possibility.
Let me expand on how I came to make that statement in the first place. I have always been more than a bit skeptical about the reputation system employed on lesswrong. I think that it might unconsciously lead people to agree, because even slight disagreement can accumulate into negative karma over time. And even if, on some level, you don’t care about karma, each downvote gives you an incentive not to voice that opinion the next time, or to change how you present it. I have noticed that I myself, although I believe I don’t care much about my rank within this community, have become increasingly reluctant to say anything that I know will lead to negative karma. This works insofar as it maximizes the content that the collective intelligence of everyone on lesswrong is interested in. But that content might be biased and, to some extent, dishonest. Are we really good at collectively deciding what we want to see more of, just by clicking two buttons that increase a reward number? I am skeptical.
Now if you take into account my admittedly speculative opinion above, you can probably guess what I think about the strong social incentives that might result from face-to-face meetings between people who are supposed to be interested in refining the art of rationality and learning about the nature of reality, rather than in their own subjective opinions and biases.
(I guess “interested” should be “disinterested” here.) Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?
I wasn’t clear enough; I didn’t expect the comment to get that much attention (which, I hope, disproves some of my points above). What I meant by “interested researchers rather than people who ask others to take their ideas seriously” is the difference between someone who studies a topic out of academic curiosity and someone who writes about a topic to convince people to contribute money to his charity. I don’t know how to say that without sounding rude or sneaking in connotations. Yes, lesswrong was created to support the mitigation of risks from AI (I can expand on this if you like; also see my comment here). Now this obviously sounds as if I wanted to imply that there might be motives involved other than trying to save humanity. I am not saying that, although there might be subconscious motivations those people aren’t even aware of themselves. I am just saying that it is one more reason for the caution that I perceive to be missing.
To be clear, I want SIAI to get enough support to research risks from AI. I am just saying that I would love to see a bit more caution when it comes to some of the overall conclusions. Taking ideas seriously is a good thing, to a reasonable extent. But my perception is that some people here hold unjustifiably strong beliefs that might be logical implications of some well-founded methods; I would be careful not to take them too far.
Please let me know if you want me to elaborate on any of the specific problems you mentioned.
It is the rare researcher who studies a topic solely out of academic curiosity. Grant considerations tend to put heavy pressure on you to produce results, and quick, dammit, so you’d better study something that will let you write a paper or two.
Yes, you should watch out for bias in blog posts written by people you don’t know who are potentially trying to sell you their charity. No, you should not relax that watchfulness when the author of whatever you’re reading has a Ph.D.
Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?
Yes, but lesswrong is missing the ecosystem of dissenting, mutually exclusive opinions and peer review. Here we have only one side that cares strongly about certain issues, while those who care only about other issues tend to keep quiet so as not to offend those who care strongly. That isn’t the case in academic circles. And since those who care strongly refuse to enter the academic landscape, this won’t change either.
I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups? Why?
It doesn’t follow; I was wrong there. I meant to provoke three questions: 1) Are people joining this community mainly because they are interested in rationality and truth, or in other people who think like them? 2) Are meetups instrumental in refining rationality and seeking truth, or are they mainly for socializing with other people? 3) Are people who attend meetups strong enough to withstand the social pressure when it comes to disagreement about explosive issues like risks from AI?
You can care about an issue and dissent.