I hold this suspicion with about 30% confidence, which is enough to worry me, since I mostly identify as a rationalist. What do you think about all this? How confident are you?
I think the recent surge in meetups shows that people are mainly interested in grouping with other people who think like them rather than in rationality in and of itself. There is too much unjustified agreement here to convince me that people really care mostly about superior beliefs. Sure, the available methods might not allow much disagreement about their conclusions, but what about doubt in the very methods that are used to evaluate what to do?
Most of the posts on LW are not wrong, but many exhibit some sort of extraordinary idea. Those ideas seem mostly sound, but if you take all of them together and arrive at something really weird, I think some skepticism is appropriate (at least more than can currently be found).
Here is an example:
1.) MWI
The many-worlds interpretation seems mostly justified, probably the rational choice among all available interpretations (except maybe Relational Quantum Mechanics). How to arrive at this conclusion is also a good exercise in refining the art of rationality.
2.) Belief in the Implied Invisible
If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X), since P(X∧Y) = P(Y|X)·P(X). In other words, logical implications do not have to pay rent in future anticipations.
3.) Decision theory
Decision theory is an important field of research. We can learn a lot by studying it.
4.) Intelligence explosion
Arguments in favor of an intelligence explosion, made by people like I.J. Good, are food for thought and superficially sound. This line of reasoning should be taken seriously, and further research should be conducted to examine that possibility.
Each of those points (1, 2, 3, 4) is valuable and should be taken seriously. But once you build conjunctive arguments out of those points (1∧2∧3∧4), you should be careful about the credence of each point and the probability of their conjunction. Even if all of them seem to provide valuable insights, the cost of an extraordinary conclusion implied by their conjunction might outweigh the benefit of each individual belief if the overall conclusion is just slightly wrong.
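To make the conjunction worry concrete, here is a minimal sketch with invented credences (they are not anyone's actual estimates): a conclusion that requires all four points inherits at best the credence of the weakest one, and plausibly much less.

```python
# Toy credences, invented purely for illustration.
from math import prod

credences = {
    "1) MWI": 0.80,
    "2) Belief in the Implied Invisible": 0.90,
    "3) Decision theory carries over to exotic cases": 0.85,
    "4) Intelligence explosion": 0.70,
}

# P(1 AND 2 AND 3 AND 4) can never exceed the weakest conjunct...
upper_bound = min(credences.values())

# ...and if the points were independent, it would be their product.
# (Independence is itself an assumption; correlations shift this.)
product = prod(credences.values())

print(f"weakest single point: {upper_bound:.2f}")    # 0.70
print(f"conjunction if independent: {product:.2f}")  # ~0.43
```

Even with each point individually probable, acting on the conjunction means, under these made-up numbers, acting on something less likely than a coin flip.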
An example of where 1∧2∧3∧4 might lead:
“We have to take over the universe to save it by making the seed of an artificial general intelligence that is undergoing explosive recursive self-improvement extrapolate the coherent volition of humanity, while acausally trading with other superhuman intelligences across the multiverse.”
or
“We should walk into death camps if it has no effect on the probability of being blackmailed.”
Careful! The question is not whether our results are sound but whether the very methods we used to come up with those results are sufficiently trustworthy. That kind of examination does not happen enough on LW; the methods themselves go unexamined, even though they lead to all kinds of problems, like Pascal’s Mugging or the ‘Infinitarian Challenge to Aggregative Ethics’. Neither are the motives and trustworthiness of the people who make those claims examined. Which wouldn’t even be necessary if we were dealing with interested researchers rather than people who ask others to take their ideas seriously.
I sympathize with the overall thrust of this comment, that we should be skeptical of LW methods and results. I see lots of specific problems with the comment itself, but I’m not sure if it’s worth pointing them out. Do the upvoters also see these problems, but just think that the overall point should be made?
To give a couple of examples, take the first and last sentences:
I think the recent surge in meetups shows that people are mainly interested in grouping with other people who think like them rather than in rationality in and of itself.
I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups? Why?
Which wouldn’t even be necessary if we were dealing with interested researchers rather than people who ask others to take their ideas seriously.
(I guess “interested” should be “disinterested” here.) Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?
I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups?
That really is a weak point I made there. It was not meant as an argument, just a guess. I also don’t want to accuse people of being overly interested in creating a community in and of itself rather than a community with the overall aim of seeking truth. I apologize for hinting at that possibility.
Let me expand on how I came to make that statement in the first place. I have always been more than a bit skeptical of the reputation system employed on lesswrong. I think it might unconsciously lead people to agree, because even slight disagreement can accumulate into negative karma over time. And even if, on some level, you don’t care about karma, every downvote is an incentive not to voice that opinion the next time, or to change how you present it. I have noticed that I myself, although I believe I don’t care much about my rank within this community, have become increasingly reluctant to say things that I know will draw negative karma.
This works insofar as it maximizes the content that the collective intelligence of all the people on lesswrong is interested in. But that content might be biased and, to some extent, dishonest. Are we really good at collectively deciding what we want to see more of just by clicking two buttons that increase a reward number? I am skeptical.
Now if you take into account my admittedly speculative opinion above, you can already guess what I think about the strong social incentives that might result from face-to-face meetings between people who are supposed to be interested in refining the art of rationality and learning about the nature of reality, rather than in their own subjective opinions and biases.
(I guess “interested” should be “disinterested” here.) Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?
I wasn’t clear enough; I didn’t expect the comment to get this much attention (which, I hope, disproves some of my points above). What I meant by “interested researchers rather than people who ask others to take their ideas seriously” is the difference between someone who studies a topic out of academic curiosity and someone who writes about a topic to convince people to contribute money to his charity. I don’t know how to say that without sounding rude or sneaking in connotations. Yes, lesswrong was created to support the mitigation of risks from AI (I can expand on this if you like; also see my comment here). Now this obviously sounds as if I wanted to imply that there might be motives involved other than trying to save humanity. I am not saying that, although there might be subconscious motivations those people aren’t even aware of themselves. I am just saying that it is another point that adds to the caution I perceive to be missing.
To be clear, I want the SIAI to get enough support to research risks from AI. I am just saying that I would love to see a bit more caution when it comes to some of the overall conclusions. Taking ideas seriously is a good thing, to a reasonable extent. But my perception is that some people here hold unjustifiably strong beliefs that might be logical implications of some well-founded methods, and I would be careful not to take them too far.
Please let me know if you want me to elaborate on any of the specific problems you mentioned.
It is the rare researcher who studies a topic solely out of academic curiosity. Grant considerations tend to apply heavy pressure to produce results, and quick, dammit, so you’d better study something that will let you write a paper or two.
Yes, you should watch out for bias in blog posts written by people you don’t know who are potentially trying to sell you their charity. No, you should not relax that watchfulness when the author of whatever you’re reading has a Ph.D.
Given that except for a few hobbyists (like myself), all researchers depend on others taking their ideas seriously for their continued livelihoods, how does this sentence make sense?
Yes, but lesswrong is missing the ecosystem of dissenting, mutually exclusive opinions and peer review. Here we have only one side that cares strongly about certain issues, while those who care only about other issues tend to keep quiet so as not to offend those who care strongly. That isn’t the case in academic circles. And since those who care strongly refuse to enter the academic landscape, this won’t change either.
I don’t see how this follows. If people were interested in rationality itself, would they be less likely to organize or attend meetups? Why?
It doesn’t follow; I was wrong there. I meant to provoke three questions: 1.) Are people joining this community mainly because they are interested in rationality and truth, or in other people who think like them? 2.) Are meetups instrumental in refining rationality and seeking truth, or are they mainly for socializing with other people? 3.) Are people who attend meetups strong enough to withstand the social pressure when it comes to disagreement about explosive issues like risks from AI?
You can care about an issue and dissent.
I think “we should be skeptical of our very methods” is a fully general counterargument, and “the probability of the conjunction of four things is less than the probability of any one of them” is true but weak, since the conjunction of (only!) four things each worth taking seriously is still worth taking seriously.
Also,
Neither are the motives and trustworthiness of the people who make those claims examined.
Seems just obviously false. They’re examined all the time. (And none of these links are even to your posts!)
Yes, the conclusions seem weird. Yes, maybe we should be alarmed by that. But let’s not rationalize the perception of weirdness as arising from technical considerations rather than social intuitions.
Seems just obviously false. They’re examined all the time. (And none of these links are even to your posts!)
You’re right; I have to update my view there. When I started posting here, I felt it was different. It now seems the situation has changed somewhat dramatically. I hope this trend continues without itself becoming unwarranted.
I do disagree somewhat with the rest of your comment, though. I feel I am often misinterpreted when I say that we should be more careful about some of the extraordinary conclusions here. What I mean is not their weirdness but the scope of the consequences of being wrong about them. I have a very bad feeling about using the implied scope of the conclusions to outweigh their low probability. I feel we should give more weight to the consequences of our conclusions being wrong than to those of their being right. I can’t justify this, but an example would be quantum suicide (ignore, for the sake of the argument, that there are reasons other than the possibility that MWI is wrong that make it stupid). I wouldn’t commit quantum suicide even given high confidence in MWI being true. Logical implications don’t seem enough in some cases. Maybe I am simply biased, but I have been unable to overcome it yet.
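A toy illustration of that asymmetry, with every number invented for the example (they are not estimates anyone in this thread has endorsed): once the cost of being wrong is priced in, even high confidence in MWI does not make quantum suicide look attractive under a plain expected-value calculation.

```python
# All numbers below are made up purely to illustrate the asymmetry.
p_mwi = 0.90            # hypothetical high confidence that MWI is true
gain_if_right = 1.0     # modest benefit in the surviving branches
loss_if_wrong = -1e6    # simply being dead, in the single-world case

expected_value = p_mwi * gain_if_right + (1 - p_mwi) * loss_if_wrong
print(expected_value)   # about -99999: the downside dominates at 90% confidence
```

The commenter’s worry goes a step further: they would decline even if a calculation like this came out positive, weighting the consequences of being wrong beyond what the raw numbers say.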
I think your communication would really benefit from a clear dichotomy between “beliefs about policy” and “beliefs about the world”. All beliefs about optimal policy should be assumed incorrect, e.g. quantum suicide, donating to SIAI, or writing angry letters to physicists who are interested in creating lab universes. Humans go insane when they think about policy, and Less Wrong is not an exception. Your notion of “logical implication” seems to be trying to explain how one might feel justified in deriving political implications, but that totally doesn’t work. I think if you really made this dichotomy explicit, and made explicit that you’re worried about the infinite number of misguided policies that so naturally seem like they must follow from true weird beliefs, and not so worried about the weird beliefs in and of themselves, then folks would understand your concerns a lot more easily and more progress could be made on setting up a culture that is more resistant to rampant political ‘decision theoretic’ insanity.
Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?
...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?
One example would be the policy of not talking about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.
No. But it is significantly more avoidable than commonly thought, and should largely be avoided for the first 3 years of hardcore rationality training. Or so the rules go in my should world.
Drawing a map of the territory is disjunctively impossible; coming up with a halfway sane policy based thereon is conjunctively impossible. Metaphorically.
This is an excellent point, and well stated. I don’t have anything to add, but an upvote didn’t suffice.