All they manage to do is convince those who already hold the same set of beliefs or who fit a certain mindset.
It’s perhaps worth noting that this observation is true of most discussion about most even-mildly-controversial subjects on LessWrong—quantum mechanics, cryonics, heuristics and biases, ethics, meta-ethics, theology, epistemology, group selection, hard takeoff, Friendliness, et cetera. What confuses me is that LessWrong continues to attract really impressive people anyway; it seems to be the internet’s biggest/best forum for interesting technical discussion about epistemology, Schellingian game theory, the singularity, &c., even though most of the discussion is just annoying echoes. One of a hundred or so regular commenters is actually trying or is a real intellectual, not a fountain of cultish sloganeering and cheering. Others are weird hybrids of cheerleader and actually trying / real intellectual (like me, though I try to cheer on a higher level, and about more important things). Unfortunately I don’t know of any way to raise the “sanity waterline”, if such a concept makes sense, and I suspect that the new Center for Modern Rationality is going to make things worse, not better. I hope I’m wrong. …I feel like there’s something that could be done, but I have no idea what it is.
I just reread this post yesterday and found it to be a very convincing counter-argument against the idea that we should act solely on high stakes.
What Vassar is saying sounds to me like a justification of Pascal’s Wager by arguing that some Gods have more measure than others, and that we can therefore rationally decide to believe in a certain God and live accordingly.
That is like saying that a biased coin does not have a probability of 1⁄2, and that we can therefore maximize our payoff by betting on the side of the coin that is more likely to end up face-up. That would be true if we had any information beyond the fact that the coin is biased. But if we have no reliable information except that it is biased, it makes no sense to deviate from the probability of a fair coin.
And I don’t think it is clear, at this point, that we are justified in assuming more than that there might be risks from AI. Claiming that there are actions we can take, with respect to risks from AI, that are superior to others is like claiming that the coin is biased while being unable to determine the direction of the bias. By claiming that doing something is better than doing nothing, we might well end up making things worse, just as we would by unconditionally assigning a higher probability to one side of a coin, of which we know nothing except that it is biased, in a coin-tossing tournament.
The only sensible option seems to be to wait for more information.
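The coin analogy above can be checked with a quick simulation (a minimal sketch; the bias magnitude of 0.2, the symmetric prior over its direction, and the always-bet-heads strategy are my own illustrative assumptions, not anything stated by the commenter): when only the *existence* of a bias is known, a fixed bet wins at the fair-coin rate.

```python
import random

# Minimal sketch of the coin argument: the coin is biased by some
# magnitude, but the direction of the bias is unknown and symmetric.
def win_rate(n_trials=100_000, bias=0.2, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        # Nature picks the bias direction; the bettor never observes it.
        p_heads = 0.5 + bias if rng.random() < 0.5 else 0.5 - bias
        # The bettor's strategy: always bet heads (any fixed bet fares
        # the same, by symmetry).
        if rng.random() < p_heads:
            wins += 1
    return wins / n_trials

# Averaged over the unknown direction, the bet wins at roughly the
# fair-coin rate of 0.5 -- knowing "biased" alone confers no edge.
print(win_rate())
```

Analytically, the marginal win probability is 0.5·(0.5+b) + 0.5·(0.5−b) = 0.5 regardless of the bias magnitude b, which is the commenter’s point.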
Your posts highlight fundamental problems that I have as well. Especially this and this comment concisely describe the issues.
I have no answers and I don’t know how other people deal with it. Personally I forget about those problems frequently and act as if I can actually calculate what to do. Other times I just do what I want based on naive introspection.
And I don’t think it is clear, at this point, that we are justified in assuming more than that there might be risks from AI. Claiming that there are actions we can take, with respect to risks from AI, that are superior to others is like claiming that the coin is biased while being unable to determine the direction of the bias. By claiming that doing something is better than doing nothing, we might well end up making things worse, just as we would by unconditionally assigning a higher probability to one side of a coin, of which we know nothing except that it is biased, in a coin-tossing tournament.
This is a problem—though it probably shouldn’t stop us from trying.
The only sensible option seems to be to wait for more information.
Players can try to improve their positions and attempt to gain knowledge and power. That itself might cause problems—but it seems likely to beat thumb twiddling.
Why do you think that “Center for Modern Rationality” is going to make things worse? Let’s hope it will not hinge on Eliezer Yudkowsky’s more controversial deliberations (for me, those would be his thoughts on the complexity of ethical value, the nature of personhood, and the solution to FAI).
I don’t think what they teach will be particularly harmful to people’s epistemic habits, but I don’t think it’ll be helpful either, and I think there will be large selection effects: people who will, through sheer osmosis and association with the existing rationalist community, decide that it is “rational” to donate a lot of money to the Singularity Institute or work on decision theory. It seems that the Center for Modern Rationality aims to create a whole bunch of people at roughly the average LessWrong commenter’s level of prudence. LessWrong is pretty good relatively speaking, but I don’t think its standards are nearly high enough to tackle the serious problems in moral philosophy and so on that it might be necessary to solve in order to have any good basis for one’s actions. I am disturbed by the prospect of an increasingly large cadre of people who are very gung-ho about “getting things done” despite not having a deep understanding of why those things might or might not be good things to do.
What confuses me is that LessWrong continues to attract really impressive people anyway; it seems to be the internet’s biggest/best forum for interesting technical discussion about epistemology, Schellingian game theory, the singularity, &c., even though most of the discussion is just annoying echoes.
Why is that confusing? Have you looked at the rest of the internet recently?
Have you looked at the rest of the internet recently?
Not really. But are you saying that nowhere else on the internet comes close to LessWrong’s standards of discourse? I’d figured as much, but part of me keeps saying “there’s no way that can be true” for some reason.
I’m not sure why I’m confused, but I think there’s a place where my model (of how many cool people there are and how willing they would be to participate on a site like LessWrong) is off by an order of magnitude or so.
Have you looked at the rest of the internet recently?
Not really. But are you saying that nowhere else on the internet comes close to LessWrong’s standards of discourse? I’d figured as much, but part of me keeps saying “there’s no way that can be true” for some reason.
It might be true when it comes to cross-domain rationality (with a few outliers like social abilities). But it certainly isn’t true that Less Wrong is anywhere close to the edge in most fields (with a few outliers like decision theory).
It’s perhaps worth noting that this observation is true of most discussion about most even-mildly-controversial subjects on LessWrong—quantum mechanics, cryonics, heuristics and biases, ethics, meta-ethics, theology, epistemology, group selection, hard takeoff, Friendliness, et cetera. What confuses me is that LessWrong continues to attract really impressive people anyway; it seems to be the internet’s biggest/best forum for interesting technical discussion about epistemology, Schellingian game theory, the singularity, &c., even though most of the discussion is just annoying echoes. One of a hundred or so regular commenters is actually trying or is a real intellectual, not a fountain of cultish sloganeering and cheering. Others are weird hybrids of cheerleader and actually trying / real intellectual (like me, though I try to cheer on a higher level, and about more important things). Unfortunately I don’t know of any way to raise the “sanity waterline”, if such a concept makes sense, and I suspect that the new Center for Modern Rationality is going to make things worse, not better. I hope I’m wrong. …I feel like there’s something that could be done, but I have no idea what it is.
Eh, I think Vassar’s reply is more to the point.
I think Wei_Dai’s reply does trump that.
What Vassar is saying sounds to me like a justification of Pascal’s Wager by arguing that some Gods have more measure than others, and that we can therefore rationally decide to believe in a certain God and live accordingly.
That is like saying that a biased coin does not have a probability of 1⁄2, and that we can therefore maximize our payoff by betting on the side of the coin that is more likely to end up face-up. That would be true if we had any information beyond the fact that the coin is biased. But if we have no reliable information except that it is biased, it makes no sense to deviate from the probability of a fair coin.
And I don’t think it is clear, at this point, that we are justified in assuming more than that there might be risks from AI. Claiming that there are actions we can take, with respect to risks from AI, that are superior to others is like claiming that the coin is biased while being unable to determine the direction of the bias. By claiming that doing something is better than doing nothing, we might well end up making things worse, just as we would by unconditionally assigning a higher probability to one side of a coin, of which we know nothing except that it is biased, in a coin-tossing tournament.
The only sensible option seems to be to wait for more information.
This is one of The Big Three Problems I came to LW hoping to find a solution for, but have mainly noticed that nobody wants to talk about it. Oh well.
Now I am curious about the other two.
How do you judge what you should (value-judgmentally) value?
How do you deal with uncertainty about the future (unpredictable chains of causality)? (what your above post was about)
What’s the right thing to do in life?
Here are some of my previous posts on the topics.
Why is that confusing? Have you looked at the rest of the internet recently?
Not really. But are you saying that nowhere else on the internet comes close to LessWrong’s standards of discourse? I’d figured as much, but part of me keeps saying “there’s no way that can be true” for some reason.
I’m not sure why I’m confused, but I think there’s a place where my model (of how many cool people there are and how willing they would be to participate on a site like LessWrong) is off by an order of magnitude or so.
A better question is how many of them are willing to create a site like LessWrong.
Also, a minor nitpick about your use of the word “cool”: it normally denotes social status rather than rationality.