Even in private, in today’s environment I’d be afraid to talk about some of the object-level things, because I can’t be sure you’re not a true believer in some of those issues who might try to “cancel” me for my positions or even my uncertainties.
This seems like a problem we could mitigate with the right kinds of information exchange. E.g., I’d probably be willing to make a “no canceling anyone” promise depending on wording. Creating networks of trust around this is part of what I meant by “epistemic prepping” upthread.
I don’t know what the reasons are off the top of my head. I’m not saying the probability rise caused most of the stock market fall, just that it has to be taken into account as a nonzero part of why Wei won his 1 in 8 bet.
If the market is genuinely this beatable, it seems important for the rationalist/EA/forecaster cluster to take advantage of future such opportunities in an organized way, even if it just means someone setting up a Facebook group or something.
(edit: I think the evidence, while impressive, is a little weaker than it seems at first glance, because my impression from Metaculus is that the probability of the virus becoming widespread has gotten higher in recent days for reasons that look unrelated to your point about what the economic implications of a widespread virus would be.)
Probably it makes more sense to prepare for scenarios where ideological fanaticism is widespread but isn’t wielding government power.
I think it makes sense to take an “epistemic prepper” perspective. What precautions could one take in advance to make sure that, if the discourse became dominated by militant flat earth fanatics, round earthers could still reason together, coordinate, and trust each other? What kinds of institutions would have made it easier for a core of sanity to survive through, say, 30s Germany or 60s China? For example, would it make sense to have an agreed-upon epistemic “fire alarm”?
As usual, this makes me wish for UberFact or some other way of tracking opinion clusters.
From participating on Metaculus I certainly don’t get the sense that there are people who make uncannily good predictions. If you compare the community prediction to the Metaculus prediction, it looks like there’s a 0.14 difference in average log score, which I guess means the combination of the best predictors tends to put about e^0.14 ≈ 1.15 times as much probability on the correct answer as the time-weighted community median. (The postdiction is better, but I guess subject to overfitting?) That’s substantial, but presumably the combination of the best predictors is better than every individual predictor. The Metaculus prediction also seems to be doing a lot worse than the community prediction on recent questions, so I don’t know what to make of that. I suspect that, while some people are obviously better at forecasting than others, the word “superforecasters” has no content beyond “the best forecasters” and is just there to make the field of research sound more exciting.
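As a sanity check on that arithmetic (the 0.14 figure comes from Metaculus; the per-question probabilities below are made up purely for illustration), a difference of d in average natural-log score corresponds to putting e^d times as much probability on the correct answer, in geometric-mean terms:

```python
import math

# Hypothetical probabilities each predictor assigned to the correct answer.
community = [0.60, 0.70, 0.55]  # time-weighted community median (made up)
# Construct a predictor that puts e^0.14 times as much probability on the
# correct answer for every question.
metaculus = [p * math.exp(0.14) for p in community]

def avg_log_score(ps):
    """Average natural-log score: mean of ln(probability on correct answer)."""
    return sum(math.log(p) for p in ps) / len(ps)

diff = avg_log_score(metaculus) - avg_log_score(community)
ratio = math.exp(diff)  # geometric-mean ratio of probabilities on the truth

print(round(diff, 2))   # 0.14
print(round(ratio, 2))  # 1.15
```

By construction the log-score gap is exactly 0.14, and exponentiating it recovers the ~1.15x factor; the point is just that the additive gap in log scores and the multiplicative gap in probabilities are two views of the same number.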
Would your views on speaking truth to power change if the truth were 2x less expensive than you currently think it is? 10x? 100x?
Maybe not; probably; yes.
Followup question: have you considered performing an experiment to test whether the consequences of speech are as dire as you currently think? I think I have more data than you! (We probably mostly read the same blogs, but I’ve done field work.)
Most of the consequences I’m worried about are bad effects on the discourse. I don’t know what experiment I’d do to figure those out. I agree you have more data than me, but you probably have 2x the personal data instead of 10x the personal data, and most relevant data is about other people because there are more of them. Personal consequences are more amenable to experiment than discourse consequences, but I already have lots of low-risk data here, and high-risk data would carry high risk without being qualitatively more informative. (Doing an Experiment here doesn’t teach you qualitatively different things than watching the experiments the world constantly does.)
Can you be a little more specific? “Discredited” is a two-place function (discredited to whom).
Discredited to intellectual elites, who are not only imperfectly rational, but get their information via people who are imperfectly rational, who in turn etc.
“Speak the truth, even if your voice trembles” isn’t a literal executable decision procedure—if you programmed your AI that way, it might get stabbed. But a culture that has “Speak the truth, even if your voice trembles” as a slogan might—just might—be able to do science, or better, to get the goddamned right answer even when the local analogue of the Pope doesn’t like it.
It almost sounds like you’re saying we should tell people they should always speak the truth even though it is not the case that people should always speak the truth, because telling people they should always speak the truth has good consequences. Hm!
I don’t like the “speak the truth even if your voice trembles” formulation. It doesn’t make it clear that the alternative to speaking the truth, instead of lying, is not speaking. It also suggests an ad hominem theory of why people aren’t speaking (fear, presumably of personal consequences) that isn’t always true. To me, this whole thing is about picking battles versus not picking battles rather than about truth versus falsehood. Even though if you pick your battles it means a non-random set of falsehoods remains uncorrected, picking battles is still pro-truth.
If we should judge the Platonic math by how it would be interpreted in practice, then we should also judge “speak the truth even if your voice trembles” by how it would be interpreted in practice. I’m worried the outcome would be people saying “since we talk rationally about the Emperor here, let’s admit that he’s missing one shoe”, regardless of whether the emperor is missing one shoe, is fully dressed, or has no clothes at all. All things equal, being less wrong is good, but sometimes being less wrong means being more confident that you’re not wrong at all, even though you are wrong at all.
(By the way, I think of my position here as having a lower burden of proof than yours, because the underlying issue is not just who is making the right tradeoffs, but whether making different tradeoffs than you is a good reason to give up on a community altogether.)
Would your views on speaking truth to power change if the truth were 2x as offensive as you currently think it is? 10x? 100x? (If so, are you sure that’s not why you don’t think the truth is more offensive than you currently think it is?) Immaterial souls are stabbed all the time in the sense that their opinions are discredited.
Given that animals don’t act like expected utility maximizers, what do you mean when you talk about their values? For humans, you can ground a definition of “true values” in philosophical reflection (and reflection about how that reflection relates to their true values, and so on), but non-human animals can’t do philosophy.
Honest rational agents can still disagree if the fact that they’re all honest and rational isn’t common knowledge.
If the slope is so slippery, how come we’ve been standing on it for over a decade? (Or do you think we’re sliding downward at a substantial speed? If so, how can we turn this into a disagreement about concrete predictions about what LW will be like in 5 years?)
Okay, but the reason you think AI safety/x-risk is important is because twenty years ago, people like Eliezer Yudkowsky and Nick Bostrom were trying to do systematically correct reasoning about the future, noticed that the alignment problem looked really important, and followed that line of reasoning where it took them—even though it probably looked “tainted” to the serious academics of the time. (The robot apocalypse is nigh? Pftt, sounds like science fiction.)
Those subjects were always obviously potentially important, so I don’t see this as evidence against a policy of picking one’s battles by only arguing for unpopular truths that are obviously potentially important.
“Take it to /r/TheMotte, you guys” is not that onerous of a demand, and it’s a demand I’m happy to support.
I’d agree having political discussions in some other designated place online is much less harmful than having them here, but on the other hand, a quick look at what’s being posted on the Motte doesn’t support the idea that rationalist politics discussion has any importance for sanity on more general topics. If none of it had been posted, as far as I can tell, the rationalist community wouldn’t have been any more wrong on any major issue.
Sharing reasoning is obviously normally good, but we obviously live in a world with lots of causally important actors who don’t always respond rationally to arguments. There are cases, like the grandparent comment, where one is justified in worrying that an argument would make people stupid in a particular way, and one can avoid this problem by not making the argument. Doing so is importantly different from filtering out arguments for causing a justified update against one’s side, and even more importantly different from anything resembling what pops into people’s minds when they hear “psychological manipulation”. If I’m worried that someone with a finger on some sort of hypertech button may avoid learning about some crucial set of thoughts about the circumstances under which it’s good to press hypertech buttons (because they’ve always vaguely heard that set of thoughts is disreputable and so never looked into it), I don’t think your last paragraph is a fair response to that. I think I should tap out of this discussion, because I feel like the more-than-one-sentence-at-a-time medium is nudging it more toward rhetoric than debugging, but let’s still talk some time.
Consider the idea that the prospect of advanced AI implies the returns from stopping global warming are much smaller than you might otherwise think. I think this is a perfectly correct point, but I’m also willing to never make it, because a lot of people will respond by updating against the prospect of advanced AI, and I care a lot more about people having correct opinions on advanced AI than on the returns from stopping global warming.
we need to figure out how to think together
This is probably not the crux of our disagreement, but I think we already understand perfectly well how to think together and we’re limited by temperament rather than understanding. I agree that if we’re trying to think about how to think together we can treat no censorship as the default case.
If cowardice means fear of personal consequences, this doesn’t ring true as an ad hominem. Speaking without any filter is fun and satisfying and consistent with a rationalist pro-truth self-image and other-image. The reason why I mostly don’t do it is because I’d feel guilt about harming the discourse. This motivation doesn’t disappear in cases where I feel safe from personal consequences, e.g. because of anonymity.
who just assume as if it were a law of nature that discourse is impossible
I don’t know how you want me to respond to this. Obviously I think my sense that real discourse on fraught topics is impossible is based on extensively observing attempts at real discourse on fraught topics being fake. I suspect your sense that real discourse is possible is caused by you underestimating how far real discourse would diverge from fake discourse because you assume real discourse is possible and interpret too much existing discourse as real discourse.
But what if you actually need common knowledge for something?
Then that’s a reason to try to create common knowledge, whether privately or publicly. I think ordinary knowledge is fine most of the time, though.
The proposition I actually want to defend is, “Private deliberation is extremely dependent on public information.” This seems obviously true to me. When you get together with your trusted friends in private to decide which policies to support, that discussion is mostly going to draw on evidence and arguments that you’ve heard in public discourse, rather than things you’ve directly seen and verified for yourself.
Most of the harm here comes not from public discourse being filtered in itself, but from people updating on filtered public discourse as if it were unfiltered. This makes me think it’s better to get people to realize that public discourse isn’t going to contain all the arguments than to get them to include all the arguments in public discourse.
Expressing unpopular opinions can be good and necessary, but doing so merely because someone asked you to is foolish. Have some strategic common sense.
(c) unpopular ideas hurt each other by association, (d) it’s hard to find people who can be trusted to have good unpopular ideas but not bad unpopular ideas, (e) people are motivated by getting credit for their ideas, (f) people don’t seem good at group writing curation generally