Expressing unpopular opinions can be good and necessary, but doing so merely because someone asked you to is foolish. Have some strategic common sense.
(c) unpopular ideas hurt each other by association, (d) it’s hard to find people who can be trusted to have good unpopular ideas but not bad unpopular ideas, (e) people are motivated by getting credit for their ideas, (f) people don’t seem good at curating group writing in general
Even if you assume no climate policy at all and make various other highly pessimistic assumptions about the economy (RCP 8.5), I think it’s still far under 10% conditional on those assumptions, though it’s tricky to extract this kind of estimate.
We’re predicting warming as high as 6°C by 2100, so it’s actually a huge fluctuation.
6°C is something like a worst case scenario.
The question you should ask for policy purposes is how much the temperature would rise in response to different possible increases in CO2. It’s basically a matter of estimating a continuous parameter that nobody thinks is zero and whose space of possible values has no natural dividing line between “yes” and “no”. Attribution of past warming partly overlaps with the “how much” question and partly just distracts from it. That said, I would just read the relevant sections of the latest IPCC report.
Online posts function as hard-to-fake signals of readiness to invest verbal energy into arguing for one side of an issue. This gives readers the feeling they won’t lose face if they adopt the post’s opinion, which overlaps with the feeling that the post’s opinion is true. This function sometimes makes posts longer than would be socially optimal.
“This is wrong, harmful, and/or in bad faith, but I expect arguing this point against determined verbally clever opposition would be too costly.”
I guess I wasn’t necessarily thinking of them as exact duplicates. If there are 10^100 ways the 21st century can go, and for some reason each of the resulting civilizations wants to know how all the other civilizations came out when the dust settled, each civilization ends up having a lot of other civilizations to think about. In this scenario, an effect on the far future still seems to me to be “only” a million times as big as the same effect on the 21st century, only now the stuff isomorphic to the 21st century is spread out across many different far future civilizations instead of one.
Maybe 1⁄1,000,000 is still a lot, but I’m not sure how to deal with uncertainty here. If I just take the expectation of the fraction of the universe isomorphic to the 21st century, I might end up with some number like 1⁄10,000,000 (because I’m 10% sure of the 1⁄1,000,000 claim) and still conclude the relative importance of the far future is huge but hugely below infinity.
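As a minimal sketch of that expectation calculation (using only the illustrative numbers from the comment, and treating “the 1⁄1,000,000 claim is wrong” as meaning the fraction is roughly zero, which is my simplification):

```python
# Illustrative numbers from the comment above, not real estimates.
p_claim = 0.10        # credence that the 1/1,000,000 claim is right
frac_if_true = 1e-6   # fraction of far-future stuff isomorphic to the 21st century, if it is
frac_if_false = 0.0   # simplifying assumption: the fraction is roughly zero otherwise

# Expectation of the fraction of the universe isomorphic to the 21st century.
expected_frac = p_claim * frac_if_true + (1 - p_claim) * frac_if_false
print(expected_frac)      # about 1e-07, i.e. roughly 1/10,000,000

# Under the naive argument, influence on the far future is about
# 1 / expected_frac times as important as the same influence on the 21st century.
print(1 / expected_frac)  # about 10,000,000 -- huge, but far below infinity
```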
If you don’t just learn what someone’s opinion is, but also how they arrived at it and how confidently they hold it, that can be much stronger evidence that they’re stupid and bad. Arguably over half the arguments one encounters in the wild could never be made in good faith.
How much should I worry about the unilateralist’s curse when making arguments that other people have presumably already thought of, and that they may have deliberately avoided making because they anticipated side effects I don’t understand?
There’s a question about whether the S&P 500 will end the year higher than it began. When the question closed, the index had increased from 2500 to 2750. The index has increased most years historically. But the Metaculus estimate was about 50%.
On this question, at the time of closing, 538’s estimate was 99+% and the Metaculus estimate was 66%. I don’t think Metaculus had significantly different information than 538.
A naive argument says the influence of our actions on the far future is ~infinity times as intrinsically important as the influence of our actions on the 21st century because the far future contains ~infinity times as much stuff. One limit to this argument is that if 1⁄1,000,000 of the far future stuff is isomorphic to the 21st century (e.g. simulations), then having an influence on the far future is “only” a million times as important as having the exact same influence on the 21st century. (Of course, the far future is a very different place so our influence will actually be of a very different nature.) Has anyone tried to get a better abstract understanding of this point or tried to quantify how much it matters in practice?
Newcomb’s Problem sometimes assumes Omega is right 99% of the time. What is that conditional on? If it’s just a base rate (Omega is right about 99% of people), what happens when you condition on having particular thoughts and modeling the problem on a particular level? (Maybe there exists a two-boxing lesion and you can become confident you don’t have it.) If it’s 99% conditional on anything you might think, e.g. because Omega has a full model of you but gets hit by a cosmic ray 1% of the time, isn’t it clearer to just assume Omega gets it 100% right? Is this explained somewhere?
I think one could greatly outperform the best publicly available forecasts through collaboration between 1) some people good at arguing and looking for info and 2) someone good at evaluating arguments and aggregating evidence. Maybe just a forum thread where a moderator keeps a percentage estimate updated in the top post.
I would normally trust it more, but it’s recently been doing way worse than the Metaculus crowd median (average log score 0.157 vs 0.117 over the sample of 20 yes/no questions that have resolved for me), and based on the details of the estimates that doesn’t look to me like it’s just bad luck. It does better on the whole set of questions, but I think still not much better than the median; I can’t find the analysis page at the moment.
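For concreteness, here is one way to compute that kind of average log score for yes/no questions. The lower-is-better convention (mean negative log probability assigned to the outcome that actually happened) is my assumption, since the comment doesn’t spell out which scoring rule it used; under it, 0.117 corresponds to assigning about 89% on average to the realized outcome, and 0.157 to about 85%.

```python
import math

def average_log_score(forecasts, outcomes):
    """Mean of -ln(probability assigned to the realized outcome); lower is better.

    forecasts: probability assigned to "yes" for each binary question
    outcomes:  1 if the question resolved "yes", 0 if it resolved "no"
    """
    scores = []
    for p, y in zip(forecasts, outcomes):
        p_outcome = p if y == 1 else 1 - p
        scores.append(-math.log(p_outcome))
    return sum(scores) / len(scores)

# Hypothetical forecasts and resolutions, not the actual 20 resolved questions.
print(average_log_score([0.9, 0.7, 0.99], [1, 1, 0]))
```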
Considering how much people talk about superforecasters, how come there aren’t more public sources of superforecasts? There are prediction markets and sites like ElectionBettingOdds that make it easier to read their odds as probabilities, but only for limited questions. There’s Metaculus, but it only shows a crowd median (with a histogram of predictions) and in some cases the result of an aggregation algorithm that I don’t trust very much. There’s PredictionBook, but it’s not obvious how to extract a good single probability estimate from it. Both prediction markets and Metaculus are competitive and disincentivize public cooperation. What else is there if I want to know something like what the probability of war with Iran is?
how much of the current population would end up underwater if they didn’t move (and if they didn’t adapt in other ways, like by building sea walls)
I think I’ve heard that, with substantial mitigation effort, the temperature rise might be about 2 degrees Celsius between now and the end of the century.
Usually people mean from pre-industrial times, not from now; roughly 1 degree of warming has already happened, so 2 degrees from pre-industrial times means about 1 degree from now.
the development of a new ‘mental martial art’ of systematically correct reasoning
Unpopular opinion: Rationality is less about martial arts moves than about adopting an attitude of intellectual good faith and consistently valuing impartial truth-seeking above everything else that usually influences belief selection. Motivating people (including oneself) to adopt such an attitude can be tricky, but the attitude itself is simple. Inventing new techniques is good but not necessary.