To steelman the odds’ consistency (though I agree with you that the market isn’t really reflecting careful thinking from enough people), Biden is farther ahead in the 538 projection now than he was before, but on the other hand, Trump has completely gotten away with refusing to commit to a peaceful transfer of power. Even if that’s not the most surprising thing in the world (how far indeed we have fallen), it wasn’t at 100% two months ago.
There’s certainly a tradeoff involved in using a disputed example as your first illustration of a general concept (here, Bayesian reasoning vs the Traditional Scientific Method).
I can’t help but think of Scott Alexander’s long posts, where usually there’s a division of topics between roman-numeraled sections, but sometimes it seems like it’s just “oh, it’s been too long since the last one, got to break it up somehow”. I do think this really helps with readability; it reminds the reader to take a breath, in some sense.
Or like, taking something that hangs together as a self-contained thought but is too long to serve the function of a paragraph, and just splitting it by adding a superficially segue-like sentence at the start of the second part.
It may not be possible to cleanly divide the Technical Explanation into multiple posts that each stand on their own, but even separating it awkwardly into several chapters would make it less intimidating and invite more comments.
(I think this may be the longest post in the Sequences.)
I forget if I’ve said this elsewhere, but we should expect human intelligence to be just a bit above the bare minimum required for technological advancement. Otherwise, our slightly-less-intelligent ancestors would already have been where we are now.
(Just a bit above, because there was the nice little overhang of cultural transmission: once the hardware got good enough, the software could be transmitted way more effectively between people and across generations. So we’re quite a bit more intelligent than our basically anatomically equivalent ancestors of 500,000 years ago. But not as big a gap as the gap from that ancestor to our last common ancestor with chimps, 6-7 million years ago.)
Additional hypothesis: everything is becoming more politicized than it has been at any time since the Civil War, to the extent that any celebration of a new piece of construction/infrastructure/technology would also be protested. (I would even agree with the protesters in many cases! Adding more automobile infrastructure to cities is really bad!)
The only things today [where there’s common knowledge that the demonstration will swamp any counter-demonstration] are major local sports achievements.
(I notice that my model is confused in the case of John Glenn’s final spaceflight. NASA achievements would normally be nonpartisan, but Glenn was a sitting Democratic Senator at the time of the mission! I guess they figured that in heavily Democratic NYC, not enough Republicans would dare to make a stink.)
Eliezer’s mistake here was that he didn’t, before the QM sequence, write a general post to the effect that you don’t have an additional Bayesian burden of proof if your theory was proposed chronologically later. Given such a reference, it would have been a lot simpler to refer to that concept without it seeming like special pleading here.
It’s not explicit. Like I said, the terms are highly dependent in reality, but for intuition you can think of a series of variables X_k for k from 1 to N, where X_k equals 1/k with probability 1/√N (and equals 0 otherwise). And think of N as pretty large.
So most of the time, the sum of these is dominated by a lot of terms with small contributions. But every now and then, a big one hits and there’s a huge spike.
(I haven’t thought very much about what functions of k and N I’d actually use if I were making a principled model; 1/k and 1/√N are just there for illustrative purposes, such that the sum is expected to have many small terms most of the time and some very large terms occasionally.)
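Here’s a minimal simulation of that toy model, to make the spikes concrete (N = 10,000 and the 1,000 trials are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_period(N=10_000):
    """One draw of the toy model: X_k = 1/k with probability 1/sqrt(N), else 0."""
    k = np.arange(1, N + 1)
    fires = rng.random(N) < 1 / np.sqrt(N)  # which terms are nonzero this period
    return np.sum(fires / k)

totals = np.array([one_period() for _ in range(1_000)])
print(f"median total: {np.median(totals):.3f}")  # background of many small terms
print(f"max total:    {totals.max():.3f}")       # spike when a small-k term fires
```

Most draws land near the median, but whenever one of the first few terms fires (probability 1/√N each), the total jumps by several times the entire usual background.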
No. My model is the sum of a bunch of random variables for possible conflicts (these variables are not independent of each other), where there are a few potential global wars that would cause millions or billions of deaths, and lots and lots of tiny wars each of which would add a few thousand deaths.
This model predicts a background rate of the sum of the smaller ones, and large spikes to the rate whenever a larger conflict happens. Accordingly, over the last three decades (with the tragic exception of the Rwandan genocide) total war deaths per year (combatants + civilians) have been between 18k and 132k (wow, the Syrian Civil War has been way worse than the Iraq War, I didn’t realize that).
So my median is something like 1M people dying over the decade, because I view a major conflict as under 50% likely, and we could easily have a decade as peaceful (no, really) as the 2000s.
An improvement in this direction: the Fed has just acknowledged, at least, that it is possible for inflation to be too low as well as too high, that inflation targeting needs to reckon with the US having consistently undershot its goal, and that this feeds back into the market expecting the US to continue undershooting. And then it explains and commits to average inflation targeting:
We have also made important changes with regard to the price-stability side of our mandate. Our longer-run goal continues to be an inflation rate of 2 percent. Our statement emphasizes that our actions to achieve both sides of our dual mandate will be most effective if longer-term inflation expectations remain well anchored at 2 percent. However, if inflation runs below 2 percent following economic downturns but never moves above 2 percent even when the economy is strong, then, over time, inflation will average less than 2 percent. Households and businesses will come to expect this result, meaning that inflation expectations would tend to move below our inflation goal and pull realized inflation down. To prevent this outcome and the adverse dynamics that could ensue, our new statement indicates that we will seek to achieve inflation that averages 2 percent over time. Therefore, following periods when inflation has been running below 2 percent, appropriate monetary policy will likely aim to achieve inflation moderately above 2 percent for some time.
Of course, this says nothing about how they intend to achieve this (seigniorage has its downsides), but I expect Eliezer would see it as good news.
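To make the “averages 2 percent over time” idea concrete, here’s a toy makeup rule (entirely my own sketch; the Fed has pointedly not committed to any specific formula or window):

```python
def makeup_target(history, goal=2.0, window=5):
    """Toy average-inflation-targeting rule: choose next year's target so the
    trailing `window`-year average comes out to `goal`. The window length and
    the exact makeup rule here are illustrative assumptions, not Fed policy."""
    recent = history[-(window - 1):]
    return goal * window - sum(recent)

history = [1.5, 1.6, 1.8, 1.7]   # years of undershooting 2%
print(makeup_target(history))    # 3.4: aim above 2% for a while
```

A bygones-are-bygones regime would just aim at 2.0 again each year; the averaging regime converts past undershoots into a temporary overshoot target.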
The claim that came to my mind is that the conscious mind is the mesa-optimizer here, the original outer optimizer being a riderless elephant.
When University of North Carolina students learned that a speech opposing coed dorms had been banned, they became more opposed to coed dorms (without even hearing the speech). (Probably in Ashmore et al. 1971.)
De-platforming may be effective in a different direction than intended.
That link is now broken, unfortunately. Here’s a working one.
It’s a great story of an anthropologist who, one night, tells the story of Hamlet to the Tiv tribe in order to see how they react to it. They get invested in the story, but tell her that she must be telling it wrong, as the details are things that wouldn’t be permissible in their culture. At the end they explain what really must have happened in that story (involving Hamlet being actually mad, due to witchcraft) and ask her to tell them more stories.
In addition to the other thread on this, some of the usage of “I’m not sure what I think about that” matches “I notice that I am confused”. Namely, that your observations don’t fit your current model, and your model needs to be updated, but you don’t know where.
And this is much trickier to get a handle on, from the inside, than estimating the probability of something within your model.
As always, there’s the difference between “we’re all doomed to be biased, so I might as well carry on with whatever I was already doing” and “we’re all doomed to be somewhat biased, but less biased is better than more biased, so let’s try to mitigate our biases as we go”.
Someone really ought to name a website along those lines.
“I don’t think we have to wait to scan a whole brain. Neural networks are just like the human brain, and you can train them to do things without knowing how they do them. We’ll create programs that will do arithmetic without we, our creators, ever understanding how they do arithmetic.”
This sort of anti-predicts the deep learning boom, but only sort of.
Fully connected networks didn’t scale effectively; researchers had to find (mostly principled, but some ad-hoc) network structures that were capable of more efficiently learning complex patterns.
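Back-of-the-envelope numbers for the scaling problem (layer sizes here are my own illustrative picks):

```python
# One layer on a 224x224 RGB image, fully connected vs. convolutional.
H, W, C = 224, 224, 3
n_inputs = H * W * C                  # 150,528 input values

fc_params = n_inputs * 1000           # dense layer with 1,000 units: ~150M weights
conv_params = 3 * 3 * C * 64          # 64 conv filters of size 3x3: 1,728 weights,
                                      # shared across all spatial positions

print(f"fully connected: {fc_params:,}")   # 150,528,000
print(f"convolution:     {conv_params:,}") # 1,728
```

The convolution gets its enormous savings by baking in an assumption about the domain (local, translation-invariant structure), which is exactly the kind of “mostly principled” structure I mean.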
Also, we’ve genuinely learned more about vision by realizing the effectiveness of convolutional neural nets.
And yet, the state of the art is to take a generalizable architecture and to scale it massively, not needing to know anything new about the domain, nor learning much new about it. So I do think Eliezer loses some Bayes points for his analogy here, as it applies to games and to language.
When I design a toaster oven, I don’t design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils.
On the other hand, there was a fleeting time (after this post) when generative adversarial networks were king of some domains. And a fairer counterpoint: the body is subject to a single selective pressure (as opposed to the separate pressures on two rival species), and yet our brains and immune systems are riddled with subsystems whose whole purpose is to selectively suppress one another.
Of course there are features of the ecosystem that don’t match any plausible goal of a humanized creator, but the analogy is on wobblier ground than Eliezer seems to have thought.
For me, I’d already absorbed all the right arguments against my religion, as well as several years’ worth of assiduously devouring the counterarguments (which were weak, but good enough to push back my doubts each time). What pushed me over the edge, the bit of this that I reinvented for myself, was:
“What would I think about these arguments if I hadn’t already committed myself to faith?”
Once I asked myself those words, it was clear where I was headed. I’ve done my best to remember them since.
(looks around at 2020)
Interesting case of an evolved heuristic gone wrong in the modern world.
Mutational load correlates negatively with facial symmetry, height, strength, and IQ. Some of these are important in assessing (the desirability or inevitability of) leadership, and others are easier to verify externally. So in a tribe, you could be forgiven for assuming that the more attractive people are going to end up powerful, and for strategizing accordingly by currying favor with them. (Bit of a Keynesian beauty contest there, but there is a signal at the root which keeps the equilibrium stable.)
However, in modern society, we’re not sampling randomly from the population; the candidates for office, or for a job, have already been screened for some level of ability. And in fact, now the opposite pattern should hold, because you’re conditioning on a collider (Berkson’s paradox): X is a candidate either because they’re very capable or because they’re somewhat capable and also attractive!
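A quick simulation of that collider effect, if it helps (the threshold and distributions are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(size=n)
attractiveness = rng.normal(size=n)  # independent of ability in the population

# Candidacy requires some combination of the two to clear a bar.
candidate = ability + attractiveness > 2.0

print(np.corrcoef(ability, attractiveness)[0, 1])                        # ~0.00
print(np.corrcoef(ability[candidate], attractiveness[candidate])[0, 1])  # clearly negative
```

Among candidates, learning that someone is attractive is evidence that they needed less ability to clear the bar, even though the two traits are unrelated in the population at large.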
Since all tech interviews are being conducted online these days, I wonder if any company has been wise enough to snap up some undervalued talent by doing their interviews entirely without cameras...